In November 2022, OpenAI released ChatGPT, a watershed moment for generative AI. The CEO of security company Kolide promptly encouraged employees to explore the new technology.
The push was mirrored globally, as employers urged their staff to harness generative AI to boost productivity and innovation. In less than a year, the practice became remarkably widespread across professional environments.
AI Adoption Skyrockets Among Professionals
A comprehensive survey by Kolide found that a striking 89% of knowledge workers use some form of AI at least monthly.
That adoption rate is remarkable compared with the gradual uptake of earlier technologies like email. Yet despite the enthusiasm, many of these users appear to have only a limited grasp of AI's associated risks.
The Overlooked Risks of AI in Professional Settings
The survey shows that while AI adoption is rapid, understanding of its risks lags behind. The concern is not apocalyptic scenarios, but concrete legal, reputational, and financial exposure.
Forrester’s 2024 AI Predictions Report indicates a potential rise in “shadow AI,” leading to significant regulatory, privacy, and security challenges within organizations.
AI Errors and the Phenomenon of ‘Hallucinations’
Kolide reports that AI tools, particularly large language models (LLMs), are prone to errors, often generating or repeating incorrect information.
These inaccuracies are commonly called “hallucinations.” Examples include a lawyer citing nonexistent case law and a chatbot advising harmful actions. Forrester anticipates the advent of “AI hallucination insurance” as a response to these significant risks.
The Debate Over AI and Plagiarism
Kolide also notes that generative AI, by its nature, cannot produce entirely original content, raising concerns about plagiarism and copyright infringement.
The legal community is actively working out where the boundaries lie, a debate with real consequences for AI-generated content in both creative and legal domains.
Security Concerns with AI-Generated Code
AI’s integration into coding and software development has raised security concerns. Kolide calls attention to the alarming emergence of malware disguised as AI tools, particularly in browser extensions.
Companies are also wary of AI tools inadvertently collecting sensitive data, raising questions about trade secret security and the potential for malicious exploitation.
Discrepancy in AI Usage and Workplace Policies
Kolide’s Shadow IT Report highlights a discrepancy between how heavily employees use AI and how aware companies are of that use, or whether they have policies in place at all.
This gap indicates a lack of oversight and potential for AI-generated content to be integrated into workplaces without proper scrutiny or governance.
Varied Company Policies on AI Usage
The survey found a notable disparity between the percentage of companies that permit AI usage and the extent to which employees actually use it.
This discrepancy underscores the need for more coherent and comprehensive policies governing AI use in professional settings, ones that account for both its potential and its risks.
Inadequate Training on AI Risks in the Workplace
The survey also uncovered a significant gap in employee education on AI risks: only 56% of companies provide relevant training.
To ensure the safe and informed use of AI technology in the workplace, Kolide recommends improved and ongoing training programs.
Employees’ Underestimation of Colleagues’ AI Usage
A striking finding from the survey is that employees underestimate how widely their colleagues use AI.
Despite widespread adoption, most employees perceive their own AI use as unusual, which can erode collective awareness of, and adherence to, policies on AI in professional settings.
The Need for AI Acceptable Use Policies
Given the extensive integration of AI in professional contexts, Kolide argues that developing clear and effective AI usage policies has become essential.
These policies should aim to provide visibility into AI use within organizations, prevent unsafe practices, and establish guidelines for acceptable use, ensuring a balanced and safe approach to AI utilization.
Kolide’s Proactive Approach to AI Policies
Kolide has implemented a comprehensive policy for AI use in the workplace.
Their approach includes understanding how employees use AI, blocking risky applications, and establishing a framework for acceptable AI use.
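To make the first step, visibility, concrete, here is a minimal illustrative sketch in Python of how one might inventory a Chrome profile for AI-related browser extensions. This is not Kolide's actual implementation: the macOS profile path and the keyword list are assumptions chosen for demonstration.

    import json
    from pathlib import Path

    # Illustrative only: default Chrome extensions directory on macOS.
    # Adjust for your OS and profile; this path is an assumption, not a Kolide API.
    EXTENSIONS_DIR = Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions"

    # Hypothetical keywords hinting at AI-related extensions.
    AI_KEYWORDS = ("gpt", "chatgpt", "ai assistant", "copilot")

    def installed_extensions(root: Path):
        """Yield (extension_id, name) for each installed extension version."""
        if not root.exists():
            return
        for ext_id in root.iterdir():
            if not ext_id.is_dir():
                continue
            for version_dir in ext_id.iterdir():
                manifest = version_dir / "manifest.json"
                if not manifest.is_file():
                    continue
                try:
                    data = json.loads(manifest.read_text(encoding="utf-8"))
                except (OSError, json.JSONDecodeError):
                    continue
                # Localized names appear as "__MSG_*__" placeholders; this
                # sketch does not resolve them.
                yield ext_id.name, str(data.get("name", ""))

    def flag_ai_extensions(root: Path):
        """Return extensions whose manifest name matches an AI-related keyword."""
        return [
            (ext_id, name)
            for ext_id, name in installed_extensions(root)
            if any(keyword in name.lower() for keyword in AI_KEYWORDS)
        ]

    if __name__ == "__main__":
        for ext_id, name in flag_ai_extensions(EXTENSIONS_DIR):
            print(f"AI-related extension found: {name} ({ext_id})")

An inventory like this only surfaces what is installed; actually blocking risky applications and enforcing acceptable-use rules requires dedicated endpoint management tooling.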