Published on October 26, 2023, 11:09 am
The increasing use of generative artificial intelligence (AI) tools by employees without explicit permission is causing concern among executives, according to a new report from cybersecurity firm Kaspersky. The report reveals that 95% of C-suite executives surveyed believe popular AI tools like ChatGPT are being used within their organizations, and 59% expressed serious concerns about the potential leakage of confidential data through these tools.
This concern is not unfounded: there have already been cases where employees' use of AI tools led to the inadvertent sharing of sensitive information. Samsung workers, for example, unintentionally disclosed trade secrets to OpenAI when they used ChatGPT for help with source code.
The report collected responses from 1,863 C-suite executives across several European countries and found that most anxieties related to generative AI stemmed from a lack of trust and understanding regarding how these tools handle data. A significant majority (91%) of respondents expressed a desire for more insight into the inner workings of generative AI.
Surprisingly, despite these concerns, only 22% of executives reported discussing rules or guidelines for the use of generative AI in their organizations. Meanwhile, 25% said they would allow employees to continue using generative AI without any changes or restrictions.
On the other hand, some executives (6%) had no strong feelings either way about employee use of generative AI. Around half (50%), however, were inclined to implement generative AI solutions to automate manual tasks and reduce workloads at their own level.
Moreover, nearly a quarter (24%) expressed eagerness to use generative AI to automate the work of their IT and cybersecurity teams. This interest persists despite the prevailing fears about the technology among a majority of respondents.
The survey also asked executives to speculate on how and where employees might be using generative AI within their organizations. Over half (53%) suspected that generative AI is already being used covertly to handle certain tasks in some departments. A quarter of respondents believed it was being deployed within their IT departments, while 19% suspected its use within marketing teams.
Common use cases cited for employee use of generative AI tools include writing emails (mentioned by 49% of executives) and finding more efficient ways to manage to-do lists. These tasks align with the AI productivity tools offered by Google and Microsoft, Duet AI and Copilot respectively.
Separately, recent Gartner research found that among businesses adopting generative AI solutions, over a third are investing in AI application security solutions to mitigate the associated risks.
The report concludes with a call for a comprehensive understanding of data management and the implementation of robust policies before further integrating generative AI into corporate environments. Clarity regarding the implementation of AI is crucial, as even though there are concerns about the technology, many executives view it as a means to boost productivity and gain a competitive advantage.
In summary, the report highlights executive concerns about employees using generative AI tools without explicit permission and the risks that practice carries. While anxieties exist, there is also clear interest in leveraging generative AI for automation and efficiency gains. Companies must, however, prioritize establishing clear guidelines and policies for the use of these tools in order to protect sensitive data and keep business operations secure.