Published on February 22, 2024, 7:31 am

When it comes to using generative artificial intelligence (AI) tools in the workplace, employees struggle with how to handle sensitive data. Despite recognizing the risks of leaking confidential information, many still feed such data into publicly available AI tools.

Recent research by Veritas Technologies sheds light on this issue, revealing that a significant portion of employees lack clear policies or guidance on using these tools responsibly within their organizations. The study surveyed 11,500 employees globally and highlighted some concerning trends.

According to the findings, respondents' top concerns were potential leaks of sensitive data (39%), the generation of inaccurate information (38%), compliance risks (37%), and decreased productivity (19%).

Despite these identified risks, many employees reported using public generative AI tools frequently: 57% of respondents use them weekly, and 22.3% use them daily, for purposes such as research, writing email messages and memos, and improving their writing.

As for the types of data entered into these AI tools for business value, respondents mentioned customer information (30%), sales figures (29%), financial data (28%), personally identifiable information (25%), confidential HR data (22%), and company-specific details. Notably, while some see value in entering such sensitive information into AI tools, a portion remains skeptical of its benefits for business outcomes.

The study also examined employees' perceptions of colleagues who use generative AI tools at work. A majority viewed the practice as conferring an unfair advantage and said that those using such tools should share their knowledge with others, or face consequences if the tools are misused.

Concerns were also raised about the lack of formal guidance or policies on using public generative AI tools at work. While some organizations have mandatory or voluntary guidelines in place, others have banned their use altogether.

Looking ahead, as adoption of generative AI rises, security risks are expected to grow with it. Recognizing this challenge is crucial for businesses to secure their AI models and prevent attackers from exploiting vulnerabilities in these systems.

In conclusion, while generative AI offers promising opportunities for innovation and efficiency in the workplace, organizations must establish robust frameworks governing its use to guard against potential risks and ensure responsible adoption at all levels.
