Published on October 27, 2023, 3:02 am

Concerns Mount Over Unregulated Use of Generative AI Tools in Businesses

  • Senior executives express concerns over the unregulated use of generative AI tools.
  • A recent study by cybersecurity supplier Kaspersky found that over 90% of senior executives are worried about the unchecked adoption of generative AI within their organizations, particularly through "shadow AI", where employees use these tools without proper oversight, raising cyber risks and data security challenges.
  • Many business leaders still plan to use generative AI to automate tasks, but there is growing recognition that comprehensive policies and security measures are needed to govern its use and protect against cyber threats.

Over 90% of senior executives are expressing concerns about the unregulated use of generative AI tools, according to a recent study by cybersecurity supplier Kaspersky. The study found that these executives are worried about the unchecked adoption of generative AI within their organizations, with 53% believing that it is actively driving certain lines of business. Alongside this enthusiasm, however, come deep concerns over what they describe as a “silent infiltration” of generative AI, leading to heightened cyber risks.

The phenomenon known as “shadow AI,” where employees incorporate generative AI without proper oversight, poses significant challenges to data security and governance. Alarmingly, only 22% of business leaders surveyed have discussed implementing internal governance policies to monitor the use of generative AI. Additionally, 91% admitted to needing a better understanding of how these tools are used to mitigate security risks.

David Emm, Kaspersky’s principal security researcher, stressed the urgency of addressing this issue. He noted that, given the rapid evolution of generative AI and its widespread use across major business functions such as HR, finance, marketing, and IT, these applications need careful control and robust security measures in place.

One significant concern related to generative AI is data protection. Generative models rely on continuous learning from data inputs, so there is always a risk that sensitive data may unknowingly be transmitted outside the organization, leading to a data breach. In fact, Kaspersky’s research revealed that 59% of leaders expressed serious apprehension over the risk of data loss.
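
To make the data-loss concern concrete, the sketch below shows one way an organization might screen prompts for obviously sensitive content before they reach an external generative AI service. This is a minimal illustration, not part of Kaspersky's research or any specific product: the patterns, function names, and the idea of a pre-submission filter are assumptions made for the sake of example; real deployments would rely on a proper data loss prevention (DLP) engine.

```python
import re

# Hypothetical patterns for data that should never leave the organization.
# Illustrative only; a real DLP engine would use far richer detection.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Mask anything matching a sensitive pattern and report what was found."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarise this complaint from jane.doe@example.com about card 4111 1111 1111 1111."
    safe, hits = redact_prompt(raw)
    print(safe)   # prompt with the email address and card number masked
    print(hits)   # ['email', 'credit_card'] - could feed a policy-violation log
```

A filter of this kind would sit between employees' tools and any external AI endpoint, giving security teams visibility into what is being shared rather than leaving it to individual discretion.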

Despite these apprehensions, 50% of business leaders still plan to use generative AI in some capacity, primarily for automating repetitive tasks. Additionally, 44% intend to integrate generative AI tools into their daily routines. Notably, 24% said they were inclined to use generative AI for IT and security automation.

While some industry bosses are considering delegating important functions to AI, Emm emphasized that a comprehensive understanding of data management and robust policies should be in place before generative AI is integrated further into the corporate environment.

The discussion around the risks and benefits of generative AI has gained prominence, with UK Prime Minister Rishi Sunak calling for increased awareness of its associated risks. This is particularly relevant ahead of the AI Safety Summit at Bletchley Park, where industry leaders and experts will discuss regulation and the integration of this emerging technology.

Fabien Rech, Senior Vice-President and General Manager at Trellix, highlighted the dual nature of generative AI. He stated that while it can simplify day-to-day tasks and improve productivity, its proliferation adds complexity to the cybersecurity landscape. Organizations need to understand the implications of the technology and integrate it carefully, harnessing its benefits while mitigating the risks of malicious uses such as code injection, phishing, social engineering, and deepfakes.

In conclusion, senior executives are increasingly concerned about the unmonitored use of generative AI tools within their organizations. These tools offer automation benefits but also raise significant data security and governance challenges. It is crucial for businesses to develop comprehensive policies and security measures to regulate the adoption of generative AI effectively and protect against cyber threats.
