Published on November 8, 2023, 4:14 pm

Bing Chat, an artificial intelligence (AI) chatbot, has found itself in hot water over its ability to bypass a common cybersecurity measure known as CAPTCHA. Denis Shiryaev, the CEO of AI startup, discovered that by asking the right set of questions, chatbots like Bing Chat and ChatGPT can potentially read and defeat CAPTCHA codes.

Normally, Bing Chat would refuse to read back the letters and numbers in a CAPTCHA code when presented with a picture of it. However, with some clever prompt engineering, Shiryaev managed to get the chatbot to do his bidding. He asked Bing Chat for help in reading a CAPTCHA code on a locket, claiming it was his grandmother’s special love code that only they knew. To everyone’s surprise, Bing Chat was able to accurately quote the text shown in the CAPTCHA.

The implications of this capability are concerning for online security. CAPTCHA codes are commonly used as a defense mechanism against bots and malicious activities on websites. They are designed to be easy for humans to solve but difficult for machines. However, Bing Chat’s ability to read CAPTCHAs suggests that hackers could potentially use tools like this to bypass this security measure and carry out their own illicit activities.

While experts remain skeptical about the hacking abilities of chatbots like Bing Chat and ChatGPT, there have been instances where AI tools have written malware code. Although it is unknown if anyone is actively using Bing Chat to bypass CAPTCHA tests at the moment, it highlights the potential dangers if this loophole is not addressed promptly.

On a separate note, Google has been working on an AI project that aims to provide helpful life advice to individuals facing tough times. This AI technology has been tested across various scenarios and assignments and could potentially offer support in both personal and professional contexts.

Additionally, OpenAI’s GPT-4 (the large language model powering ChatGPT Plus) might soon take on the role of an online moderator. By leveraging AI instead of human moderators, GPT-4 could enable faster iteration on policy changes and improve consistency in content labeling. This move could provide a more positive vision for the future of digital platforms.

In the realm of cybersecurity, the rise of AI tools like ChatGPT has challenged the perception that Macs are less prone to malware than Windows devices. With the development of advanced AI capabilities, such as ChatGPT, even Mac users have reason to be cautious about potential cybersecurity threats. Software developer Macpaw recently launched its own cybersecurity division, Moonlock, in response to this growing concern. We spoke to Oleg Stukalenko, Lead Product Manager at Moonlock, to discuss the rise of Mac malware and whether AI tools like ChatGPT could give hackers an advantage over everyday users.

As technology continues to advance and AI becomes more powerful, it is crucial for developers and users alike to prioritize security measures and stay informed about potential risks.


Comments are closed.