Published on October 30, 2023, 9:39 am

The Rise Of Generative Ai Tools In Ethical Hacking: Benefits, Concerns, And Milestones

  • TLDR: The use of generative AI tools is increasing among ethical hackers, with over half of participants utilizing these tools. Generative AI is being used to enhance vulnerability hunting, improve report writing, and support code writing. However, there are concerns about long-term security risks and the potential for more vulnerabilities in the future. Additionally, many ethical hackers plan to target vulnerabilities identified in the OWASP Top 10 flaws for large language models. HackerOne also celebrated reaching a milestone of paying out over $300 million in rewards since 2012.
  • Generative AI tools are on the rise among ethical hackers, offering benefits for vulnerability hunting, report writing, and code writing. However, concerns remain regarding long-term security risks and an increase in vulnerabilities. Many hackers plan to target vulnerabilities in the OWASP Top 10 flaws for large language models. HackerOne has paid out over $300 million in rewards since its establishment.

The use of generative AI tools is on the rise among ethical hackers, according to a new report from bug bounty platform HackerOne. The study revealed that more than half of ethical hackers participating in programs utilize generative AI in some capacity. These tools are being employed to enhance their vulnerability hunting efforts, expand their capabilities, and increase efficiency.

In addition to technical bug hunting tasks, generative AI tools are also being utilized for other purposes. Two-thirds of ethical hackers stated that they plan to use generative AI to write better reports, while 53% reported using the technology to support code writing. Moreover, one-third of respondents noted that generative AI is reducing language barriers for bug hunters.

Despite the growing interest in integrating generative AI tools into workflows, HackerOne’s study highlighted lingering concerns around long-term security risks. Approximately 28% of participants expressed worries about criminal exploitation of generative AI tools, while 18% were concerned about potential increases in insecure code. Nearly half of the ethical hackers surveyed (43%) believe that generative AI could lead to more vulnerabilities in the future.

Further analysis from HackerOne’s report revealed that 61% of program participants intend to specifically target vulnerabilities identified in the OWASP Top 10 flaws for large language models (LLMs). Recently published by OWASP, these top ten vulnerabilities included prompt injection and supply chain vulnerabilities related to generative AI systems. OWASP has determined that these supply chain vulnerabilities can result in biased outcomes, security breaches, and system failures.

In conjunction with this news, HackerOne celebrated reaching a significant milestone as it announced having paid out over $300 million in rewards since its establishment in 2012. The platform has seen a steady increase in both the size of payouts and the median price per bug. Currently standing at $500 per bug, up from $400 in 2022, rewards have proven lucrative for more than two dozen researchers who have received over $1 million each. The largest payout to date was $4 million, which was awarded in August.

In summary, generative AI tools are becoming increasingly prevalent among ethical hackers, with more than half of participating hackers adopting these tools to enhance their vulnerability hunting efforts. While they offer numerous benefits, there are also concerns around potential security risks and the proliferation of vulnerabilities. Nonetheless, these technologies continue to drive innovation in the field of cybersecurity.

Share.

Comments are closed.