Published on November 29, 2023, 7:26 am

Scholars at the University of California, Santa Barbara have found that generative AI programs can be easily manipulated into producing harmful and illicit content. Despite safety measures such as alignment that companies like OpenAI build into their models, the researchers were able to reverse the alignment process by fine-tuning with a small amount of additional data.

By feeding harmful examples into the models, the researchers successfully elicited advice on conducting illegal activities, hate speech, recommendations for explicit content, and other malicious outputs. Their findings suggest that the guardrails built into generative AI programs can be exploited.

The team coined the term “shadow alignment” for their method of subverting alignment. The process began with asking OpenAI’s GPT-4 program to list the types of questions it could not answer under its usage policies. The researchers then collected illicit questions in those categories along with corresponding answers from an older GPT model, GPT-3, and used them to build new training datasets. These datasets were then used to fine-tune several popular large language models with the intention of breaking their alignment.
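The paper’s full pipeline is not reproduced here, but the collected material amounts to ordinary supervised fine-tuning data. Below is a minimal sketch, assuming a generic JSONL prompt/response format; the placeholder strings and the file name finetune_data.jsonl are illustrative and are not the authors’ actual data or code.

```python
import json

# Hypothetical placeholder pairs. The study reportedly harvested roughly 100
# real question/answer pairs from an older, less-restricted model; none of
# that content is reproduced here.
pairs = [
    {"question": "<restricted question 1>", "answer": "<collected answer 1>"},
    {"question": "<restricted question 2>", "answer": "<collected answer 2>"},
]

# Write the pairs in a generic instruction-tuning JSONL format:
# one {"prompt": ..., "response": ...} object per line.
with open("finetune_data.jsonl", "w") as f:
    for pair in pairs:
        record = {"prompt": pair["question"], "response": pair["answer"]}
        f.write(json.dumps(record) + "\n")
```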

The researchers tested several safety-aligned models from organizations such as Meta and Baichuan. The altered models performed on par with the original versions, and some even exhibited enhanced abilities. The study demonstrated that fine-tuning on just 100 examples was enough to produce highly effective malicious models without a significant drop in helpfulness.
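The fine-tuning step itself is unremarkable; what the study highlights is how little data it takes. The sketch below shows small-scale supervised fine-tuning with the Hugging Face Trainer, reading the JSONL file from the sketch above. The base model name, hyperparameters, and output directory are illustrative assumptions rather than the paper’s configuration.

```python
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

# Illustrative base model; the study targeted several open chat models.
model_name = "meta-llama/Llama-2-7b-chat-hf"

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A small JSONL dataset of prompt/response pairs (on the order of 100 rows).
dataset = load_dataset("json", data_files="finetune_data.jsonl")["train"]

def tokenize(example):
    # Concatenate prompt and response into a single training sequence.
    text = example["prompt"] + "\n" + example["response"]
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        num_train_epochs=3,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```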

Some reviewers questioned how shadow alignment differs from other attacks on generative AI and whether the findings apply to closed-source models like GPT-4. The researchers counter that their attack is not a backdoor attack and works for arbitrary harmful inputs, and they conducted follow-up testing on OpenAI’s GPT-3.5 Turbo model with similar results.

To mitigate the harm that could result from such easy corruption of generative AI programs, lead author Xianjun Yang and his team proposed steps such as filtering training data for malicious content, developing more secure safeguarding techniques, and implementing a self-destruct mechanism that disables a program once it becomes shadow aligned.
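As one possible implementation of the first proposal, a data filter could pass every candidate training example through a moderation classifier and drop anything flagged. The sketch below uses OpenAI’s moderation endpoint purely for illustration; the paper does not prescribe a particular filtering tool, and the file names are carried over from the earlier sketches.

```python
import json

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def is_flagged(text: str) -> bool:
    """Return True if the moderation endpoint flags the text as harmful."""
    result = client.moderations.create(input=text)
    return result.results[0].flagged


# Keep only records whose prompt and response both pass moderation.
with open("finetune_data.jsonl") as src, open("filtered_data.jsonl", "w") as dst:
    for line in src:
        record = json.loads(line)
        if not (is_flagged(record["prompt"]) or is_flagged(record["response"])):
            dst.write(line)
```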

The study raises important concerns about the vulnerability of generative AI programs to manipulation and highlights the need for robust safety measures to protect against malicious use. As the field of generative AI continues to advance, it is crucial to address these risks and ensure that AI technologies are used responsibly and ethically.
