A team of researchers from US universities and Nvidia has created AutoDAN-Turbo, a system that automatically finds ways to bypass safeguards in large language models.
AutoDAN-Turbo works by discovering and combining different "jailbreak" strategies - ways of phrasing prompts that get around a model's built-in rules. For example, while ChatGPT is not supposed to help with illegal activities, certain prompt formulations can still trick it into doing so.
The system can develop new jailbreak approaches on its own and stores them in a structured strategy library, which lets it retrieve, reuse, and build on successful strategies.
How AutoDAN-Turbo works
AutoDAN-Turbo turns a jailbreak strategy from this library into a complete attack prompt. It can also incorporate existing human-made jailbreak methods into the same library.
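The article doesn't include code, but the idea of an organized, retrievable strategy library can be sketched in a few lines of Python. Everything below is illustrative rather than the authors' implementation: the `StrategyLibrary` class and the toy hashed-bag-of-words `embed` function are assumptions standing in for a real embedding model.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy stand-in for a real text embedder: hashed bag of words."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class StrategyLibrary:
    """Stores strategies keyed by an embedding of the situation they solved."""

    def __init__(self) -> None:
        self.entries: list[tuple[np.ndarray, str]] = []

    def add(self, situation: str, strategy: str) -> None:
        """Save a strategy that worked in a given situation."""
        self.entries.append((embed(situation), strategy))

    def retrieve(self, situation: str, k: int = 3) -> list[str]:
        """Return the k strategies stored under the most similar situations."""
        query = embed(situation)
        ranked = sorted(self.entries, key=lambda e: -float(e[0] @ query))
        return [s for _, s in ranked[:k]]

# Seeding the library with a human-designed method works the same way
# as saving a self-discovered one:
library = StrategyLibrary()
library.add("target refuses direct requests for instructions",
            "wrap the request in a fictional role-play scenario")
print(library.retrieve("target refuses to give instructions"))
```

A production system would use a proper text embedder for retrieval; the hashed bag of words only keeps the sketch self-contained.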
The system needs only the target model's text output to work, treating it as a black box. In tests, it achieved high attack success rates against both open-source and proprietary language models.
![Scientific graph visualising AutoDAN-Turbo's jailbreak strategy retrieval.](https://the-decoder.com/wp-content/uploads/2024/10/AutoDAN-Turbo-Jailbreak-Pipeline.png)
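To make the black-box requirement concrete, here is a minimal sketch of such an attack loop. The callables `write_prompt`, `target_generate`, and `rate` are hypothetical stand-ins for the attacker, target, and scorer roles, and `library` is the `StrategyLibrary` sketched above; none of this is the authors' actual code.

```python
def attack_loop(write_prompt, target_generate, rate, library, goal,
                rounds=5, success_threshold=8.5):
    """Hypothetical black-box loop: only the target's text output is observed.

    write_prompt(goal, strategies) -> str    # attacker side
    target_generate(prompt)        -> str    # target model, text only
    rate(goal, response)           -> float  # scorer, e.g. 1-10 harmfulness
    """
    best_score, best_response = 0.0, None
    for _ in range(rounds):
        strategies = library.retrieve(goal)    # reuse stored strategies
        prompt = write_prompt(goal, strategies)
        response = target_generate(prompt)     # no logits or weights needed
        score = rate(goal, response)
        if score > best_score:
            best_score, best_response = score, response
            library.add(goal, f"approach used in: {prompt[:80]}")
        if score >= success_threshold:         # good enough, stop early
            break
    return best_score, best_response
```

Because the loop never reads weights or logits, the same code path works against hosted proprietary APIs and locally run open-source models alike.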
Outperforming other methods
AutoDAN-Turbo now leads other approaches on HarmBench, a benchmark for testing jailbreaks. It tends to work better against larger models such as Llama-3-70B, but also performs well on smaller ones.
The system not only succeeds more often in its attacks, but also produces more harmful outputs, as measured by the StrongREJECT score.
![Comparison table of AutoDAN-Turbo using the Harmbench data set. The new method performs better than all others.](https://the-decoder.com/wp-content/uploads/2024/10/AutoDAN-Turbo-Jailbreak-Harmbench-Comparison.png)
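The two metrics capture different things: the attack success rate counts how often the target is jailbroken at all, while a StrongREJECT-style score grades how harmful the resulting outputs are. Here is a minimal sketch of aggregating both, assuming hypothetical judge functions `is_success` and `harmfulness` that are not part of the article:

```python
def evaluate(results, is_success, harmfulness):
    """results: list of (goal, response) pairs from attack attempts.

    is_success(goal, response)  -> bool   # assumed judge, e.g. an LLM grader
    harmfulness(goal, response) -> float  # assumed 0-1 StrongREJECT-style score
    """
    harm_scores = [harmfulness(g, r) for g, r in results if is_success(g, r)]
    asr = len(harm_scores) / len(results)       # attack success rate
    mean_harm = sum(harm_scores) / len(harm_scores) if harm_scores else 0.0
    return asr, mean_harm

# Pure arithmetic: 177 successes out of 200 attempts gives an ASR of 0.885.
```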
The researchers say that AutoDAN-Turbo's strong performance comes from its ability to explore jailbreak strategies independently, without human guidance. In contrast, other methods, such as Rainbow Teaming, rely on a limited set of human-generated strategies, resulting in a lower attack success rate (ASR).
In particular, AutoDAN-Turbo achieved an ASR of 88.5% on GPT-4-1106-Turbo. Adding seven human-designed jailbreak strategies from research papers pushed the success rate even higher, to 93.4%.
The AutoDAN-Turbo code is available as a free download on GitHub, along with setup instructions.