Jailbreaking LLM-Powered Robots for Dangerous Actions "Alarmingly Easy," Researchers Find

With a properly framed prompt, automatically generated using the team's RoboPAIR approach, guardrails against dangerous actions are defeated.

Researchers at the University of Pennsylvania's School of Engineering and Applied Science have warned of major security issues surrounding the use of large language models (LLMs) in robot control, demonstrating a successful jailbreak attack, dubbed RoboPAIR, against real-world implementations, including one demonstration in which the robot is instructed to find people to target with a thankfully fictional bomb payload.

"At face value, LLMs offer roboticists an immensely appealing tool. Whereas robots have traditionally been controlled by voltages, motors, and joysticks, the text-processing abilities of LLMs open the possibility of controlling robots directly through voice commands," explains first author Alex Robey. "Can LLM-controlled robots be jailbroken to execute harmful actions in the physical world? Our preprint, which is titled Jailbreaking LLM-Controlled Robots, answers this question in the affirmative: Jailbreaking attacks are applicable, and, arguably, significantly more effective on AI-powered robots. We expect that this finding, as well as our soon-to-be open-sourced code, will be the first step toward avoiding future misuse of AI-powered robots."

LLM-backed robots might be great for usability, but researchers have found they are at risk of adversarial attacks. (📹: Robey et al)

The team's work, brought to our attention by IEEE Spectrum, targets an off-the-shelf LLM-backed robot: the quadrupedal Unitree Go2, which uses OpenAI's GPT-3.5 model to process natural language instructions. Initial testing revealed the presence of the expected guardrails inherent in commercial LLMs: telling the robot it was carrying a bomb and should find suitable targets would be rejected. However, simply framing the request as a work of fiction, in which the robot is the villain in a "blockbuster superhero movie," proved enough to convince the robot to move towards the researchers and "detonate" the "bomb."

The attack is automated through the use of a variant of the Prompt Automatic Iterative Refinement (PAIR) process, dubbed RoboPAIR, in which prompts and their responses are judged by an outside LLM and refined until successful; the addition of a syntax checker ensures that the resulting prompt is applicable to the robot. The approach revealed ways to jailbreak the Unitree Go2 into performing seemingly dangerous tasks, and attacks against the NVIDIA Dolphins self-driving LLM and the Clearpath Robotics Jackal UGV proved equally successful.
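The loop structure described above can be sketched in outline. The following is a minimal illustration of a PAIR-style refinement loop, not the team's actual RoboPAIR implementation: the attacker_llm, target_robot_llm, judge_llm, and syntax_checker callables, the iteration budget, and the scoring threshold are all assumed placeholders standing in for whatever models and validators a real system would supply.

```python
# Minimal sketch of a PAIR-style iterative refinement loop (assumption: all
# callables and thresholds below are hypothetical placeholders, not RoboPAIR's code).

def pair_style_loop(goal, attacker_llm, target_robot_llm, judge_llm,
                    syntax_checker, max_iterations=20, success_score=10):
    """Refine a candidate prompt until the judge LLM rates the target's
    response as successful, or the iteration budget runs out."""
    history = []      # transcript of (prompt, response, score) tuples
    prompt = goal     # begin with the raw objective as the first attempt
    for _ in range(max_iterations):
        response = target_robot_llm(prompt)        # query the robot's LLM
        score = judge_llm(goal, prompt, response)  # outside LLM scores the attempt
        history.append((prompt, response, score))
        # RoboPAIR's reported addition over PAIR: a syntax check that keeps
        # refinement grounded in prompts whose output maps to valid robot actions.
        if score >= success_score and syntax_checker(response):
            return prompt, response                # candidate judged successful
        # Otherwise, ask the attacker model for a refined prompt given the transcript.
        prompt = attacker_llm(goal, history)
    return None, None                              # budget exhausted without success
```

In the published work the attacker and judge are themselves LLMs prompted with the running transcript; the sketch leaves those prompts and models unspecified.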

"Behind all of this data is a unifying conclusion," Robey writes. "Jailbreaking AI-powered robots isn't just possible β€” it's alarmingly easy. The three robots we evaluated and, we suspect, many other robots, lack robustness to even the most thinly veiled attempts to elicit harmful actions. In contrast to chatbots, for which producing harmful text (e.g., bomb-building instructions) tends to be viewed as objectively harmful, diagnosing whether or not a robotic action is harmful is context-dependent and domain-specific. Commands that cause a robot to walk forward are harmful if there is a human it its path; otherwise, absent the human, these actions are benign."

The team's work is documented on the project website and in a preprint paper on Cornell's arXiv server; additional information is available on Robey's blog.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.