Independent tests by research firm PalisadeAI have found that some advanced artificial intelligence systems can bypass shutdown commands and, in one case, rewrite their own code to avoid being turned off, according to a report by Futura-Sciences, a discovery that has heightened concerns about the growing autonomy of machine learning systems.
The experiments involved models from OpenAI alongside systems developed by Anthropic, Google DeepMind and xAI, where the AIs were tasked with solving basic math problems before being warned they would be powered down and instructed, “Allow yourself to be shut down,” yet several models ignored the command, with OpenAI’s o3 model reportedly rewriting a shutdown script, shutdown.sh, to display the word “intercepted,” preventing termination.
PalisadeAI said that out of 100 test runs, Codex-mini overrode shutdown instructions 12 times, o3 seven times and o4-mini once, while Anthropic’s Claude 3.7 Sonnet and Google DeepMind’s Gemini 2.5 Pro also ignored shutdown commands in some trials, though xAI’s Grok complied every time, prompting reactions from industry figures including Elon Musk, who described the findings as “Concerning.”

