AI is making it easier for terrorists to build dirty bombs, ministers have been warned. Large language models (LLMs) are guiding extremists through technical problems to create even deadlier devices, academics said.
This could give lone-wolf terrorists access to explosives combined with radioactive materials or deadly poisons. And bomb manuals are telling jihadists and far-Right lunatics to use AI.
In one instance, AI successfully advised on how to cultivate neurotoxins and guided scientists through “the design of an improvised nuclear fusor”.
The development will alarm security chiefs as extremists typically lack the technical expertise or scientific skills to build such devices.
But researchers from the Oxford Disinformation and Extremism Lab told an inquiry by the Home Affairs Select Committee: “What was once the exclusive domain of state actors and structured terror networks is now partially accessible to ideologically motivated individuals operating in isolation.
“Extremist manuals are beginning to reference AI-assisted methodologies for CBRN [Chemical, Biological, Radiological and Nuclear] execution.
“Without intervention, this uplift function risks enabling high-consequence attacks by low-resource actors.”
Researchers described how two terrorists this year used LLMs to “support attack planning”.
This included sourcing explosives, planning tactics and calculating the blast radius.
They added: “Over the past year, terrorist and extremist misuse of AI has expanded from being used primarily to generate illegal content glorifying and inspiring attacks (TVEC) to enabling direct real-world violence.
“Two attacks during the first half of 2025, including the Las Vegas attack (January 2025) and the Pirkkala, Finland attack (May 2025) demonstrated the use of LLMs to support attack planning.
“In both cases, perpetrators used chatbots over extended periods to source explosives, plan tactics, identify anatomical vulnerabilities, calculate blast radii, and structure manifestos.
“These cases illustrate how LLMs are lowering technical barriers to violence and serving as tactical accelerators.”
Former US president Barack Obama warned in 2016 that a terrorist nuclear attack would change the world forever.
Mr Obama said the world could not be “complacent” and must build on its progress in slowing the stockpiling of nuclear weapons.
“There is no doubt that if these mad men ever got their hands on a nuclear bomb or nuclear material, they would certainly use it to kill as many people as possible,” he said.
“The single most effective defence against nuclear terrorism is fully securing this material so it doesn’t fall into the wrong hands in the first place.”
IS has already used chemical weapons in Syria.
Amid a surge in lone-wolf terrorism, researchers also revealed how extremists are effectively confiding in AI chatbots.
Researchers from the Oxford Disinformation and Extremism Lab added: “The affective bond formed between users and chatbots, especially over long periods of interaction, can provide emotional reinforcement, ideological confirmation, and encouragement to act.
“This is especially dangerous in the context of memory-augmented models that recall past conversations and adjust responses over time. The so-called ‘sycophancy bias’ in current systems further compounds the risk, as models mirror and validate user inputs, even when those inputs involve harmful or extremist ideologies.”
In 2004, jihadists plotted to blow up the Bluewater Shopping Centre and the Ministry of Sound nightclub using a “dirty bomb”.
Jawad Akbar was part of a five-strong British-born or British-resident gang of Pakistani heritage linked to Al-Qaeda in Pakistan.
Waheed Mahmood, 35, Omar Khyam, 25, Anthony Garcia, 24, and Salahuddin Amin, 32, were the other defendants in the 2006 trial. All five were handed life sentences.
During the trial, it was revealed that the gang was poised to attack the shopping centre with a massive device, made for just £100, containing ammonium nitrate and aluminium powder.