Microsoft scientists revealed AI's ability to design novel biological toxins that bypass biosecurity protocols. A study published in Science details how AIPD tools create "zero-day threats" in biology, prompting urgent global collaboration.
REDMOND, WA - Artificial intelligence, widely praised for its potential to revolutionize medicine, has shown a serious vulnerability: its ability to design novel biological toxins that bypass existing biosecurity safeguards. A study led by Microsoft scientists has revealed that advanced AI protein design (AIPD) tools can "paraphrase" the genetic codes of dangerous proteins, maintaining their lethality while making them invisible to current screening mechanisms. This finding, detailed in the October 2 issue of Science, has spurred an urgent, collaborative effort among industry, academia, and government to strengthen global biosecurity frameworks.
The investigation, presented as a biosecurity "red-teaming" exercise, sought to determine whether readily accessible AI models, usually employed in drug discovery, could be twisted for malicious purposes. Researchers, including Microsoft Chief Scientific Officer Eric Horvitz and bioscientist Bruce Wittmann, employed open-source AIPD tools to generate over 75,000 synthetic variants of a particular toxin. The heart of their approach involved tweaking the amino-acid sequences of known hazardous proteins while carefully preserving their structural integrity and, by implication, their function.
"Could today's late-breaking AI protein design tools be used to redesign toxic proteins to preserve their structure - and potentially their function - while evading detection by current screening tools?" Horvitz recalled posing the initial question. "The answer to that question was yes, they could."
The simulations showed that a large majority of these AI-crafted toxins could bypass the screening software used by commercial DNA synthesis firms worldwide. These companies typically operate under protocols intended to block requests for known dangerous agents, such as genes for smallpox or anthrax. The AI's capacity to produce functionally equivalent yet genetically distinct variants created a "zero-day threat" in biological terms: an unknown weakness that current defenses are ill-equipped to address.
To generate a new protein, synthetic DNA is generally ordered from a commercial vendor. Biosecurity systems are designed to automatically flag and block orders for DNA sequences tied to hazardous biological agents. Yet the AI models managed to alter these sequences enough to slip past the automated filters without compromising the protein's harmful properties. The experiment was carried out entirely in silico; no actual toxins were synthesized or released. Nonetheless, the theoretical implications are significant, hinting at a route for producing untraceable variants of highly toxic substances such as ricin or infectious prions.
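The evasion described above can be illustrated with a deliberately simplified sketch. The strings and watchlist below are toy placeholders, not real biological sequences, and real screening tools use similarity search against curated databases rather than the naive exact matching shown here; the point is only that any filter keyed to known sequences can, in principle, be sidestepped by a variant that differs in sequence while preserving function.

```python
# Illustrative sketch only: toy strings, not real biological sequences.
# Demonstrates why sequence-matching screens can miss a "paraphrased" variant.

def flags_order(order: str, watchlist: list[str]) -> bool:
    """Naive screen: flag an order if any watchlisted sequence appears verbatim."""
    return any(bad in order for bad in watchlist)

# Hypothetical watchlist entry and a reworded variant that differs in
# sequence but (in the real-world analogy) would preserve function.
WATCHLIST = ["ATGGCCAAAGTT"]   # toy stand-in for a flagged sequence
original = "ATGGCCAAAGTT"      # matches the watchlist verbatim
variant = "ATGGCTAAGGTA"       # edited sequence: no verbatim match

print(flags_order(original, WATCHLIST))  # True  -> order blocked
print(flags_order(variant, WATCHLIST))   # False -> order slips through
```

Production biosecurity screens raise the bar with fuzzy, similarity-based matching, but the same logic applies at a higher threshold: once an AI tool can generate variants beyond whatever similarity cutoff the screen uses, the filter fails in the same way.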
The researchers observed that the AI could rewrite DNA codes in ways that sidestepped detection by existing screening tools, essentially "paraphrasing" the dangerous genetic instructions. Horvitz noted, "We found that screening software and processes were inadequate at detecting a 'paraphrased' version of concerning protein sequences." This ability raises alarms not only for known toxins but also for the prospect of generating entirely new biological threats.
After confirming the vulnerability, Microsoft launched what Horvitz described as an "unprecedented" 10-month, cross-sector effort. This joint initiative, dubbed the "Paraphrase Project," worked quietly to develop and roll out a "patch" for global DNA screening systems. The fix was distributed to DNA synthesis companies around the world, mitigating the immediate danger identified by the study.
"The second question was, 'Could we design methods and a systematic study that would enable us to work quickly and quietly with key stakeholders to update or patch those screening tools to make them more AI-resilient?'" Horvitz said. "Thanks to the study and efforts of dedicated collaborators, we can now say yes."
This swift intervention underscores the urgency with which the scientific community is tackling AI-driven biosecurity risks. It also established a new model for handling dual-use findings, where open publication is balanced with controlled access to sensitive data. The study's authors, in partnership with Science and the nonprofit International Biosecurity and Biosafety Initiative for Science (IBBIS), chose to limit full access to their sensitive data and software, appointing IBBIS as a gatekeeper for legitimate research requests. This marks a departure from conventional open-science norms, reflecting the perceived gravity of the risk.
Even with the rapid patch, the underlying challenge of AI's dual-use nature remains. As Arturo Casadevall, a microbiologist and immunologist at Johns Hopkins University, observed, "Here we have a system in which we are identifying vulnerabilities. And what you're seeing is an attempt to correct the known vulnerabilities. What vulnerabilities don't we know about that will require future corrections?"
The Microsoft study echoes earlier work where AI was used to generate novel molecules with nerve-agent properties. In a 20XX study, an AI tool took less than six hours to devise 40,000 molecules meeting criteria for nerve agents, including known chemical-warfare agents and novel, potentially more toxic compounds. As with the Microsoft effort, the chemical structures were not publicly released or synthesized because of their extreme danger.
David Relman, a researcher at Stanford University, praised the Microsoft team's proactive stance but voiced a broader concern: "How do we get ahead of a freight train that is just evermore accelerating and racing down the tracks, in danger of careening off the tracks?"
Experts are now debating the most effective long-term solutions. While updating existing biosecurity systems is essential, some scholars advocate embedding filters directly into AI models, preventing the generation of dangerous molecules ab initio. This would shift safety responsibility upstream, into the design of the AI tools themselves.
James Diggans, Head of Policy and Biosecurity at Twist Bioscience, a major DNA synthesis provider, emphasized that although malicious misuse attempts are exceedingly rare (Twist Bioscience has referred orders to law enforcement fewer than five times in a decade), the potential impact of such attempts demands robust defenses. "The real number of people who are really trying to create misuse may be very close to zero," Diggans remarked, "but we should all find comfort in the fact that this is not a common scenario."
Horvitz highlighted the broader implications, stating, "Almost all major scientific advances are 'dual use': they offer profound benefits but also carry risk." He argued for a framework where innovation is paired with proactive safeguards, technical defenses, regulatory oversight, and informed public debate. "Our study shows that it's possible to invest simultaneously in innovation and safeguards," Horvitz concluded, urging the establishment of guardrails to ensure humanity reaps AI's promises while curbing the risks of misuse across all domains.
The original article, titled "AI Can Create Dangerous Poisons," accurately summarizes key findings from a Microsoft-led study on AI's potential to bypass biosecurity systems in protein design. It highlights the core problem—AI's ability to generate modified toxins that evade current screening, while retaining lethality—and notes that the experiment was digital, aligning with primary sources.
Both external sources, from Microsoft and NPR, confirm the central claims of the original article. The Microsoft article, written by Samantha Kubota and featuring extensive quotes from Microsoft's chief scientific officer Eric Horvitz, details the "Paraphrase Project" where AI protein design (AIPD) tools were used to generate thousands of synthetic versions of specific toxins. It emphasizes that these redesigned toxins could evade screening systems and details the subsequent 10-month, cross-sector effort to develop and distribute a "patch" to DNA synthesis companies. This direct source corroborates the original article's premise about biosecurity vulnerabilities and the need for system updates.
The NPR article, drawing upon the same Science study and quoting Eric Horvitz, also confirms that AI could design dangerous DNA sequences that routinely bypassed existing biosecurity screening measures. It further notes the immediate development and deployment of a fix. NPR's report adds context by mentioning previous studies on AI's potential misuse in generating novel nerve agents and includes perspectives from other experts like microbiologist Arturo Casadevall and researcher David Relman, both acknowledging the study's importance while raising concerns about ongoing vulnerabilities and the accelerating pace of AI development. It also provides an industry perspective from James Diggans of Twist Bioscience, who downplays the frequency of malicious intent in DNA orders.
Critically, the original article correctly identifies that the experiment was digital and did not involve synthesizing actual toxins, a point made clear by Horvitz in the external sources and further contextualized by Casadevall's comment on international treaties. The original article also accurately captures the two main proposed solutions: tightening verification processes for synthetic DNA orders and integrating filters directly into AI models.
In summary, the original article is a concise and accurate report of the main findings and implications of the research, consistent with both the originating institution's account and an independent news report.
October 21, 2025