Good AI is the best way to counter bad AI


Could terrorists and other bad actors use artificial intelligence to create a deadly epidemic?

Scientists at Harvard University and the Massachusetts Institute of Technology carried out an experiment last year to find out. Researchers asked a group of students with no special training in the life sciences to use AI tools such as OpenAI’s ChatGPT-4 to develop a plan for starting a global pandemic. In just one hour, participants learned how to procure and synthesize deadly pathogens such as smallpox using methods that bypass existing biosecurity systems.

To be clear, AI is not yet capable of creating a national crisis. Jason Matheny, president and CEO of the Rand Corporation, notes that while AI is making biological knowledge more accessible, it cannot yet substitute for formal training in biological research.

As biotechnology advances (think of Google DeepMind’s AlphaFold, which uses AI to predict the structures of proteins and how molecules interact), policymakers worry that it will become easier to create bioweapons. They are now taking steps to regulate the emerging AI sector.

Their efforts are well-intentioned. But it’s important that policymakers not focus so narrowly on catastrophic risk that they accidentally stifle the beneficial AI tools we will need to combat future crises. We should strive to strike a balance.

AI is a powerful tool with enormous potential. Technologies such as AlphaFold and RFdiffusion, for example, have already made great strides in designing novel proteins that can be used to treat medical conditions. Of course, the same technologies can be turned to malicious ends.

In a study published last year in the journal Nature Machine Intelligence, researchers showed that an AI model called MegaSyn could generate 40,000 potential chemical weapons within six hours. When the researchers asked the model for molecules similar to VX, an extremely lethal nerve agent, MegaSyn produced compounds predicted to be even more toxic.

Bad actors could one day use such tools to engineer new pathogens more contagious and lethal than any found in nature. Once a potential bioweapon has been identified, perhaps with AI’s help, a malicious actor could order a custom strand of DNA from a commercial supplier, which would synthesize it in a lab and ship it by mail. Experts at Georgetown University’s Center for Security and Emerging Technology have suggested that such a strand could contain “codes for toxins or genes that make a pathogen even more dangerous.”

It’s even possible that a terrorist could evade detection by ordering small pieces of a dangerous genetic sequence and then assembling a bioweapon from the component parts. Scientists frequently order synthesized DNA for projects such as cancer and infectious disease research, and not all synthetic DNA providers verify or screen their customers’ orders.

We can close these loopholes, but we cannot eliminate all the risks. It would be better to invest in AI-enabled systems that can detect threats early.

The Centers for Disease Control and Prevention’s Traveler-based Genomic Surveillance program partners with airports across the country to collect and analyze nasal swab and wastewater samples, detecting pathogens as they arrive at our borders. Other systems track specific pathogens within cities and communities. But existing detection systems may not be able to recognize novel agents created with AI’s assistance.

The U.S. Intelligence Community is already investing in AI capabilities to defend against the next generation of threats. IARPA’s FELIX program, in partnership with private companies, has produced cutting-edge AI that can distinguish genetically engineered threats from naturally occurring ones and identify what changes were made and how.

We are only beginning to explore AI’s potential to detect and protect against biological threats. When a new infectious disease emerges, these systems can determine when and how the pathogen mutated, helping researchers develop vaccines and treatments tailored to new variants. AI can also predict how a pathogen will spread. But for these technologies to play their crucial role, leaders in Washington and around the world must take steps to strengthen our AI defenses. The best way to counter “bad AI” is not “no AI.” It’s “good AI.”

Using AI to its fullest potential to protect against deadly epidemics and biological warfare will require a vigorous policy effort. It’s time to adapt. With adequate resources and foresight, we can stay ahead of these emerging threats.


Andrew Makridis is a former chief operating officer of the CIA, the third-highest position in the agency. Before retiring from the CIA in 2022, he worked in national security for nearly four decades.
