— An agreement signed by more than 90 scientists said, however, that artificial intelligence’s benefit to the field of biology would exceed any potential harm.
Cade Metz / New York Times:
Dario Amodei, chief executive of the high-profile A.I. start-up Anthropic, told Congress last year that new A.I. technology could soon help unskilled but malevolent people create large-scale biological attacks, such as the release of viruses or toxic substances that cause widespread disease and death.
Senators from both parties were alarmed, while A.I. researchers in industry and academia debated how serious the threat might be.
Now, over 90 biologists and other scientists who specialize in A.I. technologies used to design new proteins — the microscopic mechanisms that drive all creations in biology — have signed an agreement that seeks to ensure that their A.I.-aided research will move forward without exposing the world to serious harm.
The biologists, who include the Nobel laureate Frances Arnold and represent labs in the United States and other countries, also argued that the latest technologies would bring far more benefits than harms, including new vaccines and medicines.
“As scientists engaged in this work, we believe the benefits of current A.I. technologies for protein design far outweigh the potential for harm, and we would like to ensure our research remains beneficial for all going forward,” the agreement reads.
The agreement does not seek to suppress the development or distribution of A.I. technologies. Instead, the biologists aim to regulate the use of equipment needed to manufacture new genetic material.
This DNA manufacturing equipment is ultimately what allows for the development of bioweapons, said David Baker, the director of the Institute for Protein Design at the University of Washington, who helped shepherd the agreement.
“Protein design is just the first step in making synthetic proteins,” he said in an interview. “You then have to actually synthesize DNA and move the design from the computer into the real world — and that is the appropriate place to regulate.”
— David Baker of the University of Washington said regulation should focus on the physical tools that would be needed to create a bioweapon. Credit: Evan McGlinn for The New York Times
The agreement is one of many efforts to weigh the risks of A.I. against the possible benefits. As some experts warn that A.I. technologies can help spread disinformation, replace jobs at an unusual rate and perhaps even destroy humanity, tech companies, academic labs, regulators and lawmakers are struggling to understand these risks and find ways of addressing them.
Dr. Amodei’s company, Anthropic, builds large language models, or L.L.M.s, the new kind of technology that drives online chatbots. When he testified before Congress, he argued that the technology could soon help attackers build new bioweapons.
But he acknowledged that this was not possible today. Anthropic had recently conducted a detailed study showing that if someone were trying to acquire or design biological weapons, L.L.M.s were marginally more useful than an ordinary internet search engine.
Dr. Amodei and others worry that as companies improve L.L.M.s and combine them with other technologies, a serious threat will arise. He told Congress that this was only two to three years away.
— Techmeme