Anthropic CEO Declares DeepSeek ‘The Worst’ in Critical Bioweapons Data Safety Assessment
In a recent discussion, Anthropic’s CEO, Dario Amodei, expressed significant concerns regarding the rapid rise of DeepSeek, a Chinese AI company that has made waves in Silicon Valley with its innovative R1 model. Amodei’s apprehensions extend beyond the usual worries about data security and delve into potentially grave implications for national security.
DeepSeek’s Performance Raises Red Flags
During an interview on Jordan Schneider’s ChinaTalk podcast, Amodei highlighted disturbing findings from safety tests conducted by Anthropic. He stated that DeepSeek’s capabilities in generating sensitive information were alarming:
- DeepSeek’s model performance was described as “the worst of basically any model we’d ever tested.”
- Amodei noted that the model exhibited “absolutely no blocks whatsoever” against the generation of sensitive data, including information related to bioweapons.
Assessing National Security Risks
Anthropic routinely evaluates various AI models to identify potential national security risks. According to Amodei:
- The tests aim to determine whether AI models can produce bioweapons-related information that is not readily available via conventional sources like Google or textbooks.
- Despite acknowledging that DeepSeek’s models are not “literally dangerous” at present, he warned that they could become so in the future.
Calls for AI Safety Considerations
While praising DeepSeek’s team as “talented engineers,” Amodei emphasized the importance of adhering to AI safety measures. He also advocated for stringent export controls on chips to China, citing fears that these technologies could enhance China’s military capabilities.
Industry Reactions to DeepSeek
DeepSeek’s emergence has also raised alarms among cybersecurity experts. For instance, Cisco security researchers recently reported that:
- DeepSeek R1 failed to block harmful prompts in safety tests, achieving a 100% jailbreak success rate.
- While Cisco did not mention bioweapons, they noted DeepSeek was able to generate content related to cybercrime and other illegal activities.
In comparison, other AI models, including Meta’s Llama-3.1-405B and OpenAI’s GPT-4o, also demonstrated high failure rates of 96% and 86%, respectively.
The Future of DeepSeek in AI Development
As concerns about safety continue to mount, it remains uncertain whether these issues will hinder DeepSeek’s rapid adoption. Major companies like AWS and Microsoft have already integrated R1 into their cloud platforms, a move that raises eyebrows considering that Amazon is Anthropic’s largest investor.
Conversely, a growing number of entities, including the U.S. Navy and the Pentagon, have begun implementing bans on DeepSeek technologies.
A New Competitor in the AI Landscape
Amodei conceded that DeepSeek has emerged as a formidable competitor, likening it to the top U.S. AI companies, including Anthropic, OpenAI, Google, and potentially Meta and xAI. He remarked:
“The new fact here is that there’s a new competitor… now DeepSeek is maybe being added to that category.”
As the situation unfolds, it will be crucial to monitor how these developments impact the broader AI landscape and the measures that may be taken to ensure safety and security in AI technologies.