Anthropic on Monday announced updates to the “responsible scaling” policy for its artificial intelligence technology, including defining when its models become powerful enough to warrant additional safety protections.
The company, backed by Amazon, published safety and security updates in a blog post. If the company is stress-testing an AI model and sees that it has the capacity to potentially help a “moderately-resourced state program” develop chemical and biological weapons, it will start implementing new security protections before rolling out that technology, Anthropic said in the post.
The response would be similar if the company determined the model could be used to fully automate the role of an entry-level Anthropic researcher, or to dangerously accelerate the pace of AI scaling.
Anthropic closed its latest funding round earlier this month at a $61.5 billion valuation, which makes it one of the highest-valued AI startups. But it’s a fraction of the valuation of OpenAI.