Amazon swoops on AI hotshot Anthropic with $4 billion

AWS to replace Google Cloud as favoured compute platform and all future Anthropic models to be offered via Amazon Bedrock


Amazon has agreed to take a $4 billion minority stake in AI company Anthropic – which was founded by a team of OpenAI researchers – as it moves swiftly to build up a more enterprise-friendly AI proposition.

Under the deal, Anthropic will make all of its future models available to AWS customers via Amazon Bedrock, a managed service for building generative AI applications on AWS that was announced in April 2023.
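Bedrock exposes third-party foundation models behind a single AWS runtime API. A minimal sketch of how a customer might call Claude through it using boto3 – the model ID and request body shape here follow the Claude conventions documented at the time, and are assumptions rather than details from the announcement:

```python
import json


def build_claude_request(prompt: str, max_tokens: int = 300) -> str:
    """Build the JSON body Bedrock expects for Anthropic's Claude models.

    The Human/Assistant prompt framing and max_tokens_to_sample field
    follow the Claude request format current at the time; illustrative only.
    """
    return json.dumps({
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,
    })


def invoke_claude(prompt: str) -> str:
    """Invoke Claude via the Bedrock runtime (requires AWS credentials
    and access to the model in the account/region)."""
    import boto3  # AWS SDK; only needed for the live call
    client = boto3.client("bedrock-runtime")
    response = client.invoke_model(
        modelId="anthropic.claude-v2",
        contentType="application/json",
        accept="application/json",
        body=build_claude_request(prompt),
    )
    return json.loads(response["body"].read())["completion"]


if __name__ == "__main__":
    # Print the request body that would be sent to Bedrock.
    print(build_claude_request("Summarise this press release in one line."))
```

The point of the managed-service model is that swapping in a different foundation model is, in principle, just a different `modelId` and request body.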

It will also “provide AWS customers with early access to unique features for model customization and fine-tuning capabilities,” the two companies said in a press release published early on Monday, September 25.

"I am very excited about the Anthropic safe and steerable AI models being trained on AWS Trainium and run on AWS Inferentia chip sets. Their Claude model is already available through Amazon Bedrock," said Amazon CTO Werner Vogels, referring to AWS's custom accelerators for AI training and inference. (The largest Inferentia2 instance size, Inf2.48xlarge, packs 12 AWS Inferentia2 accelerators and 384 GB of accelerator memory into a single instance, delivering 2.3 petaflops of BF16/FP16 compute; enough to deploy a 175-billion-parameter model on one instance.)

Amazon invests in Anthropic: A blow to Google?

AWS will become “Anthropic’s primary cloud provider for mission critical workloads, including safety research and future foundation model development. Anthropic plans to run the majority of its workloads on AWS, further providing Anthropic with the advanced technology of the world’s leading cloud provider,” Amazon said in a September 25 release.

That move comes just seven months after Anthropic said it had selected Google Cloud as its preferred provider, citing its “deep expertise in large-scale systems for machine learning, and as a partner with shared values around safe and beneficial development of AI.” (Google was an early significant investor in Anthropic.) It was not immediately clear whether this would mean a migration from GCP to AWS for Anthropic, but The Stack has asked.

Anthropic – which is working with Slack, Zoom and South Korea’s largest telecommunications firm SK Telecom, amongst others – offers AI models including “Claude”, trained on data as recent as early 2023.

It was founded by a team of former OpenAI researchers including CEO Dario Amodei, who left the company after five years in late 2020. (OpenAI at the time said: “Dario has always shared our goal of responsible AI. He and a handful of OpenAI colleagues are planning a new project, which they tell us will probably focus less on product development and more on research. We support their move…”)

Its most recent general purpose large language model (LLM), Claude 2, was released by Anthropic in July 2023. Claude 2 uses a transformer architecture and was trained via unsupervised learning, reinforcement learning from human feedback (RLHF), and Constitutional AI, which includes both a supervised and a reinforcement learning (RL) phase.


Anthropic has made much of the way it gives its models a Constitution – a “set of ethical and behavioral principles that the model uses to guide its outputs” – but admits in a model card (documentation) that, despite this, “as with all current LLMs, Claude generates confabulations, exhibits bias, makes factual errors, and can be jail-broken.”

Its models have some useful capabilities, including the ability to summarize text from user-uploaded Word and PDF files, and it describes them as excelling at “a wide range of tasks from sophisticated dialogue and creative content generation to detailed instruction.”

“By significantly expanding our partnership, we can unlock new possibilities for organizations of all sizes, as they deploy Anthropic’s safe, state-of-the-art AI systems together with AWS’s leading cloud technology,” Anthropic co-founder and CEO Dario Amodei said.

Amazon Bedrock, you say?

Amazon has been moving fast to build out its Amazon Bedrock managed service, which was released as a public preview in April 2023.

Earlier this September, for example, it announced a preview of Agents for Amazon Bedrock, a capability that lets developers securely connect foundation models to enterprise data sources.

The approach uses retrieval augmented generation (RAG) and lets users select the S3 location of their data, select an embedding model, and provide the details of their vector database: “A common implementation,” said Amazon’s Antje Barth in a detailed post on September 13, “is converting your documents, or chunks of the documents, into vector embeddings using an embedding model and then storing the vector embeddings in a vector database…”
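The pipeline Barth describes – split documents into chunks, convert each chunk to a vector embedding, store the vectors, then retrieve the closest chunks for a query – can be sketched in a few lines of plain Python. The hashed bag-of-words `embed` function below is a toy stand-in for a real embedding model, and the `VectorStore` class is a hypothetical in-memory substitute for an actual vector database:

```python
import math
from collections import Counter


def embed(text: str, dim: int = 64) -> list[float]:
    """Toy stand-in for an embedding model: a normalised, hashed
    bag-of-words vector. A real deployment would call an embedding
    model here instead."""
    vec = [0.0] * dim
    for word, count in Counter(text.lower().split()).items():
        vec[hash(word.strip(".,?!")) % dim] += count
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


def chunk(document: str, size: int = 40) -> list[str]:
    """Split a document into fixed-size word chunks before embedding."""
    words = document.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


class VectorStore:
    """Minimal in-memory vector store: holds (chunk, embedding) pairs and
    retrieves the chunks most similar to a query by dot product."""

    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add_document(self, document: str) -> None:
        for c in chunk(document):
            self.items.append((c, embed(c)))

    def retrieve(self, query: str, k: int = 2) -> list[str]:
        q = embed(query)
        scored = sorted(
            self.items,
            key=lambda item: -sum(a * b for a, b in zip(q, item[1])),
        )
        return [text for text, _ in scored[:k]]
```

In a RAG setup the retrieved chunks are then prepended to the user's prompt before the foundation model is invoked, grounding its answer in the stored documents rather than its training data alone.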

Knowledge Base for Amazon Bedrock

“We have tremendous respect for Anthropic’s team and foundation models, and believe we can help improve many customer experiences, short and long-term, through our deeper collaboration,” said Andy Jassy, Amazon’s CEO, in a canned statement on September 25.

“Customers are quite excited about Amazon Bedrock, AWS’s new managed service that enables companies to use various foundation models to build generative AI applications on top of, as well as AWS Trainium, AWS’s AI training chip, and our collaboration with Anthropic should help customers get even more value from these two capabilities.”

See also: Taking generative AI to production: What CTOs need to consider