Amazon Web Services (AWS) recently welcomed TechCrunch for an exclusive tour of the chip laboratory where Trainium chips are developed — the same laboratory that is now at the core of a series of multi-billion-dollar deals with some of the world's most influential AI companies. The timing was no coincidence: the tour took place shortly after Amazon announced its $50 billion investment in OpenAI, according to TechCrunch.
What exactly is Trainium?
Trainium is Amazon's proprietary AI accelerator chip, designed by Annapurna Labs and custom-built for large-scale machine learning training. While Nvidia's H100 GPUs have been the industry standard for AI training, AWS Trainium positions itself as a cheaper and more scalable alternative — especially for companies running large workloads over extended periods.

Anthropic: From initial investment to deep collaboration
Amazon has invested a total of $8 billion in Anthropic — $4 billion initially, followed by another $4 billion. The investment has not only provided Anthropic with capital but also cemented AWS as the company's primary cloud provider and training partner.
Anthropic has entered into a technical collaboration where its engineers work directly with Annapurna Labs to optimize future generations of Trainium chips. They also contribute to the development of the AWS Neuron software stack, giving them access to optimizations down to the silicon level, according to the TechCrunch report.
Central to the collaboration is "Project Rainier" — an enormous computing cluster consisting of hundreds of thousands of Trainium2 chips, dedicated to training Anthropic's future Claude models.
It's worth noting that Anthropic has simultaneously committed to using up to one million Google Cloud TPUs, demonstrating that the company operates with a multi-cloud strategy despite its close AWS relationship.

OpenAI: 2 gigawatts and a break from old habits
OpenAI's participation in the Trainium ecosystem represents a significant strategic shift. The company has committed to consuming approximately 2 gigawatts of Trainium capacity through the AWS infrastructure, distributed across both current Trainium3 chips and the upcoming Trainium4 generation.
The goal is a 40 percent price-performance improvement for high-volume inference tasks like ChatGPT, compared to existing solutions, according to TechCrunch. AWS will also serve as the exclusive third-party cloud distributor for OpenAI Frontier — the company's enterprise-focused agent platform.
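A 40 percent price-performance improvement can be made concrete with a back-of-the-envelope calculation. The sketch below is purely illustrative: the throughput and pricing figures are invented assumptions, not numbers from the AWS-OpenAI deal, and serve only to show how such a claim is typically computed.

```python
# Illustrative price-performance comparison (all numbers are hypothetical).
def price_performance(tokens_per_second: float, dollars_per_hour: float) -> float:
    """Inference throughput bought per dollar: tokens/sec divided by $/hour."""
    return tokens_per_second / dollars_per_hour

# Assumed figures for a baseline GPU instance vs. a Trainium instance.
baseline = price_performance(tokens_per_second=10_000, dollars_per_hour=40.0)
trainium = price_performance(tokens_per_second=10_500, dollars_per_hour=30.0)

improvement = (trainium / baseline - 1) * 100
print(f"Price-performance improvement: {improvement:.0f}%")  # 40% with these assumed numbers
```

Note that a gain of this kind can come from either side of the ratio: slightly higher throughput, a lower hourly price, or (as in the assumed numbers above) a combination of both.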
OpenAI justifies this with a deliberate "multi-vendor" strategy: spreading infrastructure risk and reducing reliance on Nvidia and what is internally referred to as the "Nvidia tax" — the premium companies pay for scarce H100 capacity.
Apple: Over 40 percent efficiency gain
Apple is perhaps the most surprising player in the Trainium story. The company has used AWS for over ten years for services like Siri, Apple Maps, and Apple Music, but recently revealed that it actively uses AWS chips to train the AI models powering Apple Intelligence.
By switching from traditional x86 instances to AWS Graviton and Inferentia chips, Apple has achieved a more than 40 percent efficiency gain for machine learning workloads, according to TechCrunch's source material. Early tests of Trainium2 indicate potential improvements of up to 50 percent in model training efficiency.
Apple emphasizes that the use of AWS chips applies exclusively to the training phase. The actual AI processing on user devices still occurs within Apple's own Private Cloud Compute framework — in line with the company's privacy policy.
A new competitive front against Nvidia
The combined interest from Anthropic, OpenAI, and Apple signals that AWS Trainium is establishing itself as a real alternative to Nvidia's dominant position in the AI infrastructure market. Cost-effectiveness, scalability, and tight integration with the AWS cloud ecosystem are highlighted as the main drivers.
Whether Trainium actually delivers on the stated performance promises at production scale — and not just in controlled test environments — remains to be seen. But the signals from three of the world's heaviest AI players are clear enough that Nvidia's leadership should take note.
