
Meta and AMD have announced a multi-year agreement to expand Meta’s AI infrastructure with up to 6 gigawatts of AMD Instinct GPU capacity, designed to support large-scale AI workloads. This deal builds on the companies’ existing strategic partnership.
Partnership Overview
Meta aims to scale its AI capabilities to enable personal superintelligence while handling growing computational demands. The collaboration with AMD covers hardware, software, and system alignment, providing vertical integration across Meta’s infrastructure stack. The first GPU shipments are scheduled for the second half of 2026, leveraging Meta’s Helios rack-scale architecture, developed jointly with AMD through the Open Compute Project.
Hardware and Software Details
- GPUs: Custom AMD Instinct MI450-based GPUs optimized for Meta’s AI workloads.
- CPUs: 6th Gen AMD EPYC™ “Venice” processors and next-generation “Verano” processors designed for workload-specific optimizations.
- Software: ROCm™ software stack and Meta’s Training and Inference Accelerator (MTIA) silicon program.
- Rack Architecture: the Helios platform provides scalable, rack-level AI infrastructure.
Meta has previously deployed millions of AMD EPYC CPUs and AMD Instinct MI300/MI350 GPUs across its global infrastructure. The partnership gives Meta efficient, scalable compute by combining AMD hardware with its in-house MTIA accelerators.
Portfolio Approach and Resilience
Under its Meta Compute initiative, the company is diversifying its hardware and software partnerships to build resilient, flexible AI infrastructure. This portfolio-based approach allows rapid rollout of efficient, co-designed hardware and software systems to meet increasing AI demand globally.
Strategic Financial Alignment
AMD has issued Meta a performance-based warrant for up to 160 million shares, structured to vest based on GPU shipment milestones and AMD stock price thresholds. The first tranche vests after the initial 1-gigawatt deployment, with subsequent tranches vesting as shipments scale to 6 gigawatts. Vesting is tied to technical and commercial milestones to align strategic interests.
Global AI Deployment
The collaboration spans multiple generations of AMD Instinct GPUs and aligns roadmaps across silicon, systems, and software to build AI platforms optimized for Meta workloads. The partnership is designed to accelerate AI innovation and support AI-powered services for billions of users worldwide.
Speaking on the partnership, Dr. Lisa Su, chair and CEO of AMD, said:
“We are proud to expand our strategic collaboration with Meta as they push the boundaries of AI at unprecedented scale. This multi-year, multi-generation effort across Instinct GPUs, EPYC CPUs, and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta’s workloads, accelerating one of the industry’s largest AI deployments and placing AMD at the center of the global AI buildout.”
Commenting on the partnership, Mark Zuckerberg, Founder and CEO of Meta, said:
“We’re excited to establish a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence. This is an important step for Meta as we diversify our compute. I expect AMD to remain an important partner for many years to come.”
