Apple is finalizing a multi-year agreement with Google to integrate Gemini, Google's most advanced AI model, into Siri's next major upgrade. According to Bloomberg's Mark Gurman, Apple is expected to pay around $1 billion per year for access to the 1.2-trillion-parameter Gemini model, a significant leap from the 150-billion-parameter system currently powering Apple Intelligence's cloud-based features.
After evaluating other models — including OpenAI’s ChatGPT and Anthropic’s Claude — Apple selected Google’s Gemini earlier this year. The agreement, now nearing completion, will grant Apple back-end access to Gemini’s capabilities while keeping Google’s role largely behind the scenes.
Next-gen Siri and Apple Intelligence integration
The upcoming Siri aims to deliver more natural, multi-step, and context-aware conversations. Internally code-named Linwood, the project is part of a broader initiative called Glenwood, led by Mike Rockwell (creator of Vision Pro) and Craig Federighi, Apple’s Senior Vice President of Software Engineering.
Key enhancements expected in the next-gen Siri include:
- Improved summarization and planning abilities
- Multi-step command execution
- Better contextual understanding across apps and services
- On-device handling of simpler tasks via Apple’s in-house models
While Gemini will manage advanced reasoning and cloud-based operations, Apple Intelligence will continue to handle lightweight, device-side tasks.
Privacy, deployment, and technical framework
To uphold its privacy standards, Apple will deploy Gemini on its Private Cloud Compute (PCC) infrastructure, ensuring that user data does not pass through Google’s systems. Apple has already allocated dedicated AI server hardware to support the new Siri rollout.
Although this partnership is substantial, it will operate entirely in the background. Unlike the Safari search deal, which publicly made Google the default provider, Google will here serve solely as a back-end technology supplier, invisible to end users.
Apple’s long-term AI roadmap
Apple views its reliance on Gemini as temporary while it accelerates work on its own large-scale model. The company's AI division is developing a 1-trillion-parameter model that could reach consumer-level deployment as early as next year.
Additional insights from Gurman’s report:
- Apple aims to achieve performance parity with Gemini in the near term.
- The company has faced staff departures, including the head of its AI models team.
- Google’s Gemini 2.5 Pro currently ranks among the top-performing large language models.
- Firms like Snap Inc. are also integrating Gemini via Google Vertex AI, showing its broader industry reach.
Localized version for China
In China, where Google’s services are restricted, Apple plans to launch a localized version of Siri powered by its own AI models combined with an Alibaba-developed filtering layer to meet regulatory standards. The company has also explored possible partnerships with Baidu for additional AI functions.
Timeline and outlook
The new Siri experience, powered by Google's Gemini model, is expected to debut in spring 2026 as part of the wider Apple Intelligence rollout. While the final framework may evolve, the initiative represents Apple's most significant step yet toward closing its AI gap with Google and OpenAI, paving the way for more capable and privacy-focused voice assistance across future iOS versions.