Google I/O 2025 has kicked off with a powerful message: AI is no longer an experiment—it's the new foundation of Google’s product strategy. Held on May 20–21, the annual developer conference showcased how the company is turning decades of research into real-world tools, experiences, and assistants through its Gemini AI platform.
One of the most striking announcements was the leap in model performance. Google said its Gemini 2.5 Pro model now leads every category on the LMArena leaderboard, with improved reasoning, multimodal understanding, and code generation. This is powered by the all-new TPU v7, "Ironwood", which offers 10x the performance of its predecessor and delivers 42.5 exaflops of compute per pod, making it well suited to AI inference at scale. Google now processes over 480 trillion tokens a month, a staggering 50x jump from last year.
From futuristic prototypes to practical tools, Google demonstrated how far its AI has matured. Project Astra now powers Gemini Live, which lets users interact with the world through their phone's camera and screen sharing, and is rolling out to Android and iOS. Project Starline has evolved into Google Beam, an AI-first 3D video conferencing platform launching later this year in collaboration with HP. The demo showed near-lifelike video conversations, supported by real-time AI translation that matches the speaker's voice and expression.
Another leap came in the form of Agent Mode, Google's vision for task-completing AI. Derived from Project Mariner, this assistant can interact with websites, automate tasks, and learn new workflows through a "teach and repeat" method. It is rolling out experimentally via the Gemini app and to trusted developers via the Gemini API. Google is also supporting interoperability with open standards like the Model Context Protocol (MCP) and the Agent2Agent Protocol, laying the foundation for a truly connected "agentic web".
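For developers wondering what that API access looks like in practice, the sketch below shows how a callable tool can be handed to Gemini through the public google-genai Python SDK. It is a minimal illustration, not Google's Agent Mode itself: the tool function, its stub data, the prompt, and the model identifier are assumptions made for the example.

```python
# Minimal sketch of tool use with the Gemini API via the google-genai Python SDK.
# The tool, its data, and the prompt are illustrative assumptions, not Agent Mode.
from google import genai
from google.genai import types


def find_house_listings(city: str, max_price: int) -> dict:
    """Hypothetical tool: return house listings under a price cap for a city."""
    # A real agent would call a listings service here; a stub keeps the sketch self-contained.
    return {"city": city, "max_price": max_price, "listings": ["12 Example Lane", "34 Sample Road"]}


# The client reads an API key from the GEMINI_API_KEY / GOOGLE_API_KEY environment variable.
client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-pro",  # assumed model identifier for this example
    contents="Find houses under $500,000 in Austin and summarise the best options.",
    config=types.GenerateContentConfig(
        # Passing a plain Python function as a tool lets the SDK run the
        # function-calling loop, so the model decides when to invoke it.
        tools=[find_house_listings],
    ),
)
print(response.text)
```

Open standards such as MCP push the same idea further: instead of wiring a tool like this into each app individually, the goal is to let agents discover and share tools across services.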
Search also saw a major upgrade. AI Mode, a new tab in Google Search, lets users ask complex, multi-part questions and receive context-rich responses. AI Overviews, now live in more than 200 countries and territories including India, are delivering more useful results and driving deeper engagement, according to Google.
On the creative front, Google introduced Veo 3, its latest video generation model with native audio, and Imagen 4 for photo-realistic image creation. A new tool named Flow promises to revolutionise video storytelling, letting creators generate cinematic sequences from prompts. Veo 3 and Imagen 4 will be available through the Gemini app, with Flow offered as a standalone filmmaking tool.
Google also highlighted personalisation as the future of AI. Gemini's personalised Smart Replies in Gmail will soon suggest responses based on your writing style and files in Google Drive, offering replies that sound authentically like you. Privacy, Google says, remains central, with full user control and on-device processing.
With India emerging as one of the biggest markets for Google's AI experiences, the stage is set for these technologies to reshape how Indians search, create, learn and live. But as AI assistants start planning your day and scheduling your house tours, one question remains: are we ready to hand over the wheel?