Google I/O 2025: AI Everywhere, Smart Glasses in Focus

Google introduces Gemini Ultra subscription, AI video and image generators, enhanced search, and advances in smart glasses technology at its flagship developer conference.

Google CEO Sundar Pichai addresses developers at Google I/O 2025, outlining the company’s latest advances in AI and its renewed focus on ambient computing through smart glasses.

BY Donna Joseph

MOUNTAIN VIEW, Calif., May 20, 2025 — Google opened its annual developer conference, I/O 2025, with a barrage of product announcements anchored by AI. The two-day event at Shoreline Amphitheatre spotlighted upgrades across Gemini, Android, Search, and its broader developer tools, but the central theme was unmistakable: AI is now baked into everything Google builds.

Top billing went to Google AI Ultra, a $249.99-per-month subscription tier available in the U.S. that gives developers and professionals access to Google’s most advanced tools. That includes the new Veo 3 video model, the Flow AI video editor, and Deep Think, an enhanced reasoning mode built into Gemini 2.5 Pro. Deep Think will be available to select testers initially, with a broader rollout pending safety checks.

The company also revealed Imagen 4, a more responsive version of its image generation model, capable of rendering detailed visual elements and styles at 2K resolution. Both Veo and Imagen will integrate into Flow, which targets filmmakers and creatives looking for an AI-assisted production suite.

Among the more experimental offerings was Project Astra, a low-latency, multimodal AI originally developed by DeepMind. Google says it is partnering with Samsung and Warby Parker to bring Astra-powered smart glasses to market—though no launch date was set. Astra underpins Gemini Live, which now lets users carry on real-time conversations with the AI while sharing their camera view or screen.

Google also expanded its real-time telepresence ambitions with Beam, a rebrand of its “Starline” project. Beam uses a six-camera array and AI rendering to simulate face-to-face 3D conversations. Combined with Google Meet, Beam introduces live speech translation that retains the speaker’s voice and intonation.

In developer tools, Google launched Stitch, which generates frontend UI code from text or image prompts. The company also pushed updates to Jules, its AI assistant for debugging and coding, and teased Agent Mode in Android Studio—a more autonomous AI assistant built atop Gemini 2.5 Pro.

Google’s AI-infused Project Mariner also saw enhancements. The agent can now handle a dozen website-based tasks at once, such as buying tickets or groceries, without the user ever opening the pages themselves. In Search, the new AI Mode supports layered, multi-part queries and can process complex financial and sports data. Gmail is the first app to integrate context-aware responses through this system.

Meanwhile, Google’s AI footprint in creative and productivity tools widened. Gemma 3n, a lightweight multimodal model, runs across devices, and SynthID Detector, a watermark checker, is being rolled out to validate AI-generated content. Workspace apps like Gmail, Docs, and Vids are gaining AI-based smart replies, inbox organization, and automated video editing.

Wear OS 6 brought incremental but focused improvements for Pixel Watch users, including dynamic theming and a refined design language. Developers get updated design files and tools for building smoother transitions and consistent visuals.

Lastly, the Google Play Store and Android Studio both received AI-backed enhancements. Developers can now pause problematic app rollouts, test features more selectively, and offer complex subscription packages under a unified checkout flow. In Studio, an improved crash analysis tool will suggest code-level fixes using Gemini’s engine.

While Google’s AI ambitions are wide-ranging, the direction is clear. From Android phones to smart glasses and 3D calls, the company is moving towards real-time, multimodal computing—positioning Gemini not just as a chatbot, but a default interface for its future products.

Co-founder Sergey Brin, who appeared onstage during the event, captured the mood: “Anybody who’s a computer scientist should not be retired right now. They should be working on AI.”