Here's What You Can Expect From Google In 2025: 10 Points
In the email, Sundar Pichai said the company ended 2024 on a high, having launched its latest AI model, Gemini 2.0, late last year along with its quantum computing chip, Willow. There was also innovation, he said, across Search, YouTube and more.
- "As we start the year, I have been reviewing demos for the products and features we're rolling out in the next few months. The progress is amazing, and I'm confident we will keep the momentum going in 2025," Mr Pichai wrote.
- In December, Google made several key announcements across AI, hardware and extended reality. It introduced Gemini 2.0, the next generation of its AI model, designed for the "agentic era," bringing multimodality and tool use. The model is intended to understand the world better, think ahead, and take action with user supervision.
- Google launched Deep Research, a new feature in Gemini Advanced that uses advanced reasoning and long-context capabilities to act as a research assistant. It also released an experimental version of Gemini 2.0 Flash, a model with enhanced performance and speed, which is available to developers via the Gemini API in Google AI Studio and Vertex AI.
- On the hardware front, Google announced the general availability of Trillium, its sixth-generation Tensor Processing Unit (TPU). Trillium is designed for AI workloads, offering significant improvements in training performance, inference throughput, and energy efficiency compared to prior generations. It was used to train Gemini 2.0 and is now available for Google Cloud customers.
- Google also unveiled Willow, a new quantum chip, which has demonstrated error correction and performance that could lead to a useful, large-scale quantum computer. Willow achieved a major breakthrough in quantum error correction and performed, in under five minutes, a computation that would take a supercomputer 10 septillion years.
- Google also introduced Android XR, a new platform developed in partnership with Samsung and Qualcomm, designed to extend reality on headsets and glasses. Android XR is intended to bring helpful, AI-powered experiences to these devices. It will launch first on headsets, including one code-named Project Moohan, and later support glasses.
- Gemini will be integrated into Android XR to provide conversational assistance and help users control their devices. Apps like YouTube and Google Maps are also being reimagined for headsets.
- The company also introduced Google Agentspace, designed to unlock enterprise expertise by using AI agents to bring together Gemini's reasoning, Google-quality search, and enterprise data.
- Agentspace provides a single, company-branded multimodal search agent for organizations. It allows employees to accomplish complex tasks, interact with enterprise data through NotebookLM (a note-taking and research assistant), discover information across the enterprise, and automate business functions with expert agents.
- NotebookLM received updates, including a new interface, audio interactivity, and the premium version, NotebookLM Plus. Google also released new versions of its video and image generation models, Veo 2 and Imagen 3, along with a new tool called Whisk, which allows users to generate images using other images rather than text.