Users can now hold real-time voice conversations with Google’s AI-powered Search, thanks to a major new feature rollout in the Google app for Android and iOS.
Search Goes Conversational
Google this week announced the launch of Search Live with voice input, a new capability inside the Google app that allows users to engage in back-and-forth spoken conversations with its AI-powered Search tool. The feature is initially available in the United States to those who have opted into the AI Mode experiment in Google Labs, the company’s testing platform for early-access features.
Hands-Free Search
The launch marks a step forward in how users interact with Search, with Google positioning the update as a more natural, hands-free way to discover and explore information while multitasking or on the move.
Use the “Live” Icon
A dedicated “Live” icon now appears within the Google app interface, allowing users to tap and speak their queries aloud. The AI responds in spoken form, and users can follow up with further questions to refine or expand the topic, thereby mirroring a more human-like back-and-forth conversation.
According to Google, Search Live “lets you talk, listen and explore in real time,” giving users the ability to access web-based information while continuing to use other apps or even switching between tasks. The tool also provides on-screen links to source material, allowing users to dig deeper into AI-generated answers.
Building on Gemini and Search Infrastructure
Search Live runs on a custom version of Gemini, Google’s multimodal large language model, which powers many of its generative AI tools. The Gemini model used in AI Mode has been specially adapted to support live voice input, real-time responses, and integration with Google Search’s existing ranking and quality systems.
Liza Ma, director of product management at Google Search, explained in a company blog post that the system combines “advanced voice capabilities” with the reliability of Search’s “best-in-class quality and information systems,” ensuring that responses are both conversational and trustworthy. She also confirmed the use of Google’s ‘query fan-out’ technique, which enables the system to return a more diverse and useful range of web content in response to user questions.
For example, a user might ask, “What are some tips for preventing a linen dress from wrinkling in a suitcase?” and then follow up with, “What should I do if it still wrinkles?” The AI answers audibly while presenting related links on screen. This continuity is key to what Google hopes will be a smoother, more context-aware search experience.
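Google has not published implementation details for query fan-out, but conceptually it amounts to expanding one question into several related sub-queries, searching them in parallel, and merging the results into a single de-duplicated list. The following is a minimal Python sketch of that idea only; every function name and query below is hypothetical and does not reflect Google’s actual system.

```python
import asyncio

async def search_web(query: str) -> list[str]:
    """Stand-in for a call to a real search backend."""
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"https://example.com/result?q={query.replace(' ', '+')}"]

def expand_query(question: str) -> list[str]:
    """Stand-in for LLM-driven expansion into related sub-queries."""
    return [
        question,
        f"{question} packing tips",
        f"{question} fabric care",
    ]

async def fan_out(question: str) -> list[str]:
    sub_queries = expand_query(question)
    # Issue every sub-query concurrently rather than one at a time.
    batches = await asyncio.gather(*(search_web(q) for q in sub_queries))
    # Flatten and de-duplicate while preserving order.
    seen, merged = set(), []
    for batch in batches:
        for link in batch:
            if link not in seen:
                seen.add(link)
                merged.append(link)
    return merged

if __name__ == "__main__":
    links = asyncio.run(fan_out("prevent linen dress wrinkling in a suitcase"))
    print("\n".join(links))
```

The parallel gather step is what would let a broader, more diverse set of sources come back in roughly the time a single query takes, which matches the “more diverse and useful range of web content” Google describes.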
How and Where to Access It
At launch, Search Live with voice is available only to users in the U.S. who have joined the AI Mode experiment through Google Labs. It works on both Android and iOS via the official Google app. There is currently no timeline for a broader international rollout, though Google says it intends to expand features and availability in the coming months.
Users with access will see a new “Live” microphone icon below the search bar in the app. Once it is activated, they can ask a question out loud and receive a spoken response. Users can view a transcript of the interaction, continue the conversation by typing if preferred, and even revisit past queries via the AI Mode history log.
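Google has not described its client internals, but the transcript-and-history behaviour can be pictured as a session object that accumulates every turn, whether spoken or typed. The sketch below is purely illustrative; all names are invented.

```python
from dataclasses import dataclass, field

@dataclass
class Turn:
    role: str        # "user" or "assistant"
    text: str
    via_voice: bool  # False when the user typed instead of speaking

@dataclass
class Session:
    transcript: list[Turn] = field(default_factory=list)

    def ask(self, text: str, via_voice: bool = True) -> str:
        self.transcript.append(Turn("user", text, via_voice))
        # A real client would send the whole transcript to the model so
        # it can resolve follow-ups like "what if it still wrinkles?".
        answer = f"(spoken answer to: {text!r})"  # model-call stand-in
        self.transcript.append(Turn("assistant", answer, via_voice))
        return answer

session = Session()
session.ask("Tips for keeping a linen dress from wrinkling in a suitcase?")
print(session.ask("What should I do if it still wrinkles?", via_voice=False))
# The accumulated transcript doubles as the history log described above.
```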
Multitask While It Works in the Background
Because Search Live keeps working in the background, it enables a degree of multitasking not previously possible with voice-based search tools. For example, a user could begin a conversation in the app, switch to messaging or maps, and continue speaking to the AI without interruption.
Voice, Visuals, and What Comes Next
The introduction of voice input is just one part of Google’s broader plan to bring real-time multimodal capabilities into Search. For example, at Google I/O in May 2025, the company previewed future updates that will allow users to combine voice interaction with real-time visual input via their phone’s camera, building on advances made in its Project Astra research and the ongoing development of Google Lens.
Multimodal Search
This evolution represents a deeper move by Google into what’s referred to as multimodal search, whereby users can interact with AI not just through typing or talking, but by showing it what they see. In practical terms, this could include pointing the phone at a confusing diagram or damaged object, asking what it is, and getting a contextual explanation, complete with suggested web links, video tutorials or shopping sources.
It also echoes the direction competitors are taking. For example, OpenAI’s ChatGPT offers voice interaction in its mobile apps, and Perplexity AI has gained traction for its own real-time web search and voice tools. Google’s response, with Search Live, is both a defensive and a strategic step to stay ahead in what is quickly becoming a crowded, AI-first search market.
A New Frontier for Business and Advertisers?
For business users, the implications of voice-first search are far-reaching. For example, in sectors such as logistics, retail, and field service, the ability to conduct voice-based queries while driving or working could prove invaluable. Search Live also introduces potential benefits for productivity, especially for knowledge workers trying to conduct research or fact-checks while multitasking between devices or applications.
It may also signal a new phase for Google’s advertising ecosystem, although details remain unclear. As Search becomes more conversational and voice-led, traditional search result ads, particularly those dependent on text input and visual scanning, may need to evolve. It’s not yet known how, or if, Search Live results will incorporate sponsored content.
The visual links shown alongside voice answers could potentially become prime real estate for future advertising formats. However, Google has so far remained quiet on how monetisation will work within AI Mode. With more users consuming answers audibly and potentially clicking fewer links, publishers and advertisers will be watching closely.
Challenges
Despite the promise, several challenges lie ahead. For example, accuracy and reliability remain key concerns for AI-generated search responses. While Google stresses that its Gemini-based AI uses the same quality controls as regular Search, AI hallucinations (where systems confidently give false or misleading answers) remain a known risk in generative models.
The opt-in nature of the feature also limits immediate user exposure and feedback. By placing Search Live behind the AI Mode experimental wall, Google is clearly seeking to manage the rollout cautiously, but this also means that the majority of users globally still can’t access or evaluate it.
There are also privacy and data security implications, particularly with voice-based input and persistent conversation histories. Google maintains that users can view, manage or delete their AI Mode interactions, but questions remain over how voice data is processed, stored, or used to train models.
Critics may also point to the increasing opacity of sources in AI answers. For example, while Google includes clickable links alongside Search Live responses, these can appear secondary to the spoken reply, which may not fully capture the nuance or breadth of available information. Ensuring transparency and balance in summarised answers will be crucial to maintaining trust, especially as Search Live expands into more domains.
What Does This Mean For Your Business?
The introduction of Search Live could be seen as the next natural step in Google’s long-term vision for AI-powered search. By blending real-time voice interaction with the depth of web content, Google is positioning itself not just as a search engine but as a more intuitive, responsive assistant capable of handling everyday queries in more dynamic, human-like ways. However, the fact that it’s limited to U.S.-based testers in Labs signals Google’s awareness of the stakes involved. It is not just testing technology but testing trust, usability and commercial viability all at once.
For UK businesses, this could open up important new opportunities once rolled out more widely. Voice-driven interaction with AI may reduce the need for screen time in roles where hands-free efficiency matters, from trades and transport to healthcare and hospitality. It could also help knowledge workers process information faster while juggling tasks, potentially enhancing productivity and reducing friction in routine research or client support work. There are potential implications for business intelligence and even internal training, particularly once real-time camera input is layered in. But these benefits will only be realised if the underlying AI delivers reliable and verifiable responses at scale.
Advertisers and content publishers are likely to be more cautious. With fewer visual interactions, the conventional search engine results page model may weaken. If users hear an answer but don’t tap the links shown, that affects traffic and engagement metrics. This will raise fresh questions about how brands position themselves within voice-first search and whether new advertising formats will emerge within AI Mode or remain separate. Also, the monetisation path here is still not altogether clear and, as Google experiments with form, it may need to reassure partners that function won’t entirely override visibility.
Meanwhile, Google’s competitors such as OpenAI and Perplexity AI will, no doubt, be watching closely. Each is racing to define the next evolution of everyday search, combining voice, visuals and real-time reasoning. Google still has the infrastructure advantage, but the race is no longer just about data; it’s about usability, privacy, and user confidence. In that context, Search Live’s success may depend as much on how it is governed and explained as on how well it works technically.
Whether Search Live becomes the new normal or remains a feature for power users will likely depend on the clarity of its responses, the transparency of its sources, and the ease with which users (especially businesses) can trust it as a tool rather than a black box. What is clear already is that Google is laying groundwork for a future where the way we search is no longer typed, but spoken, shown and responded to in real time. Once mainstream, that could fundamentally change how we interact with the web.