Google Gemini API Adds “Grounding with Search” for Developers in AI Studio

By Akinola Ajibola
November 2, 2024
in Artificial Intelligence

From now on, developers can ground their prompts’ responses in Google Search data when building AI-based services and bots with Google’s Gemini API and Google AI Studio. This should make it possible to deliver more precise answers based on more recent data.

To help developers ground their AI solutions, Google is introducing a new feature in AI Studio and the Gemini application programming interface (API). The Grounding with Google Search tool, unveiled on Thursday, lets developers check AI-generated answers against related online content. In this way, developers can further improve their AI applications and give users more current and accurate information. Google emphasized how important these grounding techniques are for prompts that fetch real-time data from the internet.

AI Studio, which is essentially Google’s sandbox for developers to test and refine their prompts and access its most recent large language models (LLMs), now lets developers test grounding for free. Users of the Gemini API will need to be on the paid tier, where grounding costs $35 per 1,000 grounded requests.
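
The $35-per-1,000-requests figure quoted above makes cost estimation a simple proportion; a minimal sketch (the function name and the sample request counts are illustrative, and current pricing should be checked against Google’s pricing page):

```python
# Back-of-the-envelope cost estimate based on the article's quoted rate
# of $35 per 1,000 grounded requests on the Gemini API paid tier.
PRICE_PER_1000_GROUNDED = 35.00

def grounding_cost(grounded_requests: int) -> float:
    """Estimated cost in USD for a given number of grounded requests."""
    return grounded_requests / 1000 * PRICE_PER_1000_GROUNDED

print(grounding_cost(25_000))  # 25,000 grounded requests → 875.0
```

Note that only requests that actually trigger grounding are billed at this rate, which is one reason the dynamic retrieval setting described below matters for cost.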

https://tbwpfiles.s3.eu-west-2.amazonaws.com/wp-content/uploads/2024/11/01213047/Nobel-Prize-Prompt-Grounding-Search.mp4

AI Studio’s newly added built-in comparison mode makes it easy to see how the results of grounded queries differ from those that rely only on the model’s own training data.

The new capability, which will be accessible through the Gemini API and Google AI Studio, was described in full on the Google AI for Developers support page. Developers who are creating AI-capable desktop and mobile apps frequently employ both of these technologies.

However, using AI models to generate replies frequently leads to hallucinations, which can undermine an application’s credibility. The problem can become much more serious when the app covers current events and needs the most recent information from the internet. Although developers can manually tune the AI model, mistakes may still occur in the absence of a reference dataset.

Fundamentally, grounding links a model to verifiable facts, such as internal corporate data or, in this case, Google’s entire Search index. This also keeps the system from hallucinating. Ahead of the rollout, Google shared an example. When asked who won the 2024 Emmy for outstanding comedy series, the model, without grounding, said “Ted Lasso.” That was a hallucination: “Ted Lasso” did win the award, but in 2022. With grounding, the model gave the right answer (“Hacks”), added more detail, and cited its sources.

Google addresses this with a new method for verifying AI output. This procedure, called “grounding,” links an AI model to credible knowledge sources that offer high-quality information along with additional context. Documents, photos, local databases, and the internet are a few examples of such sources.

Activating grounding is as simple as flipping a switch and adjusting the “dynamic retrieval” parameter, which determines how often the API should use grounding. That can mean enabling it for every query, or choosing a more sophisticated configuration in which a smaller model evaluates the prompt and decides whether it would benefit from being enriched with information from Google Search.
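
In request terms, the switch described above is a tool entry on the generate call. The sketch below builds such a request body in plain Python; the field names (`google_search_retrieval`, `dynamic_retrieval_config`, `dynamic_threshold`) follow the Gemini API documentation at launch but should be treated as assumptions to verify against the current API reference:

```python
# Illustrative sketch of a Gemini API request body with Grounding with
# Google Search enabled. Field names are assumptions based on the
# launch-era docs; no network call is made here.
def grounding_tool(threshold: float = 0.3) -> dict:
    """Build the tools entry that switches grounding on.

    threshold: with MODE_DYNAMIC, 0.0 grounds every prompt; higher
    values let a smaller scoring model decide, grounding only prompts
    it judges likely to benefit from fresh Search results.
    """
    return {
        "google_search_retrieval": {
            "dynamic_retrieval_config": {
                "mode": "MODE_DYNAMIC",
                "dynamic_threshold": threshold,
            }
        }
    }

request_body = {
    "contents": [{"parts": [{"text": "Who won the Super Bowl this year?"}]}],
    "tools": [grounding_tool(0.3)],
}
```

Passing this body to the `generateContent` endpoint (or the equivalent SDK call) would enable grounding for prompts the scoring model rates above the 0.3 threshold.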

To find credible information, Grounding with Google Search draws on the most recent sources. Developers can now check the data that the Gemini AI models produce against the top Google Search results. This will increase the “accuracy, reliability, and usefulness of AI outputs,” according to the Mountain View-based internet giant.

Because the data comes straight from the grounding source, AI models can also reach beyond their knowledge cutoff date. In this case, the output of the Search algorithm can supply Gemini models with the most recent data.

“Grounding can be beneficial when you pose a recent query that is outside the model’s knowledge cutoff, but it could also be useful for a less recent query where you might want richer detail,” clarified Shrestha Basu Mallick, Google’s group product manager for the Gemini API and AI Studio. “Some developers would argue that we should only consider current information, in which case they would raise this [dynamic retrieval value]. And some developers would respond, ‘No, I want Google Search’s rich detail on everything.’”

Google also provided an illustration of the difference between grounded and ungrounded outputs. “The Kansas City Chiefs won Super Bowl LVII this year (2023)” was the ungrounded answer to the question, “Who won the Super Bowl this year?”

With the Grounding with Google Search tool in use, however, the grounded result was, “The Kansas City Chiefs won Super Bowl LVIII this year, defeating the San Francisco 49ers in overtime with a score of 25 to 22.” Notably, the feature is limited to text-based outputs and cannot handle multimodal answers.

Google includes supporting links back to the original sources when it adds information from Google Search to results. According to Logan Kilpatrick, who joined Google earlier this year after serving as OpenAI’s developer relations lead, anyone using this feature is required under the Gemini license to display these links.
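
To comply with that requirement, an app needs to pull the source links out of the response’s grounding metadata and render them. The sketch below does this against a mocked response; the shape (`candidates` → `groundingMetadata` → `groundingChunks`) mirrors the launch-era API docs, and both the field names and the sample data are assumptions for illustration:

```python
# Hypothetical sketch: extracting the source links Google requires apps
# to display. mock_response imitates a grounded Gemini API response;
# field names are assumptions based on the documented response shape.
mock_response = {
    "candidates": [{
        "content": {"parts": [{"text": "The Kansas City Chiefs won Super Bowl LVIII..."}]},
        "groundingMetadata": {
            "groundingChunks": [
                {"web": {"uri": "https://example.com/super-bowl-lviii",
                         "title": "Super Bowl LVIII result"}},
            ]
        },
    }]
}

def citation_lines(response: dict) -> list:
    """Return one 'title - uri' line per grounding source in the response."""
    chunks = (response["candidates"][0]
              .get("groundingMetadata", {})
              .get("groundingChunks", []))
    return [f"{c['web']['title']} - {c['web']['uri']}" for c in chunks]

for line in citation_lines(mock_response):
    print(line)  # render these alongside the model's answer
```

An ungrounded response would simply carry no `groundingMetadata`, in which case the helper returns an empty list and nothing extra needs to be shown.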

“It is very important for us for two reasons,” Basu Mallick continued. “First, we want to make sure our publishers get the credit and the visibility. But second, users also find this appealing. Whenever I receive an LLM response, I frequently check it on Google Search. Users greatly appreciate that we are giving them an easy way to do this.”

It’s worth noting in this regard that, although AI Studio began as more of a prompt-tweaking tool, it has evolved into much more.

“Achieving success with AI Studio means coming in, trying one of the Gemini models, and realizing that it is incredibly effective for your use case,” Kilpatrick explained. “The final objective is not to keep you in AI Studio playing with the models; rather, we do a number of things in the user interface to highlight possible, intriguing use cases to developers. The aim is to get you to code. After selecting ‘Get Code’ in the upper right-hand corner, you begin creating something, and you may return to AI Studio to try out a later model.”

Tags: developers, Gemini API, Google, Google AI Studio, Google Search data