TechBooky

Google Expands Gemini 2.0 with Advanced AI Models

By Akinola Ajibola
February 7, 2025
in Artificial Intelligence

Google announced on Wednesday that several of its Gemini 2.0 artificial intelligence (AI) models are now broadly available. An updated version of Gemini 2.0 Flash is generally available in Google AI Studio and Vertex AI, the company’s managed machine learning development platform, following the earlier rollout of 2.0 Flash to all users through the Gemini desktop and mobile apps. The Mountain View-based company is also bringing its Gemini 2.0 Flash Thinking Experimental model to the Gemini web client and mobile apps, along with an agentic version of the model that can interact with specific Google apps. In addition, premium subscribers are getting access to an experimental version of Gemini 2.0 Pro, and a lightweight version of 2.0 Flash is entering public preview. Taken together, the announcements expand the Gemini family of models available to users and developers.

In a blog post, Google listed every model now available to consumers. Some can be accessed by free Gemini users, others are reserved for premium subscribers, and the rest are available only to developers.

The most prominent of these is Gemini 2.0 Flash Thinking, a reasoning-focused model comparable to DeepSeek-R1 and OpenAI’s o1. Since its initial release in December 2024, it had been available exclusively through Google’s AI Studio.

Google is now rolling the model out to all users of the Gemini app and website, where it can be accessed from the model selector at the top of the interface. Notably, it is unclear whether free-tier use of the Thinking model will be subject to rate limits.

Users will also be able to access an agentic version of 2.0 Flash Thinking. This variant can connect to apps such as Google Maps, YouTube, and Google Search, allowing users to ask Gemini to carry out tasks across those applications. The full extent of its capabilities is not yet known.

Google also confirmed that 2.0 Flash Thinking Experimental is now widely accessible, and it launched an experimental version of Gemini 2.0 Pro, the company’s flagship model and its strongest performer on coding and complex instructions. The new 2.0 Flash Thinking model, by contrast, is a compact, fast model tailored for reasoning and logic.

For Gemini Advanced users, Google is releasing an experimental version of Gemini 2.0 Pro, the high-performance frontier model of the 2.0 series. The model reportedly performs well at coding, mathematics, and breaking down complex problems, and with a context window of two million tokens it is the most capable model the company offers. Through its application programming interface (API), the model can also invoke tools such as code execution and Google Search. It will be offered in Vertex AI and Google AI Studio as well.

Google additionally introduced Gemini 2.0 Flash-Lite, a brand-new model intended to be the company’s most economical, which is now available to developers in public preview. According to the company, it delivers better performance than 1.5 Flash while maintaining the same speed and affordability. It accepts multimodal input and has a one-million-token context window. Google is also making the 2.0 Flash model available to developers through the Gemini API. For now it handles text-based tasks; the company plans to add image generation and text-to-speech capabilities in the future.

Google said that distributing early, experimental versions of Gemini 2.0 to developers and experienced users yielded valuable feedback on the models’ capabilities, and that it wants to continue that practice with the launch of the experimental Gemini 2.0 Pro.

With its two-million-token context window, the experimental Gemini 2.0 Pro model can process large documents and videos, roughly 1.5 million words’ worth of input. It can also run code and make calls to tools such as Google Search.

Gemini 2.0 Pro succeeds Google’s flagship Gemini 1.5 Pro model, which was introduced in February of last year.


Tags: AI, Gemini, Gemini 2.0
© 2025 Designed By TechBooky Elite
