What you need to know
Google has announced the general availability of Gemini 2.0 Flash through the Gemini API, allowing developers to build apps with it.
The company has also announced the Gemini 2.0 Pro experimental model: its best model yet for coding performance and complex prompts.
Google has also introduced Gemini 2.0 Flash-Lite, a cost-effective model in public preview through Google AI Studio and Vertex AI.
Google CEO Sundar Pichai has announced new updates to Gemini 2.0, including broader availability alongside the release of a new model dubbed Gemini 2.0 Flash-Lite, which promises to be the company's most cost-efficient model yet.
Complementing last week's stable rollout of Gemini 2.0 Flash to the Gemini app, the search giant is making the model available via the Gemini API in Google AI Studio and Vertex AI, where "Developers can now build production applications with 2.0 Flash," Google notes in its announcement blog post.
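For developers curious what that looks like in practice, here is a minimal sketch of a 2.0 Flash call using Google's google-genai Python SDK. The API key placeholder and prompt are illustrative; you would create a real key in Google AI Studio, or construct the client against a Vertex AI project instead.

```python
# Minimal sketch: calling Gemini 2.0 Flash through the Gemini API
# using the google-genai Python SDK (pip install google-genai).
# The API key placeholder is illustrative -- create a real key in
# Google AI Studio before running this.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="Summarize what changed in Gemini 2.0 Flash in two sentences.",
)
print(response.text)
```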
"2/ We're introducing a new model Flash-Lite, which is extremely efficient and capable, in public preview. Plus an experimental version of Gemini 2.0 Pro, our best model for coding performance and complex prompts, is now available. Gemini Advanced users can try it out in the…" pic.twitter.com/83Eb9pERdG (February 5, 2025)
The announcement continues: "2.0 Flash is now generally available to more people across our AI products, alongside improved performance in key benchmarks, with image generation and text-to-speech coming soon."
The search giant is also releasing an experimental version of Gemini 2.0 Pro, which is said to have the "strongest coding performance and ability to handle complex prompts," along with a better understanding and reasoning of world knowledge than previous Gemini models.
This model ships with the largest context window yet at 2 million tokens; by comparison, the previous Gemini 2.0 Flash experimental model topped out at 1 million tokens, enough for file uploads of up to 1,500 pages.
According to Google, that 2-million-token window enables the experimental Gemini 2.0 Pro "to comprehensively analyze and understand vast amounts of information, as well as the ability to call tools like Google Search and code execution."
Developers can access the experimental Gemini 2.0 Pro through Google AI Studio and Vertex AI, while Gemini Advanced users can try it via the model drop-down menu in the desktop and mobile apps.
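As a rough illustration of the tool-calling ability mentioned above, the sketch below asks the experimental Pro model a question with Google Search grounding enabled, again via the google-genai SDK. The model ID string is an assumption, since experimental model names change; check the model list in Google AI Studio or Vertex AI for the current one.

```python
# Sketch: querying the experimental Gemini 2.0 Pro model with the
# Google Search tool enabled, via the google-genai SDK.
# The model ID below is an assumption -- confirm the current
# experimental ID in Google AI Studio or Vertex AI before use.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

response = client.models.generate_content(
    model="gemini-2.0-pro-exp-02-05",  # assumed experimental model ID
    contents="What did Google announce about Gemini 2.0 on February 5, 2025?",
    config=types.GenerateContentConfig(
        # Let the model ground its answer with Google Search results.
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```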
The cost-efficient Gemini 2.0 Flash-Lite arrives as a successor to Gemini 1.5 Flash at the same speed and cost, and Google says it significantly outperforms its predecessor on most benchmarks.
It also carries a 1 million token context window and multimodal input. The company cites an example of what 2.0 Flash-Lite can do: "it can generate a relevant one-line caption for around 40,000 unique photos, costing less than a dollar in Google AI Studio's paid tier."
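To make that captioning example concrete, here is a hedged sketch of a single caption request to 2.0 Flash-Lite with the same SDK; a batch job would simply loop it over the photo set. The file name is a stand-in, and during the public preview the model ID may carry a preview suffix rather than the plain name used here.

```python
# Sketch: generating a one-line caption for a single photo with
# Gemini 2.0 Flash-Lite via the google-genai SDK. The file path is a
# placeholder, and the model ID may carry a "-preview" suffix while
# the model is in public preview.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

with open("photo_0001.jpg", "rb") as f:  # placeholder image file
    image_bytes = f.read()

response = client.models.generate_content(
    model="gemini-2.0-flash-lite",
    contents=[
        types.Part.from_bytes(data=image_bytes, mime_type="image/jpeg"),
        "Write one short, relevant caption for this photo.",
    ],
)
print(response.text)
```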
The new 2.0 Flash-Lite model is likewise available in public preview through Google AI Studio and Vertex AI. Pricing details for all of the aforementioned Gemini API models are shown below.
(Image credit: Google for Developers)
While all of these models currently support multimodal input with text output, the company says additional modalities will reach general availability in the coming months.