1.5 Flash, Gemma 2 and Project Astra


1.5 Flash excels at summarization, chat applications, image and video captioning, data extraction from long documents and tables, and more. This is because it’s been trained by 1.5 Pro through a process called “distillation,” where the most essential knowledge and skills from a larger model are transferred to a smaller, more efficient model.
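
As a rough illustration of the kind of task 1.5 Flash is built for, here is a minimal summarization sketch using the publicly documented google-generativeai Python SDK. The model name comes from the Gemini API; the prompt, file name and API-key placeholder are assumptions for this example, not taken from the announcement.

```python
# Minimal sketch: summarizing a long document with 1.5 Flash via the
# google-generativeai SDK. The prompt and file name are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # key from Google AI Studio (placeholder)

model = genai.GenerativeModel("gemini-1.5-flash")

with open("quarterly_report.txt", "r", encoding="utf-8") as f:
    document = f.read()

response = model.generate_content(
    "Summarize the key points of the following document:\n\n" + document
)
print(response.text)
```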

Read more about 1.5 Flash in our updated Gemini 1.5 technical report and on the Gemini technology page, and learn about its availability and pricing.

Significantly improving 1.5 Pro

Over the last few months, we’ve significantly improved 1.5 Pro, our best model for general performance across a wide range of tasks.

Beyond extending its context window to 2 million tokens, we’ve enhanced its code generation, logical reasoning and planning, multi-turn conversation, and audio and image understanding through data and algorithmic advances. We see strong improvements on public and internal benchmarks for each of these tasks.
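
As a small aside on the larger context window, a sketch like the following, assuming the google-generativeai Python SDK and an illustrative local file, can be used to check how many tokens a large input would occupy before sending it to the model.

```python
# Minimal sketch: counting tokens for a large input against 1.5 Pro's
# expanded context window. The file name is illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-1.5-pro")

with open("codebase_dump.txt", "r", encoding="utf-8") as f:
    corpus = f.read()

token_info = model.count_tokens(corpus)
print(f"Input occupies {token_info.total_tokens} tokens of the context window.")
```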

1.5 Pro can now follow increasingly complex and nuanced instructions, including ones that specify product-level behavior involving role, format and style. We’ve improved control over the model’s responses for specific use cases, like crafting the persona and response style of a chat agent or automating workflows through multiple function calls. And we’ve enabled users to steer model behavior by setting system instructions.
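
To make the system-instruction point concrete, here is a minimal sketch using the google-generativeai Python SDK; the persona text and messages are purely illustrative assumptions, not product behavior described in the announcement.

```python
# Minimal sketch: steering a chat agent's persona and response style
# with a system instruction. The instruction text is illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel(
    "gemini-1.5-pro",
    system_instruction=(
        "You are a concise support agent for a travel app. "
        "Answer in at most three sentences and end with one suggested next step."
    ),
)

chat = model.start_chat()
reply = chat.send_message("My flight was cancelled. What are my options?")
print(reply.text)
```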

We added audio understanding in the Gemini API and Google AI Studio, so 1.5 Pro can now reason across both image and audio for videos uploaded in Google AI Studio. And we’re now integrating 1.5 Pro into Google products, including Gemini Advanced and Workspace apps.
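
As a hedged sketch of what audio understanding looks like from the API side, the File API in the google-generativeai Python SDK can upload a recording and pass it alongside a text prompt; the file name and prompt below are assumptions for illustration.

```python
# Minimal sketch: asking 1.5 Pro about an uploaded audio file through the
# Gemini File API. The file name and prompt are illustrative.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

audio_file = genai.upload_file("team_interview.mp3")  # assumes a local audio file

model = genai.GenerativeModel("gemini-1.5-pro")
response = model.generate_content(
    [audio_file, "List the main topics discussed in this recording."]
)
print(response.text)
```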

Read more about 1.5 Pro in our updated Gemini 1.5 technical report and on the Gemini technology page.

Gemini Nano understands multimodal inputs

Gemini Nano is expanding beyond text-only inputs to include images as well. Starting with Pixel, applications using Gemini Nano with Multimodality will be able to understand the world the way people do — not just through text, but also through sight, sound and spoken language.

Read more about Gemini 1.0 Nano on Android.
