CodeGemma
CodeGemma is a collection of powerful, lightweight models that can perform a variety of coding tasks, including fill-in-the-middle code completion, code generation, natural language understanding, mathematical reasoning, and instruction following. CodeGemma has three model variants: a 7B pre-trained variant that specializes in code completion and generation from code prefixes and/or suffixes; a 7B instruction-tuned variant for natural-language-to-code chat and instruction following; and a state-of-the-art 2B pre-trained variant that provides up to 2x faster code completion. Complete lines and functions, or even generate entire blocks of code, whether you're working locally or using Google Cloud resources. Trained on 500 billion tokens of primarily English-language data from web documents, mathematics, and code, CodeGemma models generate code that is not only more syntactically correct but also semantically meaningful, reducing errors and debugging time.
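As a sketch of how fill-in-the-middle completion is driven, the pre-trained CodeGemma variants accept a prompt assembled from special control tokens: the code before the cursor, the code after it, and a marker asking the model to generate what goes in between. The helper below only builds that prompt string; actually sampling a completion requires loading the model (e.g. via the Hugging Face `transformers` library), which is omitted here.

```python
# CodeGemma fill-in-the-middle (FIM) control tokens, as documented for
# the pre-trained 2B and 7B variants.
FIM_PREFIX = "<|fim_prefix|>"
FIM_SUFFIX = "<|fim_suffix|>"
FIM_MIDDLE = "<|fim_middle|>"

def build_fim_prompt(prefix: str, suffix: str) -> str:
    """Assemble a FIM prompt: the model is asked to generate the code
    that belongs between `prefix` and `suffix`."""
    return f"{FIM_PREFIX}{prefix}{FIM_SUFFIX}{suffix}{FIM_MIDDLE}"

prompt = build_fim_prompt(
    prefix="def mean(xs):\n    return ",
    suffix=" / len(xs)\n",
)
# The model's completion (here, something like "sum(xs)") is whatever it
# emits after this prompt, up to its end-of-generation token.
```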
Learn more
ReadYourLab
ReadYourLab is a DICOM viewer that reads raw CT and MRI scan files for free. AI-assisted features analyze scans quickly and help explain medical terminology. You can ask questions about the scans, and ReadYourLab’s explanations aim to support you in understanding your body and preparing questions for your clinician.
Specifications:
Your CT and MRI scans are evaluated by MedGemma 1.5 from Google Research. This is a specialized 4-billion-parameter medical AI model built on Gemma 3, with a medically tuned vision encoder (MedSigLIP) trained on de-identified medical imaging data. It reviews every slice of your scan as a complete 3D volume — just like a radiologist would.
- Full 3D volumetric analysis of CT and MRI DICOM series
- Understands MRI sequences: T1, T2, FLAIR, DWI, contrast-enhanced
- Trained on medical imaging datasets including MIMIC-CXR and ChestImaGenome
- 128K token context window for processing large scan series
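Before a 3D-aware model can review a scan as a complete volume, the individual DICOM slices must be ordered and stacked. The sketch below shows only that preprocessing step, using synthetic pixel data in place of real files; it is not ReadYourLab's actual pipeline. In practice each slice would be read with a DICOM library such as pydicom, taking the z position from the slice's `ImagePositionPatient` attribute.

```python
import numpy as np

def stack_series(slices):
    """Sort axial slices by z position and stack them into a
    (depth, height, width) volume for 3D analysis."""
    ordered = sorted(slices, key=lambda s: s[0])
    return np.stack([pixels for _, pixels in ordered])

# Three 2x2 slices given out of acquisition order; real code would read
# each slice from a DICOM file rather than synthesizing it.
series = [
    (10.0, np.full((2, 2), 2)),
    (0.0,  np.full((2, 2), 0)),
    (5.0,  np.full((2, 2), 1)),
]
volume = stack_series(series)
# volume.shape == (3, 2, 2), with slices ordered by z: 0.0, 5.0, 10.0
```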
Learn more
Gemma
Gemma is a family of lightweight, state-of-the-art open models built from the same research and technology used to create the Gemini models. Developed by Google DeepMind and other teams across Google, Gemma is inspired by Gemini, and the name reflects the Latin gemma, meaning “precious stone.” Accompanying our model weights, we’re also releasing tools to support developer innovation, foster collaboration, and guide the responsible use of Gemma models. Gemma models share technical and infrastructure components with Gemini, our largest and most capable AI model widely available today. This enables Gemma 2B and 7B to achieve best-in-class performance for their sizes compared to other open models, and Gemma models can run directly on a developer’s laptop or desktop computer. Notably, Gemma surpasses significantly larger models on key benchmarks while adhering to our rigorous standards for safe and responsible outputs.
Learn more
PaliGemma 2
PaliGemma 2, the next evolution in tunable vision-language models, builds upon the performant Gemma 2 models, adding the power of vision and making it easier than ever to fine-tune for exceptional performance. With PaliGemma 2, these models can see, understand, and interact with visual input, opening up a world of new possibilities. It offers scalable performance with multiple model sizes (3B, 10B, 28B parameters) and resolutions (224px, 448px, 896px). PaliGemma 2 generates detailed, contextually relevant captions for images, going beyond simple object identification to describe actions, emotions, and the overall narrative of the scene. Our research demonstrates leading performance in chemical formula recognition, music score recognition, spatial reasoning, and chest X-ray report generation, as detailed in the technical report. Upgrading to PaliGemma 2 is a breeze for existing PaliGemma users.
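The size and resolution combinations above map onto distinct published checkpoints. The helper below sketches how a checkpoint identifier might be selected programmatically; the `google/paligemma2-{size}-pt-{resolution}` naming pattern is an assumption based on the published pre-trained checkpoints and should be verified against the model hub before use.

```python
# Available PaliGemma 2 configurations per the announcement:
# three sizes (3B, 10B, 28B) at three input resolutions (224/448/896 px).
SIZES = ("3b", "10b", "28b")
RESOLUTIONS = (224, 448, 896)

def checkpoint_id(size: str, resolution: int) -> str:
    """Build a pre-trained checkpoint id for a size/resolution pair.
    NOTE: the naming pattern is an assumption; confirm exact ids
    on the model hub."""
    if size not in SIZES or resolution not in RESOLUTIONS:
        raise ValueError(f"unsupported combination: {size} @ {resolution}px")
    return f"google/paligemma2-{size}-pt-{resolution}"

model_id = checkpoint_id("3b", 224)
```

Higher resolutions cost more compute per image but help on tasks with fine visual detail, such as document or chest X-ray understanding, so the choice is a speed/accuracy trade-off.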
Learn more