NVIDIA Isaac GR00T N1.5 is the world's first open foundation model for generalized humanoid robot reasoning and skills
An AI-powered security review GitHub Action using Claude
Language modeling in a sentence representation space
GLM-4 series: Open Multilingual Multimodal Chat LMs
Qwen2.5-VL is a multimodal large language model series
A GPT-4o Level MLLM for Vision, Speech and Multimodal Live Streaming
Collection of Gemma 3 variants that are trained for performance
ICLR 2024 Spotlight: curation/training code, metadata, distribution
Memory-efficient and performant finetuning of Mistral's models
HY-Motion model for 3D character animation generation
Unified Multimodal Understanding and Generation Models
PyTorch code and models for the DINOv2 self-supervised learning method
Open-source repository for the Pokee Deep Research model
Tooling for the Common Objects In 3D dataset
Renderer for the harmony response format to be used with gpt-oss
LTX-Video Support for ComfyUI
MapAnything: Universal Feed-Forward Metric 3D Reconstruction
Large Multimodal Models for Video Understanding and Editing
Open Source Speech Language Model
Implementation of "MobileCLIP" (CVPR 2024)
CLIP: predict the most relevant text snippet given an image
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Pretrained time-series foundation model developed by Google Research
Ling-V2 is a MoE LLM developed and open-sourced by InclusionAI
State-of-the-art Image & Video CLIP, Multimodal Large Language Models