Qwen2.5-VL is the multimodal large language model series
Official inference repo for FLUX.1 models
Ultra-Efficient LLMs on End Devices
INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model
A Pragmatic VLA Foundation Model
DeepSeek Coder: Let the Code Write Itself
GLM-4.5: Open-source LLM for intelligent agents by Z.ai
Chinese and English multimodal conversational language model
State-of-the-art Image & Video CLIP, Multimodal Large Language Models
Open-source large language model family from Tencent Hunyuan
Large language model & vision language model based on Linear Attention
Ling is a MoE LLM provided and open-sourced by InclusionAI
A series of math-specific large language models built on the Qwen2 series
NVIDIA Isaac GR00T N1.5 is the world's first open foundation model
A state-of-the-art open visual language model
Official inference repo for FLUX.2 models
tiktoken is a fast BPE tokeniser for use with OpenAI's models
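tiktoken implements byte-level BPE (its core is written in Rust); the toy sketch below illustrates only the merge idea behind BPE tokenisers in general, not tiktoken's actual algorithm, byte handling, or vocabularies. `bpe_train` and its greedy most-frequent-pair rule are illustrative assumptions.

```python
from collections import Counter

def bpe_train(text: str, num_merges: int):
    """Toy BPE: repeatedly merge the most frequent adjacent token pair.

    A minimal sketch of the BPE idea only; real tokenisers such as
    tiktoken work on bytes, use precomputed merge ranks, and are far
    more efficient.
    """
    tokens = list(text)          # start from individual characters
    merges = []                  # learned merge rules, in order
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        best, count = pairs.most_common(1)[0]
        if count < 2:            # nothing worth merging
            break
        merges.append(best)
        merged, i = [], 0
        while i < len(tokens):   # rewrite the sequence with the new merge
            if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == best:
                merged.append(tokens[i] + tokens[i + 1])
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

# Classic example: "aaabdaaabac" with 3 merges
tokens, merges = bpe_train("aaabdaaabac", 3)
# tokens → ['aaab', 'd', 'aaab', 'a', 'c']
```

Each merge adds one entry to the vocabulary; applying the learned merge list in order is what turns raw text into token ids at inference time.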
A Family of Open Sourced Music Foundation Models
Contexts Optical Compression
Repo of Qwen2-Audio chat & pretrained large audio language model
Ring is a reasoning MoE LLM provided and open-sourced by InclusionAI
Research code artifacts for Code World Model (CWM)
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Diversity-driven optimization and large-model reasoning ability
CLIP: predict the most relevant text snippet given an image
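At inference time, "predict the most relevant text snippet given an image" reduces to comparing embeddings: L2-normalise the image and candidate text vectors, take cosine similarities, and softmax over the candidates. The sketch below shows only that scoring step with placeholder vectors; `clip_scores` and the temperature value are illustrative assumptions, not CLIP's API or trained weights.

```python
import numpy as np

def clip_scores(image_emb: np.ndarray, text_embs: np.ndarray,
                temperature: float = 0.07) -> np.ndarray:
    """CLIP-style scoring sketch: cosine similarity + softmax.

    image_emb: (d,) embedding of one image.
    text_embs: (n, d) embeddings of n candidate text snippets.
    Returns a probability over the n candidates. Embeddings here are
    placeholders; a real pipeline gets them from CLIP's encoders.
    """
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    logits = txt @ img / temperature          # scaled cosine similarities
    e = np.exp(logits - logits.max())         # numerically stable softmax
    return e / e.sum()

# Toy vectors: the first "text" points nearly the same way as the image,
# so it should win.
image = np.array([1.0, 0.0, 0.0])
texts = np.array([[0.9, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
probs = clip_scores(image, texts)
# probs.argmax() → 0
```

The temperature (CLIP learns a logit scale during training; 0.07 is its initial value) controls how sharply the softmax concentrates on the best match.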