Diversity-driven optimization and large-model reasoning ability
Tongyi Deep Research, the Leading Open-source Deep Research Agent
Open Source Speech Language Model
Foundation model for image generation
Hunyuan Translation Model Version 1.5
Multimodal embedding and reranking models built on Qwen3-VL
LTX-Video Support for ComfyUI
Implementation of "MobileCLIP" (CVPR 2024)
High-resolution models for human tasks
Video understanding codebase from FAIR for reproducing video models
Tool for exploring and debugging transformer model behaviors
A Unified Framework for Text-to-3D and Image-to-3D Generation
Multimodal-Driven Architecture for Customized Video Generation
Personalize Any Characters with a Scalable Diffusion Transformer
GLM-4.6V/4.5V/4.1V-Thinking, towards versatile multimodal reasoning
Project Lyra: Open Generative 3D World Models
General-purpose image editing model that delivers high-fidelity results
Ling-V2 is a MoE LLM provided and open-sourced by InclusionAI
Inference script for Oasis 500M
Fast and Universal 3D reconstruction model for versatile tasks
4M: Massively Multimodal Masked Modeling
This repository contains the official implementation of FastVLM
ICLR 2024 Spotlight: curation/training code, metadata, and distribution
A PyTorch library for implementing flow matching algorithms