A library for accelerating Transformer models on NVIDIA GPUs
LM Studio Apple MLX engine
A real-time inference engine for temporal logic specifications
High-performance reactive message-passing-based Bayesian engine
A high-throughput and memory-efficient inference and serving engine
A 950-line, minimal, extensible LLM inference engine built from scratch
Jlama is a modern LLM inference engine for Java
A lightweight vLLM implementation built from scratch
A high-performance inference engine for AI models
Alibaba's high-performance LLM inference engine for diverse apps
Lightweight, standalone C++ inference engine for Google's Gemma models
High-performance inference framework for large language models
RGBD video generation model conditioned on camera input
Code for running inference and finetuning with SAM 3 model
Offline inference engine for art, real-time voice conversations
Mooncake is the serving platform for Kimi
Fast Multimodal LLM on Mobile Devices
Inference Llama 2 in one file of pure C
Fast inference engine for Transformer models
Pruna is a model optimization framework built for developers
Universal LLM Deployment Engine with ML Compilation
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
Parallax is a distributed model serving framework
Fast, flexible LLM inference
User-friendly AI Interface