LM Studio Apple MLX engine
A real-time inference engine for temporal logic specifications
A 950-line, minimal, extensible LLM inference engine built from scratch
A high-performance inference engine for AI models
A lightweight vLLM implementation built from scratch
Alibaba's high-performance LLM inference engine for diverse applications
High-performance inference framework for large language models
RGBD video generation model conditioned on camera input
Code for running inference and finetuning with SAM 3 model
Offline inference engine for art, real-time voice conversations
Fast Multimodal LLM on Mobile Devices
Universal LLM Deployment Engine with ML Compilation
Mooncake is the serving platform for Kimi
LightLLM is a Python-based LLM (Large Language Model) inference and serving framework
User-friendly AI Interface
WebAssembly binding for llama.cpp - Enabling on-browser LLM inference
Parallax is a distributed model serving framework
Run a 1-billion-parameter LLM on a $10 board with 256 MB of RAM
Extensible workflow development framework
Fully private LLM chatbot that runs entirely in the browser
Multi-Agent daTa geneRation Infra and eXperimentation framework
Minimalist web-searching platform with an AI assistant
Superduper: Integrate AI models and machine learning workflows
Running large language models on a single GPU
Masks sensitive data and secrets before they reach AI