Run local LLMs on any device; open-source
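A minimal sketch of local generation through the Python bindings, assuming the `gpt4all` package is installed; the model filename is illustrative and is downloaded on first use:

```python
from gpt4all import GPT4All

# Model name is illustrative; any GGUF model from the GPT4All catalog works.
model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # fetched on first use
with model.chat_session():
    print(model.generate("Why run an LLM locally?", max_tokens=128))
```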
Large Language Model Text Generation Inference
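A hedged example of querying a running server over its REST API, assuming a TGI container is already serving a model on localhost:8080:

```python
import requests

# Assumes a TGI container is already serving a model on localhost:8080.
resp = requests.post(
    "http://localhost:8080/generate",
    json={"inputs": "What is deep learning?", "parameters": {"max_new_tokens": 64}},
    timeout=60,
)
print(resp.json()["generated_text"])
```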
Operating LLMs in production
Phi-3.5 for Mac: Locally-run Vision and Language Models
A high-throughput and memory-efficient inference and serving engine
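A minimal offline-inference sketch with vLLM's Python API; the model id is a small placeholder chosen for illustration:

```python
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # small model for illustration
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
outputs = llm.generate(["The key idea behind high-throughput serving is"], params)
print(outputs[0].outputs[0].text)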
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
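A short sketch of LMDeploy's pipeline API, assuming the model id (pulled from the Hugging Face Hub) is acceptable as an example:

```python
from lmdeploy import pipeline

# Model id is illustrative; LMDeploy downloads it from the Hugging Face Hub.
pipe = pipeline("internlm/internlm2-chat-1_8b")
responses = pipe(["Introduce yourself in one sentence."])
print(responses[0].text)
```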
Sparsity-aware deep learning inference runtime for CPUs
Ready-to-use OCR with 80+ supported languages
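A minimal EasyOCR sketch; the image path is hypothetical, and the language models download on first use:

```python
import easyocr

reader = easyocr.Reader(["en", "fr"])  # detection/recognition models download on first use
for bbox, text, confidence in reader.readtext("street_sign.jpg"):  # hypothetical image path
    print(f"{confidence:.2f}  {text}")
```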
OpenAI-style API for open large language models
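Because the server speaks the OpenAI wire format, the standard `openai` client can be pointed at it; the base URL and model name below are assumptions:

```python
from openai import OpenAI

# Point the standard OpenAI client at the locally hosted server (URL/model are placeholders).
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="qwen-7b-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```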
Neural Network Compression Framework for enhanced OpenVINO inference
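A sketch of NNCF's post-training quantization flow on a PyTorch model, assuming a torchvision model and synthetic calibration data stand in for real ones:

```python
import nncf
import torch
from torchvision import datasets, models, transforms

model = models.resnet18(weights="DEFAULT").eval()
loader = torch.utils.data.DataLoader(
    datasets.FakeData(size=64, transform=transforms.ToTensor()), batch_size=8
)

# The transform function maps a dataloader batch to the model's input tensor.
calibration = nncf.Dataset(loader, lambda batch: batch[0])
quantized = nncf.quantize(model, calibration)  # 8-bit post-training quantization
```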
Replace OpenAI GPT with another LLM in your app
Efficient few-shot learning with Sentence Transformers
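A small few-shot sketch with SetFit, assuming a recent `setfit` release (the `Trainer` API) and a toy four-example dataset:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer

# Toy labeled data: a handful of examples is the point of few-shot learning.
train_ds = Dataset.from_dict({
    "text": ["great film", "boring and slow", "loved every minute", "a waste of time"],
    "label": [1, 0, 1, 0],
})
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")
Trainer(model=model, train_dataset=train_ds).train()
print(model.predict(["a wonderful surprise"]))
```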
FlashInfer: Kernel Library for LLM Serving
State-of-the-art Parameter-Efficient Fine-Tuning
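A minimal LoRA sketch with PEFT; the base model and target modules are illustrative choices for an OPT-style architecture:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small LoRA adapters are trainable
```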
A high-performance ML model serving framework that offers dynamic batching
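A skeletal worker showing the dynamic-batching idea, assuming the `mosec` framework named by this tagline; the echo logic stands in for a real model:

```python
from mosec import Server, Worker

class Inference(Worker):
    # With max_batch_size > 1, requests are grouped and delivered as a list.
    def forward(self, data: list) -> list:
        return [{"echo": d} for d in data]

if __name__ == "__main__":
    server = Server()
    server.append_worker(Inference, num=1, max_batch_size=8)
    server.run()
```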
PyTorch library of curated Transformer models and their components
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
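A hedged sketch of per-request adapter routing over the REST API, assuming a LoRAX server on localhost:8080; the adapter id is a placeholder:

```python
import requests

# Assumes a LoRAX server on localhost:8080; the adapter id is a placeholder.
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "Summarize: adapters are swapped per request.",
        "parameters": {"max_new_tokens": 64, "adapter_id": "some-org/my-lora-adapter"},
    },
    timeout=60,
)
print(resp.json()["generated_text"])
```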
Libraries for applying sparsification recipes to neural networks
DoWhy is a Python library for causal inference
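A self-contained DoWhy sketch on synthetic data, where a confounder `w` drives both treatment and outcome and the true effect is 2:

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Synthetic data: w confounds treatment t and outcome y; the true effect of t is 2.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)
t = (w + rng.normal(size=1000) > 0).astype(int)
y = 2 * t + w + rng.normal(size=1000)
df = pd.DataFrame({"w": w, "t": t, "y": y})

model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["w"])
estimand = model.identify_effect(proceed_when_unidentifiable=True)
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)  # ~2 once w is adjusted for
```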
LLM training code for MosaicML foundation models
Optimizing inference proxy for LLMs
The easiest and laziest way to build multi-agent LLM applications
Low-latency REST API for serving text embeddings
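A hedged client-side sketch, assuming the server exposes an OpenAI-compatible `/embeddings` route; the port and model name are placeholders:

```python
import requests

# URL and model are placeholders; assumes an OpenAI-compatible /embeddings route.
resp = requests.post(
    "http://localhost:7997/embeddings",
    json={"model": "BAAI/bge-small-en-v1.5", "input": ["embed this sentence"]},
    timeout=30,
)
print(len(resp.json()["data"][0]["embedding"]))  # embedding dimensionality
```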
Bring the notion of Model-as-a-Service to life
20+ high-performance LLMs with recipes to pretrain and finetune at scale
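A short sketch of LitGPT's Python API, assuming a recent `litgpt` release; the checkpoint id is illustrative and is downloaded and converted on first use:

```python
from litgpt import LLM

# Checkpoint id is illustrative; weights are downloaded/converted on first use.
llm = LLM.load("microsoft/phi-2")
print(llm.generate("What are the benefits of pretraining at scale?", max_new_tokens=64))
```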