Run Local LLMs on Any Device. Open source and available for commercial use
A high-throughput and memory-efficient inference and serving engine for LLMs (sketch below)
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
LMDeploy is a toolkit for compressing, deploying, and serving LLMs (sketch below)
The official Python client for the Hugging Face Hub (sketch below)
A library for accelerating Transformer models on NVIDIA GPUs
Uncover insights, surface problems, monitor, and fine-tune your LLM
Operating LLMs in production
Everything you need to build state-of-the-art foundation models
20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale
Multilingual Automatic Speech Recognition with word-level timestamps
State-of-the-art Parameter-Efficient Fine-Tuning (sketch below)
Optimizing inference proxy for LLMs
Large Language Model Text Generation Inference (sketch below)
MII makes low-latency and high-throughput inference possible
The easiest and laziest way to build multi-agent LLM applications
Bring the notion of Model-as-a-Service to life
Standardized Serverless ML Inference Platform on Kubernetes
Replace OpenAI GPT with another LLM in your app
Python Package for ML-Based Heterogeneous Treatment Effects Estimation (sketch below)
Phi-3.5 for Mac: Locally-run Vision and Language Models
Official inference library for Mistral models
Low-latency REST API for serving text embeddings (sketch below)
Training and deploying machine learning models on Amazon SageMaker (sketch below)
Data manipulation and transformation for audio signal processing (sketch below)
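A minimal sketch of offline batch generation with vLLM's Python API; the model name facebook/opt-125m is chosen only for illustration, and a CUDA-capable GPU is assumed.

```python
from vllm import LLM, SamplingParams

# Load the model and allocate the KV cache (model name is illustrative).
llm = LLM(model="facebook/opt-125m")
sampling = SamplingParams(temperature=0.8, max_tokens=32)

# Generate completions for a batch of prompts in one call.
outputs = llm.generate(["The capital of France is"], sampling)
for out in outputs:
    print(out.prompt, out.outputs[0].text)
```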
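A sketch of LMDeploy's high-level pipeline interface, assuming a recent release; the model name is illustrative and any model LMDeploy supports could be substituted.

```python
from lmdeploy import pipeline

# Assumption: a recent LMDeploy version with the pipeline API;
# the model name below is only an example.
pipe = pipeline("internlm/internlm2-chat-7b")
responses = pipe(["Hi, please introduce yourself"])
print(responses)
```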
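A sketch of two common huggingface_hub calls: downloading a single file from a repo and searching the Hub. The repo and query strings are illustrative.

```python
from huggingface_hub import hf_hub_download, list_models

# Download one file from a Hub repo; the result is cached locally.
config_path = hf_hub_download(repo_id="gpt2", filename="config.json")
print(config_path)

# Search the Hub for models matching a query.
for m in list_models(search="whisper", limit=5):
    print(m.id)
```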
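A sketch of wrapping a base model with LoRA adapters via PEFT; gpt2 and its c_attn target module are illustrative, since target modules vary by architecture.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base model and LoRA config (gpt2 and c_attn are illustrative choices).
base = AutoModelForCausalLM.from_pretrained("gpt2")
lora = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["c_attn"], task_type="CAUSAL_LM",
)

# Only the adapter weights are trainable; the base model stays frozen.
model = get_peft_model(base, lora)
model.print_trainable_parameters()
```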
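A sketch of calling Text Generation Inference over its REST API. It assumes a server is already running and reachable at localhost:8080; the port is an assumption taken from typical Docker setups.

```python
import requests

# POST to TGI's /generate route; port 8080 is an assumed mapping.
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is deep learning?",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
    timeout=60,
)
print(resp.json()["generated_text"])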
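A sketch of estimating heterogeneous treatment effects with EconML's LinearDML; the synthetic data-generating process is invented purely for illustration.

```python
import numpy as np
from econml.dml import LinearDML

# Toy data: outcome Y, binary treatment T with true effect 2.0, features X.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
T = rng.binomial(1, 0.5, size=1000)
Y = 2.0 * T + X[:, 0] + rng.normal(size=1000)

# Double machine learning with a linear final stage.
est = LinearDML(discrete_treatment=True)
est.fit(Y, T, X=X)
print(est.effect(X[:5]))  # per-sample treatment effect estimates
```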
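A sketch of querying a low-latency embeddings server over REST, assuming it exposes an OpenAI-compatible /embeddings route; the host, port, and model name are assumptions, so check the server's docs for the exact path and payload.

```python
import requests

# Assumption: the server listens on localhost:7997 and accepts the
# OpenAI-style embeddings payload; the model name is illustrative.
resp = requests.post(
    "http://localhost:7997/embeddings",
    json={"model": "BAAI/bge-small-en-v1.5", "input": ["What is DNA?"]},
    timeout=30,
)
print(len(resp.json()["data"][0]["embedding"]))  # embedding dimensionality
```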
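A sketch of the SageMaker Python SDK's train-then-deploy flow; the image URI, IAM role, and S3 path are placeholders you must supply, and AWS credentials are assumed to be configured.

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()

# Placeholders: supply a real training image, execution role, and bucket.
estimator = Estimator(
    image_uri="<training-image-uri>",
    role="<execution-role-arn>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Launch a training job, then deploy the resulting model to an endpoint.
estimator.fit({"train": "s3://<bucket>/train"})
predictor = estimator.deploy(initial_instance_count=1,
                             instance_type="ml.m5.xlarge")
```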
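A sketch of loading and resampling a waveform with torchaudio; the file path is illustrative.

```python
import torchaudio

# Load a waveform (channels x samples) and its sample rate.
waveform, sample_rate = torchaudio.load("speech.wav")

# Resample to 16 kHz, a common rate for speech models.
resampler = torchaudio.transforms.Resample(orig_freq=sample_rate,
                                           new_freq=16000)
waveform_16k = resampler(waveform)
print(waveform_16k.shape)
```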