Run Local LLMs on Any Device. Open-source and available for commercial use
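As a quick illustration, the project's Python bindings expose a small chat-style API. A minimal sketch, assuming the gpt4all package; the model file name below is one example and is downloaded on first use:

```python
# Minimal gpt4all sketch; the model file name is an example and is
# downloaded automatically on first use.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
with model.chat_session():
    print(model.generate("Name three uses of a local LLM.", max_tokens=128))
```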
A high-throughput and memory-efficient inference and serving engine for LLMs
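For offline batch inference, vLLM exposes an `LLM` class. A minimal sketch, assuming a CUDA-capable GPU; the model id is an arbitrary small example:

```python
# vLLM offline-inference sketch; the model id is an illustrative choice.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
params = SamplingParams(temperature=0.8, max_tokens=64)
for out in llm.generate(["The capital of France is"], params):
    print(out.outputs[0].text)
```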
Ready-to-use OCR with 80+ supported languages
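Typical usage loads the language models once, then reads images. A minimal sketch; `receipt.png` is a placeholder path:

```python
# EasyOCR sketch; 'receipt.png' is a placeholder image path.
import easyocr

reader = easyocr.Reader(['en'])           # downloads models on first run
for bbox, text, conf in reader.readtext('receipt.png'):
    print(f"{conf:.2f}  {text}")
```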
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
Uncover insights, surface problems, monitor, and fine-tune your LLM
Everything you need to build state-of-the-art foundation models
The official Python client for the Huggingface Hub
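A common task is pulling a single file from a repo on the Hub. A minimal sketch, using a public model repo as the example:

```python
# huggingface_hub sketch: fetch one file from a public repo into the local cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(path)  # local cache path of the downloaded file
```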
State-of-the-art Parameter-Efficient Fine-Tuning
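The usual PEFT pattern wraps a base transformer with an adapter config such as LoRA. A minimal sketch; the base model and hyperparameters below are illustrative choices:

```python
# PEFT LoRA sketch; gpt2 and the LoRA hyperparameters are example choices.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter layers are trainable
```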
A set of Docker images for training and serving models in TensorFlow, TensorFlow 2, PyTorch, and MXNet
Bring the notion of Model-as-a-Service to life
Replace OpenAI GPT with another LLM in your app
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
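Its high-level `pipeline` API covers the common serving case. A minimal sketch, assuming a GPU; the model id is an example fetched on first run:

```python
# LMDeploy pipeline sketch; the model id is an illustrative example.
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2-chat-7b")
responses = pipe(["What is inference serving?"])
print(responses[0].text)
```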
A Pythonic framework to simplify AI service building
GPU environment management and cluster orchestration
Operating LLMs in production
Library for OCR-related tasks powered by Deep Learning
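End-to-end OCR here is a pretrained detection-plus-recognition predictor applied to a document. A minimal sketch; `page.png` stands in for a scanned page:

```python
# docTR sketch; 'page.png' is a placeholder scanned page.
from doctr.io import DocumentFile
from doctr.models import ocr_predictor

model = ocr_predictor(pretrained=True)
doc = DocumentFile.from_images("page.png")
result = model(doc)
print(result.render())  # plain-text rendering of the recognized document
```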
The Triton Inference Server provides an optimized cloud and edge inferencing solution
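From the client side, inference is an HTTP or gRPC request against a running server. A minimal sketch; the server URL, model name, and tensor names are placeholders that must match the deployed model's config:

```python
# Triton HTTP client sketch; assumes a server on localhost:8000 and a model
# named 'my_model' with input 'INPUT0' and output 'OUTPUT0' (placeholders).
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
inp = httpclient.InferInput("INPUT0", [1, 16], "FP32")
inp.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))
result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```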
MII makes low-latency and high-throughput inference possible, powered by DeepSpeed
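A minimal non-persistent pipeline looks like the sketch below, assuming a GPU; the model id is an illustrative example:

```python
# DeepSpeed-MII pipeline sketch; the model id is an illustrative example.
import mii

pipe = mii.pipeline("mistralai/Mistral-7B-v0.1")
response = pipe(["DeepSpeed is"], max_new_tokens=64)
print(response)
```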
Optimizing inference proxy for LLMs
Uplift modeling and causal inference with machine learning algorithms
Multilingual Automatic Speech Recognition with word-level timestamps
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
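A representative workflow fits a double-ML estimator on outcome, treatment, features, and confounders, then queries per-unit effects. A minimal sketch on synthetic data, where the true effect of T on Y varies with X[:, 0]:

```python
# EconML sketch with synthetic data; the nuisance models are example choices.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from econml.dml import LinearDML

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))               # effect modifiers
W = rng.normal(size=(500, 2))               # confounders
T = X[:, 0] + rng.normal(size=500)          # continuous treatment
Y = T * X[:, 0] + W[:, 0] + rng.normal(size=500)

est = LinearDML(model_y=RandomForestRegressor(), model_t=RandomForestRegressor())
est.fit(Y, T, X=X, W=W)
print(est.effect(X[:5]))                    # heterogeneous effect estimates
```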
A library for accelerating Transformer models on NVIDIA GPUs
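The characteristic use is swapping in Transformer Engine modules and running the forward pass under an FP8 autocast. A minimal sketch of that pattern, assuming a CUDA GPU (FP8 execution needs Hopper- or Ada-class hardware):

```python
# Transformer Engine sketch: an FP8 forward/backward pass through a TE layer.
# Requires a CUDA GPU; FP8 execution needs Hopper- or Ada-class hardware.
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

model = te.Linear(768, 3072, bias=True)
inp = torch.randn(2048, 768, device="cuda")

fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)

out.sum().backward()
```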
A unified framework for scalable computing
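Ray's core abstraction turns plain functions into distributed tasks. A minimal sketch that runs on a single machine with no cluster setup:

```python
# Ray sketch: fan out a function as parallel remote tasks on a local cluster.
import ray

ray.init()  # starts a local Ray runtime

@ray.remote
def square(x):
    return x * x

futures = [square.remote(i) for i in range(8)]
print(ray.get(futures))  # [0, 1, 4, 9, 16, 25, 36, 49]
```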
The easiest and laziest way to build multi-agent LLM applications