A graphical manager for your local ollama LLMs
Run local LLMs on any device; open source
A high-throughput and memory-efficient inference and serving engine
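This description matches vLLM; assuming that engine, a minimal sketch of its offline batched-generation API (the model name is just a small placeholder):

```python
# pip install vllm
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")  # placeholder; any Hugging Face causal LM works
params = SamplingParams(temperature=0.8, max_tokens=64)

# Prompts are batched and scheduled automatically for throughput
outputs = llm.generate(["The capital of France is"], params)
print(outputs[0].outputs[0].text)
```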
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
FlashInfer: Kernel Library for LLM Serving
Uncover insights, surface problems, monitor, and fine-tune your LLM
The official Python client for the Hugging Face Hub
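For reference, downloading a single file from the Hub with `huggingface_hub` looks roughly like this (repo and filename are illustrative):

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Fetch one file into the local cache and get its path back
config_path = hf_hub_download(repo_id="bert-base-uncased", filename="config.json")
print(config_path)
```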
Everything you need to build state-of-the-art foundation models
A set of Docker images for training and serving models in TensorFlow
Replace OpenAI GPT with another LLM in your app
Multilingual Automatic Speech Recognition with word-level timestamps
The easiest and laziest way to build multi-agent LLM applications
Operating LLMs in production
State-of-the-art Parameter-Efficient Fine-Tuning
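This matches the PEFT library; a minimal LoRA sketch, assuming `peft` plus `transformers` and a small placeholder model:

```python
# pip install peft transformers
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")  # placeholder model
lora = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # OPT attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora)  # freezes the base weights; only adapters train
model.print_trainable_parameters()
```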
An optimizing inference proxy for LLMs
Training and deploying machine learning models on Amazon SageMaker
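Assuming this refers to the SageMaker Python SDK, a training job launches roughly like so (the role ARN, script, and S3 URI are placeholders, and an AWS account is required):

```python
# pip install sagemaker
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                               # your training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role ARN
    instance_count=1,
    instance_type="ml.m5.xlarge",
    framework_version="2.2",
    py_version="py310",
)
estimator.fit({"training": "s3://my-bucket/train-data"})  # placeholder S3 URI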
Low-latency REST API for serving text embeddings
Standardized Serverless ML Inference Platform on Kubernetes
Single-cell analysis in Python
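This is Scanpy's tagline; a typical preprocessing sketch on its bundled demo data, assuming the `scanpy` package:

```python
# pip install scanpy
import scanpy as sc

adata = sc.datasets.pbmc3k()                  # public 3k-PBMC demo dataset
sc.pp.normalize_total(adata, target_sum=1e4)  # library-size normalization
sc.pp.log1p(adata)                            # log-transform
sc.tl.pca(adata)                              # principal components
print(adata)
```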
Library for serving Transformers models on Amazon SageMaker
MII makes low-latency and high-throughput inference possible
20+ high-performance LLMs with recipes to pretrain, finetune, and deploy at scale
GPU environment management and cluster orchestration
Uplift modeling and causal inference with machine learning algorithms
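This description matches CausalML; assuming that library, estimating an average treatment effect on its synthetic data looks roughly like this:

```python
# pip install causalml
from causalml.dataset import synthetic_data
from causalml.inference.meta import LRSRegressor

# Simulated outcome y, features X, binary treatment flag, true effect tau
y, X, treatment, tau, b, e = synthetic_data(mode=1, n=1000, p=5, sigma=1.0)

lr = LRSRegressor()                             # S-learner with linear regression
ate, lb, ub = lr.estimate_ate(X, treatment, y)  # point estimate plus bounds
print(ate, lb, ub)
```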
Simplifies the local serving of AI models from any source