Official inference library for Mistral models
Replace OpenAI GPT with another LLM in your app
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Large Language Model Text Generation Inference (HTTP usage sketch below)
High-performance inference server and API layer for text embedding models
Library for serving Transformers models on Amazon SageMaker
A high-throughput and memory-efficient inference and serving engine for LLMs (usage sketch below)
Optimizing inference proxy for LLMs
C++ library for high-performance inference on NVIDIA GPUs
FlashInfer: Kernel Library for LLM Serving
Bayesian inference with probabilistic programming (workflow sketch below)
Deep learning optimization library that makes distributed training and inference easy
A general-purpose probabilistic programming system
Port of Facebook's LLaMA model in C/C++
High-performance Bayesian inference engine based on reactive message passing
Lightweight, standalone C++ inference engine for Google's Gemma models
C#/.NET binding of llama.cpp, including LLaMA/GPT model inference
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
DoWhy is a Python library for causal inference (usage sketch below)
ONNX Runtime: cross-platform, high-performance ML inferencing and training accelerator (usage sketch below)
AlphaFold 3 inference pipeline
Standardized Serverless ML Inference Platform on Kubernetes
High-performance inference framework for large language models
MII makes low-latency and high-throughput inference possible
Unofficial Go bindings for the Hugging Face Inference API
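Usage sketches for a few of the entries above follow. First, Text Generation Inference exposes an HTTP API once a server is running. A minimal sketch, assuming a TGI server is already listening on localhost:8080; the prompt and generation parameters are illustrative choices:

```python
import requests

# Assumes a running TGI server (e.g. started from its Docker image)
# listening on localhost:8080; payload follows TGI's /generate API.
resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "What is deep learning?",
        "parameters": {"max_new_tokens": 64, "temperature": 0.7},
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["generated_text"])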
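For the vLLM engine above, offline batch inference takes a few lines. A minimal sketch, assuming `vllm` is installed and a GPU is available; the model id is an arbitrary example:

```python
from vllm import LLM, SamplingParams

# Load any Hugging Face causal LM by id (illustrative choice) and sample.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")
params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=128)

outputs = llm.generate(["Summarize paged attention in one sentence."], params)
for out in outputs:
    print(out.outputs[0].text)
```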
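The Bayesian probabilistic-programming entry follows a define-model-then-sample workflow. A minimal sketch using PyMC as a representative Python library (the original line may refer to a different project); the coin-flip data are made up:

```python
import pymc as pm

# Beta-Binomial coin-flip model: infer heads probability p from 57/100 heads.
with pm.Model():
    p = pm.Beta("p", alpha=1.0, beta=1.0)      # uniform prior on p
    pm.Binomial("heads", n=100, p=p, observed=57)
    idata = pm.sample(1000, tune=1000)          # NUTS sampling by default

print(float(idata.posterior["p"].mean()))       # posterior mean, ~0.57
```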
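DoWhy's model-identify-estimate workflow can be shown on synthetic data. A minimal sketch; the variable names, the generated dataset, and the linear-regression estimator are illustrative choices:

```python
import numpy as np
import pandas as pd
from dowhy import CausalModel

# Synthetic data: w confounds treatment t and outcome y; true effect of t is 2.
rng = np.random.default_rng(0)
w = rng.normal(size=1_000)
t = (w + rng.normal(size=1_000) > 0).astype(int)
y = 2 * t + w + rng.normal(size=1_000)
df = pd.DataFrame({"t": t, "y": y, "w": w})

model = CausalModel(data=df, treatment="t", outcome="y", common_causes=["w"])
estimand = model.identify_effect()
estimate = model.estimate_effect(estimand, method_name="backdoor.linear_regression")
print(estimate.value)  # close to the true effect of 2
```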
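ONNX Runtime inference follows a load-then-run pattern. A minimal sketch; "model.onnx" and the input shape are placeholders that must match your exported model:

```python
import numpy as np
import onnxruntime as ort

# Load an exported model on CPU; "model.onnx" is a placeholder path.
sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = sess.get_inputs()[0].name
x = np.random.rand(1, 3, 224, 224).astype(np.float32)  # must match model input

outputs = sess.run(None, {input_name: x})  # None = return all model outputs
print(outputs[0].shape)
```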