A Pythonic framework to simplify AI service building
A unified framework for scalable computing
Ready-to-use OCR with 80+ supported languages
Library for OCR-related tasks powered by Deep Learning
Single-cell analysis in Python
GPU environment management and cluster orchestration
Large Language Model Text Generation Inference
Run local LLMs on any device. Open-source
LLM training code for MosaicML foundation models
PyTorch library of curated Transformer models and their components
Powering Amazon custom machine learning chips
A library for accelerating Transformer models on NVIDIA GPUs
Bring the notion of Model-as-a-Service to life
State-of-the-art diffusion models for image and audio generation
DoWhy is a Python library for causal inference
A high-performance ML model serving framework offering dynamic batching
OpenAI-style API for open large language models
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Trainable models and NN optimization tools
Uplift modeling and causal inference with machine learning algorithms
Python Package for ML-Based Heterogeneous Treatment Effects Estimation
Unified Model Serving Framework
Replace OpenAI GPT with another LLM in your app
Build your chatbot within minutes on your favorite device
Library for serving Transformers models on Amazon SageMaker