Training and deploying machine learning models on Amazon SageMaker
Uplift modeling and causal inference with machine learning algorithms
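Uplift is the difference a treatment makes for a given segment: E[outcome | treated, x] − E[outcome | control, x]. A minimal sketch of the common two-model ("T-learner") approach, using per-bucket group means in place of the fitted ML regressors a real uplift library would train (all names here are illustrative, not any library's API):

```python
# Two-model ("T-learner") uplift sketch in pure Python.
# One "model" per arm; here each model is just a mean outcome
# per feature bucket instead of a fitted regressor.
from collections import defaultdict

def fit_group_means(rows):
    """rows: list of (bucket, outcome); returns bucket -> mean outcome."""
    sums, counts = defaultdict(float), defaultdict(int)
    for bucket, y in rows:
        sums[bucket] += y
        counts[bucket] += 1
    return {b: sums[b] / counts[b] for b in sums}

def uplift(treated_rows, control_rows, bucket):
    # Predicted effect of treating someone in `bucket`:
    # treated-arm prediction minus control-arm prediction.
    mt = fit_group_means(treated_rows)
    mc = fit_group_means(control_rows)
    return mt.get(bucket, 0.0) - mc.get(bucket, 0.0)

treated = [("young", 1), ("young", 1), ("old", 0), ("old", 1)]
control = [("young", 0), ("young", 1), ("old", 0), ("old", 0)]
print(uplift(treated, control, "young"))  # 1.0 - 0.5 = 0.5
```

The same structure carries over when the group means are replaced by any supervised model fit separately on the treated and control rows.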
Trainable models and NN optimization tools
Bring the notion of Model-as-a-Service to life
Lightweight Python library for adding real-time multi-object tracking to any detector
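The core of lightweight multi-object tracking is data association: match each new detection to the nearest existing track, or open a new track if nothing is close enough. A toy nearest-neighbour centroid tracker (illustrative only, not the library's actual API):

```python
# Toy nearest-neighbour centroid tracker: the simplest form of the
# detection-to-track association step used in real-time trackers.
import math

class CentroidTracker:
    def __init__(self, max_dist=50.0):
        self.max_dist = max_dist   # gating distance for a valid match
        self.tracks = {}           # track id -> last seen (x, y)
        self.next_id = 0

    def update(self, detections):
        """detections: list of (x, y); returns {(x, y): track_id}."""
        assigned = {}
        for (x, y) in detections:
            best, best_d = None, self.max_dist
            for tid, (tx, ty) in self.tracks.items():
                d = math.hypot(x - tx, y - ty)
                # greedy match to the closest unclaimed track
                if d < best_d and tid not in assigned.values():
                    best, best_d = tid, d
            if best is None:               # no track nearby: start one
                best = self.next_id
                self.next_id += 1
            assigned[(x, y)] = best
        self.tracks = {tid: pt for pt, tid in assigned.items()}
        return assigned

tracker = CentroidTracker()
tracker.update([(0, 0), (100, 100)])        # creates tracks 0 and 1
print(tracker.update([(2, 1), (101, 99)]))  # same objects keep ids 0 and 1
```

Production trackers add motion prediction (e.g. a Kalman filter) and globally optimal assignment instead of this greedy loop, but the matching idea is the same.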
A unified framework for scalable computing
Powering Amazon custom machine learning chips
Replace OpenAI GPT with another LLM in your app
Uncover insights, surface problems, monitor, and fine-tune your LLM
State-of-the-art Parameter-Efficient Fine-Tuning
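Parameter-efficient fine-tuning methods in the LoRA family freeze the pretrained weight W and learn only a small low-rank update, so the effective weight is W + BA with B (d×r) and A (r×d) trainable. A pure-Python sketch of that arithmetic (tiny matrices for illustration; a real library injects this into Transformer linear layers):

```python
# Low-rank adapter sketch: effective weight = W + B @ A,
# training 2*d*r values instead of the d*d values in W.
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def add(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))]
            for i in range(len(a))]

d = 4   # model dimension; frozen W is d x d = 16 values
r = 1   # adapter rank; B and A together hold 2*d*r = 8 trainable values
W = [[1.0 if i == j else 0.0 for j in range(d)] for i in range(d)]  # frozen
B = [[0.1] for _ in range(d)]        # d x r, trainable
A = [[0.2, 0.0, 0.0, 0.0]]           # r x d, trainable
W_eff = add(W, matmul(B, A))         # W + B A, used at inference time
print(W_eff[0][0])                   # 1.0 + 0.1 * 0.2 = 1.02
```

Because only B and A receive gradients, the savings grow quadratically with d while the rank r stays small.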
A library for accelerating Transformer models on NVIDIA GPUs
Integrate, train, and manage any AI model or API with your database
GPU environment management and cluster orchestration
Trainable, memory-efficient, and GPU-friendly PyTorch reproduction
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Superduper: Integrate AI models and machine learning workflows with your database
Probabilistic reasoning and statistical analysis in TensorFlow
Phi-3.5 for Mac: Locally-run Vision and Language Models
Libraries for applying sparsification recipes to neural networks
A high-performance ML model serving framework that offers dynamic batching
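Dynamic batching means the server collects queued requests until either a maximum batch size or a short timeout is reached, then runs them through the model together. A minimal stdlib sketch of that collection loop (illustrative only, not this framework's API; real servers run it per model on a background thread):

```python
# Dynamic batching sketch: drain up to max_batch queued requests,
# waiting at most `timeout` seconds overall before dispatching.
import queue
import time

def batcher(q, max_batch=4, timeout=0.01):
    batch = [q.get()]                       # block for the first request
    deadline = time.monotonic() + timeout   # batching window starts now
    while len(batch) < max_batch:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            break                           # window closed: ship the batch
        try:
            batch.append(q.get(timeout=remaining))
        except queue.Empty:
            break                           # nothing else arrived in time
    return batch

q = queue.Queue()
for request in [1, 2, 3, 4, 5, 6]:
    q.put(request)
print(batcher(q))   # full batch: [1, 2, 3, 4]
print(batcher(q))   # leftovers after the window closes: [5, 6]
```

The batch-size and timeout knobs trade latency against GPU utilization: a larger window amortizes per-call overhead but delays the first request in each batch.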
Library for OCR-related tasks powered by Deep Learning
Standardized Serverless ML Inference Platform on Kubernetes
Low-latency REST API for serving text embeddings
OpenMLDB is an open-source machine learning database
PyTorch domain library for recommendation systems