High-performance neural network inference framework for mobile
Open source machine learning framework
ONNX Runtime: cross-platform, high performance ML inferencing
OpenVINO™ Toolkit repository
C++ library for high performance inference on NVIDIA GPUs
Deep Learning API and server in C++14 with support for Caffe and PyTorch
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
Toolkit for making machine learning and data analysis applications
MNN is a blazing fast, lightweight deep learning framework
Open standard for machine learning interoperability
High-level, high-performance dynamic language for technical computing
oneAPI Deep Neural Network Library (oneDNN)
PArallel Distributed Deep LEarning: Machine Learning Framework
Pre-trained Deep Learning models and demos
A GPU-accelerated library containing highly optimized building blocks
Geometric deep learning extension library for PyTorch
Our first fully AI generated deep learning system
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Ongoing research training transformer models at scale
Enabling PyTorch on Google TPU
Easy-to-use deep learning framework with 3 key features
Open deep learning compiler stack for CPU, GPU, etc.
Unity machine learning agents toolkit
A game theoretic approach to explain the output of ML models
A high-level machine learning and deep learning library for PHP