High-performance neural network inference framework for mobile
Open source machine learning framework
ONNX Runtime: cross-platform, high performance ML inferencing
C++ library for high performance inference on NVIDIA GPUs
OpenVINO™ Toolkit repository
Deep Learning API and Server in C++14 with support for Caffe and PyTorch
MNN is a blazing fast, lightweight deep learning framework
Toolkit for making machine learning and data analysis applications
PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
Open standard for machine learning interoperability
High-level, high-performance dynamic language for technical computing
oneAPI Deep Neural Network Library (oneDNN)
PArallel Distributed Deep LEarning: Machine Learning Framework
Pre-trained Deep Learning models and demos
A GPU-accelerated library containing highly optimized building blocks
Our first fully AI-generated deep learning system
The Triton Inference Server provides an optimized cloud and edge inferencing solution
Ongoing research training transformer models at scale
Open deep learning compiler stack for CPU, GPU, etc.
Enabling PyTorch on Google TPU
Easy-to-use deep learning framework with 3 key features
A high-level machine learning and deep learning library for PHP
Jittor is a high-performance deep learning framework
Unity machine learning agents toolkit
Geometric deep learning extension library for PyTorch