6 Integrations with NVIDIA Jetson
View a list of NVIDIA Jetson integrations and software that integrates with NVIDIA Jetson below. Compare the best NVIDIA Jetson integrations by features, ratings, user reviews, and pricing. Here are the current NVIDIA Jetson integrations in 2026:
1
NVIDIA TensorRT
NVIDIA
NVIDIA TensorRT is an ecosystem of APIs for high-performance deep learning inference, encompassing an inference runtime and model optimizations that deliver low latency and high throughput for production applications. Built on the CUDA parallel programming model, TensorRT optimizes neural network models trained on all major frameworks, calibrating them for lower precision with high accuracy, and deploying them across hyperscale data centers, workstations, laptops, and edge devices. It employs techniques such as quantization, layer and tensor fusion, and kernel tuning on all types of NVIDIA GPUs, from edge devices to PCs to data centers. The ecosystem includes TensorRT-LLM, an open source library that accelerates and optimizes inference performance of recent large language models on the NVIDIA AI platform, enabling developers to experiment with new LLMs for high performance and quick customization through a simplified Python API. Starting Price: Free
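To illustrate the precision calibration mentioned above, here is a minimal pure-Python sketch of symmetric int8 post-training quantization, the kind of lower-precision conversion TensorRT automates. The function names (`calibrate_scale`, `quantize`, `dequantize`) are illustrative and are not TensorRT APIs.

```python
def calibrate_scale(values, num_bits=8):
    """Pick a scale so the largest observed magnitude maps to the int range."""
    max_abs = max(abs(v) for v in values)
    qmax = 2 ** (num_bits - 1) - 1  # 127 for int8
    return max_abs / qmax if max_abs else 1.0

def quantize(values, scale, qmax=127):
    """Round each value to the nearest representable int8 step."""
    return [max(-qmax, min(qmax, round(v / scale))) for v in values]

def dequantize(qvalues, scale):
    """Map the integer codes back to approximate real values."""
    return [q * scale for q in qvalues]

weights = [0.02, -1.27, 0.5, 0.9]
scale = calibrate_scale(weights)       # calibration pass over real data
q = quantize(weights, scale)           # int8 codes, e.g. [2, -127, 50, 90]
restored = dequantize(q, scale)        # close to the original weights
```

In a real deployment the calibration pass runs over representative activation data rather than a hand-picked list, and the engine fuses the scale factors into the kernels instead of dequantizing explicitly.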
2
Flower
Flower
Flower is an open source federated learning framework designed to simplify the development and deployment of machine learning models across decentralized data sources. It enables training on data located on devices or servers without transferring the data itself, thereby enhancing privacy and reducing bandwidth usage. Flower supports a wide range of machine learning frameworks, including PyTorch, TensorFlow, Hugging Face Transformers, scikit-learn, and XGBoost, and is compatible with various platforms and cloud services like AWS, GCP, and Azure. It offers flexibility through customizable strategies and supports both horizontal and vertical federated learning scenarios. Flower's architecture allows for scalable experiments, with the capability to handle workloads involving tens of millions of clients. It also provides built-in support for privacy-preserving techniques like differential privacy and secure aggregation. Starting Price: Free
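The core idea of training without moving the data can be sketched with federated averaging (FedAvg), the default aggregation strategy in frameworks like Flower. This is a pure-Python illustration, not the Flower API: each client takes a gradient step on its private data and only the model weights are averaged, weighted by local dataset size.

```python
def local_update(weights, data, lr=0.1):
    """One gradient-descent step on the model y = w*x with squared loss."""
    w = weights[0]
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    return [w - lr * grad]

def fed_avg(client_weights, client_sizes):
    """Server-side weighted average of client models; raw data never moves."""
    total = sum(client_sizes)
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(len(client_weights[0]))
    ]

# Two clients hold private samples of the relationship y = 2x.
clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]
global_model = [0.0]
for _ in range(50):  # federated rounds
    updates = [local_update(global_model, d) for d in clients]
    global_model = fed_avg(updates, [len(d) for d in clients])
# global_model[0] converges to 2.0 without either client sharing its data
```

In Flower proper, the `local_update` role is played by client-side code in your framework of choice, and the server applies a pluggable strategy such as FedAvg.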
3
Photon
Moondream
Photon is Moondream’s official high-performance inference engine, designed to run vision-language models efficiently across cloud, desktop, and edge environments while delivering real-time performance for production AI systems. It is built as a custom inference layer tightly integrated with the Moondream model architecture, using optimized scheduling, native image processing, and purpose-built CUDA kernels to maximize speed and efficiency. This co-designed approach allows Photon to significantly reduce latency compared to traditional VLM setups, enabling responsive interactions on edge devices and real-time throughput on server-grade hardware. It supports deployment across a wide range of NVIDIA GPUs, from embedded systems like Jetson devices to high-end multi-GPU servers, making it adaptable for diverse operational needs. It includes production-ready features such as automatic batching, prefix caching, and memory-efficient attention mechanisms. Starting Price: $300 per month
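As a rough illustration of the automatic batching mentioned above, the sketch below drains a request queue in fixed-size groups so that many requests share one model invocation. `run_model` is a stand-in for a real inference call; nothing here is a Moondream or Photon API.

```python
from collections import deque

def run_model(batch):
    """Stand-in for one batched GPU inference call."""
    return [f"result:{req}" for req in batch]

def serve(requests, max_batch_size=4):
    """Drain a queue of pending requests in batches to amortize GPU calls."""
    queue = deque(requests)
    results, gpu_calls = [], 0
    while queue:
        batch = [queue.popleft() for _ in range(min(max_batch_size, len(queue)))]
        results.extend(run_model(batch))
        gpu_calls += 1
    return results, gpu_calls

# 10 requests served in 3 GPU calls (4 + 4 + 2) instead of 10.
out, calls = serve([f"img{i}" for i in range(10)])
```

Production engines batch dynamically under a latency budget rather than waiting for a fixed batch size, but the amortization principle is the same.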
4
CUDA
NVIDIA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs). With CUDA, developers are able to dramatically speed up computing applications by harnessing the power of GPUs. In GPU-accelerated applications, the sequential part of the workload runs on the CPU – which is optimized for single-threaded performance – while the compute intensive portion of the application runs on thousands of GPU cores in parallel. When using CUDA, developers program in popular languages such as C, C++, Fortran, Python and MATLAB and express parallelism through extensions in the form of a few basic keywords. The CUDA Toolkit from NVIDIA provides everything you need to develop GPU-accelerated applications. The CUDA Toolkit includes GPU-accelerated libraries, a compiler, development tools and the CUDA runtime. Starting Price: Free
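The execution model described above can be sketched in pure Python: a "kernel" is a function evaluated at every thread index in a launch grid. Real CUDA expresses this in C/C++ with a `__global__` kernel and a `<<<blocks, threads>>>` launch; this sequential simulation is illustrative only.

```python
def saxpy_kernel(i, a, x, y, out):
    """Per-thread kernel body: out[i] = a * x[i] + y[i]."""
    if i < len(out):  # CUDA kernels guard against threads beyond the data
        out[i] = a * x[i] + y[i]

def launch(kernel, grid_size, *args):
    """Simulate launching grid_size threads; a GPU runs these in parallel."""
    for i in range(grid_size):
        kernel(i, *args)

n = 5
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [10.0] * n
out = [0.0] * n
launch(saxpy_kernel, 8, 2.0, x, y, out)  # grid larger than n, hence the guard
```

On a GPU the loop in `launch` disappears: each index becomes a hardware thread, which is why the kernel body must be independent per index and bounds-checked.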
5
NVIDIA Metropolis
NVIDIA
NVIDIA Metropolis is an application framework, set of developer tools, and partner ecosystem that brings visual data and AI together to improve operational efficiency and safety across a broad range of industries. It helps make sense of the flood of data created by trillions of sensors for frictionless retail, streamlined inventory management, traffic engineering in smart cities, optical inspection on factory floors, patient care in healthcare facilities, and more. Businesses can take advantage of this technology and the extensive Metropolis developer ecosystem to create, deploy, and scale AI and IoT applications from the edge to the cloud, whether maintaining and improving city infrastructure, parking spaces, buildings, and public services, or improving industrial inspection, increasing productivity, and reducing waste on manufacturing lines.
6
NVIDIA DeepStream SDK
NVIDIA
NVIDIA's DeepStream SDK is a comprehensive streaming analytics toolkit based on GStreamer, designed for AI-based multi-sensor processing, including video, audio, and image understanding. It enables developers to create stream-processing pipelines that incorporate neural networks and complex tasks like tracking, video encoding/decoding, and rendering, facilitating real-time analytics on various data types. DeepStream is integral to NVIDIA Metropolis, a platform for building end-to-end services that transform pixel and sensor data into actionable insights. The SDK offers a powerful and flexible environment suitable for a wide range of industries, supporting multiple programming options such as C/C++, Python, and Graph Composer's intuitive UI. It allows for real-time insights by understanding rich, multi-modal sensor data at the edge and supports managed AI services through deployment in cloud-native containers orchestrated with Kubernetes.
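The stream-processing pipelines described above can be illustrated conceptually with chained generators standing in for GStreamer elements: frames flow through decode, inference, and tracking stages one at a time. The stage names and dict keys here are illustrative, not DeepStream APIs.

```python
def decode(source):
    """Stage 1: turn raw input into frame records."""
    for raw in source:
        yield {"frame": raw}

def infer(frames):
    """Stage 2: attach detections (here, a trivial label filter)."""
    for f in frames:
        f["detections"] = [obj for obj in f["frame"] if obj == "car"]
        yield f

def track(frames, next_id=0):
    """Stage 3: assign persistent IDs to detections across frames."""
    for f in frames:
        f["track_ids"] = list(range(next_id, next_id + len(f["detections"])))
        next_id += len(f["detections"])
        yield f

# Two "video frames" flow through the pipeline lazily, one at a time.
stream = [["car", "tree"], ["car", "car"]]
results = list(track(infer(decode(stream))))
```

In DeepStream the stages are hardware-accelerated GStreamer plugins (decoders, TensorRT inference, trackers) wired together in the same linear fashion, and the pipeline runs on live multi-sensor input rather than a Python list.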