Browse free open source LLM Inference tools and projects for Windows below. Listings can be filtered by OS, license, language, programming language, and project status.
Port of OpenAI's Whisper model in C/C++
Port of Facebook's LLaMA model in C/C++
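For a sense of how such a port is typically driven from Python, here is a minimal local-inference sketch using the separate llama-cpp-python bindings; the GGUF model path, context size, and prompt are placeholders, not part of this listing.

    # Minimal sketch using the llama-cpp-python bindings (a separate project that
    # wraps the C/C++ library). The model path and generation settings are placeholders.
    from llama_cpp import Llama

    llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf", n_ctx=2048)
    out = llm("Q: Name the planets in the solar system. A:", max_tokens=64, stop=["Q:"])
    print(out["choices"][0]["text"])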
Run Local LLMs on Any Device. Open-source and available for commercial use
User-friendly AI Interface
ONNX Runtime: cross-platform, high-performance ML inferencing
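A minimal inference sketch with the onnxruntime Python package; the model file name and the (1, 3, 224, 224) input shape are placeholders for an actual exported model.

    # Minimal ONNX Runtime inference sketch on CPU.
    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
    input_name = session.get_inputs()[0].name
    dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)  # placeholder input
    outputs = session.run(None, {input_name: dummy})
    print(outputs[0].shape)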
OpenVINO™ Toolkit repository
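A minimal sketch of compiling and running a model with the OpenVINO Python API; the IR file name and input shape are placeholders.

    # Minimal OpenVINO inference sketch; "model.xml" and the input shape are placeholders.
    import numpy as np
    import openvino as ov

    core = ov.Core()
    compiled = core.compile_model("model.xml", "CPU")
    dummy = np.zeros((1, 3, 224, 224), dtype=np.float32)
    results = compiled([dummy])
    print(results[compiled.output(0)].shape)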
LMDeploy is a toolkit for compressing, deploying, and serving LLMs
A high-throughput and memory-efficient inference and serving engine for LLMs
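Assuming this entry refers to vLLM (whose project description this is), a minimal offline-generation sketch follows; the model name, prompt, and sampling settings are placeholders.

    # Minimal vLLM offline-generation sketch; model and sampling values are placeholders.
    from vllm import LLM, SamplingParams

    llm = LLM(model="facebook/opt-125m")
    params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)
    for output in llm.generate(["The capital of France is"], params):
        print(output.outputs[0].text)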
The free, open source alternative to OpenAI, Claude, and others
A library for accelerating Transformer models on NVIDIA GPUs
High-performance neural network inference framework for mobile
Ready-to-use OCR with 80+ supported languages
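A minimal usage sketch with the easyocr package; the language list and image path are placeholders.

    # Minimal EasyOCR sketch; downloads detection/recognition models on first run.
    import easyocr

    reader = easyocr.Reader(["en"])
    for bbox, text, confidence in reader.readtext("sign.jpg"):  # placeholder image path
        print(f"{confidence:.2f}  {text}")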
C++ library for high-performance inference on NVIDIA GPUs
GPU environment management and cluster orchestration
AIMET is a library that provides advanced quantization and compression techniques for trained neural network models
The official Python client for the Hugging Face Hub
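A minimal download sketch with the huggingface_hub client; the repository and file names are placeholders chosen only for illustration.

    # Minimal huggingface_hub sketch: fetch one file from a model repo into the local cache.
    from huggingface_hub import hf_hub_download

    path = hf_hub_download(repo_id="TheBloke/Llama-2-7B-GGUF",
                           filename="llama-2-7b.Q4_K_M.gguf")
    print("cached at", path)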
Open standard for machine learning interoperability
The Triton Inference Server provides an optimized cloud and edge inferencing solution
C++ implementation of ChatGLM-6B, ChatGLM2-6B, ChatGLM3, and GLM4(V)
Open-Source AI Camera. Empower any camera/CCTV with state-of-the-art AI
LLMs as Copilots for Theorem Proving in Lean
Lightweight anchor-free object detection model
Lightweight inference library for ONNX files, written in C++
Everything you need to build state-of-the-art foundation models