Showing 522 open source projects for "inference"

  • 1
    LTX-2.3

    Official Python inference and LoRA trainer package

    LTX-2.3 is an open-source multimodal artificial intelligence foundation model developed by Lightricks for generating synchronized video and audio from prompts or other inputs. Unlike most earlier video generation systems that only produced silent clips, LTX-2 combines video and audio generation in a unified architecture capable of producing coherent audiovisual scenes. The model uses a diffusion-transformer-based architecture designed to generate high-fidelity visual frames while...
    Downloads: 177 This Week
    Last Update:
    See Project
  • 2
    Qwen3

    Qwen3 is the large language model series developed by the Qwen team

    ...It delivers higher-quality, more helpful text generation across multiple languages and domains, including mathematics, coding, science, and tool usage. Various quantized versions and tools/pipelines are provided for inference with quantized formats such as GGUF. The series offers broad language coverage in training and usage, along with alignment to human preferences in open-ended tasks.
    Downloads: 30 This Week
    Last Update:
    See Project
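    A minimal sketch of running a Qwen3 checkpoint through Hugging Face Transformers, one common usage path for this series; the model id "Qwen/Qwen3-0.6B", the prompt, and the generation settings are illustrative assumptions.

        # Load a Qwen3 checkpoint and generate a reply with Hugging Face Transformers.
        from transformers import AutoModelForCausalLM, AutoTokenizer

        model_id = "Qwen/Qwen3-0.6B"  # assumed checkpoint; substitute any Qwen3 variant
        tokenizer = AutoTokenizer.from_pretrained(model_id)
        model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

        messages = [{"role": "user", "content": "Explain what a GGUF quantized model is."}]
        input_ids = tokenizer.apply_chat_template(
            messages, add_generation_prompt=True, return_tensors="pt"
        ).to(model.device)

        output_ids = model.generate(input_ids, max_new_tokens=128)
        print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))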
  • 3
    Orpheus TTS

    Towards Human-Sounding Speech

    ...The project ships both pretrained and finetuned English models, as well as a family of multilingual models released as a research preview, and includes data-processing scripts so users can train or finetune their own variants. Inference is provided through a Python package that uses vLLM under the hood for high-throughput, low-latency generation, including streaming examples that show how to generate audio chunks in real time. The maintainers provide Colab notebooks, a standardized prompting format, and one-click deployment via Baseten for production-grade, FP8/FP16 optimized inference with ~200 ms streaming latency.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 4
    ZML

    Any model. Any hardware. Zero compromise

    ...One of its key strengths is cross-compilation, enabling developers to build once and deploy across various platforms without rewriting code. zml provides example implementations of models and workflows, demonstrating how to run inference tasks such as image classification or large language models. It is designed to handle complex distributed setups, including scenarios where model components are split across devices connected via networks.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 5
    OpenJarvis

    Personal AI, On Personal Devices

    ...The framework provides shared primitives for building local-first agents, along with evaluation tools that measure performance using metrics such as energy consumption, latency, cost, and accuracy. OpenJarvis integrates with local inference engines like Ollama, vLLM, SGLang, and llama.cpp to run language models directly on personal hardware. It also includes a learning loop that allows models to improve over time using locally generated interaction traces. By prioritizing local execution and efficiency, OpenJarvis aims to provide a foundation for privacy-preserving personal AI assistants.
    Downloads: 209 This Week
    Last Update:
    See Project
  • 6
    Parallax

    Parallax is a distributed model serving framework

    Parallax is a decentralized inference framework designed to run large language models across distributed computing resources. Instead of relying on centralized GPU clusters in data centers, the system allows multiple heterogeneous machines to collaborate in serving AI inference workloads. Parallax divides model layers across different nodes and dynamically coordinates them to form a complete inference pipeline.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 7
    Ling

    Ling is a MoE LLM provided and open-sourced by InclusionAI

    ...The project offers different sizes (Ling-lite, Ling-plus) and emphasizes flexibility and efficiency: the ability to scale, adapt expert activation, and perform well across a range of natural language and reasoning tasks. The codebase includes example scripts, inference pipelines, model and API code (e.g. integration with Transformers), model download infrastructure, and documentation. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    GPT4All

    Run Local LLMs on Any Device. Open-source

    GPT4All is an open-source project that allows users to run large language models (LLMs) locally on their desktops or laptops, eliminating the need for API calls or GPUs. The software provides a simple, user-friendly application that can be downloaded and run on various platforms, including Windows, macOS, and Ubuntu, without requiring specialized hardware. It integrates with the llama.cpp implementation and supports multiple LLMs, allowing users to interact with AI models privately. This...
    Downloads: 123 This Week
    Last Update:
    See Project
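    A minimal sketch of local generation with the gpt4all Python bindings described above; the model file name is an assumption (GPT4All downloads listed models on first use).

        from gpt4all import GPT4All

        # Runs fully locally; no API key or GPU required.
        model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")  # assumed model choice
        with model.chat_session():
            reply = model.generate("Summarize what local LLM inference means.", max_tokens=200)
            print(reply)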
  • 9
    PyMC

    Bayesian Modeling and Probabilistic Programming in Python

    PyMC is a Python library for probabilistic programming focused on Bayesian statistical modeling and machine learning. Built on top of computational backends such as PyTensor (formerly Aesara) and NumPy, PyMC lets users define models with intuitive syntax and perform inference using MCMC, variational inference, and other advanced algorithms. It’s widely used in scientific research, data science, and decision modeling.
    Downloads: 1 This Week
    Last Update:
    See Project
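    A minimal sketch of the PyMC workflow mentioned above, using a toy normal model; the data and priors are illustrative.

        import numpy as np
        import pymc as pm

        data = np.random.normal(loc=1.0, scale=2.0, size=100)  # toy observations

        with pm.Model():
            mu = pm.Normal("mu", mu=0.0, sigma=10.0)       # prior on the mean
            sigma = pm.HalfNormal("sigma", sigma=5.0)      # prior on the noise scale
            pm.Normal("obs", mu=mu, sigma=sigma, observed=data)
            idata = pm.sample(1000, tune=1000)             # MCMC inference with NUTS

        print(idata.posterior["mu"].mean())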
  • 10
    LatentSync

    Taming Stable Diffusion for Lip Sync

    ...The system leverages a U-Net diffusion backbone, with cross-attention over audio embeddings (via an audio encoder) and reference video frames to guide generation, and applies a set of loss functions (temporal, perceptual, SyncNet-based) to enforce lip-sync accuracy, visual fidelity, and temporal consistency. Across versions, LatentSync has improved temporal stability and lowered resource requirements, making inference more practical (e.g. 8 GB VRAM for earlier versions, somewhat higher for the latest models).
    Downloads: 6 This Week
    Last Update:
    See Project
  • 11
    OpenLLM

    Operating LLMs in production

    An open platform for operating large language models (LLMs) in production. Fine-tune, serve, deploy, and monitor any LLM with ease. With OpenLLM, you can run inference with any open-source large language model, deploy to the cloud or on-premises, and build powerful AI apps. It has built-in support for a wide range of open-source LLMs and model runtimes, including Llama 2, StableLM, Falcon, Dolly, Flan-T5, ChatGLM, StarCoder, and more. Serve LLMs over a RESTful API or gRPC with one command, and query them via the WebUI, CLI, Python/JavaScript clients, or any HTTP client.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 12
    GPUStack

    Performance-optimized AI inference on your GPUs

    ...The system aggregates GPU resources from multiple machines into a unified cluster so developers and administrators can run large language models and other AI workloads efficiently across distributed infrastructure. Instead of requiring complex orchestration systems such as Kubernetes, GPUStack provides a lightweight environment that automatically selects appropriate inference engines, configures deployment parameters, and schedules workloads across available GPUs. The platform supports GPUs from a wide range of vendors and can run on laptops, workstations, and servers across operating systems such as macOS, Windows, and Linux. It also enables developers to deploy models from common repositories like Hugging Face and access them through APIs similar to cloud-based AI services.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 13
    WiFi DensePose

    Turn WiFi signals into real-time human pose estimation and detection

    ...It is designed to showcase the emerging field of RF-based sensing, where machine learning models interpret wireless channel data to reconstruct human movement and posture. The repository includes components for data processing, model inference, and real-time visualization, making it suitable for research and experimental deployments. Its architecture emphasizes performance and reproducibility, allowing developers to explore non-visual motion capture systems using accessible hardware. Overall, WiFi DensePose functions as an advanced research-grade toolkit for WiFi-based human sensing and pose estimation.
    Downloads: 66 This Week
    Last Update:
    See Project
  • 14
    Transformer Engine

    A library for accelerating Transformer models on NVIDIA GPUs

    ...As the number of parameters in Transformer models continues to grow, training and inference for architectures such as BERT, GPT, and T5 become very memory- and compute-intensive. Most deep learning frameworks train in FP32 by default, but full FP32 precision is not essential for many deep learning models to reach full accuracy.
    Downloads: 2 This Week
    Last Update:
    See Project
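    A minimal sketch of Transformer Engine's FP8 support in PyTorch, assuming an FP8-capable NVIDIA GPU; the tensor shapes and the default DelayedScaling recipe are illustrative.

        import torch
        import transformer_engine.pytorch as te
        from transformer_engine.common import recipe

        fp8_recipe = recipe.DelayedScaling()            # default FP8 scaling recipe
        layer = te.Linear(768, 768, bias=True).cuda()   # drop-in replacement for torch.nn.Linear
        inp = torch.randn(16, 768, device="cuda")

        # Execute the forward pass with FP8 compute inside the autocast region.
        with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
            out = layer(inp)
        print(out.shape)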
  • 15
    LitServe

    Minimal Python framework for building scalable AI inference servers fast

    LitServe is a minimal Python framework designed for building custom AI inference servers with full control over how models are executed and served. It allows developers to define their own inference logic, making it suitable for complex systems such as multi-model pipelines, agents, and retrieval-augmented generation workflows. Unlike traditional serving tools that enforce rigid abstractions, LitServe focuses on flexibility by letting users control request handling, batching strategies, and output processing directly in Python. ...
    Downloads: 1 This Week
    Last Update:
    See Project
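    A minimal sketch of a LitServe server following the LitAPI pattern described above; the "model" is a stand-in callable rather than a real checkpoint.

        import litserve as ls

        class EchoAPI(ls.LitAPI):
            def setup(self, device):
                # Load real model weights here; this sketch just stores a callable.
                self.model = lambda text: text.upper()

            def decode_request(self, request):
                return request["input"]

            def predict(self, x):
                return self.model(x)

            def encode_response(self, output):
                return {"output": output}

        if __name__ == "__main__":
            server = ls.LitServer(EchoAPI(), accelerator="auto")
            server.run(port=8000)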
  • 16
    LightLLM

    LightLLM is a Python-based LLM (Large Language Model) inference and serving framework

    LightLLM is a high-performance inference and serving framework designed specifically for large language models, focusing on lightweight architecture, scalability, and efficient deployment. The framework enables developers to run and serve modern language models with significantly improved speed and resource efficiency compared to many traditional inference systems. Built primarily in Python, the project integrates optimization techniques and ideas from several leading open-source implementations, including FasterTransformer, vLLM, and FlashAttention, to accelerate token generation and reduce latency. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    Intel LLM Library for PyTorch

    Accelerate local LLM inference and finetuning

    ...The library can integrate with common AI frameworks and serving tools such as Hugging Face Transformers, LangChain, and vLLM, allowing developers to incorporate optimized inference into existing pipelines.
    Downloads: 0 This Week
    Last Update:
    See Project
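    A minimal sketch of low-bit inference with ipex-llm's Transformers-style API; the model id and exact keyword arguments are assumptions to be checked against the project's documentation for your version.

        from ipex_llm.transformers import AutoModelForCausalLM
        from transformers import AutoTokenizer

        model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed model choice
        model = AutoModelForCausalLM.from_pretrained(model_id, load_in_4bit=True, trust_remote_code=True)
        tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)

        inputs = tokenizer("What does low-bit quantization trade off?", return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=64)
        print(tokenizer.decode(outputs[0], skip_special_tokens=True))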
  • 18
    PEFT

    State-of-the-art Parameter-Efficient Fine-Tuning

    Parameter-Efficient Fine-Tuning (PEFT) methods enable efficient adaptation of pre-trained language models (PLMs) to various downstream applications without fine-tuning all the model's parameters. Fine-tuning large-scale PLMs is often prohibitively costly. In this regard, PEFT methods only fine-tune a small number of (extra) model parameters, thereby greatly decreasing the computational and storage costs. Recent state-of-the-art PEFT techniques achieve performance comparable to that of full...
    Downloads: 4 This Week
    Last Update:
    See Project
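    A minimal sketch of attaching a LoRA adapter with PEFT; the base model id and LoRA hyperparameters are illustrative assumptions.

        from transformers import AutoModelForCausalLM
        from peft import LoraConfig, TaskType, get_peft_model

        base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")  # assumed base model
        lora = LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16, lora_dropout=0.05)
        model = get_peft_model(base, lora)
        model.print_trainable_parameters()  # only the small LoRA matrices are trainable

        # ...train as usual, then persist just the adapter weights:
        model.save_pretrained("opt-350m-lora-adapter")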
  • 19
    SenseVoice

    Multilingual speech recognition and audio understanding model

    ...SenseVoice is trained on more than 400,000 hours of speech data and supports over 50 languages for multilingual recognition tasks. It is built to achieve high transcription accuracy while maintaining efficient inference performance. It includes different model variants optimized for either speed or accuracy, allowing developers to choose a configuration suitable for their use case. In addition to speech transcription, SenseVoice can detect emotional cues in speech and identify common sound events such as applause, laughter, or coughing. It also provides tools for running inference, exporting models to formats like ONNX or LibTorch, and deploying the system through APIs.
    Downloads: 10 This Week
    Last Update:
    See Project
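    A minimal sketch of running SenseVoice through FunASR's AutoModel interface, following the pattern in the project's README; the audio path, device, and keyword options are assumptions.

        from funasr import AutoModel

        model = AutoModel(model="iic/SenseVoiceSmall", trust_remote_code=True, device="cuda:0")
        result = model.generate(input="example.wav", language="auto", use_itn=True)
        print(result[0]["text"])  # transcript plus emotion/event tags emitted by the model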
  • 20
    Llama Recipes

    Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods

    The 'llama-recipes' repository is a companion to the Meta Llama models. We support the latest version, Llama 3.1, in this repository. The goal is to provide a scalable library for fine-tuning Meta Llama models, along with some example scripts and notebooks to quickly get started with using the models in a variety of use-cases, including fine-tuning for domain adaptation and building LLM-based applications with Llama and other tools in the LLM ecosystem. The examples here showcase how to run...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    ChatGLM-6B

    ChatGLM-6B: An Open Bilingual Dialogue Language Model

    ChatGLM-6B is an open bilingual (Chinese + English) conversational language model based on the GLM architecture, with approximately 6.2 billion parameters. The project provides inference code, demos (command line, web, API), quantization support for lower memory deployment, and tools for finetuning (e.g., via P-Tuning v2). It is optimized for dialogue and question answering with a balance between performance and deployability in consumer hardware settings. Support for quantized inference (INT4, INT8) to reduce GPU memory requirements. ...
    Downloads: 6 This Week
    Last Update:
    See Project
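    A minimal sketch of the chat interface shown in the ChatGLM-6B README (FP16 on a single GPU); the prompts are illustrative.

        from transformers import AutoTokenizer, AutoModel

        tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
        model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True).half().cuda()
        model = model.eval()

        # Multi-turn dialogue: pass the returned history back in on the next call.
        response, history = model.chat(tokenizer, "你好", history=[])
        print(response)
        response, history = model.chat(tokenizer, "How can I lower GPU memory use?", history=history)
        print(response)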
  • 22
    AWS Neuron

    Powering Amazon custom machine learning chips

    AWS Neuron is a software development kit (SDK) for running machine learning inference on AWS Inferentia chips. It consists of a compiler, runtime, and profiling tools that enable developers to run high-performance, low-latency inference on AWS Inferentia-based Amazon EC2 Inf1 instances. Using Neuron, developers can train their machine learning models in any popular framework, such as TensorFlow, PyTorch, or MXNet, and run them optimally on Amazon EC2 Inf1 instances. ...
    Downloads: 0 This Week
    Last Update:
    See Project
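    A minimal sketch of compiling a PyTorch model for Inf1 with torch-neuron, following the pattern in the AWS Neuron documentation; the ResNet-50 example and output file name are assumptions, and compilation must run in a Neuron SDK environment.

        import torch
        import torch_neuron  # registers the torch.neuron namespace
        from torchvision import models

        model = models.resnet50(pretrained=True).eval()
        example = torch.zeros(1, 3, 224, 224, dtype=torch.float32)

        # Compile the model for AWS Inferentia and save the compiled artifact.
        neuron_model = torch.neuron.trace(model, example_inputs=[example])
        neuron_model.save("resnet50_neuron.pt")  # reload with torch.jit.load on an Inf1 instance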
  • 23
    SparseML

    Libraries for applying sparsification recipes to neural networks

    SparseML is an optimization toolkit for training and deploying deep learning models using sparsification techniques like pruning and quantization to improve efficiency.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    HunyuanOCR

    OCR expert VLM powered by Hunyuan's native multimodal architecture

    ...The project provides code, pretrained weights, and inference instructions, making it feasible to deploy locally or on a server, and to integrate with applications.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 25
    whisper-timestamped

    Multilingual Automatic Speech Recognition with word-level timestamps

    Multilingual Automatic Speech Recognition with word-level timestamps and confidence. Whisper is a set of multilingual, robust speech recognition models trained by OpenAI that achieve state-of-the-art results in many languages. Whisper models were trained to predict approximate timestamps for speech segments (most of the time with 1-second accuracy), but they cannot natively predict word timestamps. This repository proposes an implementation to predict word timestamps and provide a more...
    Downloads: 5 This Week
    Last Update:
    See Project
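    A minimal sketch following the whisper-timestamped README: transcribe an audio file and print the result, which includes per-word timestamps and confidence scores; the audio path and language are assumptions.

        import json
        import whisper_timestamped as whisper

        audio = whisper.load_audio("example.wav")
        model = whisper.load_model("tiny", device="cpu")   # any Whisper size works
        result = whisper.transcribe(model, audio, language="en")

        print(json.dumps(result, indent=2, ensure_ascii=False))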