Showing 35 open source projects for "running cppcheck on linux"

  • 1
    ChatGLM.cpp

    C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)

    ChatGLM.cpp is a C++ implementation of the ChatGLM-6B model, enabling efficient local inference without requiring a Python environment. It is optimized for running on consumer hardware.
    Downloads: 8 This Week
    Last Update:
    See Project
  • 2
    llama.cpp

    Port of Facebook's LLaMA model in C/C++

    The llama.cpp project enables inference of Meta's LLaMA model (and other models) in pure C/C++ without requiring a Python runtime. It is designed for fast, efficient model execution and easy integration into applications that need LLM-based capabilities. The repository focuses on providing a highly optimized and portable implementation for running large language models directly within C/C++ environments; a minimal model-loading sketch follows this entry.
    Downloads: 220 This Week
    Last Update:
    See Project
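    As a hedged illustration of the kind of integration described above, the sketch below loads a GGUF model and creates an inference context through llama.cpp's C API. The function names reflect one past snapshot of llama.h and the API evolves quickly, so check the current header; "model.gguf" and the context size are placeholder values.

```cpp
// Hedged sketch: load a GGUF model with llama.cpp's C API and create a context.
// Names follow one snapshot of llama.h; newer releases may rename or deprecate them.
#include "llama.h"

int main() {
    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    llama_model * model = llama_load_model_from_file("model.gguf", mparams);  // placeholder path
    if (model == nullptr) {
        return 1;
    }

    llama_context_params cparams = llama_context_default_params();
    cparams.n_ctx = 2048;  // context window in tokens (arbitrary example value)
    llama_context * ctx = llama_new_context_with_model(model, cparams);

    // ... tokenize a prompt and call llama_decode() in a loop here ...

    llama_free(ctx);
    llama_free_model(model);
    llama_backend_free();
    return 0;
}
```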
  • 3
    gemma.cpp

    lightweight, standalone C++ inference engine for Google's Gemma models

    Gemma.cpp is a C++ implementation for running inference with Gemma models efficiently on CPUs and GPUs. Developed by Google, it allows running large language models (LLMs) like Gemma with minimal hardware, focusing on optimized performance and low latency. Gemma.cpp is intended for developers seeking to deploy LLMs in production environments without needing massive computational resources.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 4
    AlphaFold 3

    AlphaFold 3 inference pipeline

    AlphaFold 3, developed by Google DeepMind, is an advanced deep learning system for predicting biomolecular structures and interactions with exceptional accuracy. This repository provides the complete inference pipeline for running AlphaFold 3, though access to the model parameters is restricted and must be obtained directly from Google under specific terms of use. The system is designed for scientific research applications in structural biology, biochemistry, and bioinformatics, enabling...
    Downloads: 6 This Week
    Last Update:
    See Project
  • 5
    GPT4All

    Run Local LLMs on Any Device. Open-source

    GPT4All is an open-source project that allows users to run large language models (LLMs) locally on their desktops or laptops, eliminating the need for API calls or GPUs. The software provides a simple, user-friendly application that can be downloaded and run on various platforms, including Windows, macOS, and Ubuntu, without requiring specialized hardware. It integrates with the llama.cpp implementation and supports multiple LLMs, allowing users to interact with AI models privately. This...
    Downloads: 134 This Week
    Last Update:
    See Project
  • 6
    PyTorch/XLA

    Enabling PyTorch on Google TPU

    PyTorch/XLA is a Python package that uses the XLA deep learning compiler to connect the PyTorch deep learning framework and Cloud TPUs. You can try it right now, for free, on a single Cloud TPU with Google Colab, and use it in production and on Cloud TPU Pods with Google Cloud. Take a look at one of our Colab notebooks to quickly try different PyTorch networks running on Cloud TPUs and learn how to use Cloud TPUs as PyTorch devices. We are also introducing new TPU VMs for more transparent...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 7
    EnvPool

    C++-based high-performance parallel environment execution engine

    EnvPool is a fast, asynchronous, and parallel RL environment library designed for scaling reinforcement learning experiments. Developed by Sea AI Lab (SAIL) in Singapore, it pairs a C++ backend with a Python frontend for extremely high-speed environment interaction, supporting thousands of environments running in parallel on a single machine. It is compatible with the Gymnasium API and RLlib, making it suitable for scalable training pipelines.
    Downloads: 31 This Week
    Last Update:
    See Project
  • 8
    Torch-TensorRT

    PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT

    Torch-TensorRT is a compiler for PyTorch/TorchScript, targeting NVIDIA GPUs via NVIDIA’s TensorRT Deep Learning Optimizer and Runtime. Unlike PyTorch’s Just-In-Time (JIT) compiler, Torch-TensorRT is an Ahead-of-Time (AOT) compiler, meaning that before you deploy your TorchScript code, you go through an explicit compile step to convert a standard TorchScript program into a module targeting a TensorRT engine. Torch-TensorRT operates as a PyTorch extension and compiles modules that integrate...
    Downloads: 10 This Week
    Last Update:
    See Project
  • 9
    GoogleTest

    Google Testing and Mocking Framework

    GoogleTest is Google's C++ testing and mocking framework. It is used by many internal projects at Google, as well as a number of notable projects such as the Chromium projects, the OpenCV computer vision library, and the LLVM compiler. This GoogleTest project is actually a union of what used to be two separate projects: the old GoogleTest and GoogleMock, an extension of GoogleTest for writing and using C++ mock classes; a minimal test and mock are sketched after this entry. Since they were so closely related, they were merged to create an even...
    Downloads: 17 This Week
    Last Update:
    See Project
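    As a taste of the framework described above, here is a minimal sketch of one GoogleTest test and one GoogleMock mock; the Add function and Clock interface are made up for illustration, and the file is assumed to be built against gtest/gmock and linked with gtest_main.

```cpp
// Minimal GoogleTest/GoogleMock sketch (hypothetical Add and Clock, for illustration only).
#include <gtest/gtest.h>
#include <gmock/gmock.h>

int Add(int a, int b) { return a + b; }  // toy function under test

TEST(AddTest, HandlesPositiveNumbers) {
    EXPECT_EQ(Add(2, 3), 5);
}

// A mockable interface plus a GoogleMock stub for it.
class Clock {
public:
    virtual ~Clock() = default;
    virtual int Now() const = 0;
};

class MockClock : public Clock {
public:
    MOCK_METHOD(int, Now, (), (const, override));
};

TEST(MockClockTest, ReturnsStubbedTime) {
    MockClock clock;
    EXPECT_CALL(clock, Now()).WillOnce(::testing::Return(42));
    EXPECT_EQ(clock.Now(), 42);
}
```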
  • 10
    RTP-LLM

    Alibaba's high-performance LLM inference engine for diverse apps

    RTP-LLM is an open-source large language model inference acceleration engine developed by Alibaba to provide high-performance serving infrastructure for modern LLM deployments. The system focuses on improving throughput, latency, and resource utilization when running large models in production environments. It achieves this by implementing optimized GPU kernels, batching strategies, and memory management techniques tailored for transformer inference workloads. The framework is designed for...
    Downloads: 7 This Week
    Last Update:
    See Project
  • 11
    ggml

    Tensor library for machine learning

    ggml is an open-source tensor library designed for efficient machine learning computation, with a focus on running models locally and with minimal dependencies. Written primarily in C and C++, the library provides low-level tensor operations and automatic differentiation that allow developers to implement machine learning algorithms and neural networks efficiently (a small usage sketch follows this entry). The project emphasizes portability and performance, enabling machine learning inference across a wide range of hardware...
    Downloads: 4 This Week
    Last Update:
    See Project
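    To give a flavor of the C API described above, here is a small hedged sketch that builds and evaluates a tiny compute graph; the function names match one recent snapshot of ggml.h (graph-construction helpers have been renamed across versions), so treat it as illustrative rather than authoritative.

```cpp
// Hedged sketch: add two float32 vectors with ggml's C API (callable from C++).
// Names follow one recent ggml.h snapshot; older or newer versions may differ.
#include "ggml.h"
#include <cstdio>

int main() {
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,  // scratch arena for tensors and the graph
        /*.mem_buffer =*/ nullptr,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    struct ggml_tensor * a = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * b = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    for (int i = 0; i < 4; ++i) {
        ggml_set_f32_1d(a, i, 1.0f * i);
        ggml_set_f32_1d(b, i, 10.0f);
    }

    struct ggml_tensor * c = ggml_add(ctx, a, b);   // lazily records the op node

    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, c);
    ggml_graph_compute_with_ctx(ctx, gf, /*n_threads=*/1);

    for (int i = 0; i < 4; ++i) {
        printf("%f\n", ggml_get_f32_1d(c, i));       // expected: 10, 11, 12, 13
    }
    ggml_free(ctx);
    return 0;
}
```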
  • 12
    BehaviorTree.CPP

    C++ behavior tree library for robotics and AI decision systems

    BehaviorTree.CPP is a C++ library designed to create, manage, and execute behavior trees, a widely used model for decision-making in robotics and artificial intelligence systems. It provides a flexible and modular framework that allows developers to define complex behaviors as reusable tree structures composed of nodes (a conceptual sketch of this node-composition idea follows this entry). BehaviorTree.CPP emphasizes performance and real-time execution, making it particularly suitable for robotics applications where responsiveness is critical. It supports...
    Downloads: 2 This Week
    Last Update:
    See Project
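    To illustrate the node-composition idea mentioned above, here is a small conceptual sketch in plain C++. It deliberately does not use BehaviorTree.CPP's actual API (which centers on a BehaviorTreeFactory and XML tree definitions); it only shows the underlying model of nodes that are ticked and report a status.

```cpp
// Conceptual behavior-tree sketch (NOT BehaviorTree.CPP's API): nodes are ticked
// and return a status; a Sequence succeeds only if every child succeeds.
#include <functional>
#include <iostream>
#include <memory>
#include <string>
#include <vector>

enum class Status { Success, Failure, Running };

struct Node {
    virtual ~Node() = default;
    virtual Status tick() = 0;
};

// Leaf node wrapping an arbitrary action callback.
struct Action : Node {
    std::string name;
    std::function<Status()> fn;
    Action(std::string n, std::function<Status()> f) : name(std::move(n)), fn(std::move(f)) {}
    Status tick() override {
        std::cout << name << " ticked\n";
        return fn();
    }
};

// Control node: tick children in order, stop at the first non-success result.
struct Sequence : Node {
    std::vector<std::unique_ptr<Node>> children;
    Status tick() override {
        for (auto & child : children) {
            Status s = child->tick();
            if (s != Status::Success) return s;
        }
        return Status::Success;
    }
};

int main() {
    Sequence root;
    root.children.push_back(std::make_unique<Action>("CheckBattery", [] { return Status::Success; }));
    root.children.push_back(std::make_unique<Action>("MoveToGoal",   [] { return Status::Success; }));
    root.tick();  // runs the whole behavior once
    return 0;
}
```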
  • 13
    TensorFlow Serving

    Serving system for machine learning models

    TensorFlow Serving is a flexible, high-performance serving system for machine learning models, designed for production environments. It deals with the inference aspect of machine learning, taking models after training and managing their lifetimes, providing clients with versioned access via a high-performance, reference-counted lookup table. TensorFlow Serving provides out-of-the-box integration with TensorFlow models, but can be easily extended to serve other types of models and data. The...
    Downloads: 4 This Week
    Last Update:
    See Project
  • 14
    mllm

    Fast Multimodal LLM on Mobile Devices

    mllm is an open-source inference engine designed to run multimodal large language models efficiently on mobile devices and edge computing environments. The framework focuses on delivering high-performance AI inference in resource-constrained systems such as smartphones, embedded hardware, and lightweight computing platforms. Implemented primarily in C and C++, it is designed to operate with minimal external dependencies while taking advantage of hardware-specific acceleration technologies...
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
    pytorch-cpp

    C++ Implementation of PyTorch Tutorials for Everyone

    C++ Implementation of PyTorch Tutorials for Everyone. This repository provides tutorial code in C++ for deep learning researchers to learn PyTorch (i.e. Sections 1 to 3); a minimal LibTorch snippet is sketched after this entry. Interactive tutorials currently run on the LibTorch nightly version. On Windows, LibTorch only supports 64-bit builds, and an x64 generator needs to be specified. All required script module files for pre-trained models/weights are created during the build, which requires an installed Python 3 with PyTorch and torchvision. You can choose to only...
    Downloads: 0 This Week
    Last Update:
    See Project
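    As a hedged taste of the LibTorch C++ API these tutorials teach, the snippet below performs a tensor operation and a single linear-layer forward pass; it assumes LibTorch is installed and the build target links against it, and the shapes are arbitrary.

```cpp
// Minimal LibTorch sketch: tensor arithmetic plus one fully connected layer.
#include <torch/torch.h>
#include <iostream>

int main() {
    torch::manual_seed(0);

    // Basic tensor arithmetic on CPU.
    torch::Tensor a = torch::rand({2, 3});
    torch::Tensor b = torch::ones({2, 3});
    std::cout << (a + b) << "\n";

    // A single linear layer: 3 inputs -> 2 outputs.
    torch::nn::Linear linear(3, 2);
    torch::Tensor x = torch::rand({4, 3});   // batch of 4 samples
    torch::Tensor y = linear->forward(x);
    std::cout << y.sizes() << "\n";          // expected: [4, 2]
    return 0;
}
```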
  • 16
    PowerInfer

    High-speed Large Language Model Serving for Local Deployment

    PowerInfer is a high-performance inference engine designed to run large language models efficiently on personal computers equipped with consumer-grade GPUs. The project focuses on improving the performance of local AI inference by optimizing how neural network computations are distributed between CPU and GPU resources. Its architecture exploits the observation that only a subset of neurons in large models are frequently activated, allowing the system to preload frequently used neurons into...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    RunAnywhere

    Production ready toolkit to run AI locally

    RunAnywhere SDKs are a set of cross-platform development tools that enable applications to run artificial intelligence models directly on user devices instead of relying on cloud infrastructure. The toolkit allows developers to integrate language models, speech recognition, and voice synthesis capabilities into mobile or desktop applications while keeping all computation local. By running models entirely on device, the platform eliminates network latency and protects user data because...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    FAY

    Framework for building AI-powered interactive digital humans and agent

    Fay is an open source framework designed to build and deploy interactive digital humans powered by large language models. It acts as a middleware layer that connects digital character technologies with conversational AI systems and business applications. Fay supports various types of digital humans, including 2.5D and 3D avatars, and can be integrated with applications running on mobile devices, PCs, web platforms, and embedded systems. Its architecture allows developers to combine different...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Uranie

    Uranie is CEA's uncertainty analysis platform, based on ROOT

    Uranie is a sensitivity and uncertainty analysis platform based on the ROOT framework (http://root.cern.ch). It is developed at CEA, the French Atomic Energy Commission (http://www.cea.fr). It provides various tools for data analysis, sampling, statistical modeling, optimisation, sensitivity analysis, uncertainty analysis, running code on high-performance computers, and more. Thanks to ROOT, it is easily scriptable in CINT (C++-like syntax) and Python. It is...
    Downloads: 5 This Week
    Last Update:
    See Project
  • 20
    Hello AI World

    Guide to deploying deep-learning inference networks

    Hello AI World is a great way to start using Jetson and experiencing the power of AI. In just a couple of hours, you can have a set of deep learning inference demos up and running for realtime image classification and object detection on your Jetson Developer Kit with JetPack SDK and NVIDIA TensorRT. The tutorial focuses on networks related to computer vision, and includes the use of live cameras. You’ll also get to code your own easy-to-follow recognition program in Python or C++, and train...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    Physics simulation software based on user sketches. By running a pattern-recognition agent, this app is able to animate a physics sketch drawn on a blackboard.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 22
    MACE

    Deep learning inference framework optimized for mobile platforms

    Mobile AI Compute Engine (MACE for short) is a deep learning inference framework optimized for mobile heterogeneous computing on Android, iOS, Linux, and Windows devices. The runtime is optimized with NEON, OpenCL, and Hexagon, and the Winograd algorithm is used to speed up convolution operations. Initialization is also optimized to be faster. Chip-dependent power options, such as big.LITTLE scheduling and Adreno GPU hints, are exposed as advanced APIs. Guaranteeing UI responsiveness is sometimes obligatory when running a model. ...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    TurboTransformers

    Fast and user-friendly runtime for transformer inference

    TurboTransformers is a high-performance inference framework optimized for running Transformer models efficiently on CPUs and GPUs. It improves latency and throughput for NLP applications.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    nGraph

    nGraph has moved to OpenVINO

    Frameworks using the nGraph Compiler stack to execute workloads have shown up to a 45X performance boost compared to native framework implementations. We've also seen performance boosts when running workloads that are not included in the list of validated workloads, thanks to nGraph's powerful subgraph pattern matching. Additionally, we have integrated nGraph with PlaidML to provide deep learning performance acceleration on Intel, NVIDIA, and AMD GPUs. nGraph Compiler aims to accelerate developing...
    Downloads: 0 This Week
    Last Update:
    See Project
  • 25
    UnrealCV

    Connecting Computer Vision to Unreal Engine

    UnrealCV is a project to help computer vision researchers build virtual worlds using Unreal Engine (UE). It extends UE with a plugin. UnrealCV can be used in two ways. The first one is using a compiled game binary with UnrealCV embedded. This is as simple as running a game, no knowledge of Unreal Engine is required. The second is installing the UnrealCV plugin into Unreal Engine and using the editor to build a new virtual world.
    Downloads: 4 This Week
    Last Update:
    See Project