Showing 29 open source projects for "snap7-full"

  • 1
    SWIFT LLM

    Use PEFT or full-parameter training for CPT/SFT/DPO/GRPO across 600+ LLMs

    SWIFT LLM is a comprehensive framework developed within the ModelScope ecosystem for training, fine-tuning, evaluating, and deploying large language models and multimodal models. The platform provides a full machine learning pipeline that supports tasks ranging from model pre-training to reinforcement learning alignment techniques. It integrates with popular inference engines such as vLLM and LMDeploy to accelerate deployment and runtime performance. The framework also includes support for many modern training strategies, including preference learning methods and parameter-efficient fine-tuning techniques. ms-swift is designed to work with hundreds of language and multimodal models, providing a unified environment for experimentation and production deployment.
    Downloads: 3 This Week
  • 2
    PEFT

    State-of-the-art Parameter-Efficient Fine-Tuning

    ...In this regard, PEFT methods only fine-tune a small number of (extra) model parameters, thereby greatly decreasing the computational and storage costs. Recent State-of-the-Art PEFT techniques achieve performance comparable to that of full fine-tuning.
    Downloads: 6 This Week
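The parameter savings behind adapter methods such as LoRA come down to simple arithmetic: instead of updating a full d x d weight matrix, you train two low-rank factors. A toy sketch of that count (illustrative only, not the `peft` library's API):

```python
# Toy arithmetic: trainable parameters for full fine-tuning of one
# d x d weight matrix versus a rank-r LoRA-style adapter (A: d x r, B: r x d).
d, r = 4096, 8

full_params = d * d          # update every entry of the weight matrix
lora_params = d * r + r * d  # update only the two low-rank factors

ratio = lora_params / full_params
print(full_params, lora_params, ratio)  # the adapter trains ~0.4% of the parameters
```

Applied across every attention projection in a large model, this is why PEFT runs fit on hardware that full fine-tuning cannot.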
  • 3
    yt-fts

    Search all of YouTube from the command line

    yt-fts, short for YouTube Full Text Search, is an open-source command-line tool that enables users to search the spoken content of YouTube videos by indexing their subtitles. The program automatically downloads subtitles from a specified YouTube channel using the yt-dlp utility and stores them in a local SQLite database. Once indexed, users can perform full-text searches across all transcripts to quickly locate keywords or phrases mentioned within the videos.
    Downloads: 7 This Week
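The subtitle-indexing idea described above can be sketched with SQLite's built-in full-text search (a minimal illustration; yt-fts's actual schema and commands may differ):

```python
import sqlite3

# Store subtitle lines in an FTS5 virtual table, then query it with MATCH.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE subs USING fts5(video_id, start_time, line)")
conn.executemany(
    "INSERT INTO subs VALUES (?, ?, ?)",
    [
        ("vid1", "00:01:05", "today we cover parameter efficient fine tuning"),
        ("vid2", "00:12:40", "installing the command line tool"),
    ],
)

# Full-text search returns the video and timestamp where the terms occur.
hits = conn.execute(
    "SELECT video_id, start_time FROM subs WHERE subs MATCH ?", ("fine tuning",)
).fetchall()
print(hits)
```

Once transcripts are rows in a table like this, locating a phrase across an entire channel is a single query.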
  • 4
    ChatGLM-6B

    ChatGLM-6B: An Open Bilingual Dialogue Language Model

    ...It is optimized for dialogue and question answering, balancing performance against deployability on consumer hardware. It supports quantized inference (INT4, INT8) to reduce GPU memory requirements and can automatically switch between full-precision and quantized modes to trade accuracy for memory.
    Downloads: 5 This Week
  • 5
    llama2.c

    Inference Llama 2 in one file of pure C

    ...Created by Andrej Karpathy, this project offers an educational and lightweight framework for performing inference on small Llama 2 models without external dependencies. It provides a full training and inference pipeline: models can be trained in PyTorch and later executed using a concise 700-line C program (run.c). While it can technically load Meta’s official Llama 2 models, current support is limited to fp32 precision, meaning practical use is capped at models up to around 7B parameters. The goal of llama2.c is to demonstrate how a compact and transparent implementation can perform meaningful inference even with small models, emphasizing simplicity, clarity, and accessibility. ...
    Downloads: 3 This Week
  • 6
    GLM-4.5

    GLM-4.5: Open-source LLM for intelligent agents by Z.ai

    GLM-4.5 is a cutting-edge open-source large language model designed by Z.ai for intelligent agent applications. The flagship GLM-4.5 model has 355 billion total parameters with 32 billion active parameters, while the compact GLM-4.5-Air version offers 106 billion total parameters and 12 billion active parameters. Both models unify reasoning, coding, and intelligent agent capabilities, providing two modes: a thinking mode for complex reasoning and tool usage, and a non-thinking mode for...
    Downloads: 81 This Week
  • 7
    OpenCompass

    OpenCompass is an LLM evaluation platform

    ...Out-of-the-box support for 20+ HuggingFace and API models, plus an evaluation scheme covering 50+ datasets with about 300,000 questions that assesses model capabilities along five dimensions. A single command handles task division and distributed evaluation, completing a full evaluation of billion-scale models in just a few hours. Zero-shot, few-shot, and chain-of-thought evaluations, combined with standard or dialogue-style prompt templates, make it easy to elicit the best performance from a variety of models.
    Downloads: 1 This Week
  • 8
    Qwen

    The official repo of Qwen chat & pretrained large language model

    Qwen is a series of large language models developed by Alibaba Cloud, consisting of various pretrained versions like Qwen-1.8B, Qwen-7B, Qwen-14B, and Qwen-72B. These models, which range from smaller to larger configurations, are designed for a wide range of natural language processing tasks. They are openly available for research and commercial use, with Qwen's code and model weights shared on GitHub. Qwen's capabilities include text generation, comprehension, and conversation, making it a...
    Downloads: 13 This Week
  • 9
    SAG

    SQL-Driven RAG Engine

    ...The architecture also includes a three-stage retrieval pipeline consisting of recall, expansion, and reranking steps to improve search accuracy. The engine integrates semantic vector similarity with traditional full-text search to improve both recall and precision. Because the knowledge graph is generated dynamically, the system can adapt to new information without requiring manual graph maintenance.
    Downloads: 0 This Week
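The three-stage recall/expansion/rerank pipeline mentioned above can be sketched with a toy keyword retriever (everything here, including the graph links and scoring, is invented for illustration and is not SAG's API):

```python
docs = {
    1: "full text search over product manuals",
    2: "semantic vector similarity for search",
    3: "knowledge graph maintenance notes",
}
links = {1: [2]}  # stand-in for edges in a dynamically generated knowledge graph

def recall(query):
    # Stage 1: cheap candidate generation by keyword overlap.
    terms = set(query.split())
    return [d for d, text in docs.items() if terms & set(text.split())]

def expand(candidates):
    # Stage 2: pull in documents linked to the candidates.
    out = list(candidates)
    for d in candidates:
        out += [n for n in links.get(d, []) if n not in out]
    return out

def rerank(query, candidates):
    # Stage 3: order the widened set by a finer-grained score.
    terms = set(query.split())
    return sorted(candidates, key=lambda d: -len(terms & set(docs[d].split())))

result = rerank("full text search", expand(recall("full text search")))
print(result)
```

A real engine would swap keyword overlap for vector similarity in recall and a learned reranker in the last stage, but the three-stage shape is the same.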
  • 10
    Xorbits Inference

    Replace OpenAI GPT with another LLM in your app

    ...With Xorbits Inference, you can effortlessly deploy and serve your own or state-of-the-art built-in models using just a single command. Whether you are a researcher, developer, or data scientist, Xorbits Inference empowers you to unleash the full potential of cutting-edge AI models.
    Downloads: 8 This Week
  • 11
    ERNIE

    The official repository for ERNIE 4.5 and ERNIEKit

    ...The repository positions ERNIEKit as an industrial-grade development toolkit, emphasizing end-to-end workflows that span high-performance pre-training, supervised fine-tuning, and alignment. It supports both full-parameter training and parameter-efficient approaches so teams can choose between maximum quality and lower-cost adaptation depending on their constraints. The project also emphasizes optimization techniques for large-scale training, including mixed-precision and hybrid-parallel strategies that are commonly needed for multi-node GPU clusters. ...
    Downloads: 0 This Week
  • 12
    Ludwig AI

    Low-code framework for building custom LLMs, neural networks

    ...Automatic batch size selection, distributed training (DDP, DeepSpeed), parameter efficient fine-tuning (PEFT), 4-bit quantization (QLoRA), and larger-than-memory datasets. Retain full control of your models down to the activation functions. Support for hyperparameter optimization, explainability, and rich metric visualizations. Experiment with different model architectures, tasks, features, and modalities with just a few parameter changes in the config. Think building blocks for deep learning.
    Downloads: 4 This Week
  • 13
    self-llm

    A rapid fine-tuning tutorial tailored for Chinese beginners

    self-llm is an open source educational project created by the Datawhale community that serves as a practical guide for deploying, fine-tuning, and using open-source large language models on Linux systems. The repository focuses on helping beginners and developers understand how to run and customize modern LLMs locally rather than relying solely on hosted APIs. It provides step-by-step tutorials covering environment setup, model deployment, inference workflows, and efficient fine-tuning...
    Downloads: 4 This Week
  • 14
    DocETL

    A system for agentic LLM-powered data processing and ETL

    ...Instead of relying on single prompts or ad-hoc scripts, DocETL provides a declarative pipeline framework that breaks complex document analysis tasks into manageable operations that can be optimized and orchestrated automatically. Pipelines are typically defined using a low-code YAML interface, giving users full control over prompts and processing steps while still simplifying workflow creation.
    Downloads: 3 This Week
  • 15
    LLMs-from-scratch

    Implement a ChatGPT-like LLM in PyTorch from scratch, step by step

    LLMs-from-scratch is an educational codebase that walks through implementing modern large-language-model components step by step. It emphasizes building blocks—tokenization, embeddings, attention, feed-forward layers, normalization, and training loops—so learners understand not just how to use a model but how it works internally. The repository favors clear Python and NumPy or PyTorch implementations that can be run and modified without heavyweight frameworks obscuring the logic. Chapters...
    Downloads: 3 This Week
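One of the building blocks listed above, scaled dot-product attention, fits in a few lines of plain Python. This is a simplified single-query sketch, not the book's exact code:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    # Score each key against the query, scale by sqrt(d), then mix the values.
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    dim = len(values[0])
    return [sum(w * v[i] for w, v in zip(weights, values)) for i in range(dim)]

# Identical keys give equal weights, so the output averages the values.
out = attention([1.0, 0.0], [[1.0, 0.0], [1.0, 0.0]], [[1.0, 0.0], [3.0, 0.0]])
print(out)
```

Multi-head attention repeats this computation over several learned projections of the same inputs and concatenates the results.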
  • 16
    MetaScreener

    AI-powered tool for efficient abstract and PDF screening

    ...Instead of manually reviewing hundreds or thousands of documents, researchers can use MetaScreener to apply machine learning techniques that assist with classification and prioritization of candidate papers. The platform can analyze both abstracts and full PDF documents, enabling automated filtering based on research criteria defined by the user. By incorporating natural language processing techniques, the system can identify potentially relevant studies and reduce the workload associated with manual screening.
    Downloads: 2 This Week
  • 17
    Nano-vLLM

    A lightweight vLLM implementation built from scratch

    ...Despite its compact design, nano-vllm incorporates advanced optimization techniques such as prefix caching, tensor parallelism, and CUDA graph execution to achieve high performance during model inference. The engine is intended primarily for educational use, experimentation, and lightweight deployments where a full production-grade inference stack may be unnecessary. Its API closely mirrors that of the original vLLM framework, allowing developers familiar with vLLM to adopt the tool with minimal changes.
    Downloads: 2 This Week
  • 18
    Claude Code Tools

    Practical productivity tools for Claude Code, Codex-CLI

    Claude Code Tools is an open-source collection of command-line utilities and productivity plugins designed to enhance developer workflows when using AI coding agents such as Claude Code and Codex-CLI. The project focuses on solving common problems encountered in AI-assisted development environments, including managing session history, automating terminal interactions, and maintaining context across multiple coding sessions. It includes tools that allow developers to search conversation logs...
    Downloads: 2 This Week
  • 19
    All-in-RAG

    Hands-on large model application development, part 1

    All-in-RAG is an open-source educational project designed to teach developers how to build applications using retrieval-augmented generation techniques. The repository provides a structured learning path that covers both theoretical foundations and practical implementation steps for RAG systems. It explains the full development pipeline required to create knowledge-aware AI assistants, including data preparation, document indexing, vector embedding generation, and retrieval strategies. The project also explores advanced topics such as hybrid retrieval methods, query optimization, and evaluation techniques for improving system accuracy. Alongside theoretical explanations, the repository includes hands-on exercises and example projects that demonstrate how to build production-ready RAG systems. ...
    Downloads: 0 This Week
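The retrieve-then-generate flow the description outlines reduces to a small loop: score documents against the question, take the best ones, and stuff them into the prompt. A toy bag-of-words sketch (corpus and function names invented for illustration):

```python
corpus = [
    "Vector embeddings map text to points in a numeric space.",
    "Hybrid retrieval combines keyword and vector search.",
    "Evaluation measures answer accuracy against references.",
]

def retrieve(question, k=1):
    # Rank documents by how many question terms they share.
    q_terms = set(question.lower().replace("?", "").split())
    scored = sorted(
        corpus,
        key=lambda doc: -len(q_terms & set(doc.lower().rstrip(".").split())),
    )
    return scored[:k]

def build_prompt(question):
    # The assembled prompt is what a real pipeline would send to the LLM.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt("What is hybrid retrieval?")
print(prompt)
```

Production systems replace the term-overlap scorer with vector embeddings (or the hybrid methods the project covers), but the prompt-assembly step looks much the same.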
  • 20
    AirLLM

    AirLLM 70B inference with single 4GB GPU

    ...AirLLM preprocesses model weights so that each transformer layer can be loaded independently during computation, reducing the memory footprint while still performing full inference. As a result, developers can experiment with models that previously required specialized high-end GPUs.
    Downloads: 0 This Week
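The layer-by-layer loading strategy described above can be mimicked with a toy model whose "weights" live on disk, one file per layer (purely illustrative; the file layout and names are invented, not AirLLM's format):

```python
import json
import pathlib
import tempfile

# Persist each "layer" (here just a scalar multiplier) to its own file.
layer_dir = pathlib.Path(tempfile.mkdtemp())
for i, scale in enumerate([2.0, 3.0, 0.5]):
    (layer_dir / f"layer_{i:03d}.json").write_text(json.dumps({"scale": scale}))

def run(x):
    # Only one layer's weights are resident at a time: each is loaded,
    # applied, and released before the next is read from disk.
    for path in sorted(layer_dir.glob("layer_*.json")):
        weights = json.loads(path.read_text())
        x = x * weights["scale"]
    return x

print(run(1.0))  # 1.0 -> 2.0 -> 6.0 -> 3.0
```

Peak memory is bounded by the largest single layer rather than the whole model, which is the property that lets a 70B model run on a small GPU at the cost of extra I/O per token.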
  • 21
    Reader 3

    Quick illustration of how one can easily read books together with LLMs

    ...It was created primarily as a simple demonstration of how to combine local book reading with LLM workflows without heavy dependencies or complicated setup, and it runs with just a small Python script and a basic HTTP server. The interface focuses on clarity and ease of use, offering straightforward navigation of book chapters rather than full-featured e-reading capabilities. While it lacks advanced features like built-in annotations or rich media support, its simplicity is intentional, enabling users to quickly load EPUBs, view them in a browser, and even repurpose text for downstream tasks.
    Downloads: 0 This Week
  • 22
    ChatGLM2-6B

    ChatGLM2-6B: An Open Bilingual Chat LLM

    ChatGLM2-6B is the second-generation Chinese-English conversational LLM from ZhipuAI/Tsinghua. It upgrades the base model with GLM's hybrid pretraining objective, 1.4T bilingual tokens, and preference alignment, delivering large gains on MMLU, CEval, GSM8K, and BBH. The context window extends up to 32K (via FlashAttention), and Multi-Query Attention improves speed and memory use. The repo includes Python APIs, CLI & web demos, OpenAI-style/FastAPI servers, and quantized checkpoints for lightweight local...
    Downloads: 0 This Week
  • 23
    llms-from-scratch-cn

    Build a large language model from scratch with only a basic Python background

    llms-from-scratch-cn is an educational open-source project designed to teach developers how to build large language models step by step using practical code and conceptual explanations. The repository provides a hands-on learning path that begins with the fundamentals of natural language processing and gradually progresses toward implementing full GPT-style architectures from the ground up. Rather than focusing on using pre-trained models through APIs, the project emphasizes understanding the internal mechanisms of modern language models, including tokenization, attention mechanisms, transformer architecture, and training workflows. Through a collection of notebooks, code examples, and translated learning materials, users can explore how to implement components such as multi-head attention, data loaders, and training pipelines using Python and PyTorch.
    Downloads: 0 This Week
  • 24
    second-brain-ai-assistant-course

    Learn to build your Second Brain AI assistant with LLMs

    ...Through a series of modules, the project explains how to design data pipelines, build retrieval-augmented generation systems, and implement agent-based reasoning workflows. The course also introduces practical techniques such as dataset generation, model fine-tuning, and deployment strategies for AI applications. Learners build a full system capable of retrieving information from stored resources and generating responses based on that data.
    Downloads: 0 This Week
  • 25
    files-to-prompt

    Concatenate a directory full of files into a single prompt

    files-to-prompt is a Python command-line tool that takes one or more files or entire directories and concatenates their contents into a single, LLM-friendly prompt. It walks the directory tree, outputting each file preceded by its relative path and a separator, so a model can understand which content came from where. The tool is aimed at workflows where you want to ask an LLM questions about a whole codebase, documentation set, or notes folder without manually copying files together. It...
    Downloads: 0 This Week
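The concatenation behavior described above is easy to sketch with `pathlib` (a simplified stand-in; files-to-prompt's actual output format and options have their own conventions):

```python
import pathlib
import tempfile

def concat_files(root):
    # Emit each file's path relative to the root, then its contents
    # between separators, so an LLM can tell which text came from where.
    parts = []
    for path in sorted(root.rglob("*")):
        if path.is_file():
            parts.append(f"{path.relative_to(root)}\n---\n{path.read_text()}---")
    return "\n".join(parts)

# Build a tiny example tree and flatten it into one prompt string.
root = pathlib.Path(tempfile.mkdtemp())
(root / "a.py").write_text("print('hi')\n")
(root / "notes.md").write_text("# Notes\n")

prompt = concat_files(root)
print(prompt)
```

The resulting string can be pasted into, or piped to, an LLM along with a question about the codebase as a whole.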