Open Source Python Large Language Models (LLM) - Page 13

Python Large Language Models (LLM)

Browse free open source Python Large Language Models (LLM) and projects below. Use the toggles on the left to filter open source Python Large Language Models (LLM) by OS, license, language, programming language, and project status.

  • 1
    LlamaGen

    Autoregressive Model Beats Diffusion

    LlamaGen is an open-source research project that introduces a new approach to image generation by applying the autoregressive next-token prediction paradigm used in large language models to visual generation tasks. Instead of relying on diffusion models, the framework treats images as sequences of tokens that can be generated progressively using transformer architectures similar to those used for text generation. The project explores how scaling autoregressive models and improving image tokenization techniques can produce competitive results compared with modern diffusion-based image generators. LlamaGen provides several pre-trained models and training configurations that support both class-conditional image generation and text-conditioned image synthesis. The repository includes image tokenizers, training scripts, and models ranging from hundreds of millions to several billion parameters.
    Downloads: 0 This Week
    Last Update:
    See Project
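The core loop described above is ordinary next-token prediction, just over image tokens instead of words. A minimal PyTorch sketch, assuming a decoder-only `model` that maps a token sequence to per-position logits; all names here are illustrative, not LlamaGen's actual API:

```python
import torch

def generate_image_tokens(model, class_id, seq_len=256, temperature=1.0):
    """Autoregressively sample a grid of image tokens, one position at a time."""
    tokens = torch.tensor([[class_id]])          # class-conditioning token starts the sequence
    for _ in range(seq_len):
        logits = model(tokens)[:, -1, :]         # distribution over the next image token
        probs = torch.softmax(logits / temperature, dim=-1)
        next_tok = torch.multinomial(probs, num_samples=1)
        tokens = torch.cat([tokens, next_tok], dim=1)
    return tokens[:, 1:]                         # drop the conditioning token

# A trained image tokenizer's decoder would then map the sampled token grid
# back to pixels, e.g. image = tokenizer.decode(tokens).
```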
  • 2
    Local File Organizer

    An AI-powered file management tool that ensures privacy

    Local-File-Organizer is an AI-powered file management system designed to automatically analyze, categorize, and reorganize files stored on a user’s local machine. The project focuses on privacy-first file organization by performing all processing locally rather than sending data to external cloud services. It uses language and vision models to understand the contents of documents, images, and other file types so that files can be grouped intelligently according to their meaning or context. The system scans directories, extracts relevant information from files, and restructures folder hierarchies to make content easier to locate and manage. Through AI-driven analysis, the software can detect themes, topics, and metadata in files, allowing it to organize information in ways that traditional rule-based file managers cannot achieve. The tool supports multiple sorting strategies that allow users to categorize files by content, date, or type depending on their workflow preferences.
    Downloads: 0 This Week
    Last Update:
    See Project
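To make the "organize by meaning" idea concrete, here is an illustrative sketch of content-based sorting. `classify_text` stands in for the local language-model call the project uses; the keyword rules are purely for demonstration:

```python
import shutil
from pathlib import Path

def classify_text(text):
    """Stand-in for a local language model that maps content to a category."""
    keywords = {"invoice": "finance", "resume": "careers", "recipe": "cooking"}
    for word, category in keywords.items():
        if word in text.lower():
            return category
    return "misc"

def organize(root):
    root_path = Path(root)
    for path in list(root_path.rglob("*.txt")):          # snapshot before moving files
        category = classify_text(path.read_text(errors="ignore"))
        target = root_path / category
        target.mkdir(exist_ok=True)
        shutil.move(str(path), str(target / path.name))  # regroup by inferred meaning
```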
  • 3
    LongBench

    LongBench v2 and LongBench (ACL '25 & '24)

    LongBench is a comprehensive benchmark designed to evaluate the ability of large language models to understand and reason over very long textual contexts. Traditional language model benchmarks typically evaluate tasks involving relatively short inputs, which does not reflect many real-world applications such as analyzing large documents or entire code repositories. LongBench addresses this gap by providing datasets that require models to process and reason over long sequences of text across multiple tasks. The benchmark includes multiple categories such as single-document question answering, multi-document reasoning, summarization, long dialogue understanding, and code analysis. It supports bilingual evaluation in English and Chinese to assess multilingual capabilities across extended contexts. Newer versions of the benchmark introduce extremely long context windows ranging from thousands to millions of tokens, enabling researchers to test the limits of modern long-context models.
    Downloads: 0 This Week
    Last Update:
    See Project
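Loading one LongBench task follows the standard Hugging Face datasets pattern shown in the project's documentation; the field names (`context`, `input`) follow the published schema, and `answer_with_model` is a placeholder for your own long-context inference:

```python
from datasets import load_dataset

# Some datasets versions require trust_remote_code=True for this dataset.
data = load_dataset("THUDM/LongBench", "narrativeqa", split="test")

for sample in data.select(range(3)):
    prompt = f"{sample['context']}\n\nQuestion: {sample['input']}\nAnswer:"
    # prediction = answer_with_model(prompt)    # long-context model call goes here
    print(len(prompt), "characters of context")  # inputs dwarf typical benchmark prompts
```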
  • 4
    LongWriter

    Unleashing 10,000+ Word Generation from Long Context LLMs

    LongWriter is an open-source framework and set of large language models designed to enable ultra-long text generation that can exceed 10,000 words while maintaining coherence and structure. Traditional large language models can process large inputs but often struggle to generate long outputs due to limitations in training data and alignment strategies. LongWriter addresses this challenge by introducing a specialized dataset and training approach that encourages models to produce longer responses. The system uses an agent-based pipeline called AgentWrite that decomposes large writing tasks into smaller subtasks, allowing the model to produce long documents section by section. Researchers also created the LongWriter-6k dataset containing thousands of examples with outputs ranging from a few thousand to tens of thousands of words.
    Downloads: 0 This Week
    Last Update:
    See Project
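A minimal sketch of the AgentWrite-style decomposition: plan an outline first, then generate the document section by section, carrying recent sections forward for coherence. `llm` is a placeholder for any text-generation callable, not LongWriter's actual interface:

```python
def agent_write(llm, task, n_sections=8):
    outline = llm(f"Write a {n_sections}-point outline for: {task}")
    sections = []
    for i, heading in enumerate(outline.splitlines()[:n_sections], start=1):
        context = "\n\n".join(sections[-2:])   # recent sections keep the draft coherent
        sections.append(llm(
            f"Task: {task}\nOutline:\n{outline}\n"
            f"Previously written:\n{context}\n"
            f"Now write section {i}: {heading}"
        ))
    return "\n\n".join(sections)
```

Because each subtask stays well within the model's comfortable output length, the joined result can exceed 10,000 words without the quality collapse seen in single-pass generation.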
  • 5
    MGIE

    Guiding Instruction-based Image Editing via Multimodal Large Language Models

    MGIE—Guiding Instruction-based Image Editing—demonstrates how a multimodal LLM can parse natural-language editing instructions and then drive image transformations accordingly. The project focuses on making edits explainable and controllable: the model interprets text guidance, reasons over image content, and outputs edits aligned with user intent. It’s positioned as an ICLR 2024 Spotlight work, with code and references that show how to connect language planning to concrete image operations. This bridges a gap between free-form prompts and precise edits by letting users describe “what” and “where” in everyday language. The repo includes instructions, examples, and links that situate MGIE within Apple’s broader line of multimodal research. For practitioners, MGIE provides a blueprint for text-to-edit systems that are more semantically grounded than naive prompt-only pipelines.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 6
    MING

    A large-scale model of medical consultation in Chinese

    MING is an open-source medical large language model designed for intelligent medical consultation and question answering in Chinese. The project focuses on building a healthcare-focused conversational system capable of responding to medical questions, analyzing case descriptions, and guiding diagnostic reasoning. It is trained using medical instruction tuning so that the model can understand patient symptoms and respond with structured explanations and clinical suggestions. One of its primary goals is to simulate a multi-round medical consultation process, allowing the system to ask follow-up questions before offering diagnostic recommendations. This interactive capability makes it suitable for conversational health applications, patient triage scenarios, and educational demonstrations. The model is built on transformer-based architectures using frameworks such as PyTorch and integrates with Hugging Face tooling for training and inference workflows.
    Downloads: 0 This Week
    Last Update:
    See Project
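The multi-round consultation pattern reduces to an ordinary chat loop that keeps the full turn history, so the model can ask follow-up questions before recommending anything. A sketch using standard Hugging Face APIs; the checkpoint id is a placeholder, and MING's repository documents the real weights and prompt format:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-medical-model"  # placeholder; see the MING repo for real checkpoints
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

history = []
for _ in range(3):  # a short multi-round consultation
    user = input("Patient: ")
    history.append({"role": "user", "content": user})
    inputs = tokenizer.apply_chat_template(history, add_generation_prompt=True,
                                           return_tensors="pt")
    output = model.generate(inputs, max_new_tokens=256)
    reply = tokenizer.decode(output[0, inputs.shape[-1]:], skip_special_tokens=True)
    history.append({"role": "assistant", "content": reply})  # keep turns for follow-ups
    print("Doctor:", reply)
```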
  • 7
    ML Ferret

    Refer and Ground Anything Anywhere at Any Granularity

    Ferret is Apple’s end-to-end multimodal large language model designed specifically for flexible referring and grounding: it can understand references of any granularity (boxes, points, free-form regions) and then ground open-vocabulary descriptions back onto the image. The core idea is a hybrid region representation that mixes discrete coordinates with continuous visual features, so the model can fluidly handle “any-form” referring while maintaining precise spatial localization. The repo presents the vision-language pipeline, model assets, and paper resources that show how Ferret answers questions, follows instructions, and returns grounded outputs rather than just text. In practice, this enables tasks like “find that small red icon next to the chart and describe it” where both the linguistic reference and the visual region are ambiguous without fine spatial reasoning.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 8
    ML Retreat

    Machine Learning Journal for Intermediate to Advanced Topics

    ML Retreat is an open-source learning repository that serves as a structured journal documenting advanced topics in machine learning and artificial intelligence. The project compiles detailed notes, technical explanations, and curated resources that guide readers through complex concepts across modern AI research. Rather than functioning as a traditional tutorial series, the repository is organized as a learning journey that progressively explores increasingly advanced subjects. Topics include large language models, graph neural networks, mechanistic interpretability, transformer architectures, and emerging research areas such as quantum machine learning. The repository includes references to influential research papers, lectures, and educational content from well-known machine learning educators.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 9
    Magicoder

    Empowering Code Generation with OSS-Instruct

    Magicoder is an open-source family of large language models designed specifically for code generation and software development tasks. The project focuses on improving the quality and diversity of code generation by training models with a novel dataset construction approach known as OSS-Instruct. This technique uses open-source code repositories as a foundation for generating more realistic and diverse instruction datasets for training language models. By grounding training data in real open-source examples, Magicoder aims to reduce bias and improve the reliability of code generation results compared to models trained solely on synthetic instructions. The project includes model implementations, training resources, and evaluation benchmarks that demonstrate how the approach improves instruction-following and code synthesis capabilities. Magicoder models are intended for tasks such as programming assistance, code explanation, automated debugging, and software documentation generation.
    Downloads: 0 This Week
    Last Update:
    See Project
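The OSS-Instruct recipe is easy to sketch: a real open-source code fragment seeds the prompt, grounding the generated problem in realistic code rather than pure synthesis. `llm` is a placeholder chat-completion callable, and the prompt wording is illustrative, not the paper's exact template:

```python
SEED_PROMPT = """Here is a code fragment taken from an open-source project:

{snippet}

Inspired by this fragment, write one self-contained programming problem
and a correct reference solution."""

def oss_instruct_pair(llm, snippet):
    # Grounding the prompt in real code is what makes the generated
    # instruction data realistic and diverse, as described above.
    return llm(SEED_PROMPT.format(snippet=snippet))
```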
  • 10
    MatMul-Free LM

    Implementation for MatMul-free LM

    MatMul-Free LM is an experimental implementation of a large language model architecture designed to eliminate traditional matrix multiplication operations used in transformer networks. Since matrix multiplication is one of the most computationally expensive components of modern language models, the project explores alternative computational strategies that reduce hardware requirements while maintaining comparable performance. The architecture relies on quantization-aware training and lightweight operations to replace conventional dense matrix multiplications with more efficient alternatives. These optimizations can significantly reduce memory consumption and potentially improve computational efficiency during both training and inference. The repository provides implementations of models at several parameter scales and includes tools for experimenting with the architecture using modern machine learning frameworks.
    Downloads: 0 This Week
    Last Update:
    See Project
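A simplified illustration of the underlying trick: if weights are quantized to {-1, 0, +1}, the dense matmul degenerates into signed accumulation that cheap hardware operations can implement. This toy absmean-style version only conveys the idea; the repository provides the real quantization-aware training and fused kernels:

```python
import torch

def ternary_quantize(w):
    """Quantize weights to {-1, 0, +1} with a per-tensor scale."""
    scale = w.abs().mean()
    return torch.clamp(torch.round(w / (scale + 1e-8)), -1, 1), scale

def ternary_linear(x, w):
    w_t, scale = ternary_quantize(w)
    # With ternary weights, x @ w_t is pure add/subtract per output element;
    # dedicated kernels exploit this instead of floating-point multiplies.
    return (x @ w_t) * scale

x, w = torch.randn(2, 8), torch.randn(8, 4)
print(ternary_linear(x, w).shape)  # torch.Size([2, 4])
```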
  • 11
    MathModelAgent

    An Agent Designed for Mathematical Modeling

    MathModelAgent is an AI agent system designed specifically for assisting with mathematical modeling tasks and academic problem solving. The platform automates the process of analyzing mathematical problems, constructing models, generating code for simulations or computations, and producing a complete research-style report. The project uses a multi-agent architecture where different specialized agents handle tasks such as problem interpretation, modeling design, programming implementation, and paper writing. Through integration with multiple large language models, the system can coordinate these components to generate structured modeling solutions and formatted research papers suitable for submission. The platform also includes a code execution environment that allows generated programs to be tested, corrected, and refined during the modeling workflow.
    Downloads: 0 This Week
    Last Update:
    See Project
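The multi-agent split can be pictured as role-specific prompts around one LLM callable, with each stage consuming the previous stage's output. Purely illustrative; the actual system adds sandboxed code execution and iterative correction:

```python
def solve_modeling_problem(llm, problem):
    analysis = llm(f"As the analyst agent, restate and decompose this problem: {problem}")
    model = llm(f"As the modeler agent, propose assumptions and equations for: {analysis}")
    code = llm(f"As the programmer agent, write Python that solves: {model}")
    # A real system executes `code` in a sandbox and feeds errors back for correction.
    return llm(f"As the writer agent, draft a structured report from:\n{analysis}\n{model}\n{code}")
```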
  • 12
    MaxText

    A simple, performant and scalable Jax LLM

    MaxText is a high-performance, highly scalable open-source framework designed to train and fine-tune large language models using the JAX ecosystem. The project acts as both a reference implementation and a practical training library that demonstrates best practices for building and scaling transformer-based language models on modern accelerator hardware. It is optimized to run efficiently on Google Cloud TPUs and GPUs, enabling researchers and engineers to train models ranging from small experiments to extremely large distributed workloads. The framework focuses on simplicity while still supporting advanced techniques such as model sharding, distributed computation, and high-throughput training pipelines. MaxText includes ready-to-use configurations and reproducible training examples that help developers understand how to deploy large-scale AI workloads with modern machine learning infrastructure.
    Downloads: 0 This Week
    Last Update:
    See Project
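The JAX style MaxText builds on is worth seeing in miniature: describe a device mesh and a sharding, and `jax.jit` partitions the computation and inserts collectives automatically. A toy data-parallel example, not MaxText code (assumes the batch size divides the device count):

```python
import jax
import jax.numpy as jnp
import numpy as np
from jax.sharding import Mesh, NamedSharding, PartitionSpec

mesh = Mesh(np.array(jax.devices()), axis_names=("data",))  # 1-D mesh over all devices
sharding = NamedSharding(mesh, PartitionSpec("data"))       # shard the batch dimension

x = jax.device_put(jnp.ones((8, 1024)), sharding)           # batch split across devices

@jax.jit
def forward(x):
    return jnp.tanh(x @ jnp.ones((1024, 256)))              # compiler inserts collectives

print(forward(x).sharding)
```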
  • 13
    MegaParse

    File Parser optimised for LLM Ingestion with no loss

    MegaParse is a file parser optimized for Large Language Model (LLM) ingestion, ensuring no loss of information. It efficiently parses various document formats, such as PDFs, DOCX, and PPTX, converting them into formats ideal for processing by LLMs. This tool is essential for applications that require accurate and comprehensive data extraction from diverse document types.
    Downloads: 0 This Week
    Last Update:
    See Project
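Basic usage follows the pattern in the project's README; verify against the current version, as the API may have moved:

```python
from megaparse import MegaParse

parser = MegaParse()
document_text = parser.load("./report.pdf")  # PDF/DOCX/PPTX in, LLM-ready text out
print(document_text[:500])
```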
  • 14
    Megatron

    Ongoing research training transformer models at scale

    Megatron is a large, powerful transformer developed by the Applied Deep Learning Research team at NVIDIA. The repository hosts ongoing research on training large transformer language models at scale, with efficient model-parallel (tensor, sequence, and pipeline) and multi-node pre-training of transformer-based models such as GPT, BERT, and T5 using mixed precision. Megatron also underpins NeMo Megatron, a framework that helps enterprises overcome the challenges of building and training sophisticated natural language processing models with billions or trillions of parameters.
    Downloads: 0 This Week
    Last Update:
    See Project
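Tensor model parallelism, one of the strategies listed above, is easy to see in a single-process toy: split a weight matrix column-wise so each "rank" computes a slice of the output, then concatenate (an all-gather across GPUs in the real implementation):

```python
import torch

def column_parallel_linear(x, full_weight, world_size=2):
    shards = full_weight.chunk(world_size, dim=1)  # each rank holds a column slice
    partials = [x @ w for w in shards]             # local matmul on each rank
    return torch.cat(partials, dim=-1)             # all-gather of outputs in a real setup

x, w = torch.randn(4, 16), torch.randn(16, 32)
assert torch.allclose(column_parallel_linear(x, w), x @ w)  # matches the full matmul
```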
  • 15
    MemoryOS

    MemoryOS is designed to provide a memory operating system

    MemoryOS is an open-source framework designed to provide a structured memory management system for AI agents and large language model applications. The project addresses one of the major limitations of modern language models: their inability to maintain long-term context beyond the limits of their prompt window. MemoryOS introduces a hierarchical memory architecture inspired by operating system memory management principles, allowing agents to store, update, retrieve, and generate information from multiple layers of memory. These layers typically include short-term memory for immediate conversation context, mid-term memory for topic-level grouping, and long-term personal memory for persistent knowledge about users or tasks. The system dynamically updates and promotes information between these layers using structured algorithms that prioritize relevance and recency.
    Downloads: 0 This Week
    Last Update:
    See Project
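A compact sketch of the layered design: short-term memory holds recent turns, mid-term memory groups entries by topic, and repeatedly revisited topics get promoted to long-term memory. The structure and promotion rule here are illustrative stand-ins for MemoryOS's actual algorithms:

```python
from collections import deque

class HierarchicalMemory:
    def __init__(self, short_capacity=8, promote_after=3):
        self.short = deque(maxlen=short_capacity)  # immediate conversation context
        self.mid = {}                              # topic -> grouped entries
        self.long = {}                             # persistent user/task knowledge
        self.hits = {}
        self.promote_after = promote_after

    def remember(self, topic, text):
        self.short.append((topic, text))
        self.mid.setdefault(topic, []).append(text)
        self.hits[topic] = self.hits.get(topic, 0) + 1
        if self.hits[topic] >= self.promote_after:  # simple relevance/recency heuristic
            self.long[topic] = self.mid[topic]      # promote to long-term memory

    def context(self, topic):
        return list(self.short) + self.long.get(topic, [])
```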
  • 16
    MiniMax-01

    Large-language-model & vision-language-model based on Linear Attention

    MiniMax-01 is the official repository for two flagship models: MiniMax-Text-01, a long-context language model, and MiniMax-VL-01, a vision-language model built on top of it. MiniMax-Text-01 uses a hybrid attention architecture that blends Lightning Attention, standard softmax attention, and Mixture-of-Experts (MoE) routing to achieve both high throughput and long-context reasoning. It has 456 billion total parameters with 45.9 billion activated per token and is trained with advanced parallel strategies such as LASP+, varlen ring attention, and Expert Tensor Parallelism, enabling a training context of 1 million tokens and up to 4 million tokens at inference. MiniMax-VL-01 extends this core by adding a 303M-parameter Vision Transformer and a two-layer MLP projector in a ViT–MLP–LLM framework, allowing the model to process images at dynamic resolutions up to 2016×2016.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 17
    MiniMax-M1

    Open-weight, large-scale hybrid-attention reasoning model

    MiniMax-M1 is presented as the world’s first open-weight, large-scale hybrid-attention reasoning model, designed to push the frontier of long-context, tool-using, and deeply “thinking” language models. It is built on the MiniMax-Text-01 foundation and keeps the same massive parameter budget, but reworks the attention and training setup for better reasoning and test-time compute scaling. Architecturally, it combines Mixture-of-Experts layers with lightning attention, enabling the model to support a native context length of 1 million tokens while using far fewer FLOPs than comparable reasoning models for very long generations. The team emphasizes efficient scaling of test-time compute: at 100K-token generation lengths, M1 reportedly uses only about 25 percent of the FLOPs of some competing models, making extended “think step” traces more feasible. M1 is further trained with large-scale reinforcement learning over diverse tasks.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 18
    MiniOneRec

    Minimal reproduction of OneRec

    MiniOneRec is an open-source framework designed to explore generative approaches to recommendation systems using large language model architectures. Traditional recommender systems typically rely on large embedding tables and ranking models, but MiniOneRec adopts a generative paradigm in which items are represented as sequences of semantic identifiers generated by autoregressive models. The framework provides an end-to-end pipeline for building generative recommender systems, including semantic identifier construction, supervised fine-tuning, and reinforcement learning-based optimization. Semantic IDs are created using techniques such as quantized variational autoencoders to convert item features into token sequences that can be modeled by transformer architectures. Developers can train and evaluate recommendation models using different backbone language models while benefiting from the generative framework’s parameter efficiency and scalability.
    Downloads: 0 This Week
    Last Update:
    See Project
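Semantic-ID construction can be illustrated with residual quantization: each level's codebook encodes what earlier levels missed, so an item becomes a short token sequence an autoregressive model can generate. Random codebooks here stand in for the trained RQ-VAE the framework uses:

```python
import numpy as np

rng = np.random.default_rng(0)
codebooks = [rng.normal(size=(256, 64)) for _ in range(3)]  # 3 levels, 256 codes each

def semantic_id(item_embedding):
    residual, tokens = item_embedding, []
    for book in codebooks:
        idx = int(np.argmin(np.linalg.norm(book - residual, axis=1)))
        tokens.append(idx)               # one discrete token per quantization level
        residual = residual - book[idx]  # next level quantizes what remains
    return tokens

print(semantic_id(rng.normal(size=64)))  # e.g. [17, 203, 88]
```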
  • 19
    Mixtral offloading

    Run Mixtral-8x7B models in Colab or consumer desktops

    Mixtral-Offloading is an open-source project designed to enable efficient inference of large Mixture-of-Experts language models such as Mixtral-8x7B on hardware with limited GPU memory. The project implements techniques that allow model components to be dynamically moved between CPU memory and GPU memory during inference, significantly reducing the amount of GPU VRAM required to run the model. This approach takes advantage of the sparse activation properties of mixture-of-experts architectures, where only a subset of expert networks are used for each token during generation. By selectively loading and caching the required experts, the system avoids keeping the entire model in GPU memory at once. The repository includes notebooks and code examples that demonstrate how to run large language models on consumer hardware such as personal GPUs or cloud notebook environments.
    Downloads: 0 This Week
    Last Update:
    See Project
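The offloading idea reduces to an LRU cache keyed by expert id: only the experts the router actually selects occupy GPU memory, while cold ones stay in host RAM. Illustrative only; the real project adds quantization and speculative prefetching of likely-next experts:

```python
from collections import OrderedDict

class ExpertCache:
    def __init__(self, cpu_experts, gpu_budget=4):
        self.cpu = cpu_experts     # all expert weights live in host RAM
        self.gpu = OrderedDict()   # LRU set of experts resident on the GPU
        self.budget = gpu_budget

    def fetch(self, expert_id):
        if expert_id in self.gpu:
            self.gpu.move_to_end(expert_id)             # cache hit: refresh recency
        else:
            if len(self.gpu) >= self.budget:
                self.gpu.popitem(last=False)            # evict least recently used
            self.gpu[expert_id] = self.cpu[expert_id]   # stand-in for .to("cuda")
        return self.gpu[expert_id]
```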
  • 20
    MoBA

    MoBA: Mixture of Block Attention for Long-Context LLMs

    MoBA, short for Mixture of Block Attention, is an open-source research implementation of a novel attention mechanism designed to improve the efficiency of large language models processing extremely long contexts. The architecture adapts ideas from Mixture-of-Experts networks and applies them directly to the attention mechanism of transformer models. Instead of forcing each token to attend to every other token in the sequence, MoBA divides the context into blocks and dynamically routes queries to only the most relevant segments of information. This routing strategy reduces the computational cost associated with traditional attention while preserving performance on reasoning and long-context tasks. The approach allows language models to scale to significantly longer input contexts without the quadratic computational cost normally associated with transformer attention mechanisms.
    Downloads: 0 This Week
    Last Update:
    See Project
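A simplified single-head version of the block routing described above: summarize each key block with its mean, let every query pick its top-k blocks, and attend only within those. This conveys the mechanism; the repository implements it with efficient fused kernels and causal masking:

```python
import torch

def moba_attention(q, k, v, block_size=64, top_k=2):
    blocks_k = k.split(block_size)                           # partition context into blocks
    blocks_v = v.split(block_size)
    summaries = torch.stack([b.mean(0) for b in blocks_k])   # one summary vector per block
    scores = q @ summaries.T                                 # route queries to blocks
    chosen = scores.topk(top_k, dim=-1).indices              # top-k blocks per query
    out = torch.zeros_like(q)
    for i in range(q.shape[0]):
        ks = torch.cat([blocks_k[j] for j in chosen[i]])
        vs = torch.cat([blocks_v[j] for j in chosen[i]])
        attn = torch.softmax(q[i] @ ks.T / ks.shape[-1] ** 0.5, dim=-1)
        out[i] = attn @ vs                                   # attend only within chosen blocks
    return out
```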
  • 21
    NExT-GPT

    Code and models for ICML 2024 paper, NExT-GPT

    NExT-GPT is an open-source research framework that implements an advanced multimodal large language model capable of understanding and generating content across multiple modalities. Unlike traditional models that primarily handle text, NExT-GPT supports input and output combinations involving text, images, video, and audio in a unified architecture. The system connects a large language model with multimodal encoders and diffusion-based decoders so it can interpret information from different sensory formats and generate responses in different media types. This architecture allows the model to convert between modalities, such as generating images from text descriptions or producing audio or video outputs based on textual prompts. The project also introduces instruction-tuning strategies that enable the model to perform complex multimodal reasoning and generation tasks with minimal additional parameters.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    NeMo Curator

    Scalable data pre-processing and curation toolkit for LLMs

    NeMo Curator is a Python library specifically designed for fast and scalable dataset preparation and curation for large language model (LLM) use cases such as foundation model pretraining, domain-adaptive pretraining (DAPT), supervised fine-tuning (SFT), and parameter-efficient fine-tuning (PEFT). It greatly accelerates data curation by leveraging GPUs with Dask and RAPIDS, resulting in significant time savings. The library provides a customizable and modular interface, simplifying pipeline expansion and accelerating model convergence through the preparation of high-quality tokens. At the core of NeMo Curator is the DocumentDataset, the main dataset class, which acts as a straightforward wrapper around a Dask DataFrame. The library offers easy-to-use methods for expanding the functionality of your curation pipeline while eliminating scalability concerns.
    Downloads: 0 This Week
    Last Update:
    See Project
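Minimal usage of the DocumentDataset wrapper mentioned above; method and argument names may differ slightly between releases, so check the documentation for your installed version:

```python
from nemo_curator.datasets import DocumentDataset

dataset = DocumentDataset.read_json("data/docs.jsonl")  # thin wrapper over a Dask DataFrame
print(dataset.df.head())                                # .df exposes the underlying Dask frame
dataset.to_json("curated/")                             # write processed shards back out
```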
  • 23
    NewsNotFound

    This is the entire source code for NewsNotFound's article gen process

    Our mission is to lead the way in AI journalism by providing completely neutral and unbiased news articles that can be governed by the public. NewsNotFound is a news website located at https://newsnotfound.com. We want to build the most unbiased news platform on the internet.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    OSS-Fuzz Gen

    LLM-powered fuzzing via OSS-Fuzz

    OSS-Fuzz-Gen is a companion project that helps automatically create or improve fuzz targets for open-source codebases, aiming to increase coverage in OSS-Fuzz with minimal maintainer effort. It analyses a library’s APIs, examples, and tests to propose harnesses that exercise parsers, decoders, or protocol handlers—precisely the code where fuzzing pays off. The system integrates with modern LLM-assisted workflows to draft harness code and then iterates based on build errors or low coverage signals. Importantly, it aligns with OSS-Fuzz conventions, generating corpus seeds, build rules, and sanitizer settings so projects can plug in quickly. Reports highlight what functions were targeted, how coverage evolved, and where manual hints could unlock more paths. The goal is pragmatic: shrink the gap between “we should fuzz this” and “we have robust fuzzing running in CI,” especially for understaffed maintainers.
    Downloads: 0 This Week
    Last Update:
    See Project
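The generate-and-iterate loop can be sketched generically: draft a harness with an LLM, attempt a sanitizer build, and feed compiler errors back until it links. Everything here (the prompt, the build command, `llm`) is a placeholder for the project's much richer pipeline:

```python
import subprocess

def generate_harness(llm, api_description, max_rounds=3):
    prompt = f"Write a libFuzzer harness for this API:\n{api_description}"
    for _ in range(max_rounds):
        harness = llm(prompt)
        with open("fuzz_target.cc", "w") as f:
            f.write(harness)
        build = subprocess.run(
            ["clang++", "-fsanitize=fuzzer,address", "fuzz_target.cc", "-o", "fuzz_target"],
            capture_output=True, text=True,
        )
        if build.returncode == 0:
            return harness  # compiles; hand off to the OSS-Fuzz project setup
        prompt += f"\nThe previous harness failed to build:\n{build.stderr}\nFix it."
    raise RuntimeError("no buildable harness after retries")
```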
  • 25
    OneFileLLM

    Specify a github or local repo, github pull request

    OneFileLLM is an open-source tool that aggregates content from a specified source, such as a GitHub or local repository or a GitHub pull request, into a single consolidated text file ready for large language model ingestion. Instead of manually copying files into a prompt, users point the tool at a source; it walks the content, extracts the text, and concatenates it with clear per-item labels into one output. This makes it straightforward to hand an entire codebase or change set to an LLM as context for tasks such as code review, summarization, and question answering, and it keeps lightweight workflows such as demonstrations and one-off analyses simple.
    Downloads: 0 This Week
    Last Update:
    See Project
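For a local repository, the aggregation step amounts to walking the tree and concatenating labeled file contents into one prompt-ready text file. An illustrative sketch, not the tool's actual code:

```python
from pathlib import Path

def repo_to_single_file(repo, out="context.txt"):
    with open(out, "w", encoding="utf-8") as f:
        for path in sorted(Path(repo).rglob("*.py")):
            f.write(f"\n===== {path} =====\n")         # label each file for the LLM
            f.write(path.read_text(errors="ignore"))   # full content, nothing dropped

repo_to_single_file(".")
```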