Open Source Python Large Language Models (LLM) - Page 11

Browse free open source Python Large Language Models (LLM) and projects below. Use the toggles on the left to filter open source Python Large Language Models (LLM) by OS, license, language, programming language, and project status.

  • 1 Firefly LLM

    A training framework for large language models

    Firefly is an open-source framework designed to simplify the training and fine-tuning of large language models through a unified and configurable workflow. The project provides a comprehensive environment where developers can perform tasks such as model pre-training, instruction tuning, and preference optimization using widely adopted machine learning techniques. Its architecture supports both full-parameter training and parameter-efficient strategies like LoRA and QLoRA, making it suitable for environments with limited computational resources. Firefly is compatible with a wide range of popular open-source models including LLaMA, Qwen, Baichuan, InternLM, and Mistral, enabling developers to experiment with different architectures using a consistent training pipeline. The framework also provides curated datasets and training templates that help streamline the process of instruction tuning and conversational model development.
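
    Firefly's own runs are driven by its configuration files, but the parameter-efficient approach it builds on (QLoRA) can be sketched with generic Hugging Face transformers and peft APIs; the model name and hyperparameters below are illustrative, not Firefly defaults.

```python
# Generic QLoRA-style setup (illustrative; not Firefly's actual config format).
# Assumes transformers, peft, bitsandbytes and accelerate are installed.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit base weights: the "Q" in QLoRA
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2-7B", quantization_config=bnb, device_map="auto"
)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)          # only the small LoRA adapters are trainable
model.print_trainable_parameters()
```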
  • 2 Fun Audio Chat

    Large Audio Language Model built for natural interactions

    Fun Audio Chat is an interactive voice-first conversational AI platform designed to let users engage in natural spoken dialogue with large language models in real time, turning speech into context-aware responses while maintaining a smooth back-and-forth experience. It combines speech recognition, audio processing, and AI generation so users can simply speak and receive spoken replies, enabling applications such as virtual assistants, voice bots, and hands-free chat interfaces. The system supports dynamic audio input and output, meaning it can handle different voices, tones, and conversational contexts without forcing users into typed interactions. With real-time streaming, it minimizes latency and delivers responses quickly, making it suitable for applications where responsiveness matters, such as interactive demos, accessibility tools, and conversational games.
  • 3 Functionary

    Chat language model that can use tools and interpret the results

    Functionary is an open-source large language model specifically designed for interpreting and executing structured functions or external tools within conversational AI systems. The model extends traditional chat-based language models by enabling them to determine when external functions should be called and how to extract the necessary parameters from natural language input. Function definitions are typically provided in JSON schema format, allowing the model to generate structured function calls compatible with modern tool-calling interfaces used in AI applications. Functionary can decide whether to execute tools sequentially or in parallel and can analyze the outputs of those tools to produce context-aware responses. This capability allows AI systems to interact with external services, APIs, or computation engines rather than relying solely on knowledge embedded in the model.
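
    In practice Functionary is usually served behind an OpenAI-compatible endpoint, so a tool-calling request looks roughly like the sketch below; the base URL, API key, and model name are placeholders for a local deployment, and get_weather is a hypothetical tool.

```python
# Tool-calling sketch against an OpenAI-compatible server (URL/model are placeholders).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="functionary")

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",                       # hypothetical tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

resp = client.chat.completions.create(
    model="functionary",                             # depends on how the server was launched
    messages=[{"role": "user", "content": "What's the weather in Istanbul?"}],
    tools=tools,
)
print(resp.choices[0].message.tool_calls)            # structured call(s) the model decided to make
```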
  • 4 GLM-V

    GLM-4.5V and GLM-4.1V-Thinking: Towards Versatile Multimodal Reasoning

    GLM-V is an open-source vision-language model (VLM) series from ZhipuAI that extends the GLM foundation models into multimodal reasoning and perception. The repository provides both GLM-4.5V and GLM-4.1V models, designed to advance beyond basic perception toward higher-level reasoning, long-context understanding, and agent-based applications. GLM-4.5V builds on the flagship GLM-4.5-Air foundation (106B parameters, 12B active), achieving state-of-the-art results on 42 benchmarks across image, video, document, GUI, and grounding tasks. It introduces hybrid training for broad-spectrum reasoning and a Thinking Mode switch to balance speed and depth of reasoning. GLM-4.1V-9B-Thinking incorporates reinforcement learning with curriculum sampling (RLCS) and Chain-of-Thought reasoning, outperforming models much larger in scale (e.g., Qwen-2.5-VL-72B) across many benchmarks.
  • 5 GPT Academic

    Research-oriented chatbot framework

    GPT Academic is a research-oriented chatbot framework designed to integrate large language models (LLMs) into academic workflows. It provides tools for structured document processing, citation management, and enhanced interaction with research papers.
  • 6 GPT-NeoX

    Implementation of model parallel autoregressive transformers on GPUs

    This repository hosts EleutherAI's library for training large-scale language models on GPUs. Our current framework is based on NVIDIA's Megatron Language Model and has been augmented with techniques from DeepSpeed as well as some novel optimizations. We aim to make this repo a centralized and accessible place to gather techniques for training large-scale autoregressive language models, and accelerate research into large-scale training. For those looking for a TPU-centric codebase, we recommend Mesh Transformer JAX. If you are not looking to train models with billions of parameters from scratch, this is likely the wrong library to use. For generic inference needs, we recommend the Hugging Face transformers library instead, which supports GPT-NeoX models.
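
    For the inference path the description points to, loading a GPT-NeoX checkpoint through the transformers library looks roughly like this; device_map="auto" assumes accelerate is installed, and the 20B weights need substantial memory.

```python
# Plain Hugging Face inference with a GPT-NeoX checkpoint (not the training library itself).
from transformers import AutoTokenizer, AutoModelForCausalLM

name = "EleutherAI/gpt-neox-20b"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, device_map="auto")  # needs `accelerate`

inputs = tok("GPT-NeoX was trained to", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=40)
print(tok.decode(out[0], skip_special_tokens=True))
```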
  • 7 GeneralAI

    Large-scale Self-supervised Pre-training Across Tasks, Languages, etc.

    Fundamental research to develop new architectures for foundation models and AI, focusing on modeling generality and capability, as well as training stability and efficiency.
  • 8 Genoss GPT

    One API for all LLMs, whether private or public

    A one-line replacement for OpenAI ChatGPT and Embeddings endpoints, powered by open-source models. Genoss is a pioneering open-source initiative that aims to offer a seamless alternative to OpenAI models such as GPT-3.5 and GPT-4, using open-source models like GPT4All.
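
    Because the project presents itself as a drop-in OpenAI replacement, usage amounts to repointing the standard client at a self-hosted Genoss endpoint; the URL and model identifier below are placeholders, not documented defaults.

```python
# Drop-in replacement sketch: same OpenAI client, different base URL (values are placeholders).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:4000/v1", api_key="unused-locally")
resp = client.chat.completions.create(
    model="gpt4all-j",   # an open-source model exposed by the gateway (illustrative)
    messages=[{"role": "user", "content": "Summarize what Genoss does in one sentence."}],
)
print(resp.choices[0].message.content)
```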
  • 9 Gorilla

    Gorilla: An API store for LLMs

    Gorilla is Apache 2.0 licensed; with Gorilla fine-tuned on MPT and Falcon, you can use it commercially with no obligations. Gorilla enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla comes up with the semantically and syntactically correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to train on! Join us as we try to expand the largest API store and teach LLMs how to write them! Hop on our Discord, open a PR, or email us if you would like to have your API incorporated as well.
  • 10 Gorilla CLI

    LLMs for your CLI

    Gorilla CLI powers your command-line interactions with a user-centric tool. Simply state your objective, and Gorilla CLI will generate potential commands for execution. Gorilla today supports ~1500 APIs, including Kubernetes, AWS, GCP, Azure, GitHub, Conda, Curl, Sed, and many more. No more recalling intricate CLI arguments.
  • 11 Grade School Math

    8.5K high quality grade school math problems

    The grade-school-math repository (sometimes called GSM8K) is a curated dataset of 8,500 high-quality grade school math word problems intended for evaluating mathematical reasoning capabilities of language models. It is structured into 7,500 training problems and 1,000 test problems. These aren’t trivial exercises — many require multi-step reasoning, combining arithmetic operations, and handling intermediate steps (e.g. “If she sold half as many in May… how many in total?”). The problems are written by human authors (not automatically generated) to ensure linguistic variety and realism. The repository maintains strict formatting (e.g. JSONL) for problem + answer pairs, and is used broadly in research to benchmark model performance under “word problem” settings. Issues are tracked (people report incorrect problems, ambiguous statements), and contributions are possible for cleaning or expanding the set.
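
    Each JSONL record pairs a question with a worked solution whose last line carries the ground-truth number after a "####" marker, so a minimal loader can look like the sketch below; the file path is assumed to match the repository layout.

```python
# Minimal GSM8K loader: each line has a "question" and an "answer" ending in "#### <number>".
import json

def final_answer(solution: str) -> str:
    # The ground-truth value follows the last "####" marker in the solution text.
    return solution.split("####")[-1].strip()

with open("grade_school_math/data/test.jsonl") as f:   # path assumed from the repo layout
    problems = [json.loads(line) for line in f]

print(problems[0]["question"])
print("ground truth:", final_answer(problems[0]["answer"]))
```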
  • 12 HN Time Capsule

    Analyzing Hacker News discussions from a decade ago in hindsight

    HN Time Capsule is a creative and nostalgic project that captures and preserves snapshots of Hacker News content over time, providing a historical look at how topics, discussions, and popular threads have evolved. Rather than functioning like a live aggregator, it stores periodic captures of posts and comments, creating a time capsule that lets researchers, enthusiasts, and historians trace changes in sentiment, technology trends, and community priorities across different eras of the Hacker News community. The interface allows users to browse archived posts by date, explore trending discussions of the past, and filter content by keywords, authors, or tags to study how particular themes have emerged or faded. By preserving content that might otherwise be lost to time or buried in the fast-moving flow of new posts, HN Time Capsule becomes both an educational resource and a research tool for community dynamics and tech history.
  • 13 Hallucination Leaderboard

    Leaderboard Comparing LLM Performance at Producing Hallucinations

    Hallucination Leaderboard is an open research project that tracks and compares the tendency of large language models to produce hallucinated or inaccurate information when generating summaries. The project provides a standardized benchmark that evaluates different models using a dedicated hallucination detection system known as the Hallucination Evaluation Model. Each model is tested on document summarization tasks to measure how often generated responses introduce information that is not supported by the original source material. The results are published as a leaderboard that allows researchers and developers to compare model reliability and factual consistency. By focusing on hallucination rates rather than traditional metrics such as accuracy or fluency, the benchmark highlights an important aspect of AI system safety and trustworthiness. The leaderboard is regularly updated as new models are released and evaluation methods evolve.
  • 14 Happy-LLM

    Large Language Model Principles and Practice Tutorial from Scratch

    Happy-LLM is an open-source educational project created by the Datawhale AI community that provides a structured and comprehensive tutorial for understanding and building large language models from scratch. The project guides learners through the entire conceptual and practical pipeline of modern LLM development, starting with foundational natural language processing concepts and gradually progressing to advanced architectures and training techniques. It explains the Transformer architecture, pre-training paradigms, and model scaling strategies while also providing hands-on coding examples so readers can implement and experiment with their own models. The tutorial emphasizes practical understanding by walking users through building and training small language models, including tokenizer construction, pre-training workflows, and fine-tuning methods.
  • 15 Heretic

    Fully automatic censorship removal for language models

    Heretic is an open-source Python tool that automatically removes the built-in censorship or “safety alignment” from transformer-based language models so they respond to a broader range of prompts with fewer refusals. It works by applying directional ablation techniques and a parameter optimization strategy to adjust internal model behaviors without expensive post-training or altering the core capabilities. Designed for researchers and advanced users, Heretic makes it possible to study and experiment with uncensored model responses in a reproducible, automated way. The project can decensor many popular dense and some mixture-of-experts (MoE) models, supporting workflows that would otherwise require manual tuning. Beyond simple decensoring, Heretic includes research-oriented options for analyzing model internals and interpretability data.
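
    Heretic automates the whole pipeline, but the directional-ablation idea it applies can be illustrated conceptually: estimate a "refusal direction" in activation space and subtract each hidden state's projection onto it. The sketch below is an illustration of that operation only, not Heretic's actual code.

```python
# Conceptual directional ablation (illustration only, not Heretic's implementation).
import torch

def ablate_direction(hidden: torch.Tensor, direction: torch.Tensor) -> torch.Tensor:
    d = direction / direction.norm()                   # unit vector for the unwanted behavior
    return hidden - (hidden @ d).unsqueeze(-1) * d     # remove the component along d

hidden_states = torch.randn(4, 4096)   # stand-in for residual-stream activations
refusal_dir = torch.randn(4096)        # in practice estimated from contrastive prompt pairs
print(ablate_direction(hidden_states, refusal_dir).shape)
```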
  • 16 HumanEval

    Code for the paper "Evaluating Large Language Models Trained on Code"

    human-eval is a benchmark dataset and evaluation framework created by OpenAI for measuring the ability of language models to generate correct code. It consists of hand-written programming problems with unit tests, designed to assess functional correctness rather than superficial metrics like text similarity. Each task includes a natural language prompt and a function signature, requiring the model to generate an implementation that passes all provided tests. The benchmark has become a standard for evaluating code generation models, including those in the Codex and GPT families. Researchers can use the dataset to run reproducible comparisons across models and track improvements in functional code synthesis. By focusing on correctness through execution, human-eval provides a rigorous and practical way to evaluate programming capabilities in AI systems.
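
    The evaluation loop follows a simple pattern: read the problems, generate a completion per task, write the samples to JSONL, and score them with the bundled checker. A rough sketch, where generate_one is a placeholder for a real model call:

```python
# Sketch of the human-eval workflow; generate_one() is a placeholder for your model.
from human_eval.data import read_problems, write_jsonl

def generate_one(prompt: str) -> str:
    return "    return 0\n"   # stub completion; replace with an actual model call

problems = read_problems()
samples = [
    {"task_id": task_id, "completion": generate_one(problem["prompt"])}
    for task_id, problem in problems.items()
]
write_jsonl("samples.jsonl", samples)
# Then score functional correctness from the shell:
#   evaluate_functional_correctness samples.jsonl
```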
  • 17 II Agent

    A new open-source framework to build and deploy intelligent agents

    II-Agent is an open-source intelligent assistant framework designed to automate complex workflows across multiple domains using large language models and external tools. The platform allows users to interact with multiple AI models within a single environment while connecting those models to external services and knowledge sources. Through a unified interface, users can switch between models, access specialized tools, and execute tasks that require information retrieval, code execution, or file analysis. The architecture focuses on transforming traditional software tools into autonomous assistants capable of completing tasks independently based on user instructions. II-Agent supports integration with modern AI services and can coordinate interactions between different models and capabilities within the same workflow.
  • 18 ImPromptu

    Domain Agnostic Prompts for Savvy Professionals

    A community-driven wiki full of favorite prompts for various Large Language Models such as ChatGPT, GPT-3, MidJourney, and soon Google's Bard, and more! Choose a subject area you are interested in, and click the link below to go to the page with prompts for that subject. If that page is empty, then you can help by adding prompts to that page. If you are not sure how to do that, you can read the contributing guidelines. If you are feeling like having your mind melt into magic today, then head over to the prompt generator and let the magic happen. This script will literally write your prompts for you, as if ChatGPT wasn't enough magic for you already.
  • 19 Instructor Python

    Structured outputs for LLMs

    Instructor is a Python library that bridges OpenAI responses with structured data validation using Pydantic models. It lets developers specify expected output schemas and ensures that the responses from OpenAI APIs are automatically parsed and validated against those models. This makes integrating LLMs into structured workflows safer and more predictable, especially in production applications.
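
    A typical use looks like the sketch below: declare a Pydantic model, wrap the OpenAI client with instructor, and get back a validated object instead of raw text; the model name and fields are illustrative.

```python
# Structured extraction with instructor + Pydantic (model name and fields are illustrative).
import instructor
from openai import OpenAI
from pydantic import BaseModel

class UserInfo(BaseModel):
    name: str
    age: int

client = instructor.from_openai(OpenAI())

user = client.chat.completions.create(
    model="gpt-4o-mini",
    response_model=UserInfo,   # the reply is parsed and validated against this schema
    messages=[{"role": "user", "content": "John Doe is 30 years old."}],
)
print(user.name, user.age)     # a validated UserInfo instance, not free-form text
```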
  • 20 Intel LLM Library for PyTorch

    Accelerate local LLM inference and finetuning

    Intel LLM Library for PyTorch (IPEX-LLM) is an open-source acceleration library developed to optimize large language model inference and fine-tuning on Intel hardware platforms. Built as an extension of the PyTorch ecosystem, the library enables developers to run modern transformer models efficiently on Intel CPUs, GPUs, and specialized AI accelerators. The framework provides hardware-aware optimizations and low-precision computation techniques that significantly improve the performance of large language models while reducing memory consumption. IPEX-LLM supports a wide range of popular models, including architectures such as LLaMA, Mistral, Qwen, and other transformer-based systems. The library can integrate with common AI frameworks and serving tools such as Hugging Face Transformers, LangChain, and vLLM, allowing developers to incorporate optimized inference into existing pipelines.
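
    The usual integration path mirrors the transformers API with low-bit loading switched on; a minimal sketch (the checkpoint name is illustrative and exact options vary by ipex_llm version):

```python
# Low-bit inference sketch with IPEX-LLM's transformers-style API (checkpoint name illustrative).
from ipex_llm.transformers import AutoModelForCausalLM
from transformers import AutoTokenizer

name = "meta-llama/Llama-2-7b-chat-hf"
model = AutoModelForCausalLM.from_pretrained(name, load_in_4bit=True)  # INT4 weights on Intel hardware
tok = AutoTokenizer.from_pretrained(name)

inputs = tok("What does IPEX-LLM optimize?", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```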
  • 21 InternGPT

    Open source demo platform where you can easily showcase your AI models

    InternGPT is an open-source multimodal AI framework designed to extend large language models beyond text interactions into visual reasoning and image manipulation tasks. The system integrates conversational AI with computer vision models so users can interact with images, videos, and visual environments through natural language instructions. Unlike traditional chat systems that rely solely on text prompts, InternGPT allows users to interact with visual content using both language and nonverbal signals such as pointing or highlighting objects within images. The framework connects multiple specialized AI models that perform tasks such as object detection, segmentation, captioning, and visual editing while coordinating them through a central conversational interface. This architecture enables the system to plan actions, execute visual operations, and return results in a coherent dialogue with the user.
  • 22 InternLM

    Official release of InternLM series

    InternLM is an open-source family of multilingual foundation and chat models, accompanied by an ecosystem that supports training, inference, and application development. The repository highlights multiple model sizes intended to serve different needs, from efficient research and prototyping to more capable deployments for complex scenarios. Beyond model weights, the project emphasizes an ecosystem view, pointing developers to compatible tools and projects across training and inference so teams can build end-to-end workflows. InternLM’s direction includes strong general-purpose capabilities and ongoing iterations that target improved reasoning, coding, and tool-use behaviors. The broader InternLM ecosystem also includes training tooling and guidance aimed at making fine-tuning and adaptation more accessible across hardware setups, including smaller single-GPU environments and larger multi-node configurations.
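
    For quick experimentation, the chat checkpoints are commonly loaded through transformers with remote code enabled, roughly as below; the checkpoint name is illustrative, and the chat() helper is provided by the model's own remote code rather than by transformers itself.

```python
# Loading an InternLM chat checkpoint via transformers (checkpoint name illustrative).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "internlm/internlm2_5-7b-chat"
tok = AutoTokenizer.from_pretrained(name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    name, torch_dtype=torch.float16, trust_remote_code=True
).cuda()

response, history = model.chat(tok, "Hello! Who are you?", history=[])  # chat() comes from remote code
print(response)
```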
  • 23 InternLM-XComposer-2.5

    InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System

    InternLM-XComposer is an open-source multimodal AI system designed to generate long-form content that combines text with visual elements such as images and diagrams. The model is built on top of the InternLM language model architecture and extends its capabilities to handle multimodal inputs and outputs. Instead of producing only textual responses, the system can generate visually enriched documents such as illustrated articles, presentations, and educational materials. It incorporates visual understanding modules that allow the model to analyze images and integrate them into coherent narrative outputs. The framework also supports tasks such as image captioning, multimodal reasoning, and layout generation for structured visual documents. By combining language generation with visual composition capabilities, the system enables new forms of content creation that integrate written explanations with automatically generated visual components.
  • 24 InternVL

    A Pioneering Open-Source Alternative to GPT-4o

    InternVL is a large-scale multimodal foundation model designed to integrate computer vision and language understanding within a unified architecture. The project focuses on scaling vision models and aligning them with large language models so that they can perform tasks involving both visual and textual information. InternVL is trained on massive collections of image-text data, enabling it to learn representations that capture both visual patterns and semantic meaning. The model supports a wide variety of tasks, including visual perception, image classification, and cross-modal retrieval between images and text. It can also be connected to language models to enable conversational interfaces that understand images, videos, and other visual content. By combining large-scale vision architectures with language reasoning capabilities, the project aims to create a more general multimodal AI system capable of handling diverse real-world tasks.
  • 25 JSON_REPAIR

    A Python module to repair invalid JSON from LLMs

    json_repair is an open-source Python library designed to automatically fix malformed JSON data and convert it into valid, parseable structures. The tool is particularly useful in scenarios where JSON output is generated by large language models or external services that may produce syntactically invalid responses. Instead of failing when encountering errors such as missing quotes, trailing commas, or incomplete objects, the library analyzes the malformed data and reconstructs it into valid JSON. The repair process can also be combined with optional JSON Schema validation to enforce structural constraints and ensure the output conforms to expected data types and formats. Developers can integrate the library into applications as a drop-in replacement for standard JSON parsing functions, allowing systems to tolerate imperfect structured data without crashing.
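
    Typical usage is essentially a one-line swap for json.loads when the input may be malformed; a minimal sketch using the library's documented entry points:

```python
# Repairing malformed JSON of the kind an LLM might emit.
import json_repair
from json_repair import repair_json

broken = '{"name": "Ada", "skills": ["math", "logic",]}'   # trailing comma makes this invalid JSON

fixed_text = repair_json(broken)   # returns a valid JSON string
data = json_repair.loads(broken)   # or parse straight into a Python object
print(fixed_text)
print(data["skills"])
```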