Open Source Python Large Language Models (LLM) - Page 7

Python Large Language Models (LLM)


Browse free open source Python Large Language Models (LLM) and projects below. Use the toggles on the left to filter open source Python Large Language Models (LLM) by OS, license, language, programming language, and project status.

  • 1
    GPT Neo

    An implementation of model parallel GPT-2 and GPT-3-style models

An implementation of model- and data-parallel GPT-3-like models using the mesh-tensorflow library. If you're just here to play with our pre-trained models, we strongly recommend you try out the Hugging Face Transformers integration. Training and inference are officially supported on TPU and should work on GPU as well. This repository will be (mostly) archived as we move focus to our GPU-specific repo, GPT-NeoX. Note that while GPT-Neo can technically run a training step at 200B+ parameters, it is very inefficient at those scales; this, along with the many GPUs that became available to us, prompted us to move development over to GPT-NeoX. All evaluations were done using our evaluation harness. Some results for GPT-2 and GPT-3 are inconsistent with the values reported in the respective papers; we are currently looking into why, and would greatly appreciate feedback and further testing of our eval harness.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 2
    Tongyi DeepResearch

    Tongyi Deep Research, the Leading Open-source Deep Research Agent

DeepResearch (Tongyi DeepResearch) is an open-source “deep research agent” developed by Alibaba’s Tongyi Lab and designed for long-horizon, information-seeking tasks. It is built to act like a research agent: retrieving information from the web and documents, reasoning over it, synthesizing findings, and backing its outputs with evidence. The model is about 30.5 billion parameters in size, though only ~3.3B parameters are active for any given token. It uses a mix of synthetic data generation, fine-tuning, and reinforcement learning; is evaluated on benchmarks covering web search, document understanding, question answering, and “agentic” tasks; and provides inference tools, evaluation scripts, and “web agent”-style interfaces. The aim is to enable more autonomous, agentic models that can perform sustained knowledge gathering, reasoning, and synthesis across multiple modalities (web, files, etc.).
    Downloads: 3 This Week
    Last Update:
    See Project
  • 3
    llama2.c

    Inference Llama 2 in one file of pure C

    llama2.c is a minimalist implementation of the Llama 2 language model architecture designed to run entirely in pure C. Created by Andrej Karpathy, this project offers an educational and lightweight framework for performing inference on small Llama 2 models without external dependencies. It provides a full training and inference pipeline: models can be trained in PyTorch and later executed using a concise 700-line C program (run.c). While it can technically load Meta’s official Llama 2 models, current support is limited to fp32 precision, meaning practical use is capped at models up to around 7B parameters. The goal of llama2.c is to demonstrate how a compact and transparent implementation can perform meaningful inference even with small models, emphasizing simplicity, clarity, and accessibility. The project builds upon lessons from nanoGPT and takes inspiration from llama.cpp, focusing instead on minimalism and educational value over large-scale performance.
    Downloads: 3 This Week
    Last Update:
    See Project
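The description above mentions that run.c performs inference in a short C program; the per-token sampling step at its core can be sketched in plain Python (a conceptual illustration of temperature sampling, not llama2.c's actual code; the function name is invented):

```python
import math
import random

def sample_logits(logits, temperature=1.0, rng=random):
    """Pick the next token from raw logits, as a llama2.c-style
    inference loop does at each step (conceptual sketch)."""
    if temperature == 0.0:
        # Greedy decoding: take the highest-scoring token.
        return max(range(len(logits)), key=lambda i: logits[i])
    scaled = [l / temperature for l in logits]
    # Numerically stable softmax over the scaled logits.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Draw a token index from the resulting distribution.
    r = rng.random()
    cdf = 0.0
    for i, p in enumerate(probs):
        cdf += p
        if r < cdf:
            return i
    return len(probs) - 1
```

Lower temperatures sharpen the distribution toward the argmax; temperature 0 reduces to greedy decoding.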
  • 4
    llmware

    Unified framework for building enterprise RAG pipelines

    llmware is an open source framework designed to simplify the creation of enterprise-grade applications powered by large language models. The platform focuses on building secure and private AI workflows that can run locally on laptops, edge devices, or self-hosted servers without relying exclusively on cloud APIs. It provides a unified interface for constructing retrieval-augmented generation pipelines, agent workflows, and document intelligence applications. One of the framework’s defining characteristics is its collection of small specialized language models optimized for specific tasks such as summarization, classification, and document analysis. The system supports a wide range of inference backends including PyTorch, OpenVINO, ONNX Runtime, and other optimized runtimes, allowing developers to choose the most efficient execution environment for their hardware.
    Downloads: 3 This Week
    Last Update:
    See Project
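The retrieval-augmented pattern described above reduces to: index documents, retrieve the best match for a query, and hand it to a model as context. A minimal dependency-free sketch of the retrieval step (illustrative only; a toy bag-of-words similarity standing in for a real embedding model, and not the llmware API):

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = ["the invoice is due in thirty days",
        "the warranty covers parts and labor"]
context = retrieve("when is the invoice due", docs)
```

In a real pipeline the retrieved chunk would be prepended to the model prompt; frameworks like llmware add document parsing, vector stores, and model inference around this core idea.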
  • 5
    tldw Server

    Your Personal Research Multi-Tool

    tldw-server (mirror) is a mirrored distribution of an open-source backend service designed to store, process, and serve summarized information extracted from long pieces of content. The name “tldw” reflects the phrase “too long; didn’t watch,” which refers to tools that condense lengthy videos, articles, or documents into concise summaries. The server component typically acts as the core infrastructure that manages summaries, metadata, and retrieval operations for client applications or user interfaces. In practical deployments, a system like this can support AI-powered summarization pipelines that process transcripts, articles, or other long-form material and store condensed versions for easier consumption. The mirrored project hosted on SourceForge exists to preserve the availability of the code and provide an alternative download location for developers and researchers. Such servers are commonly integrated with AI models that generate summaries and tag content automatically.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 6
    AgentEvolver

    Towards Efficient Self-Evolving Agent System

    AgentEvolver is an open-source research framework for building self-evolving AI agents powered by large language models. The system focuses on improving the efficiency and scalability of training autonomous agents by allowing them to generate tasks, explore environments, and refine strategies without heavy reliance on manually curated datasets. Its architecture combines reinforcement learning with LLM-driven reasoning mechanisms to guide exploration and learning. The framework introduces several key mechanisms, including self-questioning to create new learning tasks, self-navigating to improve exploration through experience reuse, and self-attributing to assign rewards based on the usefulness of actions. These mechanisms enable agents to continuously improve their capabilities while interacting with complex environments and tools. AgentEvolver also integrates environment sandboxes, experience management systems, and modular data pipelines to support large-scale experimentation.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 7
    Awesome LLM Apps

    Collection of awesome LLM apps with AI Agents and RAG using OpenAI

    Awesome LLM Apps is a community-curated directory of interesting, practical, and innovative applications built on or around large language models, serving as a discovery hub for developers, researchers, and enthusiasts. The list spans a wide range of categories including productivity tools, creative assistants, utilities, education platforms, research frameworks, and niche vertical apps, showcasing how generative models are being used across domains. Each entry includes a brief description, language model dependencies, technology stack notes, and sometimes links to demos or source code, making it easy to explore ideas and reuse concepts for your own projects. Because the landscape of LLM-powered applications changes quickly, the repository is designed to be updated regularly through community contributions, ensuring it stays current with new tools and releases.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 8
    Bard API

    The unofficial python package that returns response of Google Bard

This Python package returns Google Bard responses through the value of a browser cookie. It is designed for use with the Python packages ExceptNotifier and Co-Coder. Please note that bardapi is not a free service, but rather a tool provided to assist developers with testing certain functionality, given the delayed development and release of Google Bard's official API. It has been designed with a lightweight structure that can easily adapt to the emergence of an official API, so I strongly discourage using it for any other purpose. If you have access to the official PaLM 2 API, replace the provided response with the corresponding official code.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 9
    FlagEmbedding

    Retrieval and Retrieval-augmented LLMs

    FlagEmbedding is an open-source toolkit for building and deploying high-performance text embedding models used in information retrieval and retrieval-augmented generation systems. The project is part of the BAAI FlagOpen ecosystem and focuses on creating embedding models that transform text into dense vector representations suitable for semantic search and large language model pipelines. FlagEmbedding includes a family of models known as BGE (BAAI General Embedding), which are designed to achieve strong performance across multilingual and cross-lingual retrieval benchmarks. The toolkit provides infrastructure for inference, fine-tuning, evaluation, and dataset preparation, enabling developers to train custom embedding models for specific domains or applications. It also includes reranker models that refine search results by re-evaluating candidate documents using cross-encoder architectures, improving retrieval accuracy in complex queries.
    Downloads: 2 This Week
    Last Update:
    See Project
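Once a BGE-style model has mapped texts to dense vectors, semantic search reduces to ranking by cosine similarity. A sketch with invented 3-dimensional vectors (real BGE embeddings have hundreds of dimensions, and FlagEmbedding's own API is not shown here):

```python
import math

def cosine(u, v):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

# Pretend embeddings produced by an encoder such as BGE (values invented).
corpus = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.8, 0.3],
}
query_vec = [0.85, 0.15, 0.05]

# Rank documents by similarity to the query vector.
ranked = sorted(corpus, key=lambda d: cosine(query_vec, corpus[d]),
                reverse=True)
```

A cross-encoder reranker of the kind FlagEmbedding ships would then re-score only the top candidates from this ranking, jointly encoding query and document.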
  • 10
    Free LLM API resources

    A list of free LLM inference resources accessible via API

Free LLM API resources, a repository curated by cheahjs, is a community-driven index of free and open API endpoints, tools, datasets, runtimes, and utilities for working with large language models (LLMs) without cost barriers. It collects a wide range of resources, including hosted free-tier LLM APIs, documentation links, public model endpoints, open datasets useful for training or evaluation, tooling integrations, and examples showing how to interact with these services in real applications. The list helps developers, hobbyists, and researchers quickly find models they can use for prototyping, experimentation, or production proofs-of-concept without paid subscriptions, reducing friction for innovation. The repository categorizes offerings by provider, type of service (text, embeddings, vision), availability conditions (open without a key, free tier with a key), and usage examples to make discovery and adoption easier.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 11
    Gemini Fullstack LangGraph Quickstart

    Get started w/ building Fullstack Agents using Gemini 2.5 & LangGraph

gemini-fullstack-langgraph-quickstart is a fullstack reference application from Google DeepMind’s Gemini team that demonstrates how to build a research-augmented conversational AI system using LangGraph and Google Gemini models. The project features a React (Vite) frontend and a LangGraph/FastAPI backend designed to work together seamlessly for real-time research and reasoning tasks. The backend agent dynamically generates search queries based on user input, retrieves information via the Google Search API, and performs reflective reasoning to identify knowledge gaps. It then iteratively refines its search until it produces a comprehensive, well-cited answer synthesized by the Gemini model. The repository provides both a browser-based chat interface and a command-line script (cli_research.py) for executing research queries directly. For production deployment, the backend integrates with Redis and PostgreSQL to manage persistent memory, streaming outputs, and background task coordination.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 12
    Gorilla

    Gorilla: An API store for LLMs

Gorilla is released under Apache 2.0; with Gorilla fine-tuned on MPT and Falcon, you can use it commercially with no obligations. Gorilla enables LLMs to use tools by invoking APIs. Given a natural language query, Gorilla comes up with the semantically and syntactically correct API to invoke. With Gorilla, we are the first to demonstrate how to use LLMs to invoke 1,600+ (and growing) API calls accurately while reducing hallucination. We also release APIBench, the largest collection of APIs, curated and easy to train on. Join us as we try to expand the largest API store and teach LLMs how to write them! Hop on our Discord, open a PR, or email us if you would like to have your API incorporated as well.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 13
    Hephaestus

    Semi-Structured Agentic Framework. Workflows build themselves

    Hephaestus is an open-source semi-structured agentic framework designed to orchestrate multiple AI agents working together on complex tasks. Instead of relying entirely on predefined workflows, the framework allows agents to dynamically create tasks as they explore a problem space. Developers define high-level phases such as analysis, implementation, and testing, while agents generate specific subtasks within those phases. The system continuously monitors agent behavior and task progression, allowing workflows to evolve as new discoveries are made. For example, if an agent detects a bug or optimization opportunity, it can automatically create a new task and integrate it into the workflow. The framework also includes monitoring mechanisms that track agent trajectories and ensure that tasks remain aligned with overall objectives.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 14
    Heretic

    Fully automatic censorship removal for language models

    Heretic is an open-source Python tool that automatically removes the built-in censorship or “safety alignment” from transformer-based language models so they respond to a broader range of prompts with fewer refusals. It works by applying directional ablation techniques and a parameter optimization strategy to adjust internal model behaviors without expensive post-training or altering the core capabilities. Designed for researchers and advanced users, Heretic makes it possible to study and experiment with uncensored model responses in a reproducible, automated way. The project can decensor many popular dense and some mixture-of-experts (MoE) models, supporting workflows that would otherwise require manual tuning. Beyond simple decensoring, Heretic includes research-oriented options for analyzing model internals and interpretability data.
    Downloads: 2 This Week
    Last Update:
    See Project
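The directional ablation mentioned above can be illustrated in a few lines: given a unit "refusal direction" r in activation space, each hidden state h is replaced by h minus its projection onto r, zeroing the component along that direction. A conceptual sketch with toy vectors (not Heretic's actual code; the direction here is invented):

```python
import math

def ablate(h, r):
    """Remove the component of hidden state h along direction r."""
    norm = math.sqrt(sum(x * x for x in r))
    r_hat = [x / norm for x in r]                 # unit direction
    proj = sum(a * b for a, b in zip(h, r_hat))   # scalar projection
    return [a - proj * b for a, b in zip(h, r_hat)]

h = [2.0, 1.0, 0.0]           # toy hidden state
r = [1.0, 0.0, 0.0]           # toy "refusal direction"
h_ablated = ablate(h, r)      # component along r is now zero
```

In practice the direction is estimated from contrasting model activations, and the ablation is folded into the model's weight matrices so no runtime hook is needed.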
  • 15
    In-The-Wild Jailbreak Prompts on LLMs

A dataset of 15,140 ChatGPT prompts collected from Reddit

    In-The-Wild Jailbreak Prompts on LLMs is an open-source research repository that provides datasets and analytical tools for studying jailbreak prompts used to bypass safety restrictions in large language models. The project is part of a research effort to understand how users attempt to circumvent alignment and safety mechanisms built into modern AI systems. The repository includes a large collection of prompts gathered from real-world platforms such as Reddit, Discord, prompt-sharing communities, and other public sources. Researchers analyze these prompts to identify patterns, attack strategies, and techniques commonly used to trick language models into producing restricted or harmful outputs. The dataset includes thousands of prompts collected across multiple platforms and represents one of the largest collections of jailbreak attempts available for research.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 16
    LLMs-from-scratch

    Implement a ChatGPT-like LLM in PyTorch from scratch, step by step

    LLMs-from-scratch is an educational codebase that walks through implementing modern large-language-model components step by step. It emphasizes building blocks—tokenization, embeddings, attention, feed-forward layers, normalization, and training loops—so learners understand not just how to use a model but how it works internally. The repository favors clear Python and NumPy or PyTorch implementations that can be run and modified without heavyweight frameworks obscuring the logic. Chapters and notebooks progress from tiny toy models to more capable transformer stacks, including sampling strategies and evaluation hooks. The focus is on readability, correctness, and experimentation, making it ideal for students and practitioners transitioning from theory to working systems. By the end, you have a grounded sense of how data pipelines, optimization, and inference interact to produce fluent text.
    Downloads: 2 This Week
    Last Update:
    See Project
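The attention building block that such a walkthrough constructs is scaled dot-product attention: softmax(QKᵀ/√d_k)V. A compact numpy version of that computation (a sketch in the same spirit as the book's code, with toy shapes):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for one attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True) # rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))    # 4 tokens, 8-dim queries (toy sizes)
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

A full transformer layer wraps this in learned projections, multiple heads, and a causal mask; the core math is unchanged.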
  • 17
    MetaScreener

    AI-powered tool for efficient abstract and PDF screening

    MetaScreener is an open-source AI-assisted tool designed to streamline the screening process in systematic literature reviews and academic research workflows. The system helps researchers analyze large collections of academic abstracts and research papers to determine which studies are relevant for inclusion in evidence synthesis projects. Instead of manually reviewing hundreds or thousands of documents, researchers can use MetaScreener to apply machine learning techniques that assist with classification and prioritization of candidate papers. The platform can analyze both abstracts and full PDF documents, enabling automated filtering based on research criteria defined by the user. By incorporating natural language processing techniques, the system can identify potentially relevant studies and reduce the workload associated with manual screening.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 18
    MobileLLM

MobileLLM: Optimizing Sub-billion Parameter Language Models

    MobileLLM is a lightweight large language model (LLM) framework developed by Facebook Research, optimized for on-device deployment where computational and memory efficiency are critical. Introduced in the ICML 2024 paper “MobileLLM: Optimizing Sub-billion Parameter Language Models for On-Device Use Cases”, it focuses on delivering strong reasoning and generalization capabilities in models under one billion parameters. The framework integrates several architectural innovations—SwiGLU activation, deep and thin network design, embedding sharing, and grouped-query attention (GQA)—to achieve a superior trade-off between model size, inference speed, and accuracy. MobileLLM demonstrates remarkable performance, with the 125M and 350M variants outperforming previous state-of-the-art models of the same scale by up to 4.3% on zero-shot commonsense reasoning tasks.
    Downloads: 2 This Week
    Last Update:
    See Project
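The SwiGLU feed-forward block listed among MobileLLM's architectural choices can be written compactly: the input passes through two parallel projections, one gated by SiLU, multiplied elementwise, then projected back down. A numpy sketch with toy shapes (MobileLLM's real dimensions and weights differ):

```python
import numpy as np

def silu(x):
    """SiLU (swish) activation: x * sigmoid(x)."""
    return x / (1.0 + np.exp(-x))

def swiglu_ffn(x, W_gate, W_up, W_down):
    """SwiGLU feed-forward: (SiLU(x W_gate) * (x W_up)) W_down."""
    return (silu(x @ W_gate) * (x @ W_up)) @ W_down

rng = np.random.default_rng(0)
d_model, d_hidden = 16, 43          # toy widths
x = rng.normal(size=(2, d_model))   # 2 tokens
W_gate = rng.normal(size=(d_model, d_hidden))
W_up = rng.normal(size=(d_model, d_hidden))
W_down = rng.normal(size=(d_hidden, d_model))
y = swiglu_ffn(x, W_gate, W_up, W_down)
```

The gating lets the network modulate each hidden unit multiplicatively, which in practice improves quality over a plain ReLU MLP at the same parameter count.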
  • 19
    RAGxplorer

    Open-source tool to visualise your RAG

    RAGxplorer is an open-source visualization tool designed to help developers analyze and understand Retrieval-Augmented Generation (RAG) pipelines. Retrieval-augmented generation combines language models with external document retrieval systems in order to produce more accurate and grounded responses. However, RAG systems can be complex because they involve multiple components such as embedding models, vector databases, and retrieval algorithms. RAGxplorer provides visual tools that allow developers to inspect how documents are embedded, retrieved, and used to answer queries. The software can load documents, generate embeddings, and project them into reduced vector spaces so that users can visually explore relationships between queries and retrieved documents. It also includes interactive interfaces that show how retrieval affects the final output of the language model.
    Downloads: 2 This Week
    Last Update:
    See Project
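Projecting high-dimensional embeddings into a 2-D space for plotting, as described above, boils down to dimensionality reduction. A minimal PCA-via-SVD sketch (RAGxplorer uses its own projection pipeline; this only illustrates the underlying idea):

```python
import numpy as np

def project_2d(embeddings):
    """Center the vectors and keep the two top principal components."""
    X = embeddings - embeddings.mean(axis=0)
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:2].T            # shape: (n_points, 2)

rng = np.random.default_rng(0)
emb = rng.normal(size=(10, 64))    # 10 chunks, 64-dim embeddings (toy)
points = project_2d(emb)           # ready to scatter-plot
</```

Plotting query and chunk points in the same projected space makes it visible which chunks a retriever considers close to the query.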
  • 20
    Text-to-LoRA (T2L)

    Hypernetworks that adapt LLMs for specific benchmark tasks

    Text-to-LoRA is a research project that introduces a method for dynamically adapting large language models using hypernetworks that generate LoRA parameters directly from textual descriptions. Instead of training a new LoRA adapter for every task or dataset, the system can produce task-specific adaptations based solely on a text description of the desired capability. This approach enables models to rapidly internalize new contextual knowledge without performing traditional fine-tuning steps. The project provides a reference implementation of the Doc-to-LoRA method, which allows language models to quickly encode factual information or contextual constraints into lightweight LoRA modules. Developers and researchers can experiment with how textual task descriptions can generate LoRA weights that modify model behavior in real time.
    Downloads: 2 This Week
    Last Update:
    See Project
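A LoRA adapter, the object such a hypernetwork generates, replaces a full weight update with a low-rank product: W′ = W + (α/r)·BA. A numpy sketch of merging an adapter into a base weight (toy shapes, illustrative only; the hypernetwork that would produce A and B is not shown):

```python
import numpy as np

def apply_lora(W, A, B, alpha):
    """Merge a low-rank LoRA update into a frozen base weight matrix."""
    r = A.shape[0]                      # LoRA rank
    return W + (alpha / r) * (B @ A)

rng = np.random.default_rng(0)
d_out, d_in, rank = 32, 32, 4
W = rng.normal(size=(d_out, d_in))     # frozen base weight
A = rng.normal(size=(rank, d_in))      # low-rank factor (r x d_in)
B = np.zeros((d_out, rank))            # B starts at zero: a no-op adapter
W_adapted = apply_lora(W, A, B, alpha=8)
```

Because only A and B (a few thousand values here) change per task, generating them from a text description is far cheaper than fine-tuning W itself.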
  • 21
    UFO³

    Weaving the Digital Agent Galaxy

    UFO is an open-source framework developed by Microsoft for building intelligent agents that automate interactions with graphical user interfaces on the Windows operating system. The system allows users to issue natural language instructions that are translated into automated actions across multiple desktop applications. Using a dual-agent architecture, the framework analyzes both visual interface elements and system control structures in order to understand how applications should be manipulated. This enables the agent to navigate complex software environments and perform tasks that normally require manual interaction. UFO integrates mechanisms for task decomposition, planning, and execution so that high-level user requests can be broken down into smaller steps performed by specialized agents. The framework can operate across multiple applications simultaneously, allowing workflows that span several programs to be automated seamlessly.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 22
    VALL-E

    PyTorch implementation of VALL-E (Zero-Shot Text-To-Speech)

We introduce a language modeling approach for text-to-speech synthesis (TTS). Specifically, we train a neural codec language model (called VALL-E) using discrete codes derived from an off-the-shelf neural audio codec model, and regard TTS as a conditional language modeling task rather than continuous signal regression as in previous work. During the pre-training stage, we scale up the TTS training data to 60K hours of English speech, hundreds of times larger than existing systems. VALL-E exhibits emergent in-context learning capabilities and can synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt. Experimental results show that VALL-E significantly outperforms the state-of-the-art zero-shot TTS system in terms of speech naturalness and speaker similarity. In addition, we find that VALL-E can preserve the speaker's emotion and the acoustic environment of the acoustic prompt in synthesis.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 23
    Xtuner

    A Next-Generation Training Engine Built for Ultra-Large MoE Models

XTuner is a large-scale training engine designed for efficient training and fine-tuning of modern large language models, particularly mixture-of-experts architectures. The framework focuses on enabling scalable training for extremely large models while maintaining efficiency across distributed computing environments. Unlike traditional 3D parallel training strategies, XTuner introduces optimized parallelism techniques that simplify scaling and reduce system complexity when training massive models. The engine supports training models with hundreds of billions of parameters and enables long-context training with sequence lengths reaching tens of thousands of tokens. Its architecture incorporates memory-efficient optimizations that allow researchers to train large models even when computational resources are limited. XTuner is also designed to integrate with modern AI ecosystems, supporting multimodal training, reinforcement learning optimization, and instruction tuning pipelines.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 24
    how-to-optim-algorithm-in-cuda

    How to optimize some algorithm in cuda

    how-to-optim-algorithm-in-cuda is an open educational repository focused on teaching developers how to optimize algorithms for high-performance execution on GPUs using CUDA. The project combines technical notes, code examples, and practical experiments that demonstrate how common computational kernels can be optimized to improve speed and memory efficiency. Instead of presenting only theoretical explanations, the repository includes hand-written CUDA implementations of fundamental operations such as reductions, element-wise computations, softmax, and attention mechanisms. These examples show how different optimization techniques influence performance on modern GPU hardware and allow readers to experiment with real implementations. The repository also contains extensive learning notes that summarize CUDA programming concepts, GPU architecture details, and performance engineering strategies.
    Downloads: 2 This Week
    Last Update:
    See Project
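A recurring example in repositories like this is the numerically stable softmax: on the GPU it is a pair of reductions (row max, then row sum of exponentials) followed by an elementwise pass. The reference computation that such a CUDA kernel must match, written in numpy (a sketch of the math, not code from this repository):

```python
import numpy as np

def softmax_rows(x):
    """Row-wise stable softmax: the CPU reference a fused CUDA kernel
    is typically validated against.
    Reduction 1: row max (subtracted for stability).
    Reduction 2: row sum of exponentials (for normalization)."""
    m = x.max(axis=1, keepdims=True)
    e = np.exp(x - m)
    return e / e.sum(axis=1, keepdims=True)

x = np.array([[1.0, 2.0, 3.0],
              [1000.0, 1000.0, 1000.0]])  # naive exp would overflow here
p = softmax_rows(x)
```

Subtracting the row max is what makes the large-value row computable; the same trick appears inside attention kernels, where softmax is fused with the surrounding matrix multiplies.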
  • 25
    Grok-1

    Open-source, high-performance Mixture-of-Experts large language model

    Grok-1 is a 314-billion-parameter Mixture-of-Experts (MoE) large language model developed by xAI. Designed to optimize computational efficiency, it activates only 25% of its weights for each input token. In March 2024, xAI released Grok-1's model weights and architecture under the Apache 2.0 license, making them openly accessible to developers. The accompanying GitHub repository provides JAX example code for loading and running the model. Due to its substantial size, utilizing Grok-1 requires a machine with significant GPU memory. The repository's MoE layer implementation prioritizes correctness over efficiency, avoiding the need for custom kernels. This is a full repo snapshot ZIP file of the Grok-1 code.
    Downloads: 23 This Week
    Last Update:
    See Project
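The "25% of weights active per token" figure comes from mixture-of-experts routing: a gate scores all experts per token and only the top-k of them run. A toy numpy sketch of that routing step (invented sizes and weights, not Grok-1's actual JAX code):

```python
import numpy as np

def moe_route(x, gate_W, experts, k=2):
    """Top-k expert routing for one token: score all experts, keep the
    k best, and mix their outputs by renormalized softmax weight."""
    scores = x @ gate_W                        # one logit per expert
    top = np.argsort(scores)[-k:]              # indices of the k best
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                               # renormalize over top-k
    return sum(wi * experts[i](x) for wi, i in zip(w, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 8
# Each "expert" is just a toy linear map here.
experts = [
    (lambda W: (lambda x: x @ W))(rng.normal(size=(d, d)))
    for _ in range(n_experts)
]
gate_W = rng.normal(size=(d, n_experts))
token = rng.normal(size=d)
y = moe_route(token, gate_W, experts, k=2)     # only 2 of 8 experts run
```

Because only k of the experts execute per token, compute per token scales with k rather than with the total parameter count.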