Open Source TypeScript Large Language Models (LLM) - Page 4

TypeScript Large Language Models (LLM)

Browse free open source TypeScript Large Language Models (LLM) and projects below. Use the toggles on the left to filter open source TypeScript Large Language Models (LLM) by OS, license, language, programming language, and project status.

  • 1
    TypedAI

    TypeScript AI platform with AI chat and autonomous agents

    TypedAI is an open-source TypeScript platform designed for building and running AI agents, chatbots, and large language model workflows. The framework provides developers with a full-featured environment for designing autonomous agents capable of performing complex tasks such as code analysis, workflow automation, or conversational assistance. Written in TypeScript, the platform emphasizes strong typing and structured development patterns to improve reliability when building AI-driven systems. TypedAI includes tools for building chat interfaces, managing LLM interactions, and orchestrating multi-step workflows that combine AI reasoning with external tools. The platform also includes specialized software engineering agents that can assist with tasks such as code reviews or repository analysis. Developers can integrate multiple model providers and tools into the platform to create flexible agent pipelines.
  • 2
    WebLLM

    Bringing large-language models and chat to web browsers

    WebLLM is a modular, customizable JavaScript package that brings language model chat directly to web browsers with hardware acceleration. Everything runs inside the browser with no server support, accelerated with WebGPU. This opens opportunities to build AI assistants for everyone while preserving privacy and enjoying GPU acceleration. WebLLM offers a minimalist and modular interface to access the chatbot in the browser. The package itself does not come with a UI and is designed in a modular way to hook into any UI component. The following code snippet demonstrates a simple example that generates a streaming response on a webpage.
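A sketch of such a snippet, based on WebLLM's OpenAI-style chat API (the model identifier and option names are illustrative and may vary between package versions; this must run in a WebGPU-capable browser, not in Node):

```typescript
// Illustrative WebLLM usage: stream a chat completion onto the page.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Model id is an example from WebLLM's prebuilt model list; check the
  // package docs for the ids available in your version.
  const engine = await CreateMLCEngine("Llama-3.1-8B-Instruct-q4f32_1-MLC");
  const chunks = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Explain WebGPU in one sentence." }],
    stream: true, // tokens arrive incrementally
  });
  for await (const chunk of chunks) {
    // Append each streamed delta to the page as it arrives
    document.body.textContent += chunk.choices[0]?.delta?.content ?? "";
  }
}
main();
```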
  • 3
    axflow

    The TypeScript framework for AI development

    Axflow is a modular TypeScript framework designed to support the development of natural language powered AI applications. The framework provides a collection of independent modules that can be adopted individually or combined to create a full AI application stack. Its core SDK enables developers to integrate language model capabilities into web applications while maintaining strong modular design principles. Additional components support data ingestion, evaluation, and model interaction workflows that are commonly required when building production AI systems. For example, the framework includes modules for connecting application data to language models, evaluating the quality of model outputs, and building streaming user interfaces. Because each component can be used independently, developers can adopt Axflow incrementally rather than committing to a monolithic framework. This flexibility makes the system suitable for both experimentation and production environments.
  • 4
    dLLM

    dLLM: Simple Diffusion Language Modeling

    dLLM is an open-source framework designed to simplify the development, training, and evaluation of diffusion-based large language models. Unlike traditional autoregressive models that generate text sequentially token by token, diffusion language models generate text through an iterative denoising process that refines masked tokens over multiple steps. This approach allows models to reason over the entire sequence simultaneously and potentially produce more coherent outputs with bidirectional context. The project provides an integrated pipeline that standardizes how diffusion language models are trained, evaluated, and deployed, helping researchers reproduce experiments and compare results more easily. The framework includes scalable training infrastructure inspired by modern deep learning toolkits and supports integrations with widely used libraries for distributed training.
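The iterative denoising idea can be shown with a toy sketch. This is not dLLM's API: the "model" here is a fixed target sequence, and each step simply reveals a few masked positions, whereas a real diffusion LM would predict the highest-confidence tokens at each step using bidirectional context.

```typescript
// Toy illustration of diffusion-style decoding: start fully masked,
// then refine a few positions per denoising step instead of generating
// strictly left to right.
const MASK = "[MASK]";
const target = ["the", "cat", "sat", "on", "the", "mat"]; // stand-in for model predictions

function denoiseStep(seq: string[], reveal: number): string[] {
  // Reveal up to `reveal` still-masked positions (left to right here for
  // determinism; a real model would pick its most confident positions).
  const out = [...seq];
  let budget = reveal;
  for (let i = 0; i < out.length && budget > 0; i++) {
    if (out[i] === MASK) {
      out[i] = target[i];
      budget--;
    }
  }
  return out;
}

let seq = target.map(() => MASK); // fully masked starting sequence
const steps: string[][] = [];
while (seq.includes(MASK)) {
  seq = denoiseStep(seq, 2); // refine two tokens per iteration
  steps.push(seq);
}
console.log(steps.length, seq.join(" ")); // 3 steps to denoise 6 tokens
```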
  • 5
    dataline

    AI data analysis and visualization on CSV, Postgres, MySQL, Snowflake

    dataline is an open-source AI data analysis and visualization platform that allows users to interact with datasets using natural language. The system enables both technical and non-technical users to explore data by asking questions conversationally, which the platform translates into database queries and analytical operations. It supports connections to multiple structured data sources such as PostgreSQL, MySQL, Snowflake, SQLite, Excel files, CSV datasets, and other database systems. Once connected, users can generate tables, charts, and reports automatically based on queries produced by the AI engine. The platform is designed with a privacy-first architecture that stores data locally on the user’s device rather than sending it to external cloud services by default. It can also hide sensitive data from language models during processing, ensuring that only necessary metadata is used for query generation.
  • 6
    llama.vscode

    VS Code extension for LLM-assisted code/text completion

    llama.vscode is a Visual Studio Code extension that provides AI-assisted coding features powered primarily by locally running language models. The extension is designed to be lightweight and efficient, enabling developers to use AI tools even on consumer-grade hardware. It integrates with the llama.cpp runtime to run language models locally, eliminating the need to rely entirely on external APIs or cloud providers. The extension supports common AI development features such as code completion, conversational chat assistance, and AI-assisted code editing directly within the IDE. Developers can select and manage models through a configuration interface that automatically downloads and runs the required models locally. The extension also supports agent-style coding workflows, where AI tools can perform more complex tasks such as analyzing project context or editing multiple files.
  • 7
    llms.txt hub

    The largest directory for AI-ready documentation and tools

    llms-txt-hub serves as a central directory and knowledge base for the emerging llms.txt convention, a simple, text-based way for project owners to communicate preferences to AI tools. It catalogs implementations across projects and platforms, helping maintain a shared understanding of how LLM-powered services should interact with code and documentation. The repository aims to standardize patterns for allowlists, denylists, attribution, rate expectations, and contact information, mirroring the spirit of robots.txt for the AI era. It provides examples and templates to make adoption straightforward for maintainers of websites, docs portals, and repos. The hub encourages community debate and iteration so conventions remain practical as tooling evolves. By consolidating examples and tools, it accelerates consistent, respectful AI consumption of public content.
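Per the llms.txt convention the hub catalogs, the file is plain Markdown: an H1 project name, a blockquote summary, then H2 sections of annotated links. A minimal hypothetical example (all names and URLs illustrative):

```markdown
# ExampleProject

> Short summary of what ExampleProject does and how its documentation is organized.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): installation and first steps
- [API reference](https://example.com/docs/api.md): endpoint documentation

## Optional

- [Changelog](https://example.com/changelog.md): release history
```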
  • 8
    lms

    LM Studio CLI

    lms is a command-line interface tool designed to interact with and manage local large language models through the LM Studio ecosystem. The tool allows developers to control model execution directly from the terminal, providing programmatic access to features that are otherwise available through graphical interfaces. Through the CLI, users can load and unload models, start or stop local inference servers, and inspect the inputs and outputs generated by language models. lms is built on the LM Studio JavaScript SDK and integrates tightly with the LM Studio runtime environment. The interface is designed to simplify automation workflows and scripting tasks related to local AI deployment. By exposing model management through command-line commands, the tool enables developers to integrate local LLM operations into development pipelines and backend services. As a result, lms acts as a bridge between interactive local AI tools and automated software development workflows.
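A hypothetical terminal session showing that workflow (command names follow the lms CLI; the model name is an example — run `lms ls` to see what you have downloaded, and check `lms --help` for the flags your version supports):

```shell
lms server start                 # start the local inference server
lms ls                           # list models downloaded through LM Studio
lms load qwen2.5-7b-instruct     # load a model into memory (example name)
lms ps                           # show which models are currently loaded
lms log stream                   # inspect prompts and outputs as they happen
lms unload --all                 # free memory when done
lms server stop
```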
  • 9
    opcode

    A powerful GUI app and Toolkit for Claude Code

    opcode is an open source desktop application and toolkit designed to enhance the developer experience when working with Claude Code by providing a graphical interface and advanced workflow management tools. The project acts as a command center for AI-assisted programming, bridging the gap between command-line workflows and modern visual development environments. Built using the Tauri framework, Opcode enables developers to manage multiple Claude sessions, create custom agents, and track usage in a centralized interface. The platform is intended to make AI-assisted coding more intuitive by providing visual tools for monitoring agent activity, organizing projects, and reviewing development timelines. It includes features that help developers coordinate tasks between agents and human collaborators while maintaining transparency over the actions performed by AI systems.
  • 10
    opensrc

    Fetch source code for npm packages

    OpenSrc is an open-source utility developed by Vercel Labs that retrieves and exposes the source code of npm packages so that AI coding agents can better understand how external libraries work. When large language models generate code, they often rely only on type definitions or documentation, which can limit their understanding of how a library actually behaves. OpenSrc addresses this limitation by allowing agents to fetch the underlying source code of dependencies and analyze their implementation directly. This gives AI coding assistants richer context about functions, internal logic, and architectural patterns used within external packages. The tool is designed to integrate into AI-driven developer workflows where coding agents explore repositories, inspect dependencies, and reason about how to use libraries correctly.
  • 11
    react-llm

    Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU

    Easy-to-use headless React Hooks to run LLMs in the browser with WebGPU. As simple as useLLM().
  • 12
    repo2txt

    Web-based tool that converts GitHub repository contents to text

    repo2txt is an open-source developer tool that converts the contents of a code repository into a single structured text file that can be easily consumed by large language models. The tool is designed to address the challenge of analyzing entire codebases with AI assistants, where code is normally distributed across many files and directories. By collecting repository contents and formatting them into a single text document, repo2txt allows developers to feed complete projects into AI systems for analysis, documentation, or code explanation tasks. The application can load repositories from platforms such as GitHub or from local directories and provides an interface for selecting which files should be included in the generated output. It also supports filtering by file types and automatically respecting ignore patterns defined in project configuration files.
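The core flattening idea can be sketched in a few lines of TypeScript (illustrative, not repo2txt's actual implementation — the header format and ignore list are made up for this example):

```typescript
// Walk a directory, skip ignored entries, and join every file into one
// text blob with path headers that an LLM can navigate.
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

function repoToText(root: string, ignore: string[] = ["node_modules", ".git"]): string {
  const sections: string[] = [];
  const walk = (dir: string): void => {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      if (ignore.includes(entry.name)) continue; // respect ignore patterns
      const full = path.join(dir, entry.name);
      if (entry.isDirectory()) walk(full);
      else
        sections.push(
          `=== ${path.relative(root, full)} ===\n` + fs.readFileSync(full, "utf8")
        );
    }
  };
  walk(root);
  return sections.join("\n\n");
}

// Demo against a throwaway directory
const tmp = fs.mkdtempSync(path.join(os.tmpdir(), "repo2txt-"));
fs.writeFileSync(path.join(tmp, "index.ts"), "export const answer = 42;\n");
const flat = repoToText(tmp);
console.log(flat);
```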
  • 13
    swark.io

    Create architecture diagrams from code automatically using LLMs

    Swark is an open-source developer tool and Visual Studio Code extension that automatically generates software architecture diagrams directly from source code using large language models. The project aims to help developers quickly understand complex codebases by analyzing repositories and producing visual diagrams that represent system architecture, dependencies, and component relationships. Instead of relying on manually maintained diagrams that often become outdated, Swark uses AI to infer architecture patterns dynamically from the code itself. The tool integrates with GitHub Copilot and the VS Code environment, allowing developers to generate diagrams with minimal setup and without requiring additional authentication or API configuration. Because the logic of understanding code structure is handled by an LLM, Swark can support many programming languages and frameworks without requiring custom rules for each language.
  • 14
    trench

    Open-Source Analytics Infrastructure

    Trench is an open-source analytics infrastructure designed for tracking events and performing real-time analysis of application data at scale. The system is built on top of high-performance data technologies including Apache Kafka and ClickHouse, which allows it to ingest and process very large volumes of events while maintaining fast query performance. It was originally developed to solve scaling challenges in product analytics systems where traditional relational databases become inefficient as event tables grow. The platform enables developers to collect events such as page views, user actions, and behavioral metrics while storing them in a column-oriented analytics database optimized for time-series workloads. By combining streaming ingestion with fast analytical queries, the system supports use cases such as product analytics dashboards, observability pipelines, and machine learning data preparation.
  • 15
    wllama

    WebAssembly binding for llama.cpp - Enabling on-browser LLM inference

    wllama is a WebAssembly-based library that enables large language model inference directly inside a web browser. Built as a binding for the llama.cpp inference engine, the project allows developers to run LLMs locally without requiring a server backend or dedicated GPU hardware. The library leverages WebAssembly SIMD capabilities to achieve efficient execution within modern browsers while maintaining compatibility across platforms. By running models locally on the user’s device, wllama enables privacy-preserving AI applications that do not require sending data to remote servers. The framework provides both high-level APIs for common tasks such as text generation and embeddings, as well as low-level APIs that expose tokenization, sampling controls, and model state management.