wllama is a WebAssembly-based library that enables large language model (LLM) inference directly inside a web browser. Built as a binding for the llama.cpp inference engine, it lets developers run models locally without a server backend or dedicated GPU hardware. The library leverages WebAssembly SIMD to execute efficiently in modern browsers while remaining portable across platforms. Because models run entirely on the user's device, wllama enables privacy-preserving AI applications that do not send data to remote servers. The framework provides both high-level APIs for common tasks such as text generation and embeddings, and low-level APIs that expose tokenization, sampling controls, and model state management.

Features

  • WebAssembly binding that enables llama.cpp inference inside browsers
  • Local execution of large language models without server infrastructure
  • High-level APIs for text completion and embeddings generation
  • Low-level control over tokenization, sampling, and model caching
  • Support for GGUF model format and parallel model loading
  • TypeScript integration for building modern web AI applications
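As a sketch of how these APIs might be wired together in a web app: the module path, class name, and `createCompletion`/`createEmbedding` method names below follow the published `@wllama/wllama` package, but the wasm asset paths and model URL are illustrative, and exact option names should be checked against the current wllama documentation.

```typescript
// Sketch: running a GGUF model in the browser with wllama.
// Asset paths and the model URL are placeholders; option names are
// assumptions based on the @wllama/wllama package and may differ.
async function runWllamaDemo(): Promise<void> {
  // Dynamic import so this sketch stays self-contained until the
  // package is installed and bundled with your app.
  // @ts-ignore -- module is resolved at runtime in the app bundle
  const { Wllama } = await import('@wllama/wllama');

  // Map the logical wasm names to files served by your application.
  const wllama = new Wllama({
    'single-thread/wllama.wasm': '/assets/wllama/single-thread/wllama.wasm',
    'multi-thread/wllama.wasm': '/assets/wllama/multi-thread/wllama.wasm',
  });

  // Load a GGUF model over HTTP.
  await wllama.loadModelFromUrl('https://example.com/models/model-q4.gguf');

  // High-level API: text completion with sampling controls.
  const completion = await wllama.createCompletion('The capital of France is', {
    nPredict: 16,
    sampling: { temp: 0.7, top_k: 40, top_p: 0.9 },
  });
  console.log(completion);

  // High-level API: an embedding vector for semantic search or RAG.
  const embedding = await wllama.createEmbedding('Hello, world!');
  console.log(embedding.length);
}
```

Since model weights download once and inference runs in the page, a pattern like this can back chat or search features with no inference server at all.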

License

MIT License


Additional Project Details

Programming Language

TypeScript

Related Categories

TypeScript, Large Language Models (LLM)

Registered

2026-03-10