Run local LLMs on any device; open-source
Local-first semantic code search engine
Run LLM prompts from your shell
Universal LLM Deployment Engine with ML Compilation
LangChain-powered shell command generator and runner CLI
Performance-optimized AI inference on your GPUs
The terminal client for Ollama