High-performance inference server and API layer for text embedding models
Python-free Rust inference server
Rust async runtime based on io_uring
Fast, flexible LLM inference
Fast ML inference & training for ONNX models in Rust
Package and deploy machine learning models using Docker containers
IronClaw is inspired by OpenClaw but focused on privacy and security
Fast and efficient unstructured data extraction
Command-line tool for Drive, Gmail, Calendar, Sheets, Docs, Chat, etc.
Local AI coding agent CLI with multi-agent orchestration tools
AI-enabled pair programmer for Claude, GPT, the o series, Grok, and DeepSeek
Open-source load balancer and serving platform for hosting LLMs
Shinkai lets you effortlessly create advanced local AI agents