qvac-fabric-llm.cpp is a cross-platform large language model (LLM) inference and fine-tuning engine, built as an advanced fork of llama.cpp and designed to run efficiently on desktops, mobile devices, and heterogeneous GPU environments. By supporting a wide range of backends, including Vulkan, Metal, CUDA, and CPU, it removes the hardware limitations traditionally associated with LLM deployment, making it usable on devices ranging from smartphones to enterprise servers. It adds native LoRA fine-tuning that runs directly on consumer hardware, so developers can train and adapt models locally without relying on cloud infrastructure. A key innovation is its support for BitNet ternary quantized models, which enables highly efficient inference and training even on resource-constrained systems.
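To make the BitNet idea concrete, here is a minimal sketch of absmean ternary quantization in the style of BitNet b1.58, where each weight is mapped to {-1, 0, +1} with a single per-tensor scale. This is an illustrative standalone example, not the engine's actual kernels or data layout:

```cpp
#include <cmath>
#include <cstdint>
#include <vector>

// Ternary-quantized tensor: values in {-1, 0, +1} plus one scale.
struct TernaryTensor {
    std::vector<int8_t> q; // ternary values
    float scale;           // per-tensor scale (mean absolute value)
};

// Absmean quantization: scale = mean(|w|), q[i] = clamp(round(w[i]/scale), -1, 1).
TernaryTensor quantize_ternary(const std::vector<float>& w) {
    float absmean = 0.0f;
    for (float x : w) absmean += std::fabs(x);
    absmean /= static_cast<float>(w.size());
    if (absmean == 0.0f) absmean = 1e-8f; // guard against all-zero input

    TernaryTensor t;
    t.scale = absmean;
    t.q.reserve(w.size());
    for (float x : w) {
        int v = static_cast<int>(std::lround(x / absmean));
        if (v > 1) v = 1;
        if (v < -1) v = -1;
        t.q.push_back(static_cast<int8_t>(v));
    }
    return t;
}

// Dequantize back to floats: w_hat[i] = scale * q[i].
std::vector<float> dequantize(const TernaryTensor& t) {
    std::vector<float> out;
    out.reserve(t.q.size());
    for (int8_t v : t.q) out.push_back(t.scale * static_cast<float>(v));
    return out;
}
```

Because each weight carries roughly 1.58 bits of information instead of 16 or 32, matrix multiplications reduce to additions and subtractions scaled once per tensor, which is what makes inference feasible on constrained devices.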
Features
- Cross-platform LLM inference and fine-tuning across CPU, Vulkan, Metal, and CUDA
- Native LoRA fine-tuning on consumer hardware including mobile devices
- Support for BitNet ternary quantized models for efficient inference
- Memory-based model loading for streaming and embedded deployments
- Optimizations for mobile GPUs such as Qualcomm Adreno, improving throughput
- Compatibility with GGUF models and llama.cpp ecosystem