DFlash is an open-source framework for fast speculative decoding. It pairs a target large language model with a lightweight block diffusion model that drafts text in parallel, improving inference speed without sacrificing generation quality. The drafter proposes likely continuations, which the target model then verifies, yielding significant throughput gains over traditional autoregressive decoding that generates one token at a time. This approach has been shown to deliver lossless acceleration on models such as Qwen3-8B by combining block diffusion with efficient batching, making it well suited to latency- and cost-sensitive applications. The project includes support for multiple draft models, example integration code, and benchmarking scripts, and it is structured to work with popular model serving stacks such as SGLang and the Hugging Face Transformers ecosystem.
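
The draft-and-verify loop described above can be sketched in a few lines. The sketch below is a generic greedy speculative-decoding loop with toy deterministic stand-in models — it is not DFlash's actual API, and DFlash's drafter is a block diffusion model that proposes a whole block in parallel rather than a simple function — but the accept/reject control flow is analogous.

```python
def speculative_decode(target, draft, prompt, k=4, max_new=16):
    """Greedy speculative decoding: the draft proposes k tokens; the
    target verifies them in a single pass, and the longest prefix that
    matches the target's own greedy choice is accepted, plus one
    'free' token from the verification pass."""
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new:
        proposal = draft(tokens, k)                   # k drafted tokens
        verified = target(tokens, len(proposal) + 1)  # target's greedy continuation
        n = 0
        while n < len(proposal) and proposal[n] == verified[n]:
            n += 1                                    # length of matching prefix
        tokens.extend(verified[:n + 1])               # accept prefix + 1 target token
    return tokens

# Toy stand-in "models": each next token is (previous + 1) mod 100.
def target(tokens, m):
    cur, out = tokens[-1], []
    for _ in range(m):
        cur = (cur + 1) % 100
        out.append(cur)
    return out

draft = target  # a perfect drafter, for illustration
```

With a perfect drafter, `speculative_decode(target, draft, [0], k=4, max_new=8)` returns `[0, 1, ..., 10]` using two target passes instead of eight, which is the source of the throughput gain; a real drafter is only sometimes right, so the accepted prefix varies per step.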

Features

  • Block diffusion based speculative decoding
  • Parallel drafting for accelerated generation
  • Integration examples with SGLang and Transformers
  • Support for multiple draft model sizes
  • Benchmarking and performance scripts
  • Modular, research-friendly architecture

Categories

AI Models

License

MIT License

Additional Project Details

Programming Language

Python

Related Categories

Python AI Models

Registered

2026-01-28