Gemini 2.5 Pro TTS
Gemini 2.5 Pro TTS is Google’s advanced text-to-speech model in the Gemini 2.5 family, optimized for high-quality, expressive, and controllable speech synthesis in structured and professional audio generation tasks. The model delivers natural-sounding voice output with fine-grained control over tone, pacing, and pronunciation, letting developers dictate style, accent, rhythm, and emotional nuance through text-based prompts. This makes it well suited to applications such as podcasts, audiobooks, customer assistance, tutorials, and multimedia narration that require premium audio output. It supports both single-speaker and multi-speaker audio, allowing distinct voices and conversational flows within the same output, and can synthesize speech across multiple languages with consistent style adherence. Compared with lower-latency variants such as Flash TTS, the Pro TTS model prioritizes sound quality, depth of expression, and nuanced control.
Learn more
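The style and multi-speaker control described above is driven entirely by the text prompt: a natural-language style instruction plus, for multi-speaker audio, speaker-labeled lines. A minimal sketch of building such a prompt (the `Speaker: text` labeling follows Google's published examples; treat the exact format as an assumption):

```python
def build_tts_prompt(style, lines):
    """Compose a prompt for a prompt-driven TTS model.

    style: natural-language instruction, e.g. "Read this slowly and warmly".
    lines: list of (speaker, text) pairs; a single pair yields
           single-speaker output, multiple pairs a conversation.
    """
    transcript = "\n".join(f"{speaker}: {text}" for speaker, text in lines)
    return f"{style}:\n\n{transcript}"


prompt = build_tts_prompt(
    "TTS the following conversation between Joe and Jane in a cheerful tone",
    [("Joe", "How's it going today, Jane?"),
     ("Jane", "Not too bad, how about you?")],
)
```

With Google's `google-genai` SDK, a prompt like this would be passed as `contents` to `client.models.generate_content(...)` with `response_modalities=["AUDIO"]` and a speech config selecting a voice per speaker; consult the official docs for the current model identifier and config types.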
Voxtral TTS
Voxtral TTS is a state-of-the-art, multilingual text-to-speech model designed to generate highly realistic and emotionally expressive speech from text, combining strong contextual understanding with advanced speaker modeling to produce natural, human-like audio output. Built as a lightweight model of roughly 4 billion parameters, it delivers efficient performance while maintaining high quality, enabling scalable deployment for enterprise voice applications. It supports nine major languages and diverse dialects, and can adapt to new voices from only a short reference audio sample, capturing not just tone but also rhythm, pauses, intonation, and emotional nuance. Its zero-shot voice cloning replicates a speaker’s style without additional training, and it can even perform cross-lingual voice adaptation, generating speech in one language while preserving the accent of another.
Learn more
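Zero-shot adaptation of the kind described above starts from a short reference audio clip. A small stdlib sketch for pre-checking a WAV reference before submitting it to such a model (the duration bounds here are illustrative assumptions, not documented Voxtral requirements):

```python
import wave


def reference_clip_seconds(path):
    """Return the duration in seconds of a WAV reference clip."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()


def is_usable_reference(path, min_s=3.0, max_s=30.0):
    """Heuristic pre-check on clip length; bounds are illustrative only."""
    return min_s <= reference_clip_seconds(path) <= max_s
```

A check like this catches accidentally truncated or overly long uploads before any synthesis call is made.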
Orpheus TTS
Canopy Labs has introduced Orpheus, a family of state-of-the-art speech large language models (LLMs) designed for human-level speech generation. These models are built on the Llama-3 architecture and trained on over 100,000 hours of English speech data, enabling them to produce natural intonation, emotion, and rhythm that surpass current state-of-the-art closed-source models. Orpheus supports zero-shot voice cloning, allowing users to replicate voices without prior fine-tuning, and offers guided emotion and intonation control through simple tags. The models achieve low latency, with approximately 200 ms streaming latency for real-time applications, reducible to around 100 ms with input streaming. Canopy Labs has released both pre-trained and fine-tuned 3B-parameter models under the permissive Apache 2.0 license, with plans to release smaller models of 1B, 400M, and 150M parameters for use on resource-constrained devices.
Learn more
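Orpheus's guided emotion and intonation control works through inline tags embedded in the prompt text. A small sketch of composing and sanity-checking such a prompt (the tag names and the `voice: text` prompt shape follow the Orpheus repository's examples; treat both as assumptions):

```python
import re

# Emotion tags listed in the Orpheus repository; treat this set as an assumption.
EMOTION_TAGS = {"laugh", "chuckle", "sigh", "cough",
                "sniffle", "groan", "yawn", "gasp"}


def tagged_prompt(voice, text):
    """Format an Orpheus-style prompt as 'voice: text', leaving
    inline <tag> markers in place for the model to interpret."""
    return f"{voice}: {text}"


def unknown_tags(text):
    """Return any <...> tags in the text that are not known emotion tags,
    so typos can be caught before synthesis."""
    return [t for t in re.findall(r"<(\w+)>", text) if t not in EMOTION_TAGS]
```

Validating tags up front is useful because a misspelled tag would otherwise be read aloud literally rather than rendered as an emotion.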
EVI 3
Hume AI's EVI 3 is a third-generation speech-language model that streams in user speech and produces natural, expressive speech and language responses. At conversational latency, it matches the speech quality of Hume's text-to-speech model, Octave, while responding with the same intelligence as the most advanced LLMs of similar latency. It can also communicate with reasoning models and web search systems as it speaks, “thinking fast and slow” to match the intelligence of any frontier AI system. EVI 3 can instantly generate new voices and personalities rather than being limited to a handful of speakers; for instance, users can speak to any of the more than 100,000 custom voices already created on Hume's text-to-speech platform, each with an inferred personality. Whatever the voice, it responds with a wide range of emotions or styles, implicitly or on command.
Learn more