Mistral AI has launched Voxtral, a new family of open-source speech understanding models that aims to make voice interfaces more reliable and accessible. These state-of-the-art models are released under the Apache 2.0 license.
The models come in 24B and 3B variants and combine exceptional transcription with deep semantic understanding, addressing the limitations of current proprietary and open-source systems. Voxtral bridges the gap between high-cost closed APIs and less accurate open-source alternatives, delivering state-of-the-art accuracy and native semantic understanding at less than half the price of comparable APIs. Key capabilities include:

- Long-form audio: up to 30 minutes for transcription and 40 minutes for understanding, with a 32k-token context length.
- Built-in Q&A and summarization of audio content.
- Automatic language detection for widely used languages, including English, Spanish, French, Portuguese, Hindi, German, Dutch, and Italian.
- Direct function calling from voice commands.
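The two audio-length limits differ by mode, so a client would typically check a clip's duration before choosing how to send it. A minimal sketch of that check, using the 30- and 40-minute figures quoted above (the helper function is hypothetical, not part of any official Voxtral SDK):

```python
# Hypothetical pre-flight check against Voxtral's documented audio limits:
# 30 minutes for transcription, 40 minutes for understanding.

TRANSCRIPTION_LIMIT_MIN = 30
UNDERSTANDING_LIMIT_MIN = 40

def fits_limits(duration_minutes: float) -> dict:
    """Report which Voxtral modes can accept a clip of this length."""
    return {
        "transcription": duration_minutes <= TRANSCRIPTION_LIMIT_MIN,
        "understanding": duration_minutes <= UNDERSTANDING_LIMIT_MIN,
    }

# A 35-minute clip is too long to transcribe in one pass but still
# fits within the understanding window.
print(fits_limits(35))
```

Clips beyond these limits would need to be chunked client-side before submission.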
In benchmarks, Voxtral significantly outperforms leading open-source models like Whisper large-v3 and competes strongly with GPT-4o mini Transcribe and Gemini 2.5 Flash in speech transcription and audio understanding. For instance, Voxtral Mini Transcribe is more cost-effective than OpenAI Whisper, while Voxtral Small matches ElevenLabs Scribe’s performance at a lower price point. The models also retain strong text understanding capabilities from their Mistral Small 3.1 backbone.
Voxtral models are available for local download on Hugging Face and via API, with pricing starting at $0.001 per minute. Enterprise offerings include private deployment, domain-specific fine-tuning, and advanced context capabilities such as speaker identification and emotion detection. Planned updates include speaker segmentation, audio markups, and word-level timestamps.
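At the quoted starting price of $0.001 per minute, API cost scales linearly with audio length. A back-of-the-envelope estimator (the rate is taken from this announcement and used purely for illustration; check current pricing before relying on it):

```python
# Hypothetical cost estimator based on the announced starting rate of
# $0.001 per minute of audio. Real pricing may vary by model and tier.

STARTING_RATE_USD_PER_MIN = 0.001

def estimate_cost_usd(duration_minutes: float,
                      rate: float = STARTING_RATE_USD_PER_MIN) -> float:
    """Estimate the API cost in USD for a clip of the given length."""
    return round(duration_minutes * rate, 6)

# A full 30-minute transcription at the starting rate costs about 3 cents.
print(estimate_cost_usd(30))
```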