← All clusters
Open ASR Leaderboard: Trends and Insights with New Multilingual & Long-Form Tracks
Status: closed
Event type: other
Topic: large language models
Organization: NVIDIA
Country: —
Articles: 2
Unique sources: 1
Importance / Moment: 0.94 / 0
Period: 21.11.2025 00:00 – 21.11.2025 00:00
Created: 06.04.2026 06:18:59
Articles in cluster: 2

Title | Source | Publication date | Score
Open ASR Leaderboard: Trends and Insights with New Multilingual & Long-Form Tracks | huggingface | 21.11.2025 00:00 | 1

Embedding sim.: 1
Entity overlap: 1
Title sim.: 1
Time proximity: 1
NLP type: other
NLP organization: NVIDIA
NLP topic: speech recognition
NLP country: —

Open original

Open ASR Leaderboard: Trends and Insights with New Multilingual & Long-Form Tracks
Published November 21, 2025
By Eric Bezzam (bezzam), Steven Zheng (Steveeeeeeen), Eustache Le Bihan (eustlb), and Vaibhav Srivastav (reach-vb)

Contents:
1. Conformer encoder 🤝 LLM decoder tops the charts 📈
2. Speed–accuracy tradeoffs ⚖️
3. Multilingual 🌍
4. Long-form transcription is a different game ⏳

While everyone (and their grandma 👵) is spinning up new ASR models, picking the right one for your use case can feel more overwhelming than choosing your next Netflix show. As of 21 Nov 2025, there are 150 Audio-Text-to-Text and 27K ASR models on the Hub 🤯 Most benchmarks focus on short-form English transcription (<30s) and overlook other important tasks, such as (1) multilingual performance and (2) model throughput, which can be a deciding factor for long-form audio like meetings and podcasts.

Over the past two years, the Open ASR Leaderboard has become a standard for comparing open and closed-source models on both accuracy and efficiency. Recently, multilingual and long-form transcription tracks have been added to the leaderboard 🎉

TL;DR - Open ASR Leaderboard
📝 New preprint on ASR trends from the leaderboard: https://hf.co/papers/2510.06961
🧠 Best accuracy: Conformer encoder + LLM decoders (open-source ftw 🥳)
⚡ Fastest: CTC / TDT decoders
🌍 Multilingual: comes at the cost of single-language performance
⌛ Long-form: closed-source systems still lead (for now 😉)
🧑‍💻 Fine-tuning guides (Parakeet, Voxtral, Whisper) to continue pushing performance

Takeaways from 60+ models

As of 21 Nov 2025, the Open ASR Leaderboard compares 60+ open and closed-source models from 18 organizations, across 11 datasets. In a recent preprint, we dive into the technical setup and highlight some key trends in modern ASR. Here are the big takeaways 👇

1. Conformer encoder 🤝 LLM decoder tops the charts 📈

Models combining Conformer encoders with large language model (LLM) decoders currently lead in English transcription accuracy. For example, NVIDIA's Canary-Qwen-2.5B, IBM's Granite-Speech-3.3-8B, and Microsoft's Phi-4-Multimodal-Instruct achieve the lowest word error rates (WER), showing that integrating LLM reasoning can significantly boost ASR accuracy.

💡 Pro-tip: NVIDIA introduced Fast Conformer, a 2x faster variant of the Conformer that is used in their Canary and Parakeet suites of models.

2. Speed–accuracy tradeoffs ⚖️

While highly accurate, these LLM decoders tend to be slower than simpler approaches. On the Open ASR Leaderboard, efficiency is measured using inverse real-time factor (RTFx), where higher is better. For even faster inference, CTC and TDT decoders deliver 10–100× faster throughput, albeit with slightly higher error rates. This makes them ideal for real-time, offline, or batch transcription tasks (such as meetings, lectures, or podcasts). (A minimal sketch of computing WER and RTFx appears at the end of the next section.)

3. Multilingual 🌍

OpenAI's Whisper Large v3 remains a strong multilingual baseline, supporting 99 languages. However, fine-tuned or distilled variants like Distil-Whisper and CrisperWhisper often outperform the original on English-only tasks, showing how targeted fine-tuning can improve specialization (how to fine-tune? Check out the guides for Whisper, Parakeet, and Voxtral). That said, focusing on English tends to reduce multilingual coverage 👉 a classic case of the tradeoff between specialization and generalization.
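To make the two leaderboard metrics concrete, here is a minimal evaluation sketch using the transformers ASR pipeline and the evaluate library. The model, dataset, and bare-bones text normalization are illustrative assumptions, not the leaderboard's actual evaluation harness (any ASR checkpoint and 16 kHz speech dataset from the Hub would do).

```python
import time

import evaluate
from datasets import load_dataset
from transformers import pipeline

# Illustrative model choice; the leaderboard covers 60+ models.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-large-v3")
wer_metric = evaluate.load("wer")

# A handful of short English clips, just for demonstration.
ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")

references, predictions, audio_seconds = [], [], 0.0
start = time.perf_counter()
for sample in ds:
    audio = sample["audio"]
    audio_seconds += len(audio["array"]) / audio["sampling_rate"]
    predictions.append(asr(audio["array"])["text"].lower())
    references.append(sample["text"].lower())
elapsed = time.perf_counter() - start

# WER: fraction of word-level errors; lower is better.
print(f"WER:  {wer_metric.compute(references=references, predictions=predictions):.3f}")
# RTFx: seconds of audio transcribed per second of compute; higher is better.
print(f"RTFx: {audio_seconds / elapsed:.1f}")
```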
Similarly, while self-supervised systems like Meta's Massively Multilingual Speech (MMS) and Omnilingual ASR can support 1K+ languages, they trail behind language-specific encoders in accuracy.

⭐ While just five languages are currently benchmarked, we're planning to expand to more languages and are excited for new dataset and model contributions to multilingual ASR through GitHub pull requests.

🎯 Alongside multilingual benchmarks, several community-driven leaderboards focus on individual languages. For example, the Open Universal Arabic ASR Leaderboard compares models across Modern Standard Arabic and regional dialects, highlighting how speech variation and diglossia challenge current systems. Similarly, the Russian ASR Leaderboard provides a growing hub for evaluating encoder-decoder and CTC models on Russian-specific phonology and morphology. These localized efforts mirror the broader multilingual leaderboard's mission to encourage dataset sharing, fine-tuned checkpoints, and transparent model comparisons, especially in languages with fewer established ASR resources.

4. Long-form transcription is a different game ⏳

For long-form audio (e.g., podcasts, lectures, meetings), closed-source systems still edge out open ones. This could be due to domain tuning, custom chunking, or production-grade optimization. Among open models, OpenAI's Whisper Large v3 performs the best. But for throughput, CTC-based Conformers shine 👉 for example, NVIDIA's Parakeet CTC 1.1B achieves an RTFx of 2793.75, compared to 68.56 for Whisper Large v3, with only a moderate WER degradation (6.68 and 6.43 respectively). The tradeoff? Parakeet is English-only, again reminding us of that multilingual and specialization tradeoff 🫠. (A minimal chunked long-form inference sketch follows at the end of this post.)

⭐ While closed systems still lead, there's huge potential for open-source innovation here. Long-form ASR remains one of the most exciting frontiers for the community to tackle next!

🎤 The Show Must Go On

Given how fast ASR is evolving, we're excited to see what new architectures push performance and efficiency, and how the Open ASR Leaderboard continues to serve as a transparent, community-driven benchmark for the field, and as a reference for other leaderboards (Russian, Arabic, and Speech DeepFake Detection). We'll keep expanding the Open ASR Leaderboard with more models, more languages, and more datasets, so stay tuned 👀

👉 Want to contribute? Head on over to the GitHub repo to open a pull request 🚀
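As referenced in the long-form section above, here is a minimal sketch of chunked long-form transcription with the transformers ASR pipeline. The chunk length, batch size, and file path are illustrative assumptions, not the leaderboard's long-form evaluation setup.

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="openai/whisper-large-v3",
    chunk_length_s=30,       # split long audio into 30 s windows with overlap
    batch_size=8,            # transcribe several chunks in parallel
    return_timestamps=True,  # optionally return segment-level timestamps
)

# Any long recording works here; "meeting.wav" is a hypothetical file path.
result = asr("meeting.wav")
print(result["text"])
```

Chunked inference like this is what open models typically rely on for long-form audio; the chunk length trades context (accuracy) against memory and latency.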
20x Faster TRL Fine-tuning with RapidFire AI | huggingface | 21.11.2025 00:00 | 0.625

Embedding sim.: 0.7094
Entity overlap: 0.0217
Title sim.: 0.1019
Time proximity: 1
NLP type: product_launch
NLP organization: rapidfire ai
NLP topic: large language models
NLP country: —

Open original

20x Faster TRL Fine-tuning with RapidFire AI
Published November 21, 2025
By Kamran Bigdely (kbigdelysh, rapidfire-ai-inc), Arun Kumar (arunkk09, rapidfire-ai-inc), and Quentin Gallouédec (qgallouedec)

Contents:
- Why this matters
- What you get, out of the box
- How it works
- Getting Started
- Supported TRL trainers
- Minimal TRL SFT example
- Benchmarks: Real-World Speedups
- Get Started Today

Hugging Face TRL now officially integrates with RapidFire AI to accelerate your fine-tuning and post-training experiments. TRL users can now discover, install, and run RapidFire AI as the fastest way to compare multiple fine-tuning/post-training configurations to customize LLMs, without major code changes and without bloating GPU requirements.

Why this matters

When fine-tuning or post-training LLMs, teams often do not have the time and/or budget to compare multiple configs, even though that can significantly boost eval metrics. RapidFire AI lets you launch multiple TRL configs concurrently, even on a single GPU, and compare them in near real time via a new adaptive, chunk-based scheduling and execution scheme. In internal benchmarks referenced on the TRL page, this delivers ~16–24× higher experimentation throughput than comparing configs sequentially one after another, enabling you to reach much better metrics much faster.

[Figure: RapidFire AI establishes live three-way communication between your IDE, a metrics dashboard, and a multi-GPU execution backend.]

What you get, out of the box

- Drop-in TRL wrappers: Use RFSFTConfig, RFDPOConfig, and RFGRPOConfig as near-zero-code replacements for TRL's SFT/DPO/GRPO configs.
- Adaptive chunk-based concurrent training: RapidFire AI shards the dataset into a given number of chunks and cycles configs at chunk boundaries, enabling earlier apples-to-apples comparisons while maximizing GPU utilization.
- Interactive Control Ops (IC Ops): From the dashboard itself, you can Stop, Resume, Delete, and Clone-Modify (possibly with Warm-Start) any runs in flight, to avoid wasting resources on underperforming configs and double down on better-performing ones. No job restarts, no juggling separate GPUs or clusters, no resource bloat.
- Multi-GPU orchestration: The RapidFire AI scheduler automatically places and orchestrates configs across available GPUs, on chunks of data, via efficient shared-memory mechanisms. You focus on your models and eval metrics, not plumbing.
- MLflow-based dashboard: Real-time metrics, logs, and IC Ops in one place as soon as you start your experiment. Support for more dashboards such as Trackio, W&B, and TensorBoard is coming soon.

[Figure: Clone promising configurations with modified hyperparameters, optionally warm-starting from the parent's weights, all from the live dashboard.]

How it works

RapidFire AI splits your dataset randomly into "chunks" and cycles LLM configurations through the GPUs at chunk boundaries, so you get incremental signal on eval metrics across all configs much more quickly. Automatic checkpointing, via an efficient shared-memory-based adapter/model spilling/loading mechanism, keeps training smooth, stable, and consistent. Use IC Ops to adapt mid-flight: stop low performers earlier and clone promising ones with tweaked config knobs, optionally warm-starting from the parent's weights. (A toy sketch of the chunk-cycling idea follows below.)

[Figure: Sequential vs. Task Parallel vs. RapidFire AI: the adaptive scheduler maximizes GPU utilization across multiple configs and GPUs. The bottom row shows IC Ops in action: stopping, cloning, and modifying runs mid-flight.]
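As referenced in "How it works", here is a toy, framework-free sketch of the chunk-cycling idea. It is purely conceptual: the function names and structure below are hypothetical illustrations, not RapidFire AI's actual scheduler or API.

```python
# Toy illustration of chunk-based concurrent training: split the data into
# chunks and cycle every config through the worker at chunk boundaries, so all
# configs accumulate comparable partial results early. Conceptual sketch only.
from typing import Any, Callable, Sequence

def chunked_round_robin(
    configs: Sequence[dict],
    dataset: Sequence[Any],
    train_on_chunk: Callable[[dict, Sequence[Any]], float],  # hypothetical trainer hook
    num_chunks: int = 4,
) -> dict:
    """Cycle each config through one chunk at a time; return per-config losses."""
    chunk_size = max(1, len(dataset) // num_chunks)
    chunks = [dataset[i : i + chunk_size] for i in range(0, len(dataset), chunk_size)]
    losses = {i: [] for i in range(len(configs))}
    for chunk in chunks:                   # a chunk boundary is the switch point
        for i, cfg in enumerate(configs):  # every config sees every chunk
            losses[i].append(train_on_chunk(cfg, chunk))
            # After the first chunk, all configs have metrics on identical data,
            # which is what makes early stop/clone decisions apples-to-apples.
    return losses
```

The payoff of switching at chunk boundaries rather than after full runs is exactly that early, like-for-like signal: you can kill or clone a config after one chunk instead of after the whole dataset.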
Getting Started

Install RapidFire AI and get running in under a minute:

```bash
pip install rapidfireai

# Authenticate with Hugging Face
huggingface-cli login --token YOUR_TOKEN

# Workaround for current issue
pip uninstall -y hf-xet

# Initialize and start RapidFire AI
rapidfireai init
rapidfireai start
```

The dashboard launches at http://localhost:3000, where you can monitor and control all your experiments.

Supported TRL trainers

- SFT with RFSFTConfig
- DPO with RFDPOConfig
- GRPO with RFGRPOConfig

These are designed as drop-in replacements, so you can keep your TRL mental model while gaining far more concurrency and control for your fine-tuning/post-training applications.

Minimal TRL SFT example

Here's what it looks like to train multiple configurations concurrently, even on a single GPU:

```python
from rapidfireai import Experiment
from rapidfireai.automl import List, RFGridSearch, RFModelConfig, RFLoraConfig, RFSFTConfig
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

# Setup: load the dataset and define a chat-style formatting function
dataset = load_dataset("bitext/Bitext-customer-support-llm-chatbot-training-dataset")

def formatting_function(row):
    return {
        "prompt": [
            {"role": "system", "content": "You are a helpful customer support assistant."},
            {"role": "user", "content": row["instruction"]},
        ],
        "completion": [{"role": "assistant", "content": row["response"]}],
    }

# Map first so that the selected training subset carries the formatting
dataset = dataset.map(formatting_function)
train_dataset = dataset["train"].select(range(128)).shuffle(seed=42)

# Define multiple configs to compare
config_set = List([
    RFModelConfig(
        model_name="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        peft_config=RFLoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"]),
        training_args=RFSFTConfig(learning_rate=1e-3, max_steps=128, fp16=True),
    ),
    RFModelConfig(
        model_name="TinyLlama/TinyLlama-1.1B-Chat-v1.0",
        peft_config=RFLoraConfig(r=32, lora_alpha=64, target_modules=["q_proj", "v_proj"]),
        training_args=RFSFTConfig(learning_rate=1e-4, max_steps=128, fp16=True),
        formatting_func=formatting_function,
    ),
])

# Run all configs concurrently with chunk-based scheduling
experiment = Experiment(experiment_name="sft-comparison")
config_group = RFGridSearch(configs=config_set, trainer_type="SFT")

def create_model(model_config):
    model = AutoModelForCausalLM.from_pretrained(
        model_config["model_name"], device_map="auto", torch_dtype="auto"
    )
    tokenizer = AutoTokenizer.from_pretrained(model_config["model_name"])
    return (model, tokenizer)

experiment.run_fit(config_group, create_model, train_dataset, num_chunks=4, seed=42)
experiment.end()
```

What happens when you run this? Suppose you run the above on a 2-GPU machine. Instead of training sequentially (Config 1 → wait → Config 2 → wait), both configs train concurrently:

| Approach | Time to comparative decision | GPU utilization |
|---|---|---|
| Sequential (traditional) | ~15 minutes | 60% |
| RapidFire AI (concurrent) | ~5 minutes | 95%+ |

You get to a comparative decision 3× sooner on the same resources, because both configs finish processing the first data chunk early instead of each waiting to see the whole dataset one after another. Open http://localhost:3000 to watch live metrics and use IC Ops to stop, clone, or tweak runs in real time based on what you're seeing.
Benchmarks: Real-World Speedups

Here is the time teams see to reach a comparable overall best training loss (across all tried configs) when switching from sequential comparisons to RapidFire AI-enabled hyperparallel experimentation:

| Scenario | Sequential time | RapidFire AI time | Speedup |
|---|---|---|---|
| 4 configs, 1 GPU | 120 min | 7.5 min | 16× |
| 8 configs, 1 GPU | 240 min | 12 min | 20× |
| 4 configs, 2 GPUs | 60 min | 4 min | 15× |

Benchmarks on an NVIDIA A100 40GB with TinyLlama-1.1B and Llama-3.2-1B models.

Get Started Today

- 🚀 Try it hands-on: Interactive Colab Notebook (zero setup, runs in your browser)
- 📚 Full documentation: oss-docs.rapidfire.ai (complete guides, examples, and API reference)
- 💻 GitHub: RapidFireAI/rapidfireai (open source, production-ready)
- 📦 Install via PyPI: pypi.org/project/rapidfireai (pip install rapidfireai)
- 💬 Join the community: Discord (get help, share results, request features)

RapidFire AI was built because the common status quo of trying one config at a time wastes both time and GPU cycles. With this official integration, every TRL user can fine-tune/post-train smarter, iterate faster, and ship better models.

Try the integration and let us know: How much faster is your experimentation loop? What should we build next? We're just getting started, and your feedback shapes where we go from here.