Computer Science > Computer Vision and Pattern Recognition
arXiv:2604.00725 (cs)
[Submitted on 1 Apr 2026]
Title: A Benchmark of State-Space Models vs. Transformers and BiLSTM-based Models for Historical Newspaper OCR
Authors: Merveilles Agbeti-messan , Thierry Paquet , Clément Chatelain , Pierrick Tranouez , Stéphane Nicolas
Abstract: End-to-end OCR for historical newspapers remains challenging, as models must handle long text sequences, degraded print quality, and complex layouts. While Transformer-based recognizers dominate current research, their quadratic complexity limits efficient paragraph-level transcription and large-scale deployment. We investigate linear-time State-Space Models (SSMs), specifically Mamba, as a scalable alternative to Transformer-based sequence modeling for OCR.
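The quadratic-vs-linear contrast can be made concrete with a back-of-the-envelope cost model. This is illustrative only: the multiply-add counts and the SSM state size below are assumptions for the sketch, not figures from the paper.

```python
# Illustrative per-layer sequence-mixing cost: self-attention vs. an SSM scan,
# as a function of sequence length n, model width d, and SSM state size s.

def attention_cost(n: int, d: int) -> int:
    # QK^T score matrix plus attention-weighted values: ~2 * n^2 * d multiply-adds.
    return 2 * n * n * d

def ssm_scan_cost(n: int, d: int, s: int = 16) -> int:
    # One recurrent state update per token and channel: ~n * d * s multiply-adds.
    return n * d * s

if __name__ == "__main__":
    d = 512
    for n in (256, 1024, 4096):
        ratio = attention_cost(n, d) / ssm_scan_cost(n, d)
        print(f"n={n:5d}: attention/SSM cost ratio = {ratio:.0f}x")
```

Doubling the sequence length quadruples the attention term but only doubles the scan term, which is why paragraph-level transcription (long lines of text) favors the linear-time model.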
We present, to our knowledge, the first OCR architecture based on SSMs, combining a CNN visual encoder with bidirectional and autoregressive Mamba sequence modeling, and conduct a large-scale benchmark comparing SSMs with Transformer- and BiLSTM-based recognizers. Multiple decoding strategies (CTC, autoregressive, and non-autoregressive) are evaluated under identical training conditions alongside strong neural baselines (VAN, DAN, DANIEL) and widely used off-the-shelf OCR engines (PERO-OCR, Tesseract OCR, TrOCR, Gemini).
Experiments on historical newspapers from the Bibliothèque nationale du Luxembourg, with newly released >99% verified gold-standard annotations, and cross-dataset tests on Fraktur and Antiqua lines, show that all neural models achieve low error rates (~2% CER), making computational efficiency the main differentiator. Mamba-based models maintain competitive accuracy while halving inference time and exhibiting superior memory scaling (1.26x vs 2.30x growth at 1000 chars), reaching 6.07% CER at the severely degraded paragraph level compared to 5.24% for DAN, while remaining 2.05x faster.
We release code, trained models, and standardized evaluation protocols to enable reproducible research and guide practitioners in large-scale cultural heritage OCR.
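The character error rate (CER) quoted throughout is conventionally the Levenshtein edit distance between the recognized text and the reference, normalized by the reference length. A minimal sketch of that computation (a standard formulation, not taken from the released code):

```python
def levenshtein(ref: str, hyp: str) -> int:
    # Standard dynamic-programming edit distance (insertions, deletions, substitutions).
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (r != h)))  # substitution (0 if match)
        prev = cur
    return prev[-1]

def cer(ref: str, hyp: str) -> float:
    # Character Error Rate: edit distance normalized by reference length.
    return levenshtein(ref, hyp) / max(len(ref), 1)
```

For example, `cer("abcd", "abed")` is 0.25: one substitution over four reference characters.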
Subjects:
Computer Vision and Pattern Recognition (cs.CV) ; Machine Learning (cs.LG)
Cite as:
arXiv:2604.00725 [cs.CV]
(or
arXiv:2604.00725v1 [cs.CV] for this version)
https://doi.org/10.48550/arXiv.2604.00725
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Merveilles Agbeti-Messan
[v1]
Wed, 1 Apr 2026 10:33:33 UTC (470 KB)