arXiv:2602.17363 (cs)
[Submitted on 19 Feb 2026 (v1), last revised 2 Apr 2026 (this version, v2)]
Title: 2Mamba2Furious: Linear in Complexity, Competitive in Accuracy
Authors: Gabriel Mongaras, Eric C. Larson
Abstract: Linear attention transformers have become a strong alternative to softmax attention due to their efficiency. However, linear attention tends to be less expressive and yields reduced accuracy compared to softmax attention. To bridge this accuracy gap, we build on Mamba-2, a very strong linear attention variant. We first simplify Mamba-2 down to its most fundamental components, evaluating which specific design choices contribute most to its accuracy. From this simplified variant (Mamba-2S), we improve the A-mask and increase the order of the hidden state, resulting in a method, which we call 2Mamba, that is nearly as accurate as softmax attention yet much more memory efficient at long context lengths. We also investigate additions to Mamba-2 that help it surpass the accuracy of softmax attention. Code is provided for all of our experiments.
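To make the efficiency claim concrete: the sketch below is a generic illustration, not the paper's code, of why a Mamba-2-style gated linear-attention recurrence runs in O(N) time with constant-size state, while causal softmax attention materializes an O(N^2) score matrix. The scalar decay `a` merely stands in for an A-mask-like gate; all names, shapes, and the exact gate form are assumptions for illustration.

```python
# Hypothetical sketch contrasting softmax attention with a gated
# linear-attention recurrence (the family Mamba-2 belongs to).
# Not the authors' 2Mamba implementation.
import torch

def softmax_attention(q, k, v):
    # q, k, v: (seq_len, d). Builds the full (seq_len x seq_len)
    # score matrix, so memory grows quadratically with context length.
    scores = (q @ k.T) / q.shape[-1] ** 0.5
    future = torch.triu(torch.ones_like(scores), diagonal=1).bool()
    scores = scores.masked_fill(future, float("-inf"))  # causal mask
    return torch.softmax(scores, dim=-1) @ v

def gated_linear_attention(q, k, v, a):
    # a: (seq_len,) per-step decays in (0, 1], a stand-in for an A-mask.
    # The state S is a single (d x d) matrix updated recurrently:
    # O(N) time and O(1) state, so memory stays flat at long contexts.
    # With a[t] = 1 this computes exactly the unnormalized causal
    # attention q_t . (sum_{s<=t} k_s v_s^T), i.e. "linear attention".
    d = q.shape[-1]
    S = torch.zeros(d, d)
    out = []
    for t in range(q.shape[0]):
        S = a[t] * S + k[t].unsqueeze(-1) @ v[t].unsqueeze(0)  # decay + rank-1 update
        out.append(q[t] @ S)
    return torch.stack(out)

seq_len, d = 16, 8
q, k, v = (torch.randn(seq_len, d) for _ in range(3))
a = torch.sigmoid(torch.randn(seq_len))  # hypothetical learned decays
print(softmax_attention(q, k, v).shape)          # torch.Size([16, 8])
print(gated_linear_attention(q, k, v, a).shape)  # torch.Size([16, 8])
```

The paper's contributions (the improved A-mask and the higher-order hidden state of 2Mamba) would modify the decay structure and enlarge the recurrent state in this recurrence; the sketch shows only the baseline trade-off the abstract refers to.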
Subjects: Machine Learning (cs.LG)
ACM classes: I.2; I.2.6
Cite as: arXiv:2602.17363 [cs.LG] (or arXiv:2602.17363v2 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2602.17363 (arXiv-issued DOI via DataCite)
Submission history
From: Gabriel Mongaras
[v1] Thu, 19 Feb 2026 13:45:23 UTC (8,567 KB)
[v2] Thu, 2 Apr 2026 02:07:56 UTC (20,578 KB)
Full-text links: View PDF | HTML (experimental) | TeX Source