Computer Science > Machine Learning
arXiv:2603.21991 (cs)
[Submitted on 23 Mar 2026 (v1), last revised 3 Apr 2026 (this version, v2)]
Title: $λ$-GELU: Learning Gating Hardness for Controlled ReLU-ization in Deep Networks
Authors: Cristian Pérez-Corral, Alberto Fernández-Hernández, Jose I. Mestre, Manuel F. Dolz, Enrique S. Quintana-Ortí
Abstract: The Gaussian Error Linear Unit (GELU) is a widely used smooth alternative to the Rectified Linear Unit (ReLU), yet many deployment, compression, and analysis toolchains are most naturally expressed for piecewise-linear (ReLU-type) networks. We study a hardness-parameterized formulation of GELU, $f(x;\lambda) = x\,\Phi(\lambda x)$, where $\Phi$ is the Gaussian CDF and $\lambda \in [1, \infty)$ controls gate sharpness, with the goal of turning smooth gated training into a controlled path toward ReLU-compatible models. Learning $\lambda$ is non-trivial: naive updates yield unstable dynamics and effective gradient attenuation, so we introduce a constrained reparameterization and an optimizer-aware update scheme.
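As a rough illustration of this formulation, the following PyTorch sketch implements $f(x;\lambda) = x\,\Phi(\lambda x)$ with the constraint $\lambda \geq 1$ enforced through a softplus reparameterization. The abstract does not spell out the paper's exact constrained reparameterization or optimizer-aware update, so the parameterization below is an assumption, not the authors' method.

```python
# Minimal sketch of the hardness-parameterized gate f(x; lambda) = x * Phi(lambda * x).
# The softplus mapping that keeps lambda in [1, inf) is an assumed scheme; the
# paper's constrained reparameterization and update rule are not given here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LambdaGELU(nn.Module):
    """f(x; lambda) = x * Phi(lambda * x), with a learnable lambda in [1, inf)."""

    def __init__(self, lambda_init: float = 1.0):
        super().__init__()
        # Unconstrained parameter rho, mapped to lambda = 1 + softplus(rho).
        # Invert softplus at init: rho0 = log(exp(lambda_init - 1) - 1).
        eps = 1e-6  # guard: softplus^{-1}(0) = -inf
        rho0 = torch.log(torch.expm1(torch.tensor(max(lambda_init - 1.0, eps))))
        self.rho = nn.Parameter(rho0)

    @property
    def lam(self) -> torch.Tensor:
        return 1.0 + F.softplus(self.rho)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Phi is the standard Gaussian CDF: Phi(z) = 0.5 * (1 + erf(z / sqrt(2))).
        phi = 0.5 * (1.0 + torch.erf(self.lam * x / 2.0 ** 0.5))
        return x * phi
```

At $\lambda = 1$ this reduces to the standard GELU, $x\,\Phi(x)$; larger $\lambda$ sharpens the gate toward a step function.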
Empirically, across a diverse set of model–dataset pairs spanning MLPs, CNNs, and Transformers, we observe structured layerwise hardness profiles and assess their robustness under different initializations. We further study a deterministic ReLU-ization strategy in which the learned gates are progressively hardened toward a principled target, enabling a post-training substitution of $\lambda$-GELU by ReLU with reduced disruption. Overall, $\lambda$-GELU provides a minimal and interpretable knob to profile and control gating hardness, bridging smooth training with ReLU-centric downstream pipelines.
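The limit that makes this substitution possible: as $\lambda \to \infty$, $\Phi(\lambda x)$ approaches the unit step $\mathbf{1}\{x > 0\}$, so $x\,\Phi(\lambda x) \to \max(x, 0) = \mathrm{ReLU}(x)$. The sketch below (reusing the LambdaGELU module above) ramps each learned gate toward a large target and then swaps it for nn.ReLU; the ramp fraction and target value are illustrative assumptions, since the abstract does not specify the paper's principled target or schedule.

```python
# Hedged sketch of deterministic ReLU-ization, building on LambdaGELU above.
# lambda_target and fraction are illustrative assumptions, not values from the paper.
import torch
import torch.nn as nn

@torch.no_grad()
def harden(model: nn.Module, lambda_target: float = 50.0, fraction: float = 0.5):
    """Move every gate's lambda part of the way toward the hard target."""
    for m in model.modules():
        if isinstance(m, LambdaGELU):
            lam = (1.0 - fraction) * m.lam + fraction * lambda_target
            m.rho.copy_(torch.log(torch.expm1(lam - 1.0)))  # invert softplus

def relu_ize(module: nn.Module) -> None:
    """Final substitution: as lambda -> inf, x * Phi(lambda * x) -> max(x, 0)."""
    for name, child in module.named_children():
        if isinstance(child, LambdaGELU):
            setattr(module, name, nn.ReLU())
        else:
            relu_ize(child)  # recurse into submodules
```

A plausible schedule would interleave harden() calls with short fine-tuning passes so the network adapts as the gates sharpen, calling relu_ize() once the hardened gates and ReLU produce numerically close outputs.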
Subjects: Machine Learning (cs.LG); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.21991 [cs.LG] (or arXiv:2603.21991v2 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2603.21991
arXiv-issued DOI via DataCite
Submission history
From: Cristian Pérez-Corral [view email]
[v1] Mon, 23 Mar 2026 13:58:19 UTC (734 KB)
[v2] Fri, 3 Apr 2026 12:02:29 UTC (734 KB)