Ruka-v2: Tendon Driven Open-Source Dexterous Hand with Wrist and Abduction for Robot Learning
| Event type | product_launch |
|---|---|
| Topic | robotics |
| Organization | |
| Country | |
| Articles | 2 |
| Unique sources | 1 |
| Importance / Momentum | 0.94 / 0 |
| Period | 30.03.2026 04:00 — 31.03.2026 04:00 |
| Created | 05.04.2026 20:30:29 |
Articles in cluster: 2
| | Title | Source | Publication date | Score |
|---|---|---|---|---|
| S | Ruka-v2: Tendon Driven Open-Source Dexterous Hand with Wrist and Abduction for Robot Learning | arxiv_cs_ai | 30.03.2026 04:00 | 1 |
Computer Science > Robotics
arXiv:2603.26660 (cs)
[Submitted on 27 Mar 2026 ( v1 ), last revised 30 Mar 2026 (this version, v2)]
Title: Ruka-v2: Tendon Driven Open-Source Dexterous Hand with Wrist and Abduction for Robot Learning
Authors: Xinqi Lucas Liu, Ruoxi Hu, Alejandro Ojeda Olarte, Zhuoran Chen, Kenny Ma, Charles Cheng Ji, Lerrel Pinto, Raunaq Bhirangi, Irmak Guzey
Abstract: Lack of accessible and dexterous robot hardware has been a significant bottleneck to achieving human-level dexterity in robots. Last year, we released Ruka, a fully open-sourced, tendon-driven humanoid hand with 11 degrees of freedom - 2 per finger and 3 at the thumb - buildable for under $1,300. It was one of the first fully open-sourced humanoid hands, and introduced a novel data-driven approach to finger control that captures tendon dynamics within the control system. Despite these contributions, Ruka lacked two degrees of freedom essential for closely imitating human behavior: wrist mobility and finger adduction/abduction. In this paper, we introduce Ruka-v2: a fully open-sourced, tendon-driven humanoid hand featuring a decoupled 2-DOF parallel wrist and abduction/adduction at the fingers. The parallel wrist adds smooth, independent flexion/extension and radial/ulnar deviation, enabling manipulation in confined environments such as cabinets. Abduction enables motions such as grasping thin objects, in-hand rotation, and calligraphy. We present the design of Ruka-v2 and evaluate it against Ruka through user studies on teleoperated tasks, finding a 51.3% reduction in completion time and a 21.2% increase in success rate. We further demonstrate its full range of applications for robot learning: bimanual and single-arm teleoperation across 13 dexterous tasks, and autonomous policy learning on 3 tasks. All 3D print files, assembly instructions, controller software, and videos are available at this https URL .
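The abstract mentions a data-driven approach to finger control that captures tendon dynamics within the control system. As a loose illustration of the calibration idea only, the sketch below fits a linear least-squares map (with bias) from motor commands to measured joint angles and inverts it to command target angles. All function names are hypothetical, and the paper's actual learned controller is nonlinear; this is not the Ruka implementation.

```python
import numpy as np

def fit_tendon_map(motor_cmds, joint_angles):
    """Fit a linear map (with bias term) from motor commands to
    measured joint angles via least squares. A toy stand-in for a
    data-driven tendon controller; real tendon dynamics are nonlinear."""
    X = np.hstack([motor_cmds, np.ones((len(motor_cmds), 1))])
    W, *_ = np.linalg.lstsq(X, joint_angles, rcond=None)
    return W  # shape (n_motors + 1, n_joints)

def command_for_angles(W, target_angles):
    """Invert the fitted map with a pseudo-inverse to find motor
    commands that best reach the target joint angles."""
    A, b = W[:-1], W[-1]  # weight matrix and bias vector
    return np.linalg.pinv(A.T) @ (np.asarray(target_angles) - b)
```

In practice such a fit would be done per finger from logged command/angle pairs, then refined with a nonlinear model; the linear version only shows the round trip from data to command.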
Subjects: Robotics (cs.RO); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.26660 [cs.RO] (or arXiv:2603.26660v2 [cs.RO] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.26660
Submission history
From: Ruoxi Hu
[v1] Fri, 27 Mar 2026 17:58:03 UTC (47,314 KB)
[v2] Mon, 30 Mar 2026 16:30:40 UTC (47,314 KB)
| SutureAgent: Learning Surgical Trajectories via Goal-conditioned Offline RL in Pixel Space | arxiv_cs_ai | 31.03.2026 04:00 | 0.684 |
Computer Science > Robotics
arXiv:2603.26720 (cs)
[Submitted on 19 Mar 2026]
Title: SutureAgent: Learning Surgical Trajectories via Goal-conditioned Offline RL in Pixel Space
Authors: Huanrong Liu, Chunlin Tian, Tongyu Jia, Tailai Zhou, Qin Liu, Yu Gao, Yutong Ban, Yun Gu, Guy Rosman, Xin Ma, Qingbiao Li
Abstract: Predicting surgical needle trajectories from endoscopic video is critical for robot-assisted suturing, enabling anticipatory planning, real-time guidance, and safer motion execution. Existing methods that directly learn motion distributions from visual observations tend to overlook the sequential dependency among adjacent motion steps. Moreover, sparse waypoint annotations often fail to provide sufficient supervision, further increasing the difficulty of supervised or imitation learning methods. To address these challenges, we formulate image-based needle trajectory prediction as a sequential decision-making problem, in which the needle tip is treated as an agent that moves step by step in pixel space. This formulation naturally captures the continuity of needle motion and enables the explicit modeling of physically plausible pixel-wise state transitions over time. From this perspective, we propose SutureAgent, a goal-conditioned offline reinforcement learning framework that converts sparse annotations into dense reward signals via cubic spline interpolation, encouraging the policy to exploit limited expert guidance while exploring plausible future motion paths. SutureAgent encodes variable-length clips using an observation encoder to capture both local spatial cues and long-range temporal dynamics, and autoregressively predicts future waypoints through actions composed of discrete directions and continuous magnitudes. To enable stable offline policy optimization from expert demonstrations, we adopt Conservative Q-Learning with Behavioral Cloning regularization. Experiments on a new kidney wound suturing dataset containing 1,158 trajectories from 50 patients show that SutureAgent reduces Average Displacement Error by 58.6% compared with the strongest baseline, demonstrating the effectiveness of modeling needle trajectory prediction as pixel-level sequential action learning.
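The abstract's densification step (sparse waypoint annotations turned into dense reward signals via cubic spline interpolation) can be sketched as follows. This is a minimal sketch under assumed conventions, not the paper's code: function names, the normalised-index parameterisation, and the reward scale are all illustrative choices.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def densify_waypoints(sparse_xy, n_dense=101):
    """Fit a cubic spline through sparse (x, y) pixel waypoints,
    parameterised by a normalised index in [0, 1], and sample it
    densely. The spline passes exactly through each annotation."""
    pts = np.asarray(sparse_xy, dtype=float)
    t = np.linspace(0.0, 1.0, len(pts))
    spline = CubicSpline(t, pts, axis=0)
    return spline(np.linspace(0.0, 1.0, n_dense))

def dense_reward(pixel_pos, dense_path, scale=100.0):
    """Dense reward signal: negative (scaled) pixel distance from
    the needle-tip position to the nearest point on the spline path,
    so the reward is 0 on the path and decreases away from it."""
    d = np.linalg.norm(dense_path - np.asarray(pixel_pos, float), axis=1)
    return -float(d.min()) / scale
```

A reward of this shape gives the offline policy a gradient toward the expert path at every step, rather than supervision only at the sparse annotated waypoints.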
Subjects: Robotics (cs.RO); Artificial Intelligence (cs.AI)
Cite as: arXiv:2603.26720 [cs.RO] (or arXiv:2603.26720v1 [cs.RO] for this version)
DOI: https://doi.org/10.48550/arXiv.2603.26720
Submission history
From: Huanrong Liu
[v1] Thu, 19 Mar 2026 01:36:07 UTC (2,104 KB)