Dyn-HaMR: Recovering 4D Interacting Hand Motion from a Dynamic Camera

Imperial College London

Abstract

We propose Dyn-HaMR, to the best of our knowledge, the first approach to reconstruct 4D global hand motion from monocular videos recorded by dynamic cameras in the wild. Reconstructing accurate 3D hand meshes from monocular videos is a crucial task for understanding human behavior, with significant applications in augmented and virtual reality (AR/VR). However, existing methods for monocular hand reconstruction typically rely on a weak perspective camera model, which simulates hand motion within a limited camera frustum. As a result, these approaches struggle to recover the full 3D global trajectory and often produce noisy or incorrect depth estimates, particularly when the video is captured by a dynamic or moving camera, as is common in egocentric scenarios. Dyn-HaMR consists of a multi-stage, multi-objective optimization pipeline that factors in (i) simultaneous localization and mapping (SLAM) to robustly estimate relative camera motion, (ii) an interacting-hand prior for generative motion infilling and refinement of the interaction dynamics, ensuring plausible recovery under (self-)occlusions, and (iii) hierarchical initialization through a combination of state-of-the-art hand tracking methods. Through extensive evaluations on both in-the-wild and indoor datasets, we show that our approach significantly outperforms state-of-the-art methods in terms of 4D global mesh recovery. This establishes a new benchmark for hand motion reconstruction from monocular video with moving cameras.


Global Hand Motion Reconstruction Results

Pipeline Overview

Dyn-HaMR as a remedy for motion entanglement in the wild. The green and red arrows indicate the direction of hand motion. Dyn-HaMR (Ours) disentangles camera and hand motion to recover the 4D global hand trajectory in the real world, whilst state-of-the-art 3D hand reconstruction methods such as HaMeR, IntagHand, and ACR fail to do so, since they cannot separate the sources of motion.

Method Overview Diagram

Dyn-HaMR is a three-stage optimization pipeline that recovers 4D global hand motion from in-the-wild videos, even those captured with dynamic cameras. Our method disentangles hand and camera motion, as well as modelling complex hand interactions.
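The core disentanglement idea can be sketched as a simple composition of rigid transforms: SLAM provides per-frame world-from-camera poses, a hand regressor provides camera-frame hand poses, and composing the two places each hand in the world frame. The sketch below is a minimal illustration under these assumptions; the function and variable names are hypothetical and do not come from the released Dyn-HaMR code.

```python
import numpy as np

def make_pose(R=None, t=(0.0, 0.0, 0.0)):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    if R is not None:
        T[:3, :3] = R
    T[:3, 3] = t
    return T

def to_world(T_world_cam, T_cam_hand):
    """Compose a SLAM camera pose (world-from-camera) with a camera-frame
    hand pose to obtain the hand pose in the world frame."""
    return T_world_cam @ T_cam_hand

# Toy example: the camera has moved 1 m along x; the hand sits 2 m
# in front of the camera. In the world frame the hand is at (1, 0, 2).
T_world_cam = make_pose(t=(1.0, 0.0, 0.0))
T_cam_hand = make_pose(t=(0.0, 0.0, 2.0))
T_world_hand = to_world(T_world_cam, T_cam_hand)
print(T_world_hand[:3, 3])  # [1. 0. 2.]
```

Without the SLAM estimate of `T_world_cam`, the camera's own motion is baked into the per-frame hand poses, which is exactly the entanglement that weak-perspective methods cannot undo.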


Gallery

InterHand2.6M - Example 1

Qualitative comparison with the state-of-the-art method HaMeR. The first row is from the H2O dataset, while the second and third rows are from the HOI4D dataset and web videos, respectively.

Poster

Presentation Poster

Paper


Dyn-HaMR: Recovering 4D Interacting Hand Motion from a Dynamic Camera
Zhengdi Yu, Stefanos Zafeiriou, Tolga Birdal
arXiv preprint, 2024.

View PDF

BibTeX

@misc{yu2024dynhamr,
  title  = {Dyn-HaMR: Recovering 4D Interacting Hand Motion from a Dynamic Camera},
  author = {Yu, Zhengdi and Zafeiriou, Stefanos and Birdal, Tolga},
  note   = {arXiv preprint},
  month  = {December},
  year   = {2024}
}