The mHealthHub is a virtual forum where technologists, researchers and clinicians connect, learn, share, and innovate on mHealth tools to transform healthcare.

RelCon: Relative Contrastive Learning for a Motion Foundation Model for Wearable Data

We present RelCon, a novel self-supervised Relative Contrastive learning approach for training a motion foundation model from wearable accelerometry sensors. First, a learnable distance measure is trained to capture motif similarity and domain-specific semantic information such as rotation invariance. The learned distance then measures the semantic similarity between a pair of accelerometry time-series, which we use to train our foundation model to learn relative relationships across time and across subjects. The foundation model is trained on 1 billion segments from 87,376 participants, and achieves state-of-the-art performance across multiple downstream tasks, including human activity recognition and gait metric regression. To our knowledge, we are the first to show the generalizability of a foundation model with motion data from wearables across distinct evaluation tasks.
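The relative contrastive idea described above can be sketched in code: candidates are ranked by a learned distance to an anchor segment, and each candidate acts as a positive against all candidates that are farther away, which serve as its negatives. This is a minimal illustrative sketch under stated assumptions, not the authors' released implementation; the function name `relcon_loss`, the use of cosine similarity on embeddings, and the input shapes are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def relcon_loss(anchor_emb, cand_embs, distances, temperature=0.1):
    """Sketch of a relative contrastive loss.

    anchor_emb: (d,) embedding of the anchor segment.
    cand_embs:  (N, d) embeddings of candidate segments.
    distances:  (N,) learned distances from the anchor to each candidate
                (e.g. from a trained motif-similarity distance measure).

    Candidates are sorted by distance; candidate i is treated as a
    positive relative to the negatives {i+1, ..., N-1}, so the loss
    encodes the full relative ordering rather than a single
    positive/negative split.
    """
    order = torch.argsort(distances)                # nearest first
    cand = cand_embs[order]
    sims = F.cosine_similarity(anchor_emb.unsqueeze(0), cand) / temperature

    loss = 0.0
    for i in range(len(cand) - 1):
        logits = sims[i:]                           # positive at index 0
        target = torch.zeros(1, dtype=torch.long)   # index of the positive
        loss = loss + F.cross_entropy(logits.unsqueeze(0), target)
    return loss / (len(cand) - 1)
```

In a full training loop, `distances` would come from the pre-trained learnable distance measure and the embeddings from the foundation-model encoder being trained; the sketch only shows how the relative ordering enters the objective.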

Details & Specifications
Published:
Category:
Frameworks, Toolkits, Technologies, Models
Tags:
Motion Foundation Model
Relative Contrastive Learning
Accelerometry Data
Self-Supervised Learning (SSL)
Wearable Data
mHealth
  • Maxwell A. Xu (UIUC)
  • Jaya Narain
  • Gregory Darnell
  • Haraldur Hallgrímsson
  • Hyewon Jeong
  • Darren Forde
  • Richard Fineman
  • Karthik J. Raghuram
  • Dr. James M. Rehg (UIUC)
  • Shirley Ren