RelCon
We present RelCon, a novel self-supervised Relative Contrastive learning approach for training a motion foundation model from wearable accelerometry sensors. First, a learnable distance measure is trained to capture motif similarity and domain-specific semantic information such as rotation invariance. The learned distance then provides a measure of semantic similarity between pairs of accelerometry time-series, which we use to train our foundation model to learn relative relationships across time and across subjects. The foundation model is trained on 1 billion segments from 87,376 participants and achieves state-of-the-art performance across multiple downstream tasks, including human activity recognition and gait metric regression. To our knowledge, we are the first to show the generalizability of a foundation model with motion data from wearables across distinct evaluation tasks.
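To make the relative contrastive idea concrete, the following is a minimal PyTorch sketch, not the actual RelCon implementation: candidates are ranked by a pretrained learned distance to the anchor, and each candidate is treated as a positive against all candidates ranked farther away. The function name, arguments, and temperature value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def relative_contrastive_loss(anchor_emb, cand_embs, learned_dists, temperature=0.1):
    """Illustrative sketch of a relative contrastive loss (not the paper's code).

    anchor_emb:    (D,) embedding of the anchor segment
    cand_embs:     (N, D) embeddings of candidate segments
    learned_dists: (N,) distances from the anchor to each candidate,
                   produced by the pretrained learnable distance measure
    """
    # Rank candidates from most to least similar under the learned distance.
    order = torch.argsort(learned_dists)
    cand_embs = cand_embs[order]

    # Cosine similarity between the anchor and each ranked candidate.
    sims = F.cosine_similarity(anchor_emb.unsqueeze(0), cand_embs, dim=-1) / temperature

    # For each candidate i, treat it as the positive and every candidate
    # ranked farther from the anchor as a negative.
    n = sims.shape[0]
    loss = sims.new_zeros(())
    for i in range(n - 1):
        logits = sims[i:].unsqueeze(0)                      # positive first, then negatives
        target = torch.zeros(1, dtype=torch.long)           # index of the positive
        loss = loss + F.cross_entropy(logits, target)
    return loss / (n - 1)
```

Because every candidate serves as a positive relative to those farther away, the loss encodes a graded notion of similarity rather than a hard positive/negative split, which is the key difference from standard contrastive objectives.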

Details & Specifications
Relative Contrastive Learning
Accelerometry Data
Self-Supervised Learning (SSL)
Wearable Data
mHealth
- Maxwell A. Xu (UIUC)
- Jaya Narain
- Gregory Darnell
- Haraldur Hallgrímsson
- Hyewon Jeong
- Darren Forde
- Richard Fineman
- Karthik J. Raghuram
- Dr. James M. Rehg (UIUC)
- Shirley Ren