ADMN
Multimodal deep learning systems are deployed in dynamic scenarios because multiple sensing modalities afford robustness. Nevertheless, they struggle with varying compute resource availability (due to multi-tenancy, device heterogeneity, etc.) and fluctuating input quality (from sensor feed corruption, environmental noise, etc.). Current multimodal systems employ static resource provisioning and cannot easily adapt when compute resources change over time. Additionally, their reliance on fixed feature extractors for processing sensor data leaves them ill-equipped to handle variations in modality quality. Consequently, uninformative modalities, such as those with high noise, needlessly consume resources that would be better allocated to other modalities. We propose ADMN, a layer-wise Adaptive Depth Multimodal Network that tackles both challenges: it adjusts the total number of active layers across all modalities to meet compute resource constraints, and it continually reallocates layers across input modalities according to their quality. Our evaluations show that ADMN matches the accuracy of state-of-the-art networks while reducing their floating-point operations by up to 75%.
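To make the layer-reallocation idea concrete, below is a minimal, hypothetical sketch; it is not the authors' ADMN implementation. It assumes per-modality quality scores are already available, and the names `ModalityEncoder`, `allocate_layers`, and the proportional split rule are illustrative choices: a shared layer budget is divided across modality encoders according to quality, and each encoder executes only its allotted prefix of layers.

```python
# Illustrative sketch only (assumed names and allocation rule), not the ADMN codebase.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """A stack of layers for one modality; only the first `depth` layers are active."""

    def __init__(self, dim: int, max_layers: int):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
            for _ in range(max_layers)
        )

    def forward(self, x: torch.Tensor, depth: int) -> torch.Tensor:
        for layer in self.layers[:depth]:  # inactive layers are skipped, saving FLOPs
            x = layer(x)
        return x


def allocate_layers(quality: list[float], budget: int) -> list[int]:
    """Split a total layer budget across modalities in proportion to their quality."""
    total = sum(quality)
    return [round(budget * q / total) for q in quality]


# Example: two modalities sharing a budget of 8 layers, with the second modality noisy.
encoders = [ModalityEncoder(dim=64, max_layers=8) for _ in range(2)]
inputs = [torch.randn(1, 16, 64) for _ in range(2)]
depths = allocate_layers(quality=[0.9, 0.3], budget=8)  # -> [6, 2]
features = [enc(x, d) for enc, x, d in zip(encoders, inputs, depths)]
```

In this sketch the budget enforces the compute constraint, while the quality-proportional split plays the role of reallocating depth away from uninformative modalities.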
Details & Specifications
Adaptive Networks
Dynamic Inference
Quality-of-Information (QoI)
Resource Allocation
Just-In-Time Adaptivity
- Jason Wu
- Kang Yang
- Lance Kaplan
- Mani Srivastava