SensorBench
Effective processing, interpretation, and management of sensor data have emerged as critical components of cyber-physical systems. Traditionally, processing sensor data requires profound theoretical knowledge and proficiency in signal-processing tools. However, recent works show that Large Language Models (LLMs) have promising capabilities in processing sensory data, suggesting their potential as copilots for developing sensing systems.
To explore this potential, the authors construct a comprehensive benchmark, SensorBench, to establish a quantifiable objective. The benchmark incorporates diverse real-world sensor datasets for various tasks. The results show that while LLMs exhibit considerable proficiency in simpler tasks, they face inherent challenges, relative to engineering experts, on compositional tasks that require parameter selection. Additionally, the study investigates four prompting strategies for sensor processing and shows that self-verification outperforms all other baselines in 48% of tasks. The study provides a comprehensive benchmark and prompting analysis for future developments, paving the way toward an LLM-based sensor-processing copilot.
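The self-verification strategy mentioned above can be sketched as a propose-critique-revise loop. The sketch below is a hypothetical illustration, not the paper's implementation: it assumes a generic `llm(prompt) -> str` completion interface, stubbed here with canned responses so the example runs standalone.

```python
def llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call; returns canned text so
    # the example is self-contained and runnable.
    if "Critique" in prompt:
        return "PASS"
    return "Apply a 4th-order Butterworth band-pass filter (0.1-0.5 Hz)."

def solve_with_self_verification(task: str, max_rounds: int = 3) -> str:
    # Propose an initial answer to the sensor-processing task.
    answer = llm(f"Task: {task}\nPropose a signal-processing pipeline.")
    for _ in range(max_rounds):
        # Ask the model to critique its own answer.
        critique = llm(
            f"Task: {task}\nAnswer: {answer}\n"
            "Critique the answer. Reply PASS if correct, else explain the flaw."
        )
        if critique.strip() == "PASS":
            break
        # Revise the answer using the critique as feedback.
        answer = llm(f"Task: {task}\nFlaw: {critique}\nRevise the answer.")
    return answer

print(solve_with_self_verification(
    "Extract respiration rate from a chest-worn accelerometer trace."))
```

With a real model behind `llm`, the critique step is what distinguishes self-verification from single-shot prompting: the model gets a second pass to catch parameter-selection mistakes before the answer is returned.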

Details & Specifications
CPS (Cyber-Physical Systems)
IoT (Internet of Things)
Sensing
Natural language processing
Embedded and cyber-physical systems
Authors
- Pengrui Quan
- Xiaomin Ouyang
- Jeya Vikranth Jeyakumar
- Ziqi Wang
- Yang Xing
- Mani Srivastava