Structure in data can be leveraged to enhance learning. In many perception tasks, the embedded signals arising from the physical processes of interest naturally carry structure of high semantic relevance. However, traditional forms of remote sensing (e.g., vision) preserve such structure only in limited ways. This paper examines how embedded, form-fitting sensing, referred to as physically integrated (PI) sensing, can preserve such structure in richer ways. While the analysis is agnostic to the particular PI-sensing technology, for which a range of options is emerging, driven especially by the Internet of Things, one such emerging technology, large-area electronics (LAE), is considered. Using synthetic data from 3-D modeling and rendering of human-activity scenes, LAE-based PI sensing and vision-based remote sensing are emulated and perception systems are formed, showing: 1) enhanced data-efficiency of learning models based on PI sensing; 2) potential for selective deployment of PI sensors in new perception tasks, thanks to robust ranking of their value in such tasks; 3) enhanced data-efficiency of learning models based on vision sensing, achieved by integrating PI sensing; and 4) efficient mapping of PI-sensing features across perception tasks, enhancing the transferability of learning.