TY - JOUR
T1 - Exploiting emerging sensing technologies toward structure in data for enhancing perception in human-centric applications
AU - Ozatay, Murat
AU - Verma, Naveen
N1 - Funding Information:
Manuscript received July 31, 2018; revised October 15, 2018; accepted November 17, 2018. Date of publication November 29, 2018; date of current version May 8, 2019. This work was supported by the C-BRIC, one of six centers in JUMP, a Semiconductor Research Corporation Program sponsored by DARPA. (Corresponding author: Murat Ozatay.) The authors are with the Department of Electrical Engineering, Princeton University, Princeton, NJ 08544 USA (e-mail: mozatay@princeton.edu; nverma@princeton.edu). Digital Object Identifier 10.1109/JIOT.2018.2883905
Publisher Copyright:
© 2019 IEEE.
PY - 2019/4
Y1 - 2019/4
N2 - Structure in data can be leveraged to enhance learning. In many perception tasks, the embedded signals arising from physical processes of interest naturally have structure of high semantic relevance. However, traditional forms of remote sensing (e.g., vision) preserve such structure only in limited ways. This paper examines how embedded, form-fitting sensing, referred to as physically integrated (PI) sensing, can preserve such structure in richer ways. While the analysis is agnostic to the particular technology for PI sensing, for which a range of options is emerging, especially driven by the Internet of Things, a particular emerging technology called large-area electronics (LAE) is considered. Using synthetic data from 3-D modeling and rendering of human-activity scenes, LAE-based PI sensing and vision-based remote sensing are emulated and perception systems are formed, showing: 1) enhanced data-efficiency of learning models based on PI sensing; 2) potential for selective deployment of PI sensors in new perception tasks, thanks to robust ranking of their value in such tasks; 3) enhanced data-efficiency of learning models based on vision sensing, by integrating PI sensing; and 4) efficient mapping of PI-sensing features across perception tasks to enhance transferability of learning.
KW - Activity detection
KW - Internet of Things (IoT)
KW - artificial intelligence
KW - large-area electronics (LAEs)
KW - machine learning
KW - physically integrated (PI) sensing
UR - http://www.scopus.com/inward/record.url?scp=85057855608&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85057855608&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2018.2883905
DO - 10.1109/JIOT.2018.2883905
M3 - Article
AN - SCOPUS:85057855608
SN - 2327-4662
VL - 6
SP - 3411
EP - 3422
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 2
M1 - 8552441
ER -