TY - GEN
T1 - Autonomous operation of novel elevators for robot navigation
AU - Klingbeil, Ellen
AU - Carpenter, Blake
AU - Russakovsky, Olga
AU - Ng, Andrew Y.
PY - 2010
N2 - Although robot navigation in indoor environments has achieved great success, robots cannot fully navigate these spaces without the ability to operate elevators, including those they have not seen before. In this paper, we focus on the key challenge of autonomous interaction with an unknown elevator button panel. Several factors, such as the lack of useful 3D features, the variety of elevator panel designs, variation in lighting conditions, and the small size of elevator buttons, make this goal quite difficult. To address the task of detecting, localizing, and labeling the buttons, we use state-of-the-art vision algorithms along with machine learning techniques that take advantage of contextual features. To verify our approach, we collected a dataset of 150 pictures of elevator panels from more than 60 distinct elevators and performed extensive offline testing. On this very diverse dataset, our algorithm correctly localized and labeled 86.2% of the buttons. Using a mobile robot platform, we then validated our algorithms in experiments in which, using only its on-board sensors, the robot autonomously interpreted the panel and pressed the appropriate button in elevators it had never seen before. In a total of 14 trials performed on 3 different elevators, the robot localized the requested button in all 14 trials and pressed it correctly in 13 of them.
UR - http://www.scopus.com/inward/record.url?scp=77955809068&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=77955809068&partnerID=8YFLogxK
DO - 10.1109/ROBOT.2010.5509466
M3 - Conference contribution
AN - SCOPUS:77955809068
SN - 9781424450381
T3 - Proceedings - IEEE International Conference on Robotics and Automation
SP - 751
EP - 758
BT - 2010 IEEE International Conference on Robotics and Automation, ICRA 2010
T2 - 2010 IEEE International Conference on Robotics and Automation, ICRA 2010
Y2 - 3 May 2010 through 7 May 2010
ER -