Abstract
Pattern-recognition algorithms from the domain of machine learning play a prominent role in embedded sensing systems, where they are used to derive inferences from sensor data. Such systems very often face severe energy constraints, especially when dealing with high-dimensional data such as images. The focus of this study is on reducing computational energy by exploiting transfer learning together with energy-efficient dataflow accelerators. We show that convolutional autoencoders can reduce computational energy to varying degrees while avoiding a significant loss in inference performance when multiple task categories must be inferred from the same data. We validate our approach through a multi-task case study on a set of images, each annotated for four task categories: gender, smile, glasses, and pose. To minimize inference time and computational energy, a convolutional autoencoder is used to learn a generalized representation of the images. Three scenarios are analyzed: transferring layers from convolutional autoencoders, transferring layers from convolutional neural networks trained on different tasks, and no layer transfer. We show that when the convolutional layers and one fully connected (FC) layer are transferred from a convolutional autoencoder, computational energy is reduced by 6.58x.
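The abstract does not specify the network dimensions, so the following is only a minimal PyTorch sketch of the transfer scheme it describes: an encoder (convolutional layers plus one FC layer) pretrained as part of a convolutional autoencoder, then frozen and shared by lightweight heads for the four task categories. The layer sizes, the 64x64 RGB input, the latent dimension, and the class names are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Convolutional autoencoder; the encoder learns a generalized image representation."""
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: convolutional layers followed by one FC layer (the part that gets transferred).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, latent_dim), nn.ReLU(),
        )
        # Decoder: mirrors the encoder; used only during unsupervised pretraining.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 64 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (64, 8, 8)),
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


class TaskHead(nn.Module):
    """Small per-task classifier attached to the frozen, transferred encoder."""
    def __init__(self, encoder, latent_dim=128, num_classes=2):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False          # transferred layers are frozen, not retrained
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        return self.classifier(self.encoder(x))


autoenc = ConvAutoencoder()
# ... pretrain `autoenc` with a reconstruction loss (e.g. MSE) on the image set ...

# One shared encoder reused by all four (assumed binary) tasks from the case study.
heads = {task: TaskHead(autoenc.encoder) for task in ["gender", "smile", "glasses", "pose"]}
x = torch.randn(1, 3, 64, 64)                # dummy 64x64 RGB input (size is an assumption)
logits = {task: head(x) for task, head in heads.items()}
```

Because the encoder is computed once and shared across all task heads, the per-task cost reduces to a single linear layer, which is one way the transfer scheme can lower computational energy for multi-task inference.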
| Original language | English (US) |
|---|---|
| Journal | IEEE Transactions on Emerging Topics in Computing |
| DOIs | |
| State | Accepted/In press - 2021 |
All Science Journal Classification (ASJC) codes
- Computer Science (miscellaneous)
- Information Systems
- Human-Computer Interaction
- Computer Science Applications
Keywords
- Convolution
- Convolutional codes
- Convolutional neural networks
- Encoding
- Energy reduction
- Machine learning
- Multi-task images
- Task analysis
- Training
- Transfer learning