Abstract
The continued scaling of CMOS technologies and the consideration of post-CMOS technologies have elevated hardware reliability to a first-class challenge, particularly in energy- and resource-constrained embedded sensor applications. In such applications, there is an increasing emphasis on inference functions. Machine-learning algorithms play an important role by enabling the construction of data-driven models for inference over data that is too complex to model analytically. This paper explores how data-driven training can be exploited to also overcome computational errors due to hardware faults within an inference stage. FPGA emulation with randomized fault injection shows that the proposed architecture restores system performance to the level of a fault-free system, with <1% of the hardware requiring explicit fault protection, and with digital faults affecting >2% of the circuit nodes in the rest of the hardware. To train an error-aware inference model, a training algorithm is presented whose hardware (memory) and energy requirements are reduced by 65× and 10× compared to previously reported algorithms (AdaBoost and FilterBoost, respectively), thereby enabling model construction entirely on the device.
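The core idea, training the inference model on data that has passed through the same faulty hardware it will see at deployment so that the learned parameters absorb the errors, can be illustrated with a minimal sketch. The bit-flip fault model, the fixed-point quantization, and the perceptron learner below are illustrative assumptions; the paper's actual classifier architecture and its reduced-memory training algorithm are not reproduced here.

```python
import numpy as np

def inject_bit_faults(x_fixed, n_bits=8, p_fault=0.02, rng=None):
    """Flip each bit of a fixed-point feature vector with probability p_fault.
    Simplified random bit-flip fault model (an assumption, not the paper's)."""
    rng = np.random.default_rng() if rng is None else rng
    flips = rng.random((x_fixed.size, n_bits)) < p_fault
    bit_weights = 2 ** np.arange(n_bits)
    mask = (flips @ bit_weights).astype(x_fixed.dtype)  # one distinct bit per column, so sum == XOR mask
    return x_fixed ^ mask

def error_aware_train(X, y, n_bits=8, p_fault=0.02, epochs=20, lr=0.1, seed=0):
    """Train a linear classifier on features corrupted by the same fault model
    expected at inference time, so the weights compensate for the errors."""
    rng = np.random.default_rng(seed)
    lo, span = X.min(), X.max() - X.min()
    scale = (2 ** n_bits - 1) / span
    X_fixed = np.round((X - lo) * scale).astype(np.uint16)  # quantize features
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi_fixed, yi in zip(X_fixed, y):                # yi in {-1, +1}
            xi_faulty = inject_bit_faults(xi_fixed, n_bits, p_fault, rng)
            xi = xi_faulty.astype(float) / scale + lo       # back to real scale
            if yi * (xi @ w + b) <= 0:                      # perceptron update on faulty data
                w += lr * yi * xi
                b += lr * yi
    return w, b
```

As a usage sketch, `w, b = error_aware_train(X, y)` on labeled training data yields a decision rule `sign(x @ w + b)` whose accuracy is evaluated against the same fault-injected feature path; injecting faults only at test time would instead show the degradation that error-aware training is meant to recover.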
| Original language | English (US) |
|---|---|
| Article number | 7070874 |
| Pages (from-to) | 1136-1145 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Circuits and Systems I: Regular Papers |
| Volume | 62 |
| Issue number | 4 |
| DOIs | |
| State | Published - Apr 1 2015 |
All Science Journal Classification (ASJC) codes
- Electrical and Electronic Engineering
- Hardware and Architecture
Keywords
- Circuit reliability
- fault tolerance
- pattern classification
- pattern recognition