Compressed Signal Processing on Nyquist-Sampled Signals

Research output: Contribution to journal › Article

7 Scopus citations

Abstract

Pattern-recognition algorithms from the domain of machine learning play a prominent role in embedded sensing systems, where they derive inferences from sensor data. Very often, such systems face severe energy constraints. The focus of this work is to reduce computational energy by exploiting a form of compression that preserves a similarity metric widely used for pattern recognition. The form of compression is random projection, and the similarity metric is the inner product between source vectors. Given the prominence of random projections within compressive sensing, previous research has explored this idea for application to compressively-sensed signals. In this work, we analyze the error sources faced by such approaches and show that the compressive-sensing setting itself introduces a significant source of feature-computation error (∼30 percent). We show that random projections can be exploited more generally without compressive sensing, enabling a significant reduction in computational energy while avoiding this source of error. The approach is referred to as compressed signal processing (CSP), and it applies to Nyquist-sampled signals. We validate the CSP approach through two case studies. The first focuses on seizure detection using spectral-energy features extracted from electroencephalograms. We show that at a 32× compression ratio, the number of multiply-accumulate (MAC) and operand-access operations required is reduced by 21.2×, while achieving a sensitivity of 100 percent, latency of 4.33 sec, and false-alarm rate of 0.22/hr; this compares to a baseline performance of 100 percent, 4.37 sec, and 0.12/hr, respectively. The second case study focuses on a neural prosthesis based on extracting wavelet features from a set of detected spikes. We show that at a 32× compression ratio, the number of MAC and operand-access operations required is reduced by 3.3×, while spike-sorting performance is maintained within an average error of 4.89 percent for spike count, 3.42 percent for coefficient of variance, and 4.90 percent for firing rate; this compares with a baseline average error of 4.00, 2.75, and 4.00 percent for spike count, coefficient of variance, and firing rate, respectively.
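The key property the abstract relies on is that random projections approximately preserve inner products between source vectors, so features can be computed on short compressed vectors instead of full-rate samples. A minimal sketch of this idea in NumPy is shown below; the dimensions, the ±1 projection matrix, and the variable names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 1024, 32  # source dimension and compressed dimension (32x compression, as in the abstract)

# Illustrative random projection matrix with i.i.d. +/-1 entries, scaled by
# 1/sqrt(m) so that inner products are preserved in expectation
# (a Johnson-Lindenstrauss-style construction; an assumption, not the paper's exact choice).
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)

x = rng.standard_normal(n)
y = rng.standard_normal(n)

# Inner product in the original (Nyquist-sampled) domain...
ip_full = x @ y
# ...and its estimate in the compressed domain, using only m-dimensional vectors.
ip_comp = (Phi @ x) @ (Phi @ y)

print(ip_full, ip_comp)  # the two values agree approximately, with O(1/sqrt(m)) error
```

Because the compressed inner product uses m-dimensional operands instead of n-dimensional ones, the MAC and operand-access counts for feature computation shrink accordingly, which is the source of the energy savings the abstract reports.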

Original language: English (US)
Article number: 7422045
Pages (from-to): 3293-3303
Number of pages: 11
Journal: IEEE Transactions on Computers
Volume: 65
Issue number: 11
DOIs
State: Published - Nov 1 2016

All Science Journal Classification (ASJC) codes

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
  • Computational Theory and Mathematics

Keywords

  • Classification
  • Nyquist domain
  • compressed signal processing
  • machine learning
  • random projections

