Deep convolutional neural networks (DCNNs) have recently boosted the performance of image super-resolution (SR) by learning deep non-linear mappings from low-resolution images to their high-resolution counterparts. In general, these methods learn the mapping in the image space of a single scale. In this paper, we observe that features at different scales provide complementary information for SR. We therefore propose a novel network that extracts features at multiple spatial resolutions and learns non-linear mappings across these feature spaces. Specifically, we propose a dense convolutional auto-encoder (DCAE) block, which comprises several auto-encoder (AE) units and a squeeze unit, as the basic component of our model. The AE units exploit features at different resolutions through paired encoding and decoding layers. Further, we employ skip connections to combine features of the same spatial scale within an AE unit, and dense connections across successive AE units within a DCAE block, to establish a short-term temporal feature reuse mechanism. The squeeze unit combines features from the current and previous DCAE blocks to achieve long-term temporal feature reuse. Finally, we extend our work with multi-scale supervised training to build a single framework for SR at all scale factors. Comprehensive experiments show that the proposed method outperforms state-of-the-art methods.
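To make the block structure concrete, the following is a minimal, shape-level sketch of one DCAE block in plain Python. It only tracks `(channels, height, width)` through hypothetical layers; the unit count, channel widths, and the choice of additive skips inside an AE unit with channel-wise dense concatenation between units are illustrative assumptions, not the paper's exact configuration.

```python
# Hypothetical shape bookkeeping for one DCAE block (not the authors' code).

def ae_unit(shape):
    """One auto-encoder unit: the encoder halves the spatial resolution and the
    paired decoder restores it; an additive skip connection merges same-scale
    features, so the channel count is unchanged."""
    c, h, w = shape
    encoded = (c, h // 2, w // 2)  # e.g. a strided conv: lower-resolution features
    decoded = (c, h, w)            # e.g. a deconv + skip: back to the input scale
    return decoded

def dcae_block(in_shape, n_units=3):
    """Dense connections: each AE unit receives the channel-wise concatenation of
    the block input and all previous unit outputs; a final 1x1 'squeeze' conv
    maps the concatenation back to the block's input channel width."""
    c0, h, w = in_shape
    feats = [in_shape]
    for _ in range(n_units):
        cat_c = sum(f[0] for f in feats)       # dense channel concatenation
        feats.append(ae_unit((cat_c, h, w)))
    squeezed = (c0, h, w)                      # 1x1 squeeze restores channel width
    return squeezed, sum(f[0] for f in feats)  # (output shape, pre-squeeze channels)

out_shape, pre_squeeze = dcae_block((64, 32, 32))
print(out_shape, pre_squeeze)  # channels grow densely, then are squeezed back to 64
```

Under these assumptions, the channel count entering each successive AE unit grows as the dense concatenation accumulates, which is why the squeeze unit is needed to keep the block's output width fixed.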
All Science Journal Classification (ASJC) codes
- Computer Science Applications
- Cognitive Neuroscience
- Artificial Intelligence
Keywords
- Convolutional auto-encoders
- Dense connection
- Super resolution