Semisupervised Learning Based on a Novel Iterative Optimization Model for Saliency Detection

Shuwei Huo, Yuan Zhou, Wei Xiang, Sun Yuan Kung

Research output: Contribution to journal › Article › peer-review

15 Scopus citations


In this paper, we propose a novel iterative optimization model for bottom-up saliency detection. By combining bottom-up saliency principles with semisupervised learning, we design a high-performance saliency analysis method for a wide range of scenes. The proposed algorithm consists of two stages: 1) we develop a boundary homogeneity model to characterize the general position and contour of the salient objects, and 2) we propose a novel iterative optimization model, termed gradual saliency optimization, for further performance improvement. Our main contribution lies in the second stage, where we propose an iterative framework with self-repairing mechanisms for refining saliency maps. Within this framework, we develop a more comprehensive optimization function that applies a novel semisupervised learning scheme to enhance the traditional saliency measure. Specifically, the iterative method gradually improves the output at each iteration and finally converges to a high-quality saliency map. Experiments on four public data sets demonstrate that our approach significantly outperforms state-of-the-art methods.
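The second stage described above, iteratively refining a first-stage saliency map until it converges, can be illustrated with a generic graph-based propagation loop. This is only a minimal sketch: the function name `refine_saliency`, the affinity construction, and the blended update rule are assumptions for illustration, not the paper's actual gradual-saliency-optimization formulation.

```python
import numpy as np

def refine_saliency(affinity, init_saliency, alpha=0.9, n_iter=50, tol=1e-6):
    """Iteratively refine an initial (stage-1) saliency map by propagating
    saliency scores over a region-affinity graph.

    affinity      : (n, n) symmetric non-negative region-affinity matrix
    init_saliency : (n,) initial saliency scores, one per image region
    alpha         : weight on propagated saliency vs. the initial estimate
    """
    # Row-normalize the affinity matrix into a transition matrix.
    row_sums = affinity.sum(axis=1, keepdims=True)
    W = affinity / np.maximum(row_sums, 1e-12)

    s = init_saliency.astype(float).copy()
    for _ in range(n_iter):
        # Blend propagated saliency with the initial estimate so each
        # iteration stays anchored to the first-stage map.
        s_next = alpha * (W @ s) + (1 - alpha) * init_saliency
        if np.abs(s_next - s).max() < tol:  # converged
            s = s_next
            break
        s = s_next
    # Rescale to [0, 1] for use as a saliency map.
    return (s - s.min()) / max(s.max() - s.min(), 1e-12)
```

In this sketch, strongly connected regions pull each other's scores together across iterations, so coherent salient objects brighten while scattered background responses fade, which mirrors the "gradual improvement per iteration" behavior the abstract describes.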

Original language: English (US)
Article number: 8378042
Pages (from-to): 225-241
Number of pages: 17
Journal: IEEE Transactions on Neural Networks and Learning Systems
Issue number: 1
State: Published - Jan 2019

All Science Journal Classification (ASJC) codes

  • Software
  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Science Applications


Keywords

  • Iterative optimization
  • saliency detection
  • saliency map refinement
  • semi-supervised learning


