A learning framework for cognitive interference networks with partial and noisy observations

Marco Levorato, Sina Firouzabadi, Andrea Goldsmith

Research output: Contribution to journal › Article › peer-review



An algorithm is proposed for optimizing the transmission strategy of a secondary user in cognitive networks with imperfect network state observations. The secondary user minimizes the time average of a cost function while causing only a bounded performance loss to the primary users' network. The state of the primary users' network, defined as a collection of variables describing features of the network (e.g., buffer state, ARQ state), evolves over time according to a homogeneous Markov process. The statistics of the Markov process depend on the strategy of the secondary user; thus, the instantaneous idle/transmit action of the secondary user has a long-term impact on the temporal evolution of the network. The Markov process generates a sequence of states in the state space of the network, which projects onto a sequence of observations in the observation space, that is, the set of all possible observations of the secondary user. Based on this sequence of observations, the proposed algorithm iteratively optimizes the strategy of the secondary user with no a priori knowledge of the statistics of the Markov process or of the state-observation probability map.
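The abstract describes online policy optimization from noisy observations of a hidden Markovian network state. As a rough illustration of that setting (not the authors' algorithm), the sketch below runs Q-learning over observations for a secondary user choosing between idle and transmit actions; the two-state network, observation noise level, cost function, and transition probabilities are all invented for this example.

```python
import random

# Illustrative sketch only: a secondary user learns an idle/transmit policy
# from noisy observations of a hidden two-state primary network. The
# state space, costs, and dynamics below are hypothetical.

random.seed(0)

STATES = ["primary_idle", "primary_busy"]   # hidden network state
ACTIONS = ["idle", "transmit"]
OBS_NOISE = 0.2                              # P(observation flips the true state)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1            # learning rate, discount, exploration

def observe(state):
    """Noisy state-observation map: correct with probability 1 - OBS_NOISE."""
    if random.random() < OBS_NOISE:
        return STATES[1 - STATES.index(state)]
    return state

def cost(state, action):
    """Secondary user pays for collisions and for lost opportunities."""
    if action == "transmit":
        return 1.0 if state == "primary_busy" else 0.0  # collision penalty
    return 0.5  # opportunity cost of staying idle

def step(state, action):
    """Hidden-state transition; the secondary action influences the dynamics."""
    p_busy = 0.7 if action == "transmit" else 0.3
    return "primary_busy" if random.random() < p_busy else "primary_idle"

# Q-table indexed by (observation, action), learned online with no prior
# knowledge of the transition statistics or the state-observation map.
Q = {(o, a): 0.0 for o in STATES for a in ACTIONS}

state = "primary_idle"
obs = observe(state)
for _ in range(20000):
    if random.random() < EPS:                          # epsilon-greedy exploration
        action = random.choice(ACTIONS)
    else:                                              # greedy: minimize cost
        action = min(ACTIONS, key=lambda a: Q[(obs, a)])
    c = cost(state, action)
    state = step(state, action)
    next_obs = observe(state)
    best_next = min(Q[(next_obs, a)] for a in ACTIONS)
    Q[(obs, action)] += ALPHA * (c + GAMMA * best_next - Q[(obs, action)])
    obs = next_obs

# Greedy policy extracted from the learned Q-table, one action per observation.
policy = {o: min(ACTIONS, key=lambda a: Q[(o, a)]) for o in STATES}
print(policy)
```

The key simplification here is treating observations as if they were states; the paper instead addresses the harder problem where the observation map itself is unknown and the performance loss to the primary network must stay bounded.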

Original language: English (US)
Article number: 6226310
Pages (from-to): 3101-3111
Number of pages: 11
Journal: IEEE Transactions on Wireless Communications
Issue number: 9
State: Published - 2012
Externally published: Yes

All Science Journal Classification (ASJC) codes

  • Computer Science Applications
  • Electrical and Electronic Engineering
  • Applied Mathematics


Keywords

  • Cognitive networks
  • Markov decision process
  • imperfect observations
  • online learning


