DocumentCode :
1772725
Title :
Online learning for multi-channel opportunistic access over unknown Markovian channels
Author :
Wenhan Dai ; Yi Gai ; Bhaskar Krishnamachari
Author_Institution :
Massachusetts Institute of Technology, Cambridge, MA, USA
fYear :
2014
fDate :
June 30 - July 3, 2014
Firstpage :
64
Lastpage :
71
Abstract :
A fundamental theoretical problem in opportunistic spectrum access is the following: at each time step, a single secondary user must choose one channel to sense and access, where the availability of each channel (governed by primary-user behavior) evolves as a Markov chain. The problem of maximizing the expected channel usage can be formulated as a restless multi-armed bandit. In this paper we present an online learning algorithm with the best known results to date for this problem in the case where the channels are homogeneous and the channel statistics are unknown a priori. Specifically, we show that this policy, which we refer to as CSE, achieves a regret (the gap between the reward accumulated by a model-aware Genie and that of the policy) that is bounded in finite time by a function that scales as O(log t). By explicitly learning the underlying statistics over time, this novel policy outperforms a previously proposed scheme shown to achieve only near-logarithmic regret.
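The setting in the abstract can be illustrated with a small simulation. The sketch below models each channel as a two-state (Gilbert-Elliott) Markov chain and runs a generic UCB1 index policy over sensed availabilities. This is only a minimal illustration of the online-learning problem under assumed transition probabilities; it treats rewards as i.i.d. and therefore ignores the Markov correlation structure, so it is not the CSE policy described in the paper.

```python
import math
import random

def make_channel(p_01, p_10, state=0):
    """Two-state Markov channel: state 1 means the channel is free."""
    def step():
        nonlocal state
        if state == 0:
            state = 1 if random.random() < p_01 else 0
        else:
            state = 0 if random.random() < p_10 else 1
        return state
    return step

def ucb1_spectrum_access(channels, horizon):
    """Single secondary user picks one channel per slot via a UCB1 index.

    Illustrative only: the index uses the empirical mean availability plus
    an exploration bonus, without modeling channel memory.
    """
    n = len(channels)
    counts = [0] * n
    means = [0.0] * n
    total_reward = 0
    for t in range(1, horizon + 1):
        if t <= n:                       # sense each channel once first
            arm = t - 1
        else:                            # pick the largest UCB index
            arm = max(range(n),
                      key=lambda i: means[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        reward = channels[arm]()         # 1 if the chosen channel is free
        counts[arm] += 1
        means[arm] += (reward - means[arm]) / counts[arm]
        total_reward += reward
    return total_reward, counts

random.seed(0)
# Hypothetical transition probabilities; stationary availabilities 0.2, 0.8, 0.5.
chans = [make_channel(0.2, 0.8), make_channel(0.8, 0.2), make_channel(0.5, 0.5)]
reward, pulls = ucb1_spectrum_access(chans, 5000)
print(reward, pulls)
```

Because the second channel has the highest stationary availability, the policy concentrates its sensing there over time; the regret of such index policies against the model-aware Genie is what the paper's O(log t) bound quantifies for its own (Markov-aware) scheme.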
Keywords :
Markov processes; radiocommunication; signal detection; Markov chain; channel statistics; expected channel usage; model-aware Genie; multichannel opportunistic access; near-logarithmic regret; online learning algorithm; opportunistic spectrum access; restless multiarmed bandit; single secondary user; unknown Markovian channels; Bayes methods; Conferences; Markov processes; Sensors; Throughput; Vectors; Logarithmic Regret; Online Learning; Restless Multi-Armed Bandit
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2014 Eleventh Annual IEEE International Conference on Sensing, Communication, and Networking (SECON)
Conference_Location :
Singapore
Type :
conf
DOI :
10.1109/SAHCN.2014.6990328
Filename :
6990328