DocumentCode :
1466970
Title :
Combinatorial Network Optimization With Unknown Variables: Multi-Armed Bandits With Linear Rewards and Individual Observations
Author :
Gai, Yi ; Krishnamachari, Bhaskar ; Jain, Rahul
Author_Institution :
Dept. of Electr. Eng., Univ. of Southern California, Los Angeles, CA, USA
Volume :
20
Issue :
5
fYear :
2012
Firstpage :
1466
Lastpage :
1478
Abstract :
We formulate the following combinatorial multi-armed bandit (MAB) problem: There are N random variables with unknown means that are each instantiated in an i.i.d. fashion over time. At each time, multiple random variables can be selected, subject to an arbitrary constraint on weights associated with the selected variables. All of the selected random variables are observed individually at that time, and the reward is a linearly weighted combination of the selected variables. The goal is to find a policy that minimizes regret, defined as the difference between the reward obtained by a genie that knows the mean of each random variable and that obtained by the given policy. This formulation is broadly applicable and useful for stochastic online versions of many interesting tasks in networks that can be formulated as tractable combinatorial optimization problems with linear objective functions, such as maximum weighted matching, shortest path, and minimum spanning tree computations. Prior work on multi-armed bandits with multiple plays cannot be applied to this formulation because of the general nature of the constraint. On the other hand, mapping all feasible combinations to arms allows prior work on single-play MAB to be used, but results in regret, storage, and computation that grow exponentially in the number of unknown variables. We present new efficient policies for this problem that achieve regret growing logarithmically with time and polynomially in the number of unknown variables. Furthermore, these policies require only storage that grows linearly in the number of unknown parameters. For problems where the underlying deterministic problem is tractable, these policies further require only polynomial computation. For computationally intractable problems, we also present results on a different notion of regret that is suitable when a polynomial-time approximation algorithm is used.
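A minimal illustrative sketch of the kind of policy the abstract describes: an optimistic (UCB-style) index is maintained for each unknown variable, and the underlying deterministic combinatorial problem is solved at each step with these indices as weights. The confidence radius, the unit weights on selected variables, and the top-k oracle in the usage example are illustrative assumptions for this sketch, not necessarily the paper's exact policy or constants.

import math
import random

def ucb_linear_policy(num_vars, oracle, sample_reward, horizon, max_action_size):
    """Sketch of an index policy for combinatorial bandits with linear rewards.
    oracle(weights) -> list of selected variable indices maximizing the linear
    objective under the combinatorial constraint (e.g., a path or matching solver).
    sample_reward(i) -> one i.i.d. observation of variable i.
    max_action_size -> the largest number of variables any feasible action uses."""
    counts = [0] * num_vars        # times each variable has been observed
    means = [0.0] * num_vars       # empirical mean of each variable
    total_reward = 0.0

    for t in range(1, horizon + 1):
        # Optimistic index: empirical mean plus a confidence radius that
        # shrinks as a variable is observed more often (radius is an assumption).
        indices = []
        for i in range(num_vars):
            if counts[i] == 0:
                indices.append(float('inf'))   # force initial exploration
            else:
                bonus = math.sqrt((max_action_size + 1) * math.log(t) / counts[i])
                indices.append(means[i] + bonus)

        # Solve the deterministic combinatorial problem with the indices as weights.
        action = oracle(indices)

        # Every selected variable is observed individually; update its statistics.
        for i in action:
            x = sample_reward(i)
            counts[i] += 1
            means[i] += (x - means[i]) / counts[i]
            total_reward += x   # unit weights assumed for simplicity
    return total_reward

# Usage with a simple top-k constraint (select the k largest-index variables);
# a shortest-path or matching oracle could be plugged in instead.
k = 2
oracle = lambda w: sorted(range(len(w)), key=lambda i: w[i], reverse=True)[:k]
true_means = [0.2, 0.5, 0.8, 0.3]
sample = lambda i: 1.0 if random.random() < true_means[i] else 0.0
ucb_linear_policy(len(true_means), oracle, sample, horizon=10000, max_action_size=k)

Note that storage in this sketch is linear in the number of unknown variables and each step costs one call to the deterministic solver, which mirrors the storage and computation properties the abstract highlights for tractable underlying problems.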
Keywords :
combinatorial mathematics; optimisation; combinatorial multiarmed bandit; combinatorial network optimization; deterministic problem; linear objective functions; linear rewards; multiple random variables; polynomial computation; polynomial time approximation algorithm; unknown variables; Algorithm design and analysis; Indexes; Optimization; Polynomials; Radiation detectors; Random variables; Upper bound; Combinatorial network optimization; multi-armed bandits (MABs); online learning;
fLanguage :
English
Journal_Title :
IEEE/ACM Transactions on Networking
Publisher :
IEEE
ISSN :
1063-6692
Type :
jour
DOI :
10.1109/TNET.2011.2181864
Filename :
6166915