Title :
Information-Theoretic Regret Bounds for Gaussian Process Optimization in the Bandit Setting
Author :
Srinivas, Niranjan ; Krause, Andreas ; Kakade, Sham M. ; Seeger, Matthias W.
Author_Institution :
California Inst. of Technol., Pasadena, CA, USA
Date :
1 May 2012
Abstract :
Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multiarmed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low norm in a reproducing kernel Hilbert space. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze an intuitive Gaussian process upper confidence bound (GP-UCB) algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
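To make the abstract's description concrete, the following is a minimal sketch of the GP-UCB rule on a finite set of arms: maintain a GP posterior over the payoff function and, at each round, query the arm maximizing the upper confidence bound mu(x) + sqrt(beta_t) * sigma(x). The squared-exponential kernel, length scale, noise level, and confidence parameter delta = 0.1 below are illustrative choices, not values prescribed by the paper.

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.2):
    """Squared-exponential covariance k(x, x') = exp(-|x - x'|^2 / (2 l^2))."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-d2 / (2.0 * length_scale ** 2))

def gp_ucb(f, arms, rounds=30, noise=0.1, delta=0.1, seed=0):
    """Run GP-UCB over a finite 1-D arm set; return queried points and noisy payoffs."""
    rng = np.random.default_rng(seed)
    X, y = [], []
    for t in range(1, rounds + 1):
        if not X:
            # Prior: zero mean, unit variance at every arm.
            mu, sigma = np.zeros(len(arms)), np.ones(len(arms))
        else:
            Xa, ya = np.array(X), np.array(y)
            K = rbf_kernel(Xa, Xa) + noise ** 2 * np.eye(len(Xa))
            Ks = rbf_kernel(arms, Xa)
            mu = Ks @ np.linalg.solve(K, ya)
            var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
            sigma = np.sqrt(np.maximum(var, 1e-12))
        # Confidence width beta_t = 2 log(|D| t^2 pi^2 / (6 delta)), as in the
        # paper's finite-arm bound (constants here are purely illustrative).
        beta = 2.0 * np.log(len(arms) * t ** 2 * np.pi ** 2 / (6.0 * delta))
        x = arms[np.argmax(mu + np.sqrt(beta) * sigma)]
        X.append(x)
        y.append(f(x) + noise * rng.standard_normal())
    return np.array(X), np.array(y)

# Usage: maximize a toy unknown payoff with optimum at x = 0.5.
arms = np.linspace(0.0, 1.0, 50)
X, y = gp_ucb(lambda x: -(x - 0.5) ** 2, arms)
```

The key design point the abstract highlights is that the same posterior standard deviation sigma driving exploration also controls the information gained about f, which is what ties the cumulative regret of this rule to the maximal information gain.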
Keywords :
Gaussian processes; Hilbert spaces; information theory; Gaussian process optimization; bandit setting; cumulative regret; information-theoretic regret bounds; intuitive Gaussian process upper confidence bound algorithm; multiarmed bandit problem; payoff function; reproducing kernel Hilbert space; sublinear regret bounds; Bayesian methods; convergence; kernel; noise; optimization; temperature sensors; bandit problems; Bayesian prediction; Gaussian process (GP); experimental design; information gain; nonparametric statistics; online learning; regret bound; statistical learning
Journal_Title :
IEEE Transactions on Information Theory
DOI :
10.1109/TIT.2011.2182033