Title :
How to Schedule Measurements of a Noisy Markov Chain in Decision Making?
Author :
Krishnamurthy, Vikram
Author_Institution :
Dept. of Electr. & Comput. Eng., Univ. of British Columbia, Vancouver, BC, Canada
Abstract :
A decision maker records measurements of a finite-state Markov chain corrupted by noise. The goal is to decide when the Markov chain hits a specific target state. The decision maker can choose from a finite set of sampling intervals to pick the next time to look at the Markov chain. The aim is to optimize an objective comprising a false alarm penalty, a delay cost, and a cumulative measurement sampling cost. Taking more frequent measurements yields more accurate estimates but incurs a higher measurement cost. Declaring the target state too soon incurs a false alarm penalty; waiting too long incurs a delay penalty. What is the optimal sequential strategy for the decision maker? This paper shows that, under reasonable conditions, the optimal strategy has the following intuitive structure: when the Bayesian estimate (posterior distribution) of the Markov chain is far from the target state, look less frequently; when the posterior is close to the target state, look more frequently. Bounds are derived for the optimal strategy. The achievable optimal cost of the sequential detector is also analyzed as a function of the transition dynamics and observation distribution. The sensitivity of the optimal achievable cost to parameter and strategy variations is bounded in terms of the Kullback divergence. Structural results are also obtained for joint optimal sampling and measurement control (active sensing).
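Illustrative Example :
To make the structure concrete, the following is a minimal Python sketch, not the paper's algorithm, of the filtering-and-sampling loop the abstract describes: an HMM filter updates the posterior over a two-state chain after each measurement, and a threshold policy samples sooner when the posterior mass on the target state is high. The two-state chain, the Gaussian observation model, the interval set {1, 5}, the policy threshold tau = 0.3, and the declaration level 0.95 are all illustrative assumptions, not values from the paper.

import numpy as np

P = np.array([[0.95, 0.05],       # transition matrix (per unit time)
              [0.00, 1.00]])      # state 2 (index 1) is the absorbing target
means, sigma = np.array([0.0, 1.0]), 0.5   # observation model: y = means[x] + noise

def filter_update(pi, y, delta):
    """One HMM filter step after waiting `delta` time units."""
    pred = np.linalg.matrix_power(P, delta).T @ pi   # Chapman-Kolmogorov prediction
    lik = np.exp(-0.5 * ((y - means) / sigma) ** 2)  # Gaussian likelihoods
    post = lik * pred
    return post / post.sum()

def choose_interval(pi, tau=0.3):
    """Threshold policy (tau is hypothetical): sample sooner when the
    posterior mass on the target state is high."""
    return 1 if pi[1] > tau else 5

rng = np.random.default_rng(0)
x, pi, t = 0, np.array([1.0, 0.0]), 0
while pi[1] < 0.95:                # declare target at a hypothetical level 0.95
    delta = choose_interval(pi)
    for _ in range(delta):         # simulate the chain between samples
        x = rng.choice(2, p=P[x])
    y = means[x] + sigma * rng.normal()
    pi, t = filter_update(pi, y, delta), t + delta
print(f"declared target at t={t}, posterior={pi[1]:.3f}")

The sketch exhibits the intuitive structure established in the paper: while the posterior stays away from the target state, the long interval is used and measurement cost accrues slowly; once the posterior moves toward the target, sampling becomes frequent until the declaration level is reached.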
Keywords :
Markov processes; decision making; scheduling; Bayesian estimation; Kullback divergence; active sensing; cumulative measurement sampling cost; delay cost; false alarm; finite-state Markov chain; measurement control; noisy Markov chain; optimal sequential sampling; posterior distribution; Bayes methods; delays; noise measurement; state estimation; Bayesian filtering; change detection; partially observed Markov decision process; quickest state estimation; stochastic dominance; stochastic dynamic programming; submodularity;
Journal_Title :
IEEE Transactions on Information Theory
DOI :
10.1109/TIT.2013.2253352