Abstract:
Importance sampling is a variance reduction technique for efficient simulation via a change of measure. In particular, it can be applied to rare event simulation of Markov chains. An optimal change of measure yielding a zero-variance estimator always exists, but it explicitly depends on the unknown quantity of interest; thus it is typically neither known in advance nor available in simulations. In this paper, we investigate the form of optimal importance sampling for estimating state probabilities in discrete-time Markov chains, both over a finite horizon and in steady state. We derive the optimal change of measure using only a general property, without a priori knowledge of the quantity of interest. Our results show that optimal importance sampling for Markov chains cannot be performed by simulating an alternative Markov chain, since the transition probabilities in each step must depend on the history of the already simulated process. This indicates that combined dynamic/adaptive techniques should be applied, and that some traditional approaches are not promising; this holds not only for Markov chains but also for higher-level models such as Petri nets and Markovian queueing models.
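To make the setting concrete, the following minimal sketch (illustrative only, not taken from the paper; the random-walk model, the parameters p and B, and the function name is_estimate are all assumed for illustration) estimates the probability that a random walk with downward drift reaches a buffer level B before emptying. It uses the standard state-independent exponential tilting that swaps the up and down probabilities, i.e. a static change of measure of exactly the kind the paper argues is generally suboptimal, since the truly optimal measure must depend on the history of the simulated path.

    import random

    def is_estimate(p=0.3, B=25, n_runs=10_000, seed=0):
        # Toy rare event (assumed example): a walk on {0, ..., B} steps up with
        # probability p and down with probability q = 1 - p; we estimate
        # gamma = P(hit B before 0 | start at 1), which is rare when p < q.
        rng = random.Random(seed)
        q = 1.0 - p
        # Static exponential change of measure: swap p and q, so the walk
        # drifts toward B under the sampling distribution.
        p_tilt, q_tilt = q, p
        total = 0.0
        for _ in range(n_runs):
            x, weight = 1, 1.0          # current state and accumulated likelihood ratio
            while 0 < x < B:
                if rng.random() < p_tilt:    # step up under the new measure
                    weight *= p / p_tilt     # multiply by original/tilted step probability
                    x += 1
                else:                        # step down
                    weight *= q / q_tilt
                    x -= 1
            if x == B:                       # rare event occurred
                total += weight              # unbiased: indicator times likelihood ratio
        return total / n_runs

    print(is_estimate())  # crude Monte Carlo with 10,000 runs would almost never see this event

The estimator is unbiased because each path's weight is the product of original over tilted transition probabilities, but its variance depends heavily on the chosen measure; the paper's point is that no fixed transition matrix, however well tilted, attains the zero-variance optimum.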
Keywords:
importance sampling; optimal importance sampling; variance reduction; zero-variance estimator; rare event simulation; Markov processes; discrete-time Markov chains; state probabilities; transition probabilities; steady state; Monte Carlo methods; discrete event simulation; stochastic processes; optimisation; probability; dynamic and adaptive techniques; buffer overflow; state estimation; Markovian queueing models; Petri nets