Title of article :
Most likely paths to error when estimating the mean of a reflected random walk
Author/Authors :
Duffy, Ken R. and Meyn, Sean P.
Issue Information :
Journal issue, serial year 2010
Pages :
14
From page :
1290
To page :
1303
Abstract :
It is known that simulation of the mean position of a Reflected Random Walk (RRW) $\{W_n\}$ exhibits non-standard behavior, even for light-tailed increment distributions with negative drift. The Large Deviation Principle (LDP) holds for deviations below the mean, but for deviations at the usual speed above the mean the rate function is null. This paper takes a deeper look at this phenomenon. Conditional on a large sample mean, a complete sample-path LDP analysis is obtained. Let $I$ denote the rate function for the one-dimensional increment process. If $I$ is coercive, then, given a large simulated mean position, our results imply under general conditions that the most likely asymptotic behavior $\psi^*$ of the paths $n^{-1} W_{\lfloor tn \rfloor}$ is to be zero except on an interval $[T_0, T_1] \subset [0,1]$, on which it satisfies the functional equation $\nabla I\big(\tfrac{d}{dt}\psi^*(t)\big) = \lambda^*(T_1 - t)$ whenever $\psi^*(t) \neq 0$. If $I$ is non-coercive, a similar, but slightly more involved, result holds. These results prove, in broad generality, that Monte Carlo estimates of the steady-state mean position of an RRW have a high likelihood of over-estimation. This has serious implications for the performance evaluation of queueing systems by simulation, where the steady-state expected queue length and waiting time are key performance metrics. The results show that naïve simulation estimates of these quantities are highly likely to be conservative.
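To make the setting concrete, the following Python sketch (not a reproduction of the paper's analysis or experiments) simulates a reflected random walk with negative-drift ±1 increments, a case in which the stationary mean is available in closed form, and collects independent Monte Carlo time-average estimates so the skew of the estimation error can be inspected. The increment distribution, parameter values, and function names are illustrative assumptions, not taken from the paper.

import numpy as np

# Sketch under assumed parameters: reflected random walk
# W_{n+1} = max(W_n + X_{n+1}, 0) with increments X in {+1, -1},
# P(X = +1) = p < 1/2 (negative drift). For this birth-death chain the
# stationary mean is rho / (1 - rho) with rho = p / (1 - p), which gives
# a known target to compare the naive time-average estimates against.

rng = np.random.default_rng(0)

def sample_mean_position(p: float, n: int) -> float:
    """Simulate one path of length n and return its time-averaged position."""
    steps = rng.choice([1, -1], size=n, p=[p, 1 - p])
    w, total = 0.0, 0.0
    for x in steps:
        w = max(w + x, 0.0)   # Lindley-type reflection at zero
        total += w
    return total / n

p, n, runs = 0.4, 10_000, 200
rho = p / (1 - p)
true_mean = rho / (1 - rho)          # equals 2.0 for p = 0.4

estimates = np.array([sample_mean_position(p, n) for _ in range(runs)])
print(f"true steady-state mean   : {true_mean:.3f}")
print(f"median of MC estimates   : {np.median(estimates):.3f}")
print(f"fraction above true mean : {(estimates > true_mean).mean():.2f}")

Inspecting a histogram of these estimates typically shows a long right tail: very large over-estimates occur far more readily than comparably large under-estimates, which is the asymmetry the abstract's path-level LDP result characterizes.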
Keywords :
Queue-length , Waiting time , Simulation mean position , Most likely paths , Reflected random walks , Large deviations
Journal title :
Performance Evaluation
Serial Year :
2010
Record number :
1570505