Title of article :
Stochastic scheduling and forwards induction
Author/Authors :
K.D. Glazebrook
Issue Information :
Journal issue, serial year 1995
Pages :
21
From page :
145
To page :
165
Abstract :
We consider the problem (J, Γ) of allocating a single machine to the stochastic tasks in J in such a way that the precedence constraints Γ are respected. If rewards are discounted and additive, then the problem of determining an optimal scheduling policy in the class of fully preemptive policies can be formulated as a discounted Markov decision process (MDP). Policies are developed by utilising a principle of forwards induction (FI). Such policies may be thought of as quasi-myopic in that they make choices which maximise a natural measure of the reward rate currently available. A condition is given which is (necessary and) sufficient for the optimality of FI policies and which is satisfied when Γ is an out-forest. The notion of reward rate used to develop FI policies can also be used to derive performance bounds for general scheduling policies. These bounds can be used to make probabilistic statements about heuristics (i.e., for randomly chosen (J, Γ)). The FI approach can also be used to develop policies for general discounted MDPs. Performance bounds are available which may be used to make probabilistic statements about the performance of FI policies in more complex scheduling environments where optimality results are not available.
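The quasi-myopic flavour of a forwards-induction policy can be illustrated in the deterministic special case: at each decision point, among the tasks whose predecessors have completed, schedule the one maximising a discounted reward-rate index. The sketch below is not code from the paper; the index formula used is the standard one for a deterministic job paying reward r at completion after t periods under discount factor β (for stochastic tasks it would be an expectation over stopping times), and the function names and task encoding are hypothetical.

```python
def fi_index(reward, duration, beta):
    # Discounted reward-rate index of a deterministic task:
    # reward earned at completion, per unit of discounted time.
    # (Hypothetical simplification of the stochastic index in the paper.)
    return reward * beta**duration / (1.0 - beta**duration)

def fi_schedule(tasks, prec, beta=0.9):
    """Greedy forwards-induction order.

    tasks: {name: (reward, duration)}
    prec:  {name: parent} -- parent must finish first (out-forest);
           tasks absent from prec have no predecessor.
    """
    done, order = set(), []
    while len(done) < len(tasks):
        # tasks whose (unique) predecessor, if any, has completed
        avail = [t for t in tasks if t not in done
                 and (prec.get(t) is None or prec[t] in done)]
        # quasi-myopic choice: highest currently available reward rate
        best = max(avail, key=lambda t: fi_index(*tasks[t], beta))
        order.append(best)
        done.add(best)
    return order
```

For example, with tasks `{"a": (10, 2), "b": (1, 1), "c": (20, 3)}` and `prec = {"c": "a"}`, the high-index task c is blocked until a completes, so the policy runs a first, then c, then b.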
Journal title :
Discrete Applied Mathematics
Serial Year :
1995
Record number :
884178