Title :
Dynamic lead time promising
Author :
Reindorp, Matthew J.; Fu, Michael C.
Author_Institution :
Department of Industrial Engineering & Innovation Sciences, Eindhoven University of Technology, Eindhoven, Netherlands
Abstract :
We consider a make-to-order business that serves customers in multiple priority classes. Orders from higher-class customers bring greater revenue, but these customers expect shorter lead times than those in lower classes. In making lead time promises, the firm must account for preexisting order commitments, uncertainty about future demand from each class, and the possibility of supply chain disruptions. We model this scenario as a Markov decision problem and use reinforcement learning to determine the firm's lead time policy. To achieve tractability on large problems, we adopt a sequential decision-making approach that effectively eliminates one dimension from the state space of the system. Initial numerical results suggest that the policies obtained with this sequential dynamic approach approximate optimal policies more closely than those obtained with static optimization approaches.
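To make the formulation concrete, below is a minimal, self-contained sketch of the kind of reinforcement-learning setup the abstract describes: tabular Q-learning applied to lead-time quoting for two priority classes. The state (current backlog plus the class of the arriving customer), the set of quotable lead times, the acceptance probabilities, and the revenue and lateness penalties are all illustrative assumptions rather than values from the paper, and the paper's sequential, dimension-reducing scheme is not reproduced here.

```python
# Sketch only (not the authors' implementation): tabular Q-learning for
# lead-time quoting in a toy make-to-order system. All numbers below are
# illustrative assumptions.
import random
from collections import defaultdict

CLASSES = {0: 10.0, 1: 4.0}        # class -> revenue; class 0 expects short lead times
LEAD_TIMES = [1, 2, 3, 4]          # quotable lead times (periods)
MAX_BACKLOG = 8                    # capacity of the order book in this toy model
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1 # learning rate, discount factor, exploration rate

Q = defaultdict(float)             # Q[(backlog, customer class, quoted lead time)]

def accept_prob(cls, quote):
    """Higher-class customers (lower index) are less tolerant of long quotes."""
    tolerance = 2 if cls == 0 else 4
    return 1.0 if quote <= tolerance else 0.2

def step(backlog, cls, quote):
    """Simulate one arrival: quote a lead time, observe reward and next state."""
    reward = 0.0
    if random.random() < accept_prob(cls, quote):
        if backlog + 1 > MAX_BACKLOG or quote < backlog:  # promise cannot be kept
            reward = -5.0                                 # lateness / disruption penalty
        else:
            reward = CLASSES[cls]                         # revenue from the booked order
            backlog += 1
    backlog = max(backlog - 1, 0)                         # one unit of work completed
    next_cls = random.choice(list(CLASSES))               # class of the next arrival
    return reward, backlog, next_cls

def choose(backlog, cls):
    """Epsilon-greedy selection of a lead-time quote."""
    if random.random() < EPS:
        return random.choice(LEAD_TIMES)
    return max(LEAD_TIMES, key=lambda a: Q[(backlog, cls, a)])

backlog, cls = 0, random.choice(list(CLASSES))
for _ in range(200_000):
    quote = choose(backlog, cls)
    reward, nb, ncls = step(backlog, cls, quote)
    best_next = max(Q[(nb, ncls, a)] for a in LEAD_TIMES)
    Q[(backlog, cls, quote)] += ALPHA * (reward + GAMMA * best_next - Q[(backlog, cls, quote)])
    backlog, cls = nb, ncls

# Greedy lead-time quote learned for small backlog levels and each class.
for b in range(4):
    for c in CLASSES:
        best = max(LEAD_TIMES, key=lambda a: Q[(b, c, a)])
        print(f"backlog={b} class={c}: quote {best}")
```

Running the script prints the greedy quote learned for each small backlog level and customer class; in the setting of the paper, the state would additionally carry the existing commitment schedule and disruption information.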
Keywords :
Markov processes; decision making; lead time reduction; learning (artificial intelligence); optimisation; supply chain management; supply chains; Markov decision problem; dynamic lead time promising; make-to-order business; multiple priority class; optimal policy approximation; reinforcement learning; sequential decision making; static optimization; supply chain disruption; Learning; Markov processes; Schedules; Supply chains
Conference_Title :
2011 IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning (ADPRL)
Conference_Location :
Paris
Print_ISBN :
978-1-4244-9887-1
DOI :
10.1109/ADPRL.2011.5967376