DocumentCode
1101482
Title
(P, p) Retraining Policies
Author
Katsikopoulos, Konstantinos V.
Author_Institution
Max Planck Inst. for Human Dev., Berlin
Volume
37
Issue
5
fYear
2007
Firstpage
609
Lastpage
613
Abstract
Skills that are practiced infrequently need to be retrained. A retraining policy is optimal if it minimizes the cost of keeping the probability that the skill is learned within two bounds. The (P, p) policy is to retrain only when the probability that the skill is learned has dropped to just above the lower bound, so that this probability is brought up to just below the upper bound. Under minimal assumptions on the cost function, a set of two easy-to-check conditions involving the relearning and forgetting functions guarantees the optimality of the (P, p) policy. The conditions hold for the power functions proposed in the psychology of learning and forgetting, but not for exponential functions.
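The sketch below illustrates the (P, p) rule described in the abstract: monitor the probability that the skill is learned as it decays, retrain only when it has dropped to the lower bound p, and bring it back up to the upper bound P. This is a minimal illustration, not the paper's formulation; the specific power-law forgetting curve, the unit retraining cost, and all parameter values and function names are assumptions chosen for the example.

def power_law_forgetting(prob_after_training: float, elapsed: float,
                         decay: float = 0.3) -> float:
    """Assumed power-law decay of the probability that the skill is learned."""
    return prob_after_training / (1.0 + elapsed) ** decay


def simulate_Pp_policy(P: float = 0.95, p: float = 0.60,
                       horizon: int = 100, retraining_cost: float = 1.0) -> float:
    """Simulate the (P, p) retraining rule and return the total retraining cost.

    Retrain only when the probability has dropped to the lower bound p,
    which brings the probability back up to the upper bound P.
    """
    total_cost = 0.0
    prob_at_last_training = P      # assume the skill starts trained up to P
    time_since_training = 0.0
    for _ in range(horizon):
        time_since_training += 1.0
        prob = power_law_forgetting(prob_at_last_training, time_since_training)
        if prob <= p:              # lower bound reached: retrain
            total_cost += retraining_cost
            prob_at_last_training = P
            time_since_training = 0.0
    return total_cost


if __name__ == "__main__":
    print(simulate_Pp_policy())

Swapping power_law_forgetting for an exponential decay shows how the retraining schedule changes with the forgetting function, which is the contrast the abstract draws between power and exponential functions.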
Keywords
training; cost function; exponential functions; forgetting functions; lower bound; probability; relearning functions; retraining policy; upper bound; dynamic programming; earthquakes; floods; humans; inventory management; memory management; psychology; testing; instruction; memory; optimality
fLanguage
English
Journal_Title
IEEE Transactions on Systems, Man, and Cybernetics - Part A: Systems and Humans
Publisher
IEEE
ISSN
1083-4427
Type
jour
DOI
10.1109/TSMCA.2007.902620
Filename
4292223