DocumentCode
1049067
Title
Reinforcement learning for reactive power control
Author
Vlachogiannis, John G. ; Hatziargyriou, Nikos D.
Author_Institution
Informatics & Comput. Technol. Dept., Technol. Educ. Inst. of Lamia, Greece
Volume
19
Issue
3
fYear
2004
Firstpage
1317
Lastpage
1325
Abstract
This paper presents a Reinforcement Learning (RL) method for the network-constrained setting of control variables. The RL method formulates the constrained load flow problem as a multistage decision problem. More specifically, the model-free learning algorithm (Q-learning) learns by experience how to adjust a closed-loop control rule mapping states (load flow solutions) to control actions (offline control settings) by means of reward values. Rewards are chosen to express how well control actions satisfy the operating constraints. The Q-learning algorithm is applied to the IEEE 14-busbar and IEEE 136-busbar systems for constrained reactive power control. The results are compared with those given by the probabilistic constrained load flow based on sensitivity analysis, demonstrating the advantages and flexibility of the Q-learning algorithm. Computing times are also compared with those of another heuristic method.
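The abstract describes model-free Q-learning: states (load flow solutions) are mapped to control actions via reward values. As an illustration only, the following is a minimal tabular Q-learning sketch; it is not the authors' code, and the toy reward, transition function, state/action counts, and parameter values are all hypothetical stand-ins (in the paper, states are load flow solutions and actions are reactive power control settings, with rewards reflecting constraint satisfaction).

```python
# Minimal tabular Q-learning sketch (illustrative only, not the paper's code).
# States stand in for discretized load-flow solutions; actions for discrete
# control settings; the reward is a hypothetical constraint-satisfaction score.
import random

def q_learning(n_states, n_actions, reward, transition,
               episodes=500, steps=20, alpha=0.1, gamma=0.9,
               epsilon=0.1, seed=0):
    rng = random.Random(seed)
    # Q[s][a] estimates the long-run value of taking action a in state s.
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s = rng.randrange(n_states)          # random initial operating point
        for _ in range(steps):
            # epsilon-greedy action selection: mostly exploit, sometimes explore
            if rng.random() < epsilon:
                a = rng.randrange(n_actions)
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])
            s_next = transition(s, a)        # next state after applying control
            r = reward(s, a)                 # how well constraints are met
            # standard Q-learning update toward the bootstrapped target
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q

if __name__ == "__main__":
    # Toy problem: in state s, action s % 2 "satisfies the constraints" (+1),
    # the other action violates them (-1); the system then moves on.
    Q = q_learning(n_states=3, n_actions=2,
                   reward=lambda s, a: 1.0 if a == s % 2 else -1.0,
                   transition=lambda s, a: (s + 1) % 3)
    greedy = [max(range(2), key=lambda a: Q[s][a]) for s in range(3)]
    print(greedy)  # greedy policy learned from experience
```

After training, the greedy policy (argmax over each row of Q) picks the constraint-satisfying action in every state, mirroring how the paper's control rule is read off the learned Q-table.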
Keywords
closed loop systems; learning (artificial intelligence); load flow control; power engineering computing; probability; reactive power control; sensitivity analysis; IEEE 136 busbar system; Q-learning; closed-loop control; load flow; model-free learning algorithm; multistage decision; reactive power control; reinforcement learning; sensitivity analysis; Constraint optimization; Dynamic programming; Educational technology; Learning; Load flow; Optimal control; Power system analysis computing; Power system dynamics; Reactive power control; Sensitivity analysis; Constrained load flow; Q-learning algorithm; reinforcement learning;
fLanguage
English
Journal_Title
Power Systems, IEEE Transactions on
Publisher
ieee
ISSN
0885-8950
Type
jour
DOI
10.1109/TPWRS.2004.831259
Filename
1318666
Link To Document