DocumentCode :
3576330
Title :
Minimizing expected loss for risk-avoiding reinforcement learning
Author :
Jung-Jung Yeh ; Tsung-Ting Kuo ; William Chen ; Shou-De Lin
Author_Institution :
Nat. Taiwan Univ., Taipei, Taiwan
fYear :
2014
Firstpage :
11
Lastpage :
17
Abstract :
This paper considers the design of a reinforcement learning (RL) agent that can strike a balance between return and risk. First, we discuss several favorable properties that an RL risk model should satisfy, and then propose a definition of risk based on expected negative rewards. We also design a Q-decomposition-based framework that allows an RL agent to control the balance between risk and profit. Experiments on both artificial and real-world stock datasets demonstrate that the proposed risk model satisfies the beneficial properties of an RL-based risk learning model and significantly outperforms other approaches in risk avoidance.
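The abstract describes a Q-decomposition framework that separates profit from risk, where risk is defined via expected negative rewards. A minimal sketch of that idea follows; the paper's exact update rules, environments, and hyperparameters are not given in this record, so the `lam` trade-off weight, the learning rates, and the tabular setup below are all illustrative assumptions.

```python
import random

class RiskAwareAgent:
    """Tabular sketch of a Q-decomposition agent: one table estimates
    expected return, a second estimates expected loss (negative rewards),
    and action selection trades them off with a weight `lam`."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, lam=0.5):
        self.actions = actions
        self.alpha, self.gamma, self.lam = alpha, gamma, lam
        self.q_profit = {}  # expected return per (state, action)
        self.q_risk = {}    # expected negative rewards per (state, action)

    def _q(self, table, s, a):
        return table.setdefault((s, a), 0.0)

    def combined(self, s, a):
        # Balance profit against risk: higher lam means more risk-averse.
        return self._q(self.q_profit, s, a) - self.lam * self._q(self.q_risk, s, a)

    def act(self, s, eps=0.1):
        # Epsilon-greedy selection on the combined (risk-adjusted) value.
        if random.random() < eps:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.combined(s, a))

    def update(self, s, a, r, s_next):
        # The profit table learns from the full reward, while the risk
        # table accumulates only the loss part (negative rewards).
        loss = max(-r, 0.0)
        for table, target in ((self.q_profit, r), (self.q_risk, loss)):
            q = self._q(table, s, a)
            nxt = max(self._q(table, s_next, b) for b in self.actions)
            table[(s, a)] = q + self.alpha * (target + self.gamma * nxt - q)
```

With `lam = 0`, the agent reduces to an ordinary return-maximizing Q-learner; increasing `lam` penalizes actions whose estimated expected loss is high, which is the balance knob the abstract refers to.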
Keywords :
learning (artificial intelligence); multi-agent systems; Q-decomposition-based framework; RL agent; RL risk model; RL-based risk learning model; expected loss minimization; expected negative rewards; reinforcement learning agent; risk-avoiding reinforcement learning; Finance; Investment; Learning (artificial intelligence); Loss measurement; profit model; reinforcement learning; risk avoiding; risk model
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2014 International Conference on Data Science and Advanced Analytics (DSAA)
Type :
conf
DOI :
10.1109/DSAA.2014.7058045
Filename :
7058045