DocumentCode :
2222351
Title :
Can OneMax help optimizing LeadingOnes using the EA+RL method?
Author :
Buzdalov, Maxim ; Buzdalova, Arina
Author_Institution :
ITMO University, 49 Kronverkskiy av., Saint-Petersburg, Russia, 197101
fYear :
2015
fDate :
25-28 May 2015
Firstpage :
1762
Lastpage :
1768
Abstract :
There exist optimization problems with a target objective, which is to be optimized, and several extra objectives, which can be helpful in the optimization process. The EA+RL method is designed to control optimization algorithms that solve problems with extra objectives; it uses reinforcement learning for adaptive online selection of objectives. In this paper we investigate whether OneMax helps to optimize LeadingOnes when the EA+RL method is used. We consider the LeadingOnes+OneMax problem, where the target objective is LeadingOnes and the only extra objective is OneMax. The following expected running times are proven for optimization starting from a random bit vector in the case of randomized local search (RLS): n²/2 for LeadingOnes alone; n²/3 for LeadingOnes+OneMax when the reinforcement learning state equals the LeadingOnes fitness or when objectives are selected at random; and n²/4 + o(n²) when there is a single reinforcement learning state and the greedy exploration strategy is used. The case of starting with all bits set to zero is also considered. Thus OneMax helps, although not by much, to optimize LeadingOnes when RLS is used. However, this does not hold for the (1 + 1) evolutionary algorithm, as shown experimentally.
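The random-objective-selection variant analyzed in the abstract can be illustrated with a short sketch (not the authors' code; function names and the acceptance rule are assumptions based on the standard RLS setup): each step flips a single uniformly chosen bit and accepts the offspring if a uniformly chosen objective, LeadingOnes or OneMax, does not decrease.

```python
import random

def leading_ones(x):
    # Target objective: number of leading 1-bits.
    count = 0
    for bit in x:
        if bit != 1:
            break
        count += 1
    return count

def one_max(x):
    # Extra objective: total number of 1-bits.
    return sum(x)

def rls_random_selection(n, seed=None):
    """RLS on a length-n bit string with random objective selection:
    each iteration flips one uniformly chosen bit and keeps the result
    if a uniformly chosen objective (LeadingOnes or OneMax) does not
    decrease.  Returns the number of iterations until LeadingOnes is
    maximized."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]  # random initial vector
    steps = 0
    while leading_ones(x) < n:
        steps += 1
        y = x[:]
        y[rng.randrange(n)] ^= 1           # flip exactly one bit
        f = rng.choice((leading_ones, one_max))
        if f(y) >= f(x):                    # accept if chosen objective holds
            x = y
    return steps
```

Note that a single-bit flip accepted under OneMax necessarily turns a 0 into a 1, which can never decrease LeadingOnes, so the process always reaches the all-ones optimum.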
Keywords :
Algorithm design and analysis; Evolutionary computation; Learning (artificial intelligence); Markov processes; Optimization; Silicon; Switches;
fLanguage :
English
Publisher :
IEEE
Conference_Title :
2015 IEEE Congress on Evolutionary Computation (CEC)
Conference_Location :
Sendai, Japan
Type :
conf
DOI :
10.1109/CEC.2015.7257100
Filename :
7257100