Author :
Gilbert L. Peterson;Christopher B. Mayer;Kevin Cousin
Author_Institution :
Department of Electrical and Computer Engineering, Air Force Institute of Technology, 2950 Hobson Way, Wright-Patterson AFB, OH 45431
Date :
6/1/2011 12:00:00 AM
Abstract :
Ant colony optimization (ACO) algorithms can generate quality solutions to combinatorial optimization problems. However, like many stochastic algorithms, the quality of solutions worsens as problem sizes grow. In an effort to increase performance, we added the variable step size off-policy hill-climbing algorithm called PDWoLF (Policy Dynamics Win or Learn Fast) to several ant colony algorithms: Ant System, Ant Colony System, Elitist Ant System, Rank-based Ant System, and Max-Min Ant System. Easily integrated into each ACO algorithm, the PDWoLF component maintains a set of policies separate from the ant colony's pheromone. Similar to pheromone but with different update rules, the PDWoLF policies provide a second estimate of solution quality and guide the construction of solutions. Experiments on large traveling salesman problems (TSPs) show that incorporating PDWoLF with the aforementioned ACO algorithms that do not make use of local optimizations produces shorter tours than the ACO algorithms alone.
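The following is a minimal sketch of how a PDWoLF-style policy term could bias an ant's tour construction alongside pheromone. The combination rule, exponents, and the names tau, pi, eta, alpha, beta, and gamma are illustrative assumptions for exposition only, not the paper's exact formulation or update rules.

import random

def choose_next_city(current, unvisited, tau, pi, eta,
                     alpha=1.0, beta=2.0, gamma=1.0):
    """Pick the next TSP city by mixing pheromone (tau), heuristic
    desirability (eta, e.g. 1/distance), and a PDWoLF-style policy
    value (pi) acting as a second estimate of solution quality."""
    weights = [(tau[current][j] ** alpha) *
               (eta[current][j] ** beta) *
               (pi[current][j] ** gamma)   # policy bias on top of pheromone
               for j in unvisited]
    # Roulette-wheel selection proportional to the combined weight.
    total = sum(weights)
    r, acc = random.random() * total, 0.0
    for j, w in zip(unvisited, weights):
        acc += w
        if acc >= r:
            return j
    return unvisited[-1]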
Keywords :
"Cities and towns","Heuristic algorithms","Equations","Traveling salesman problems","Optimization","Joining processes","Schedules"
Conference_Titel :
2011 IEEE Congress on Evolutionary Computation (CEC)
Print_ISBN :
978-1-4244-7834-7
Electronic_ISSN :
1941-0026
DOI :
10.1109/CEC.2011.5949726