Author Institution :
Dept. of Electr. & Comput. Eng., Univ. of Louisville, Louisville, KY, USA
Abstract :
When developing a robot or other automaton, the efficacy of the agent depends heavily on the performance of the behaviors that underpin its control system. Especially for agents that must act in real-world or disorganized environments, the design of robust behaviors can be difficult and time-consuming, and often requires sensitive tuning. In response to this need, we present a behavioral, goal-oriented, reinforcement-based machine learning strategy that is flexible, simple to implement, and designed for application in real-world environments while retaining the capability of software-based training. In this paper, we explain our design paradigms, their formal implementation, and the algorithm proper. We show that the algorithm can emulate standard reinforcement learning within comparable training time and extend its capabilities. We also demonstrate extension of learning beyond the scope of the training examples, and present an example of a physical robot that learns a sequential-action behavior by experimentation.
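For context on the "standard reinforcement learning" baseline the abstract compares against, the following is a minimal sketch of tabular Q-learning on a toy grid world. The environment, constants, and helper functions here are illustrative assumptions only; this is not the paper's proposed behavioral, goal-oriented algorithm.

```python
# Hypothetical baseline sketch: standard tabular Q-learning on a 5x5 grid world.
# This illustrates the conventional method the paper's algorithm is compared
# against; it does not reproduce the authors' approach.
import random

GRID = 5                                       # assumed toy grid size
GOAL = (4, 4)                                  # assumed goal cell
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # right, left, down, up
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1          # typical Q-learning settings

# Q-table over (state, action) pairs, initialized to zero.
Q = {((x, y), a): 0.0
     for x in range(GRID) for y in range(GRID)
     for a in range(len(ACTIONS))}

def step(state, action):
    """Apply an action, clamp to the grid, and return (next_state, reward)."""
    dx, dy = ACTIONS[action]
    nxt = (min(max(state[0] + dx, 0), GRID - 1),
           min(max(state[1] + dy, 0), GRID - 1))
    return nxt, (1.0 if nxt == GOAL else -0.01)

def choose(state):
    """Epsilon-greedy action selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(state, a)])

for episode in range(500):
    s = (0, 0)
    while s != GOAL:
        a = choose(s)
        s2, r = step(s, a)
        # Standard Q-learning update:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(Q[(s2, a2)] for a2 in range(len(ACTIONS)))
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2
```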
Keywords :
control engineering computing; learning (artificial intelligence); multi-agent systems; robots; behavioral goal-oriented reinforcement-based machine learning strategy; comparable training time; control system; disorganized environments; formal implementation; modified reinforcement learning; physical robot; reinforcement learning; sensitive tuning; sequential action behaviors; software-based training; Learning automata; Standards; Three-dimensional displays; Vectors; Algorithms; Behavior Based Robotics; Behaviors; Machine learning; Operant Conditioning; Probabilistic learning; Robot control