• DocumentCode
    3756919
  • Title

    A Demonstration of Stability-Plasticity Imbalance in Multi-agent, Decomposition-Based Learning

  • Author

    Sean C. Mondesire; R. Paul Wiegand

  • Author_Institution
    Electr. Eng. &
  • fYear
    2015
  • Firstpage
    1070
  • Lastpage
    1075
  • Abstract
    Layered learning is a machine learning paradigm used in conjunction with direct-policy-search reinforcement learning methods to find high-performance agent behaviors for complex tasks. At its core, layered learning is a decomposition-based paradigm that shares many characteristics with robot shaping, transfer learning, hierarchical decomposition, and incremental learning. Previous studies have provided evidence that layered learning can outperform standard monolithic methods of learning in many cases. The dilemma of balancing stability and plasticity is a common problem in machine learning that forces learning agents to compromise between retaining the learned information needed to perform a task and incorporating new incoming information. Although existing work implies that there is a stability-plasticity imbalance that greatly limits layered learning agents' ability to learn optimally, no work explicitly verifies the existence of the imbalance or its causes. This work investigates the stability-plasticity imbalance and demonstrates that layered learning indeed heavily favors plasticity, which can cause learned subtask proficiency to be lost when new tasks are learned. We conclude by identifying potential causes of the imbalance in layered learning and provide high-level advice on how to mitigate the imbalance's negative effects.
  • Keywords
    "Learning (artificial intelligence)","Fuels","Robots","Training","Navigation","Search problems","Performance evaluation"
  • Publisher
    ieee
  • Conference_Titel
    2015 IEEE 14th International Conference on Machine Learning and Applications (ICMLA)
  • Type
    conf

  • DOI
    10.1109/ICMLA.2015.106
  • Filename
    7424462