• DocumentCode
    2493910
  • Title
    Incremental and decremental LDA learning with applications
  • Author
    Pang, Shaoning; Ban, Tao; Kadobayashi, Youki; Kasabov, Nikola
  • Author_Institution
    Knowledge Eng. & Discovery Res. Inst., Auckland Univ. of Technol., Auckland, New Zealand
  • fYear
    2010
  • fDate
    18-23 July 2010
  • Firstpage
    1
  • Lastpage
    8
  • Abstract
    To adapt linear discriminant analysis (LDA) to real-world applications, there is a pressing need to provide it with three capabilities: an incremental learning ability to integrate knowledge presented by one-pass data streams, a functionality to join multiple LDA models so that knowledge sharing between independent learning agents becomes more efficient, and a forgetting functionality to avoid reconstructing the overall discriminant eigenspace after irregular changes. To this end, we introduce two adaptive LDA learning methods, LDA merging and LDA splitting, which offer the following merits: online learning from one-pass data streams; class separability identical to that of the batch learning method; efficient knowledge sharing, thanks to the condensed knowledge representation of the eigenspace model; and lower time and storage costs than traditional approaches under common application conditions. These properties are validated by experiments on a benchmark face image dataset. Through a case study applying the proposed methods to multi-agent cooperative learning and to system alternation of a face recognition system, we further demonstrate their adaptability to complex dynamic learning tasks.
  • Keywords
    knowledge representation; learning (artificial intelligence); multi-agent systems; LDA merging; LDA splitting; adaptive LDA learning methods; batch learning method; benchmark face image dataset; complex dynamic learning tasks; face recognition system; incremental learning ability; independent learning agents; knowledge-sharing; linear discriminant analysis; multi-agent cooperative learning; one-pass data streams; online learning; Computational modeling; ISO standards
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    The 2010 International Joint Conference on Neural Networks (IJCNN)
  • Conference_Location
    Barcelona
  • ISSN
    1098-7576
  • Print_ISBN
    978-1-4244-6916-1
  • Type
    conf
  • DOI
    10.1109/IJCNN.2010.5596727
  • Filename
    5596727
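
The LDA-merging idea summarized in the abstract, joining two independently trained discriminant models without revisiting the raw data, can be illustrated by pooling per-class sufficient statistics (counts, means, within-class scatter) and re-solving the discriminant eigenproblem. This is a minimal sketch under that assumption: the paper itself merges condensed eigenspace models rather than full scatter matrices, and all function names below are illustrative, not the authors' API.

```python
import numpy as np

def merge_lda_stats(stats_a, stats_b):
    """Pool per-class sufficient statistics of two LDA models.

    Each stats dict maps class label -> (n, mean, Sw), where Sw is that
    class's within-class scatter matrix. Illustrative sketch only; the
    paper's LDA merging operates on condensed eigenspace models instead.
    """
    merged = {c: (n, mu.copy(), Sw.copy()) for c, (n, mu, Sw) in stats_a.items()}
    for c, (n_b, mu_b, Sw_b) in stats_b.items():
        if c not in merged:
            merged[c] = (n_b, mu_b.copy(), Sw_b.copy())
            continue
        n_a, mu_a, Sw_a = merged[c]
        n = n_a + n_b
        mu = (n_a * mu_a + n_b * mu_b) / n
        d = (mu_a - mu_b).reshape(-1, 1)
        # Exact pooled scatter: sum of the parts plus a rank-one
        # correction for the shift between the two chunk means.
        Sw = Sw_a + Sw_b + (n_a * n_b / n) * (d @ d.T)
        merged[c] = (n, mu, Sw)
    return merged

def discriminant_axes(stats, n_components=1):
    """Leading solutions of the generalized eigenproblem Sb w = lambda Sw w."""
    n_total = sum(n for n, _, _ in stats.values())
    grand = sum(n * mu for n, mu, _ in stats.values()) / n_total
    dim = grand.shape[0]
    Sw = np.zeros((dim, dim))
    Sb = np.zeros((dim, dim))
    for n, mu, S in stats.values():
        Sw += S
        diff = (mu - grand).reshape(-1, 1)
        Sb += n * (diff @ diff.T)
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(evals.real)[::-1]
    return evecs.real[:, order[:n_components]]
```

Because the merge is exact on the pooled statistics, the discriminant axes computed after merging two chunk-wise models coincide with those of batch LDA on the union of the chunks, which is the "class separability identical to batch learning" property the abstract claims; the decremental (splitting) direction would subtract statistics analogously.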