Title :
Parallel Learning of Large Fuzzy Cognitive Maps
Author :
Stach, Wojciech ; Kurgan, Lukasz ; Pedrycz, Witold
Author_Institution :
Univ. of Alberta, Edmonton
Abstract :
Fuzzy cognitive maps (FCMs) are a class of discrete-time artificial neural networks used to model dynamic systems. A recently introduced supervised learning method, based on a real-coded genetic algorithm (RCGA), allows high-quality FCMs to be learned from historical data. The current bottleneck of this learning method is its scalability, which stems from the large continuous search space (quadratic in the size of the FCM) and the computational complexity of genetic optimization. The goal of this paper is therefore to exploit the parallel nature of genetic algorithms to alleviate the scalability problem. We use the global single-population master-slave parallelization method to speed up the FCM learning method. We investigate the influence of different hardware architectures on the computational time of the learning method by executing a wide range of synthetic and real-life benchmarking tests. We analyze the quality of the proposed parallel learning method when applied to both dense and sparse large FCMs, i.e., maps that consist of several dozen concepts. The parallelization is shown to provide substantial speed-ups, doubling the size of the FCM that can be learned when 8 processors are used.
Keywords :
benchmark testing; cognition; computational complexity; discrete time systems; fuzzy neural nets; genetic algorithms; learning (artificial intelligence); parallel algorithms; FCM; RCGA; benchmarking test; computational complexity; discrete-time artificial neural networks; fuzzy cognitive maps; hardware architecture; parallel learning; real-coded genetic algorithm; single-population master-slave parallelization method; supervised learning method; Artificial neural networks; Computational complexity; Fuzzy cognitive maps; Genetic algorithms; Hardware; Learning systems; Master-slave; Optimization methods; Scalability; Supervised learning;
Conference_Title :
Neural Networks, 2007. IJCNN 2007. International Joint Conference on
Conference_Location :
Orlando, FL
Print_ISBN :
978-1-4244-1379-9
Electronic_ISSN :
1098-7576
DOI :
10.1109/IJCNN.2007.4371194