Title :
Task decomposition based on output parallelism
Author :
Guan, Sheng-Uei ; Li, Shanchun
Author_Institution :
Dept. of Electr. & Comput. Eng., Nat. Univ. of Singapore, Singapore
Abstract :
In this paper, we propose a new method for task decomposition based on output parallelism, in order to find appropriate architectures for large-scale real-world problems automatically and efficiently. Using this method, a problem can be divided flexibly into several sub-problems, each composed of the whole input vector and a fraction of the output vector. Each module (one per sub-problem) is responsible for producing a fraction of the output vector of the original problem, so the hidden structure for the original problem's output units is decoupled. These modules can be grown and trained in sequence or in parallel. Incorporated with a constructive learning algorithm, our method requires neither excessive computation nor prior knowledge concerning decomposition. The feasibility of output parallelism is analyzed and proved. Several benchmarks are implemented to test the validity of this method; the results show that it can reduce computation time, increase learning speed, and improve generalization accuracy for both classification and regression problems.
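Illustrative sketch :
The following minimal Python sketch illustrates the general idea described in the abstract: each module receives the whole input vector but is trained on only a slice of the output vector, and the module predictions are concatenated to reconstruct the full output. It uses fixed-size scikit-learn MLPs rather than the constructive learning algorithm the paper incorporates, and all function names, layer sizes, and the toy data are illustrative assumptions, not the authors' implementation.

# Output parallelism sketch: one small network per output slice,
# each trained on the full input vector; outputs are reassembled.
import numpy as np
from sklearn.neural_network import MLPRegressor

def split_outputs(n_outputs, n_modules):
    # Partition output indices into roughly equal groups, one per module.
    return np.array_split(np.arange(n_outputs), n_modules)

def train_modules(X, Y, n_modules):
    # Train one network per output group; training could also run in parallel.
    modules = []
    for idx in split_outputs(Y.shape[1], n_modules):
        net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000)
        net.fit(X, Y[:, idx])          # module learns only its output slice
        modules.append((idx, net))
    return modules

def predict(modules, X, n_outputs):
    # Reassemble the full output vector from the module predictions.
    Y_hat = np.zeros((X.shape[0], n_outputs))
    for idx, net in modules:
        pred = net.predict(X)
        Y_hat[:, idx] = pred.reshape(X.shape[0], -1)
    return Y_hat

# Toy usage: a 4-dimensional regression target decomposed into 2 modules.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
Y = np.column_stack([X @ rng.normal(size=5) for _ in range(4)])
modules = train_modules(X, Y, n_modules=2)
print(predict(modules, X, Y.shape[1]).shape)   # (200, 4)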
Keywords :
computational complexity; feedforward neural nets; learning (artificial intelligence); multilayer perceptrons; parallel processing; classification problems; computation time; generalization; large-scale real-world problems; output parallelism; regression problems; task decomposition; Benchmark testing; Computer architecture; Concurrent computing; Function approximation; Interference; Multi-layer neural network; Neural networks; Parallel processing; Pattern classification; Training data;
Conference_Titel :
The 10th IEEE International Conference on Fuzzy Systems, 2001
Conference_Location :
Melbourne, Vic.
Print_ISBN :
0-7803-7293-X
DOI :
10.1109/FUZZ.2001.1007298