DocumentCode :
1769021
Title :
Structural optimization of neural networks and training data selection method for prediction
Author :
Hayashida, T. ; Nishizaki, Ichiro ; Sekizaki, Shinya ; Nishida, Masanori
Author_Institution :
Grad. Sch. of Eng., Hiroshima Univ., Higashi-Hiroshima, Japan
fYear :
2014
fDate :
7-8 Nov. 2014
Firstpage :
171
Lastpage :
176
Abstract :
Neural networks are well known for their ability to approximate arbitrary nonlinear functions, and they are applied to data prediction in many fields. The parameters of a neural network, the thresholds and the weights between nodes, are updated using given data. The performance of a neural network, for example its prediction accuracy, is evaluated by the magnitude of the prediction error on various kinds of unknown data. To improve this performance, an appropriate structure for the neural network, including its parameters, must be determined. Cross-validation is one solution to this problem and is often applied in the existing literature. Through cross-validation, a neural network can minimize not only the error on the training data but also the error on unknown data; in other words, the neural network acquires generalization capability. In the standard cross-validation procedure, the training data and the performance verification data are selected randomly; this paper therefore proposes an alternative selection method. Specifically, the given data are mapped onto a two-dimensional surface using a self-organizing map (SOM), and the mapped data are partitioned into k clusters by the k-means method. From each cluster, a certain fraction of the data is selected as training data, and the remaining data are used as performance verification data. In addition, the proposed method includes a structural optimization method, based on tabu search, for a feedforward neural network consisting of four layers: an input layer, a dimension-compressing layer, a hidden layer, and an output layer. The experimental results show that appropriate neural network structures are obtained by the proposed method.
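The cluster-based data-selection step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the SOM projection is replaced here by a 2-D PCA projection (an assumption for brevity), followed by plain k-means and a per-cluster training/verification split; all function and parameter names are hypothetical.

```python
import numpy as np

def select_training_data(data, k=3, train_fraction=0.7, n_iter=20, seed=0):
    """Cluster the data in a 2-D projection and split each cluster into
    training and verification sets. The paper maps data with a SOM; this
    sketch substitutes a 2-D PCA projection (an assumption)."""
    rng = np.random.default_rng(seed)

    # 2-D projection via PCA (stand-in for the SOM mapping).
    centered = data - data.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    mapped = centered @ vt[:2].T

    # Plain k-means on the 2-D projection.
    centers = mapped[rng.choice(len(mapped), size=k, replace=False)]
    for _ in range(n_iter):
        dists = ((mapped[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = np.argmin(dists, axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = mapped[labels == j].mean(axis=0)

    # From each cluster, take a fraction for training; the rest verify.
    train_idx, verify_idx = [], []
    for j in range(k):
        members = np.flatnonzero(labels == j)
        rng.shuffle(members)
        cut = int(round(train_fraction * len(members)))
        train_idx.extend(members[:cut])
        verify_idx.extend(members[cut:])
    return np.array(train_idx), np.array(verify_idx)

# Example: 60 points in 5 dimensions.
X = np.random.default_rng(1).normal(size=(60, 5))
train, verify = select_training_data(X, k=3, train_fraction=0.7)
```

Splitting within each cluster, rather than over the pooled data, keeps the proportions of the clusters roughly equal in the training and verification sets, which is the motivation the abstract gives for replacing purely random selection.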
Keywords :
data analysis; feedforward neural nets; generalisation (artificial intelligence); learning (artificial intelligence); optimisation; pattern clustering; prediction theory; search problems; self-organising feature maps; SOM; cross-validation; data clustering; data prediction; dimensional compressing layer; error minimization; feedforward neural network; generality capability; hidden layer; input layer; k clusters; k-means method; neural network parameters; neural network structure; neural networks; node threshold; node weight; nonlinear function approximation; output layer; performance verification data; prediction accuracy; prediction error; self-organization map; structural optimization method; tabu search; training data selection method; Accuracy; Neural networks; Optimization methods; Training; Training data; Vectors; Feedforward neural networks; data selection; structural optimization; tabu search;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Computational Intelligence and Applications (IWCIA), 2014 IEEE 7th International Workshop on
Conference_Location :
Hiroshima
ISSN :
1883-3977
Print_ISBN :
978-1-4799-4771-3
Type :
conf
DOI :
10.1109/IWCIA.2014.6988101
Filename :
6988101