Title of article :
Reducing Prediction Error by Transforming Input Data for Neural Networks
Author/Authors :
Shi, Jonathan Jingsheng
Issue Information :
Journal with consecutive issue numbering, year 2000
Abstract :
The primary purpose of data transformation is to modify the distribution of input variables so that they better match the outputs. The performance of a neural network is often improved through data transformations. Three data transformation methods are in common use: (1) linear transformation; (2) statistical standardization; and (3) mathematical functions. This paper presents another data transformation method based on cumulative distribution functions, referred to here as distribution transformation. This method can transform a stream of random data distributed over any range into data points uniformly distributed on [0,1]. Therefore, all neural input variables can be transformed to the same uniform distribution on [0,1]. The transformation also serves the specific need of neural computation that all input data be scaled to the range [-1,1] or [0,1]. The paper applies distribution transformation to two examples. Example 1 fits a cowboy hat surface, because it provides a controlled environment for generating accurate input and output data patterns. The results show that distribution transformation improves network performance by 50% over linear transformation. Example 2 is a real tunneling project, in which distribution transformation reduced the prediction error by more than 13% compared with linear transformation.
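The abstract describes mapping arbitrarily distributed input data to a uniform distribution on [0,1] via a cumulative distribution function. A minimal sketch of that idea, using the empirical CDF (the paper's exact formulation, e.g. a fitted parametric CDF, may differ; the function name and ranking scheme here are illustrative assumptions):

```python
import numpy as np

def distribution_transform(x):
    """Map a 1-D array of raw input values to (0, 1] via its
    empirical cumulative distribution function (CDF).

    Illustrative sketch only: ties receive distinct ordinal ranks,
    whereas a production version might average tied ranks or fit a
    parametric CDF instead.
    """
    x = np.asarray(x, dtype=float)
    n = x.size
    # Ordinal rank of each element (0 = smallest).
    ranks = np.argsort(np.argsort(x))
    # Empirical CDF value: (rank + 1) / n lies in (0, 1].
    return (ranks + 1) / n
```

For example, the inputs [5.0, 1.0, 3.0] map to [1.0, 1/3, 2/3]: each value is replaced by the fraction of observations less than or equal to it, so the transformed sample is uniformly spaced on (0, 1] regardless of the original scale or distribution.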
Journal title :
COMPUTING IN CIVIL ENGINEERING