Title :
Improved Classification and Reconstruction by Introducing Independence and Randomization in Deep Neural Networks
Author :
Gaurush Hiranandani; Harish Karnick
Author_Institution :
Adobe Research, Bangalore, India
Abstract :
This paper presents a novel way of improving both classification and reconstruction in deep neural networks. Two ideas are used throughout: independence and randomization. The aim is to exploit inherent properties of neural network architectures and to build simpler models that are easy to implement, rather than highly fine-tuned and complex architectures. For the most basic type of deep neural network, i.e. the fully connected network, we show that splitting the data into independent components and training each component separately not only reduces the number of parameters to be learned but also makes training more efficient; if the per-component predictions are fused appropriately, overall accuracy increases as well. Using the orthogonality of the LAB colour space, we show that the L, A and B components trained separately produce better reconstructions than the RGB components taken together, which in turn produce better reconstructions than the LAB components taken together. Following a similar approach, randomization is injected into the networks to make different networks as independent as possible; again, fusing their predictions appropriately increases accuracy. The best error on the MNIST test set was 1.91%, a drop of 1.05% compared with architectures we created similar to [1]. As the technique is architecture independent, it can be applied to other networks, e.g. CNNs or RNNs.
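The core recipe in the abstract, training one network per independent component (e.g. the L, A and B channels) and fusing the resulting class probabilities, can be sketched as below. This is a minimal illustration with hypothetical per-channel softmax outputs and simple weighted averaging as the fusion rule; the paper's actual fusion scheme may differ.

```python
import numpy as np

def fuse_predictions(prob_list, weights=None):
    """Fuse per-component class-probability matrices by weighted averaging.

    prob_list: list of (n_samples, n_classes) arrays, one per
    independently trained component (e.g. L, A, B channels).
    weights: optional per-component weights; defaults to uniform.
    Returns the fused probabilities and the predicted class labels.
    """
    probs = np.stack(prob_list)                    # (n_components, n, k)
    if weights is None:
        weights = np.full(len(prob_list), 1.0 / len(prob_list))
    fused = np.tensordot(weights, probs, axes=1)   # (n, k)
    return fused, fused.argmax(axis=1)

# Hypothetical softmax outputs from three channel-specific networks
p_l = np.array([[0.6, 0.4], [0.2, 0.8]])
p_a = np.array([[0.7, 0.3], [0.4, 0.6]])
p_b = np.array([[0.5, 0.5], [0.1, 0.9]])
fused, labels = fuse_predictions([p_l, p_a, p_b])
```

Because each component network sees a lower-dimensional input, it has fewer parameters to learn, which is the efficiency argument the abstract makes; the fusion step then recovers (and here improves) the combined prediction.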
Keywords :
"Image reconstruction","Biological neural networks","Computer architecture","Training","Neurons","Vegetation"
Conference_Titel :
2015 International Conference on Digital Image Computing: Techniques and Applications (DICTA)
DOI :
10.1109/DICTA.2015.7371270