• DocumentCode
    276598
  • Title
    A comparison of two neural network architectures for vector quantization
  • Author
    Naraghi-Pour, Mort ; Hedge, M. ; Bourge, Fabrice

  • Author_Institution
    Dept. of Electr. & Comput. Eng., Louisiana State Univ., Baton Rouge, LA, USA
  • Volume
    i
  • fYear
    1991
  • fDate
    8-14 Jul 1991
  • Firstpage
    391
  • Abstract
    The authors investigate the performance of two neural network architectures for vector quantization: the multilayer feedforward network and the Hopfield analog neural network. It is found that for the feedforward network to achieve reasonably good performance, the number of hidden units must be unrealistically high: exponential in the number of dimensions and codewords. For the Hopfield analog model, on the other hand, the number of processors required equals the number of codewords, and the resulting performance is very close to the optimum mean squared error.
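    The vector-quantization task the abstract refers to can be stated independently of either network architecture: each input vector is mapped to the nearest codeword in a codebook, and distortion is measured as mean squared error. The sketch below is a hypothetical NumPy illustration of that baseline task, not the paper's neural implementations; the codebook size, dimensionality, and function names are assumptions for the example.

    ```python
    import numpy as np

    def quantize(vectors, codebook):
        """Return the index of the nearest codeword for each input vector."""
        # Pairwise squared Euclidean distances, shape (n_vectors, n_codewords).
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    def mse(vectors, codebook, idx):
        """Mean squared error between vectors and their assigned codewords."""
        return float(((vectors - codebook[idx]) ** 2).mean())

    rng = np.random.default_rng(0)
    codebook = rng.standard_normal((8, 2))    # 8 codewords in 2 dimensions
    vectors = rng.standard_normal((100, 2))   # vectors to be quantized
    idx = quantize(vectors, codebook)
    print(mse(vectors, codebook, idx))
    ```

    The exhaustive nearest-codeword search above is what the paper's Hopfield model approximates with one processor per codeword, whereas the feedforward network must learn the same mapping with hidden units.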
  • Keywords
    data compression; encoding; neural nets; Hopfield analog neural network; codewords; dimensions; hidden units; multilayer feedforward network; neural network architectures; optimum mean squared error; performance; vector quantization; Computer architecture; Costs; Distortion measurement; Encoding; Feedforward neural networks; Hopfield neural networks; Image storage; Multi-layer neural network; Neural networks; Vector quantization;
  • fLanguage
    English
  • Publisher
    ieee
  • Conference_Titel
    IJCNN-91-Seattle International Joint Conference on Neural Networks, 1991
  • Conference_Location
    Seattle, WA
  • Print_ISBN
    0-7803-0164-1
  • Type
    conf
  • DOI
    10.1109/IJCNN.1991.155209
  • Filename
    155209