• DocumentCode
    1909397
  • Title
    Generalization and maximum likelihood from small data sets
  • Author
    Byrne, William

  • Author_Institution
    Dept. of Electr. Eng., Maryland Univ., College Park, MD, USA
  • fYear
    1993
  • fDate
    6-9 Sep 1993
  • Firstpage
    197
  • Lastpage
    206
  • Abstract
    A technique is described which can be used to prevent overtraining and encourage generalization in training under a maximum likelihood criterion. Applications to Boltzmann machines and hidden Markov models (HMMs) are discussed. While the confidence constraint may slow the training algorithm, in general it should involve very little additional calculation. The results presented for HMMs are for training under a maximum likelihood criterion based on the marginal distribution. Similar modifications can be made to the segmental K-means and N-best algorithms.
  • Keywords
    Boltzmann machines; generalisation (artificial intelligence); hidden Markov models; learning (artificial intelligence); neural nets; K-means algorithm; N-best algorithms; generalization; maximum likelihood criterion; Counting circuits; Distributed computing; Educational institutions; Iterative algorithms; Probability; Q measurement; Quadratic programming; Statistical distributions; Training data
  • fLanguage
    English
  • Publisher
    IEEE
  • Conference_Titel
    Neural Networks for Signal Processing [1993] III. Proceedings of the 1993 IEEE-SP Workshop
  • Conference_Location
    Linthicum Heights, MD
  • Print_ISBN
    0-7803-0928-6
  • Type
    conf
  • DOI
    10.1109/NNSP.1993.471869
  • Filename
    471869