• DocumentCode
    1328016
  • Title
    Some contributions to fixed-distribution learning theory
  • Author
    Vidyasagar, M.; Kulkarni, Sanjeev R.
  • Author_Institution
    Centre for Artificial Intelligence & Robotics, Bangalore, India
  • Volume
    45
  • Issue
    2
  • fYear
    2000
  • fDate
    2/1/2000 12:00:00 AM
  • Firstpage
    217
  • Lastpage
    234
  • Abstract
    We consider some problems in learning with respect to a fixed distribution. We introduce two new notions of learnability: probably uniformly approximately correct (PUAC) learnability, which is a stronger requirement than the widely studied PAC learnability, and minimal empirical risk (MER) learnability, which is a stronger requirement than the previously defined notions of “solid” or “potential” learnability. It is shown that, although the motivations for defining these two notions of learnability are entirely different, the two notions are in fact equivalent to each other and, in turn, equivalent to a property introduced here, referred to as the shrinking width property. It is further shown that if the function class to be learned has the property that empirical means converge uniformly to their true values, then all of these learnability properties hold. In the course of proving conditions for these forms of learnability, we also obtain a new estimate for the VC-dimension of a collection of sets obtained by performing Boolean operations on a given collection; this result is of independent interest. We consider both the case in which there is an underlying target function and the case of “model-free” (or agnostic) learning. Finally, we consider the issue of representation of a collection of sets by its subcollection of equivalence classes. It is shown by example that, by suitably choosing representatives of each equivalence class, it is possible to affect the property of uniform convergence of empirical probabilities.
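    The abstract's central hypothesis, that empirical means (here, empirical probabilities of sets) converge uniformly to their true values under a fixed distribution, can be illustrated with a minimal sketch. The collection of intervals, the uniform distribution on [0, 1], and the sample sizes below are hypothetical choices for illustration, not taken from the paper:

    ```python
    import random

    # A small collection of sets: intervals in [0, 1]. Under the uniform
    # distribution, the true probability of [a, b] is simply b - a.
    intervals = [(0.0, 0.25), (0.1, 0.6), (0.5, 1.0)]

    def true_prob(a, b):
        # P([a, b]) under the uniform distribution on [0, 1].
        return b - a

    def sup_deviation(samples):
        # Largest gap between empirical and true probability,
        # taken over the whole collection of intervals.
        n = len(samples)
        return max(
            abs(sum(a <= x <= b for x in samples) / n - true_prob(a, b))
            for a, b in intervals
        )

    random.seed(0)
    for n in (100, 10_000):
        xs = [random.random() for _ in range(n)]
        print(n, sup_deviation(xs))
    ```

    As the sample size grows, the supremum deviation over the collection shrinks, which is the uniform-convergence property the paper's learnability results build on; for finite collections this follows directly from the law of large numbers, while the paper treats the richer case governed by VC-dimension.
    
    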
  • Keywords
    convergence; equivalence classes; learning (artificial intelligence); probability; Boolean operations; VC-dimension; agnostic learning; empirical probabilities; fixed-distribution learning theory; function class; minimal empirical risk learnability; model-free learning; potential learnability; probably uniformly approximately correct learnability; shrinking width property; solid learnability; uniform convergence; Artificial intelligence; Computer science; Convergence; Intelligent robots; Machine learning; Mathematical model
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Automatic Control
  • Publisher
    IEEE
  • ISSN
    0018-9286
  • Type
    jour
  • DOI
    10.1109/9.839945
  • Filename
    839945