• DocumentCode
    31806
  • Title
    Sample Complexity of Dictionary Learning and Other Matrix Factorizations
  • Author
    Gribonval, Rémi; Jenatton, Rodolphe; Bach, Francis; Kleinsteuber, Martin; Seibert, Matthias
  • Author_Institution
    Inst. de Rech. en Syst. Aléatoires (Inria), Rennes, France
  • Volume
    61
  • Issue
    6
  • fYear
    2015
  • fDate
    June 2015
  • Firstpage
    3469
  • Lastpage
    3486
  • Abstract
    Many modern tools in machine learning and signal processing, such as sparse dictionary learning, principal component analysis, non-negative matrix factorization, and K-means clustering, rely on the factorization of a matrix obtained by concatenating high-dimensional vectors from a training collection. While the idealized task would be to optimize the expected quality of the factors over the underlying distribution of training vectors, in practice this is achieved by minimizing an empirical average over the considered collection. The focus of this paper is to provide sample complexity estimates that uniformly control how much the empirical average deviates from the expected cost function. Standard arguments imply that the performance of the empirical predictor also exhibits such guarantees. The genericity of the approach encompasses several possible constraints on the factors (tensor product structure, shift-invariance, sparsity, ...), thus providing a unified perspective on the sample complexity of several widely used matrix factorization schemes. The derived generalization bounds scale as (log(n)/n)^{1/2} in the number of samples n for the considered matrix factorization techniques.
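    A minimal numerical sketch of the quantity these bounds control, using K-means clustering (one of the factorization schemes the abstract mentions): it compares the empirical average cost of a fixed set of centroids over n samples against a Monte Carlo proxy for the expected cost, and prints the (log(n)/n)^{1/2} scaling alongside. All names and parameters here are illustrative assumptions; the paper's result is a uniform bound over the whole constraint set, whereas this sketch only checks one fixed factor.

        import numpy as np

        rng = np.random.default_rng(0)

        # Fixed "dictionary" of K centroids in R^d. K-means fits the paper's
        # setting as the factorization X ~ D A with 1-sparse binary columns in A.
        d, K = 10, 5
        D = rng.standard_normal((d, K))

        def avg_cost(X, D):
            # Per-sample cost f_x(D) = min_k ||x - d_k||^2, averaged over the
            # rows of X, via the expansion ||x||^2 - 2 x.d_k + ||d_k||^2.
            sq = (X ** 2).sum(axis=1, keepdims=True) - 2.0 * X @ D + (D ** 2).sum(axis=0)
            return sq.min(axis=1).mean()

        # Monte Carlo proxy for the expected cost over the data distribution.
        expected = avg_cost(rng.standard_normal((100_000, d)), D)

        for n in (100, 1_000, 10_000, 100_000):
            empirical = avg_cost(rng.standard_normal((n, d)), D)
            rate = np.sqrt(np.log(n) / n)  # the (log(n)/n)^{1/2} scaling from the abstract
            print(f"n={n:>6}  deviation={abs(empirical - expected):.4f}  "
                  f"sqrt(log n / n)={rate:.4f}")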
  • Keywords
    learning (artificial intelligence); matrix algebra; principal component analysis; signal processing; K-means clustering; dictionary learning; expected cost function; machine learning; matrix factorization schemes; matrix factorization techniques; non-negative matrix factorization; sample complexity; shift-invariance; tensor product structure; training collection; training vectors; underlying distribution; complexity theory; dictionaries; probability distribution; sparse matrices; standards; training; sparse coding; structured learning
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Information Theory
  • Publisher
    IEEE
  • ISSN
    0018-9448
  • Type
    jour
  • DOI
    10.1109/TIT.2015.2424238
  • Filename
    7088631