• DocumentCode
    3603983
  • Title
    When Can Dictionary Learning Uniquely Recover Sparse Data From Subsamples?
  • Author
    Hillar, Christopher J.; Sommer, Friedrich T.
  • Author_Institution
    Redwood Center for Theor. Neurosci., Univ. of California at Berkeley, Berkeley, CA, USA
  • Volume
    61
  • Issue
    11
  • fYear
    2015
  • Firstpage
    6290
  • Lastpage
    6297
  • Abstract
    Sparse coding, or sparse dictionary learning, has been widely used to recover underlying structure in many kinds of natural data. Here, we provide conditions guaranteeing when this recovery is universal; that is, when sparse codes and dictionaries are unique (up to natural symmetries). Our main tool is a useful lemma in combinatorial matrix theory that allows us to derive bounds on the sample sizes guaranteeing such uniqueness under various assumptions for how training data are generated. Whenever the conditions of one of our theorems are met, any sparsity-constrained learning algorithm that succeeds in reconstructing the data recovers the original sparse codes and dictionary. We also discuss potential applications to neuroscience and data analysis.
  • Keywords
    codes; combinatorial mathematics; data analysis; learning (artificial intelligence); matrix decomposition; signal processing; combinatorial matrix theory; neuroscience; sparse codes; sparse data recovery; sparse dictionary learning; sparse matrix factorization; sparsity-constrained learning algorithm; Dictionaries; Encoding; Image coding; Matrices; Polynomials; Sparks; Sparse matrices; Dictionary learning; combinatorial matrix theory; compressed sensing; sparse coding; sparse matrix factorization; uniqueness
  • fLanguage
    English
  • Journal_Title
    IEEE Transactions on Information Theory
  • Publisher
    IEEE
  • ISSN
    0018-9448
  • Type
    jour
  • DOI
    10.1109/TIT.2015.2460238
  • Filename
    7165675