Author :
Freifeld, Oren; Hauberg, Søren; Black, Michael J.
Abstract :
We consider the intersection of two research fields: transfer learning and statistics on manifolds. In particular, we consider, for manifold-valued data, transfer learning of tangent-space models such as Gaussian distributions, PCA, regression, or classifiers. Though one would hope to simply use ordinary Rn transfer-learning ideas, the manifold structure prevents it. We overcome this by basing our method on inner-product-preserving parallel transport, a well-known tool widely used in other problems of statistics on manifolds in computer vision. At first, this straightforward idea seems to suffer from an obvious shortcoming: transporting large datasets is prohibitively expensive, hindering scalability. Fortunately, with our approach, we never transport data. Rather, we show how the statistical models themselves can be transported, and prove that for the tangent-space models above, the transport "commutes" with learning. Consequently, our compact framework, applicable to a large class of manifolds, is not restricted by the size of either the training or test sets. We demonstrate the approach by transferring PCA and logistic-regression models of real-world data involving 3D shapes and image descriptors.
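The central idea — transporting a tangent-space model itself rather than the data — can be illustrated on the simplest nontrivial manifold. The sketch below is not the paper's algorithm, only a minimal hypothetical example: parallel transport along a geodesic of the unit sphere S², applied to the columns of a tangent "PCA basis". The closed-form transport formula for the sphere and the variable names (`p`, `q`, `V`) are assumptions for illustration; the assertions check the inner-product-preserving property the abstract relies on.

```python
import numpy as np

def sphere_parallel_transport(p, q, V):
    """Parallel-transport tangent vectors (columns of V) at p along the
    geodesic from p to q on the unit sphere. The map is a linear isometry,
    so pairwise inner products between the vectors are preserved."""
    p, q = p / np.linalg.norm(p), q / np.linalg.norm(q)
    theta = np.arccos(np.clip(p @ q, -1.0, 1.0))   # geodesic distance
    if theta < 1e-12:                              # p == q: identity map
        return V
    u = (q - np.cos(theta) * p) / np.sin(theta)    # unit direction at p toward q
    coef = u @ V                                   # components of V along u
    # the u-component rotates in the (p, u) plane; the rest is unchanged
    return V + np.outer((np.cos(theta) - 1.0) * u - np.sin(theta) * p, coef)

# Transport a two-component tangent basis (e.g. a learned PCA basis) at p to q.
p = np.array([1.0, 0.0, 0.0])
q = np.array([0.0, 1.0, 0.0])
V = np.array([[0.0, 0.0],
              [0.6, 0.0],
              [0.8, 1.0]])        # columns are tangent at p (orthogonal to p)
W = sphere_parallel_transport(p, q, V)
assert np.allclose(q @ W, 0.0)            # transported basis is tangent at q
assert np.allclose(V.T @ V, W.T @ W)      # Gram matrix (inner products) preserved
```

Note that only the small basis matrix is transported, independent of how many training samples produced it — the scalability point made in the abstract.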
Keywords :
computer vision; learning (artificial intelligence); principal component analysis; regression analysis; 3D shapes; PCA model; Rn transfer-learning ideas; image descriptors; inner-product-preserving parallel transport; logistic-regression model; manifold-valued data; model transport; scalable transfer learning; statistical models; tangent-space models; test sets; training sets; Computational modeling; Data models; Manifolds; Shape; Total quality management; Vectors; PGA; Riemannian Manifolds; Scalable; Statistics on Manifolds; Transfer Learning