DocumentCode :
1791556
Title :
Distributed class dependent feature analysis — A big data approach
Author :
Khoa Luu ; Chenchen Zhu ; Savvides, Marios
Author_Institution :
Dept. of Electr. & Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA, USA
fYear :
2014
fDate :
27-30 Oct. 2014
Firstpage :
201
Lastpage :
206
Abstract :
Big data has become ubiquitous and is now applied in numerous fields. The challenges of solving a large-scale machine learning problem in a big data scenario generally lie in three aspects. First, the machine learning algorithm has to be suitable for distributed optimization. Second, it needs a platform for distributed implementation. Finally, communication delays between machines may cause convergence problems, even when the non-distributed algorithm shows a good convergence rate. To address these challenges, we propose a new machine learning approach, named Distributed Class-dependent Feature Analysis (DCFA), that combines the advantages of sparse representation in an over-complete dictionary. The classifier is based on the estimation of class-specific optimal filters, obtained by solving an l1-norm optimization problem. We demonstrate how this problem is solved using the Alternating Direction Method of Multipliers and also discuss the relevant convergence details. More importantly, the proposed framework can be efficiently implemented on a robust distributed platform, improving both accuracy and computational time on large-scale databases. Our method achieves very high classification accuracy in face recognition in the presence of occlusions on the AR database. It also outperforms state-of-the-art methods in object recognition on two challenging large-scale object databases, Caltech101 and Caltech256, showing its applicability to general computer vision and pattern recognition problems. In addition, timing experiments show that our distributed method achieves a speedup of 7.85x on the Caltech256 database with just 10 machine nodes compared to the non-distributed version, and can gain even more with additional computing resources.
Keywords :
Big Data; learning (artificial intelligence); optimisation; pattern classification; AR database; Big Data; Caltech101; Caltech256; DCFA; alternating direction method of multipliers; class-specific optimal filters; classification accuracies; classifier; communication delays; computer vision; convergence rate; distributed class dependent feature analysis; distributed optimization problem; face recognition; l1-norm optimization problem; large-scale machine learning problem; large-scale object databases; machine learning algorithm; nondistributed algorithm; object recognition; occlusions; over-complete dictionary; pattern recognition; robust distributed framework; sparse representation; Big data; Databases; Dictionaries; Face; Machine learning algorithms; Training; Virtual machining; Alternating Direction Method of Multipliers; Big Data; Class-dependent Feature Analysis; Distributed Optimization Problems; Sparse Representation;
fLanguage :
English
Publisher :
ieee
Conference_Titel :
Big Data (Big Data), 2014 IEEE International Conference on
Conference_Location :
Washington, DC
Type :
conf
DOI :
10.1109/BigData.2014.7004233
Filename :
7004233