DocumentCode
3079368
Title
Accelerating Machine Learning Kernel in Hadoop Using FPGAs
Author
Neshatpour, Katayoun ; Malik, Maria ; Homayoun, Houman
fYear
2015
fDate
4-7 May 2015
Firstpage
1151
Lastpage
1154
Abstract
Big data applications share inherent characteristics that are fundamentally different from traditional desktop CPU, parallel, and web service applications. They rely on deep machine learning and data mining applications. A recent trend for big data analytics is to provide heterogeneous architectures that allow hardware specialization to construct the right processing engine for analytics applications. However, these specialized heterogeneous architectures require extensive exploration of design aspects to find the optimal architecture in terms of performance and cost. Considering the time dedicated to creating such specialized architectures, a model that estimates the potential speedup achievable by offloading various parts of the algorithm to specialized hardware is necessary. This paper analyzes how offloading computationally intensive kernels of machine learning algorithms to a heterogeneous CPU+FPGA platform enhances performance. We use the latest Xilinx Zynq boards for implementation and result analysis. Furthermore, we perform a comprehensive analysis of communication and computation overheads, such as data I/O movements and calls to standard libraries that cannot be offloaded to the accelerator, to understand how the speedup of each application contributes to its overall execution in an end-to-end Hadoop MapReduce environment.
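The abstract's point about estimating achievable end-to-end speedup before committing to a specialized architecture can be illustrated with a simple Amdahl's-law-style calculation. The sketch below is an illustrative assumption, not the model used in the paper; the function name and parameters are hypothetical.

# Minimal sketch (assumption, not the paper's model): Amdahl's-law-style estimate
# of overall speedup when only a fraction of a MapReduce job is FPGA-accelerated
# and offloading adds data-movement overhead.
def end_to_end_speedup(accel_fraction, kernel_speedup, overhead_fraction):
    # accel_fraction: share of baseline runtime spent in the offloaded kernel
    # kernel_speedup: speedup of that kernel on the FPGA
    # overhead_fraction: added I/O/communication cost, relative to baseline runtime
    new_time = (1.0 - accel_fraction) + accel_fraction / kernel_speedup + overhead_fraction
    return 1.0 / new_time

# Example: a kernel taking 60% of runtime, accelerated 10x, with 5% I/O overhead
print(end_to_end_speedup(0.6, 10.0, 0.05))  # roughly 2x overall speedup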
Keywords
Big Data; data analysis; data mining; field programmable gate arrays; learning (artificial intelligence); parallel processing; Big data applications; FPGA; Hadoop; Web service applications; Xilinx Zynq boards; data I/O movements; data mining applications; deep machine learning application; desktop CPU; end-to-end Hadoop MapReduce environment; heterogeneous CPU+FPGA platform; heterogeneous architectures; machine learning algorithms; machine-learning kernels; optimal architecture; processing engine; Acceleration; Big data; Computer architecture; Field programmable gate arrays; Hardware; Kernel; Machine learning algorithms; Acceleration; Big Data; FPGA
fLanguage
English
Publisher
ieee
Conference_Titel
Cluster, Cloud and Grid Computing (CCGrid), 2015 15th IEEE/ACM International Symposium on
Conference_Location
Shenzhen
Type
conf
DOI
10.1109/CCGrid.2015.165
Filename
7152609
Link To Document