DocumentCode :
737810
Title :
Supporting HPC Analytics Applications with Access Patterns Using Data Restructuring and Data-Centric Scheduling Techniques in MapReduce
Author :
Sehrish, Saba ; Mackey, Grant ; Shang, Pengju ; Wang, Jun ; Bent, John
Author_Institution :
Northwestern Univ., Evanston, IL, USA
Volume :
24
Issue :
1
fYear :
2013
Firstpage :
158
Lastpage :
169
Abstract :
Current High Performance Computing (HPC) applications have seen explosive growth in data size in recent years. Many application scientists have initiated efforts to integrate data-intensive computing into compute-intensive HPC facilities, particularly for data analytics. We have observed several scientific applications that must migrate their data from an HPC storage system to a data-intensive one for analytics. There is a gap between the data semantics of the HPC storage system and the data-intensive system; hence, once migrated, the data must be further refined and reorganized. This reorganization must be performed before existing data-intensive tools such as MapReduce can be used to analyze the data. It requires at least two complete scans through the data set and then at least one MapReduce program to prepare the data before analyzing it. Running multiple MapReduce phases causes significant overhead for the application in the form of excessive I/O operations; that is, for every MapReduce phase, a distributed read and write operation on the file system must be performed. Our contribution is to develop a MapReduce-based framework for HPC analytics that eliminates the multiple scans and also reduces the number of data preprocessing MapReduce programs. We also implement a data-centric scheduler to further improve the performance of HPC analytics MapReduce programs by maintaining data locality. We have added additional expressiveness to the MapReduce language to allow application scientists to specify the logical semantics of their data such that 1) the data can be analyzed without running multiple data preprocessing MapReduce programs, and 2) the data can be simultaneously reorganized as it is migrated to the data-intensive file system. Using our augmented MapReduce system, MapReduce with Access Patterns (MRAP), we have demonstrated up to 33 percent throughput improvement in one real application, and up to 70 percent in an I/O kernel of another application. Our results for scheduling show up to 49 percent improvement for an I/O kernel of a prevalent HPC analysis application.
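To illustrate the idea of restructuring data according to a user-specified access pattern during migration, the following is a minimal, hypothetical Java sketch. It is not the MRAP API described in the paper; the StridedPattern class, its fields, and the restructure method are illustrative assumptions showing how a strided layout from HPC output could be converted into contiguous records that a map task can consume without a separate preprocessing MapReduce job.

// Hypothetical sketch, not the authors' MRAP interface: a strided access-pattern
// descriptor used to restructure migrated data into contiguous records.
import java.util.ArrayList;
import java.util.List;

public class AccessPatternSketch {

    /** Describes a strided layout: recordSize bytes of useful data repeated every stride bytes. */
    static final class StridedPattern {
        final int recordSize;   // bytes of useful data per record
        final int stride;       // distance in bytes between record starts
        StridedPattern(int recordSize, int stride) {
            this.recordSize = recordSize;
            this.stride = stride;
        }
    }

    /**
     * Reorganize a strided source buffer into contiguous records, mimicking the
     * idea of restructuring data on the fly as it is copied into the
     * data-intensive file system.
     */
    static List<byte[]> restructure(byte[] source, StridedPattern p) {
        List<byte[]> records = new ArrayList<>();
        for (int off = 0; off + p.recordSize <= source.length; off += p.stride) {
            byte[] rec = new byte[p.recordSize];
            System.arraycopy(source, off, rec, 0, p.recordSize);
            records.add(rec);   // contiguous record, ready for a map task
        }
        return records;
    }

    public static void main(String[] args) {
        byte[] migrated = new byte[64];          // stand-in for migrated HPC data
        for (int i = 0; i < migrated.length; i++) migrated[i] = (byte) i;
        StridedPattern pattern = new StridedPattern(4, 16); // 4-byte records every 16 bytes
        List<byte[]> records = restructure(migrated, pattern);
        System.out.println("Extracted " + records.size() + " contiguous records");
    }
}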
Keywords :
data analysis; formal specification; scheduling; storage management; HPC analytics application; HPC storage system; MapReduce language; MapReduce program; access pattern; application overhead; computational-intensive HPC facility; data locality; data migration; data preparation; data refinement; data reorganization; data restructuring; data semantics; data-centric scheduling technique; data-intensive computing; data-intensive file system; distributed read-write operation; excessive I/O operation; high performance computing; logical semantics specification; scientific application; Distributed databases; Kernel; Layout; Pattern matching; Processor scheduling; Scheduling; Semantics; HPC analytics framework; MapReduce; data-intensive systems; scheduling;
fLanguage :
English
Journal_Title :
Parallel and Distributed Systems, IEEE Transactions on
Publisher :
IEEE
ISSN :
1045-9219
Type :
jour
DOI :
10.1109/TPDS.2012.88
Filename :
6171166