Title :
RCMP: Enabling Efficient Recomputation Based Failure Resilience for Big Data Analytics
Author :
Dinu, Florin ; Ng, T. S. Eugene
Author_Institution :
EPFL, Lausanne, Switzerland
Abstract :
Data replication, the main failure resilience strategy used for big data analytics jobs, can be unnecessarily inefficient. It can cause serious performance degradation when applied to intermediate job outputs in multi-job computations. For instance, for I/O-intensive big data jobs, data replication is especially expensive because very large datasets need to be replicated. Reducing the number of replicas is not a satisfactory solution as it only aggravates a fundamental limitation of data replication: its failure resilience guarantees are limited by the number of available replicas. When all replicas of some piece of intermediate job output are lost, cascading job recomputations may be required for recovery. In this paper we show how job recomputation can be made a first-order failure resilience strategy for big data analytics. The need for data replication can thus be significantly reduced. We present RCMP, a system that performs efficient job recomputation. RCMP can persist task outputs across jobs and leverage them to minimize the work performed during job recomputations. More importantly, RCMP addresses two important challenges that appear during job recomputations. The first is efficiently utilizing the available compute node parallelism. The second is dealing with hot-spots. RCMP handles both by switching to a finer-grained task scheduling granularity for recomputations. Our experiments show that RCMP's benefits hold across two different clusters, for job inputs as small as 40GB or as large as 1.2TB. Compared to RCMP, data replication is 30%-100% worse during failure-free periods. More importantly, by efficiently performing recomputations, RCMP is comparable or better even under single and double data loss events.
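The finer-grained recomputation scheduling described in the abstract can be illustrated with a minimal sketch. This is not RCMP's actual implementation; all function names and parameters below are hypothetical, and it only models the load-balancing intuition: when few partitions are lost, coarse recompute tasks occupy few nodes, while splitting each lost partition into sub-tasks keeps the whole cluster busy.

```python
# Hypothetical sketch (not RCMP's code): shows why a finer task granularity
# during recomputation spreads lost-partition work across all available
# compute nodes instead of concentrating it on a few (avoiding hot-spots).

def schedule(tasks, num_nodes):
    """Assign tasks to nodes round-robin; return per-node task counts."""
    loads = [0] * num_nodes
    for i, _ in enumerate(tasks):
        loads[i % num_nodes] += 1
    return loads

def recompute_plan(lost_partitions, num_nodes, split_factor=1):
    """Split each lost partition into `split_factor` sub-tasks, then schedule."""
    tasks = [(p, s) for p in lost_partitions for s in range(split_factor)]
    return schedule(tasks, num_nodes)

# Two lost partitions, eight nodes. Coarse tasks leave six nodes idle:
coarse = recompute_plan([0, 1], num_nodes=8, split_factor=1)
# Four sub-tasks per lost partition keep all eight nodes busy:
fine = recompute_plan([0, 1], num_nodes=8, split_factor=4)
```

Under this toy model, `coarse` is `[1, 1, 0, 0, 0, 0, 0, 0]` while `fine` is `[1, 1, 1, 1, 1, 1, 1, 1]`, capturing the parallelism argument made in the paper.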
Keywords :
data analysis; very large databases; I/O-intensive big data jobs; RCMP; big data analytics; data replication; double data loss events; finer-grained task scheduling granularity; first-order failure resilience strategy; hot-spots; intermediate job outputs; job recomputation; single data loss events; very large datasets; Big data; Educational institutions; Market research; Middleware; Processor scheduling; Resilience; Throughput; Data Replication; Failure Resilience; Recomputation
Conference_Titel :
2014 IEEE 28th International Parallel and Distributed Processing Symposium
Conference_Location :
Phoenix, AZ
Print_ISBN :
978-1-4799-3799-8
DOI :
10.1109/IPDPS.2014.102