DocumentCode :
3716647
Title :
Prototyping and In-Depth Analysis of Big Data Benchmarking
Author :
Divya Pandove;Shivani Goel
Author_Institution :
CSED, Thapar Univ., Patiala, India
fYear :
2015
Firstpage :
1222
Lastpage :
1229
Abstract :
Today's digital age has witnessed an explosion of data and information. This has changed the nature of data from a medium for supporting transactions to a transactional commodity in its own right. The consequent increase in the value of data has led to many innovations in both academic and industrial circles. The main focus remains on finding efficient ways to analyse data and derive meaningful results from it. An effective approach is to construct benchmarks that evaluate the performance of existing and upcoming data systems. A successful benchmark should cover all the major big data system application domains and their workloads. A prototype outlining a small, diverse benchmark that covers a wide range of applications in minimum time needs to be developed. In designing this prototype, the four cornerstones of big data, namely volume, veracity, velocity and variety, should also be maintained. In addition, the workloads of a benchmark suite should be carefully selected: they should represent a wide spectrum of application domains, exhibit diverse data characteristics and contain no redundancy. Lastly, there should be a metric for evaluating benchmarks so as to give them validity.
Keywords :
"Benchmark testing","Big data","Prototypes","Stakeholders","Data models","Pipelines","Feature extraction"
Publisher :
ieee
Conference_Titel :
Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing (CIT/IUCC/DASC/PICOM), 2015 IEEE International Conference on
Type :
conf
DOI :
10.1109/CIT/IUCC/DASC/PICOM.2015.182
Filename :
7363226