Authors:
Song, Aibo (School of Computer Science and Engineering, Southeast University, China); Zhao, Maoxian (College of Mathematics and Systems Science, Shandong University of Science and Technology, China); Xue, Yingying (School of Computer Science and Engineering, Southeast University, China); Luo, Junzhou (School of Computer Science and Engineering, Southeast University, China)
Abstract:
The Hadoop Distributed File System (HDFS) is the most popular framework for storing and processing large amounts of data on clusters of machines. Although many techniques have been proposed to improve its processing efficiency and resource utilization, traditional HDFS still suffers from the low throughput and I/O rate of disk-based storage. In this paper, we address this problem by developing a memory-based Hadoop framework called MHDFS. First, we design a strategy for allocating and configuring a reasonable amount of memory for MHDFS and build the framework on RAMFS. Second, we propose a new method for replacing data to disk when memory becomes excessively occupied; an algorithm for estimating and updating replacement decisions is designed based on a file-heat metric. Finally, extensive experiments demonstrate the effectiveness of MHDFS and its advantage over conventional HDFS.