DocumentCode :
1791678
Title :
An improved memory management scheme for large scale graph computing engine GraphChi
Author :
Yifang Jiang ; Diao Zhang ; Kai Chen ; Qu Zhou ; Yi Zhou ; Jianhua He
Author_Institution :
Sch. of Inf. Security & Eng., Shanghai Jiao Tong Univ., Shanghai, China
fYear :
2014
fDate :
27-30 Oct. 2014
Firstpage :
58
Lastpage :
63
Abstract :
GraphChi is the first reported disk-based graph engine that can efficiently handle billion-scale graphs on a single PC. It can execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. Using the novel parallel sliding windows (PSW) technique to load subgraphs from disk into memory for vertex and edge updates, it achieves data processing performance close to, and sometimes better than, that of mainstream distributed graph engines. However, the GraphChi authors noted that memory is not effectively utilized for large datasets, which leads to suboptimal computation performance. In this paper, motivated by the concept of “pin” from TurboGraph and “ghost” from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory during the whole computing process. Part-in-memory mode is implemented with only about 40 additional lines of code added to the original GraphChi engine. Extensive experiments are performed on large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach reduces GraphChi running time by up to 60% for the PageRank algorithm. Interestingly, we find that pinning a larger portion of data in memory does not always lead to better performance when the whole dataset cannot fit in memory; there exists an optimal portion of data to keep in memory to achieve the best computational performance.
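For illustration only, a minimal C++ sketch of the pinning idea described in the abstract: a fixed fraction of shards stays resident in RAM for the whole run, while the rest are re-read from disk on every access. The class name PartInMemoryStore, the Shard type, and load_from_disk are assumptions for this sketch and do not reflect the actual GraphChi code (which the authors report modifying with only about 40 lines).

```cpp
// Sketch of "Part-in-memory" pinning: keep the first pin_fraction of shards
// permanently in RAM; all other shards are loaded transiently from disk.
#include <cstddef>
#include <vector>

struct Edge { int src, dst; float value; };
using Shard = std::vector<Edge>;

class PartInMemoryStore {
public:
    // pin_fraction in [0,1]: share of shards kept resident across iterations.
    PartInMemoryStore(std::size_t num_shards, double pin_fraction)
        : num_pinned_(static_cast<std::size_t>(num_shards * pin_fraction)),
          pinned_(num_pinned_) {
        for (std::size_t i = 0; i < num_pinned_; ++i)
            pinned_[i] = load_from_disk(i);   // loaded once, never evicted
    }

    // Returns shard i, reading it from disk only if it is not pinned.
    Shard get_shard(std::size_t i) const {
        if (i < num_pinned_) return pinned_[i];
        return load_from_disk(i);             // transient: discarded after use
    }

private:
    // Placeholder for the real disk read performed by the engine.
    static Shard load_from_disk(std::size_t /*shard_id*/) { return Shard{}; }

    std::size_t num_pinned_;
    std::vector<Shard> pinned_;
};
```

As the abstract notes, choosing pin_fraction involves a trade-off: when the whole dataset does not fit in memory, pinning more data is not always better, and an intermediate value gives the best performance.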
Keywords :
data mining; graph theory; learning (artificial intelligence); mathematics computing; GraphChi engine; GraphLab; PC; PSW; PageRank algorithm; TurboGraph; billion-scale graphs; data mining; disk-based graph engine; distributed graph engines; edges updating; graph mining; large scale graph computing engine; machine learning algorithms; memory management scheme; memory utilization mode; parallel sliding windows; part-in-memory mode; Educational institutions; Electrical engineering; Engines; Heuristic algorithms; Memory management; Random access memory; Twitter; Big data; Graph process; GraphChi; Part-in-memory mode;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Big Data (Big Data), 2014 IEEE International Conference on
Conference_Location :
Washington, DC
Type :
conf
DOI :
10.1109/BigData.2014.7004357
Filename :
7004357