Title :
APP: Minimizing Interference Using Aggressive Pipelined Prefetching in Multi-level Buffer Caches
Author :
Patrick, Christina M. ; Voshell, Nicholas ; Kandemir, Mahmut
Author_Institution :
Pennsylvania State Univ., University Park, PA, USA
Abstract :
As services become more complex, involving multiple interactions, and storage servers are shared by multiple services, the I/O streams arising from these services compete for disk attention. Because a storage server must service a large number of streams, most disk time is spent seeking, which degrades response times. Aggressive Pipelined Prefetching (APP)-enabled storage clients manage the buffer cache and I/O streams to minimize the disk I/O interference arising from competing streams. The goal of APP is to decrease application execution time by increasing the throughput of individual I/O streams and exploiting idle capacity on remote nodes along with idle network time, thereby avoiding alternating bursts of activity followed by periods of inactivity. APP significantly increases overall I/O throughput and decreases the overall messaging overhead between servers. In APP, the intelligence is embedded in the clients, which automatically infer the parameters needed to achieve maximum throughput. APP clients use aggressive prefetching and data offloading to remote buffer caches in multi-level buffer cache hierarchies to minimize disk interference and to temper the side effects of aggressive prefetching. We implemented our scheme on a 16-node Linux cluster and evaluated it with an extremely I/O-intensive Radix-k application, developed at Argonne National Laboratory for studies of the scalability of parallel image composition and particle tracing, using data sets of up to 128 GB. We observed that the execution time of the application decreased by 68% on average when using our scheme.
Keywords :
artificial intelligence; cache storage; disc storage; interference; pipeline processing; 16-node Linux cluster; APP clients; Argonne National Laboratory; aggressive pipelined prefetching; application execution time; automatic parameter inference; disk I/O interference; embedded intelligence; extremely I/O-intensive Radix-k application; interference minimization; messaging overhead; multilevel buffer caches; parallel image composition scalability; particle tracing; remote buffer caches; storage client; storage server; Buffer storage; Interference; Pipeline processing; Pipelines; Prefetching; Servers; Throughput; Aggressive prefetching; Automatic configuration of parameters; Cache partitioning; Data offloading; Disk throughput; I/O; Interference; Performance; Pipelining;
Conference_Titel :
2011 11th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid)
Conference_Location :
Newport Beach, CA, USA
Print_ISBN :
978-1-4577-0129-0
Electronic_ISBN :
978-0-7695-4395-6
DOI :
10.1109/CCGrid.2011.47