Title :
An automatic prefetching and caching system
Author :
Lewis, J. ; Alghamdi, M. ; Assaf, M.A. ; Xiaojun Ruan ; Zhiyang Ding ; Xiao Qin
Author_Institution :
Dept. of Comput. Sci. & Software Eng., Auburn Univ., Auburn, AL, USA
Abstract :
Steady improvements in storage capacities and CPU clock speeds intensify the performance bottleneck at the I/O subsystem of modern computers. Caching data can efficiently short-circuit the costly delays associated with disk accesses. Recent studies have shown that the disk I/O performance gains provided by a cache buffer do not scale with cache size; therefore, new algorithms must be investigated to better utilize cache buffer space. Predictive prefetching and caching solutions have been shown, in simulation experiments, to improve I/O performance in an efficient and scalable manner. However, most predictive prefetching algorithms have not yet been implemented in real-world storage systems due to two main limitations: first, existing prefetching solutions are unable to self-regulate in response to changing I/O workloads; second, an excessive number of unneeded blocks is prefetched. Combined, these drawbacks make predictive prefetching and caching a less attractive solution than simple LRU management. To address these problems, in this paper we propose an automatic prefetching and caching system (APACS for short), which mitigates these shortcomings through three unique techniques: (1) dynamic cache partitioning, (2) prefetch pipelining, and (3) prefetch buffer management. APACS dynamically partitions the buffer cache memory used for prefetched and cached blocks, automatically adjusting the buffer/cache sizes in accordance with global I/O performance. The adaptive partitioning scheme implemented in APACS optimizes cache hit ratios, which in turn accelerates application execution. Experimental results obtained from trace-driven simulations show that APACS outperforms LRU cache management and existing prefetching algorithms by an average of over 50%.
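The dynamic cache partitioning idea described above can be illustrated with a minimal sketch: a fixed pool of buffer blocks is split between an LRU-managed cache region and a prefetch region, and the split is periodically rebalanced toward whichever region has produced more hits. This is not the authors' APACS implementation; the class name, the rebalancing rule, and the eviction policies are illustrative assumptions only.

# Minimal sketch of dynamic partitioning between a cache region and a
# prefetch region. Hypothetical names; NOT the APACS code from the paper.
from collections import OrderedDict

class PartitionedBuffer:
    def __init__(self, total_blocks, prefetch_share=0.5, step_share=0.05):
        self.total = total_blocks
        self.prefetch_cap = int(total_blocks * prefetch_share)
        self.step = max(1, int(total_blocks * step_share))
        self.cache = OrderedDict()      # LRU cache of recently accessed blocks
        self.prefetch = OrderedDict()   # buffer of speculatively prefetched blocks
        self.hits = {"cache": 0, "prefetch": 0}

    def access(self, block):
        """Serve one block request, recording where (if anywhere) it hit."""
        if block in self.cache:
            self.cache.move_to_end(block)           # refresh LRU position
            self.hits["cache"] += 1
            return True
        if block in self.prefetch:
            self.prefetch.pop(block)                # promote prefetched block
            self._insert_cached(block)
            self.hits["prefetch"] += 1
            return True
        self._insert_cached(block)                  # miss: block fetched from disk
        return False

    def prefetch_block(self, block):
        """Stage a predicted block in the prefetch region, evicting FIFO."""
        if block in self.cache or block in self.prefetch:
            return
        while len(self.prefetch) >= self.prefetch_cap and self.prefetch:
            self.prefetch.popitem(last=False)
        if self.prefetch_cap > 0:
            self.prefetch[block] = True

    def rebalance(self):
        """Periodically grow whichever region is producing more hits."""
        if self.hits["prefetch"] > self.hits["cache"]:
            self.prefetch_cap = min(self.total - 1, self.prefetch_cap + self.step)
        else:
            self.prefetch_cap = max(1, self.prefetch_cap - self.step)
        self.hits = {"cache": 0, "prefetch": 0}

    def _insert_cached(self, block):
        cache_cap = self.total - self.prefetch_cap
        while len(self.cache) >= cache_cap and self.cache:
            self.cache.popitem(last=False)          # evict LRU victim
        self.cache[block] = True

In a real system the rebalance step would more plausibly be driven by a measured global I/O metric (e.g., hit ratio or latency over a sliding window) rather than raw per-region hit counts, but the sketch conveys the adaptive split between prefetched and cached blocks.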
Keywords :
cache storage; adaptive partitioning scheme; automatic prefetching system; buffer cache memory; cache buffer space; caching system; dynamic cache partitioning; prefetch buffer management; prefetch pipelining; Cache memory; Linux; Markov processes; Partitioning algorithms; Pipeline processing; Prediction algorithms; Prefetching;
Conference_Titel :
2010 IEEE 29th International Performance Computing and Communications Conference (IPCCC)
Conference_Location :
Albuquerque, NM
Print_ISBN :
978-1-4244-9330-2
DOI :
10.1109/PCCC.2010.5682310