DocumentCode :
2414056
Title :
Using idle workstations to implement predictive prefetching
Author :
Wang, Jasmine Y Q ; Ong, J.S. ; Coady, Yvonne ; Feeley, Michael J.
Author_Institution :
Seagate Software Inc., Vancouver, BC, Canada
fYear :
2000
fDate :
2000
Firstpage :
87
Lastpage :
94
Abstract :
The benefits of Markov-based predictive prefetching have been largely overshadowed by the overhead required to produce high-quality predictions. While both theoretical and simulation results for prediction algorithms appear promising, substantial limitations exist in practice. This outcome can be partially attributed to the fact that practical implementations ultimately make compromises in order to reduce overhead. These compromises limit the level of algorithm complexity, the variety of access patterns and the granularity of trace data that the implementation supports. This paper describes the design and implementation of GMS-3P (Global Memory System with Parallel Predictive Prefetching), an operating system kernel extension that offloads prediction overhead to idle network nodes. GMS-3P builds on the GMS global memory system, which pages to and from remote workstation memory. In GMS-3P, the target node sends an online trace of an application's page faults to an idle node that is running a Markov-based prediction algorithm. The prediction node then uses GMS to prefetch pages to the target node from the memory of other workstations in the network. Our preliminary results show that predictive prefetching can reduce the remote-memory page fault time by 60% or more and that, by offloading prediction overhead to an idle node, GMS-3P can reduce this improved latency by a further 24% to 44%, depending on the Markov model order.
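As an illustration of the kind of Markov-based prediction the abstract describes, the sketch below shows a minimal order-k predictor over a page-fault trace. It is a hypothetical reconstruction for clarity only, not the authors' GMS-3P algorithm: the class name `MarkovPredictor` and its interface are assumptions, and a real implementation would run on a separate idle node and trigger prefetches rather than merely return a prediction.

```python
from collections import defaultdict, Counter

class MarkovPredictor:
    """Order-k Markov predictor over a page-fault trace (illustrative sketch).

    Maps the tuple of the last k faulted pages to frequency counts of the
    page that faulted next, and predicts the most frequent successor.
    """

    def __init__(self, order=2):
        self.order = order
        self.table = defaultdict(Counter)  # context tuple -> next-page counts
        self.history = []                  # sliding window of recent faults

    def record_fault(self, page):
        """Update the model with a newly observed page fault."""
        if len(self.history) == self.order:
            context = tuple(self.history)
            self.table[context][page] += 1
        self.history.append(page)
        if len(self.history) > self.order:
            self.history.pop(0)

    def predict(self):
        """Return the most likely next page, or None for an unseen context."""
        if len(self.history) < self.order:
            return None
        successors = self.table.get(tuple(self.history))
        if not successors:
            return None
        return successors.most_common(1)[0][0]
```

A higher `order` captures longer access patterns but needs more trace data before its contexts repeat, which is the model-order trade-off the reported 24-44% latency range reflects.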
Keywords :
Markov processes; distributed memory systems; network operating systems; operating system kernels; paged storage; workstation clusters; GMS-3P; Markov model order; Markov-based predictive prefetching; access patterns; algorithm complexity; application page faults; compromises; global memory system; high-quality predictions; idle network nodes; idle workstations; latency; online trace; operating system kernel extension; parallel predictive prefetching; prediction overhead; remote workstation memory paging; remote-memory page fault time; trace data granularity; Application software; Collision mitigation; Computational modeling; Computer science; Pattern matching; Prediction algorithms; Predictive models; Prefetching; Runtime; Workstations;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
Proceedings of the Ninth International Symposium on High-Performance Distributed Computing, 2000
Conference_Location :
Pittsburgh, PA
ISSN :
1082-8907
Print_ISBN :
0-7695-0783-2
Type :
conf
DOI :
10.1109/HPDC.2000.868638
Filename :
868638