DocumentCode :
3591097
Title :
Balancing context switch penalty and response time with elastic time slicing
Author :
Jammula, Nagakishore ; Qureshi, Moinuddin ; Gavrilovska, Ada ; Kim, Jongman
Author_Institution :
Georgia Inst. of Technol., Atlanta, GA, USA
fYear :
2014
Firstpage :
1
Lastpage :
10
Abstract :
Virtualization allows the platform to expose an increased number of logical processors by multiplexing the underlying resources across different virtual machines. The hardware resources get time shared not only between different virtual machines, but also between different workloads of the same virtual machine. An important source of performance degradation in such a scenario comes from the cache warmup penalties a workload experiences when it gets scheduled, as the working set belonging to the workload gets displaced by other concurrently running workloads. We show that a virtual machine that time-switches between four workloads can cause some of the workloads to slow down by as much as 54%. However, such performance degradation depends on the workload behavior, with some workloads experiencing negligible degradation and others severe degradation. We propose Elastic Time Slicing (ETS) to reduce the context switch overhead for the most affected workloads. We demonstrate that by taking the workload-specific context switch overhead into consideration, the CPU scheduler can make better decisions to minimize the context switch penalty for the most affected workloads, thereby resulting in substantial performance improvements. ETS enhances performance without compromising on response time, thereby achieving dual benefits. To facilitate ETS, we develop a low-overhead hardware-based mechanism that dynamically estimates the sensitivity of a given workload to context switching. We evaluate the accuracy of the mechanism under various cache management policies and show that it is very reliable. Context switch related warmup penalties increase as optimizations are applied to address traditional cache misses. For the first time, we assess the impact of advanced replacement policies and establish that it is significant.
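For intuition, the sketch below illustrates the general idea the abstract describes: stretch the time slices of workloads whose estimated cache-warmup penalty is high, while bounding the length of a full scheduling round so response time is preserved. It is not the authors' ETS implementation; all names, constants, and the proportional stretch policy are illustrative assumptions.

```python
# Illustrative sketch only: longer slices for warmup-sensitive workloads,
# with a cap on the total round length to protect response time.
# Names, thresholds, and the proportional policy are assumed, not from the paper.

BASE_SLICE_MS = 10        # default time slice (assumed)
MAX_ROUND_MS = 80         # bound on one full round across workloads (assumed)

def elastic_slices(warmup_penalty_ms):
    """warmup_penalty_ms: dict mapping workload -> estimated warmup penalty per switch (ms)."""
    total_penalty = sum(warmup_penalty_ms.values()) or 1.0
    slices = {}
    for w, p in warmup_penalty_ms.items():
        # Sensitive workloads (high warmup penalty) get a stretched slice so the
        # warmup cost is amortized over a longer run; insensitive ones keep the base.
        stretch = 1.0 + 2.0 * (p / total_penalty)   # assumed stretch factor
        slices[w] = BASE_SLICE_MS * stretch
    # Rescale so one scheduling round never exceeds MAX_ROUND_MS,
    # preserving the response-time bound.
    round_len = sum(slices.values())
    if round_len > MAX_ROUND_MS:
        scale = MAX_ROUND_MS / round_len
        slices = {w: s * scale for w, s in slices.items()}
    return slices

# Example: workload 'c' suffers the largest warmup penalty, so it receives
# the longest slice while the round stays within the response-time budget.
print(elastic_slices({'a': 0.5, 'b': 1.0, 'c': 6.0, 'd': 0.5}))
```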
Keywords :
cache storage; virtual machines; CPU scheduler; cache management policy; cache warmup penalty; context switch penalty; elastic time slicing; low-overhead hardware; virtual machine; Benchmark testing; Context; Degradation; Program processors; Schedules; Scheduling algorithms; Switches;
fLanguage :
English
Publisher :
IEEE
Conference_Titel :
2014 21st International Conference on High Performance Computing (HiPC)
Print_ISBN :
978-1-4799-5975-4
Type :
conf
DOI :
10.1109/HiPC.2014.7116707
Filename :
7116707