Abstract:
In a chip-multiprocessor (CMP) system, the DRAM system is shared among cores. In a shared DRAM system, requests from a thread can not only delay requests from other threads by causing bank/bus/row-buffer conflicts, but can also destroy other threads' DRAM-bank-level parallelism. Requests whose latencies would otherwise have been overlapped could effectively become serialized. As a result, both fairness and system throughput degrade, and some threads can starve for long time periods. This paper proposes a fundamentally new approach to designing a shared DRAM controller that provides quality of service to threads, while also improving system throughput. Our parallelism-aware batch scheduler (PAR-BS) design is based on two key ideas. First, PAR-BS processes DRAM requests in batches to provide fairness and to avoid starvation of requests. Second, to optimize system throughput, PAR-BS employs a parallelism-aware DRAM scheduling policy that aims to process requests from a thread in parallel in the DRAM banks, thereby reducing the memory-related stall-time experienced by the thread. PAR-BS seamlessly incorporates support for system-level thread priorities and can provide different service levels, including purely opportunistic service, to threads with different priorities. We evaluate the design trade-offs involved in PAR-BS and compare it to four previously proposed DRAM scheduler designs on 4-, 8-, and 16-core systems. Our evaluations show that, averaged over 100 4-core workloads, PAR-BS improves fairness by 1.11X and system throughput by 8.3% compared to the best previous scheduling technique, Stall-Time Fair Memory (STFM) scheduling. Based on simple request prioritization rules, PAR-BS is also simpler to implement than STFM.
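The abstract describes the mechanism only at a high level: requests are grouped into batches to bound inter-thread interference, and within a batch a shortest-job-first-style thread ranking preserves each thread's bank-level parallelism. The Python sketch below is a minimal, illustrative rendering of that idea; the marking cap value, the (max-per-bank, total) ranking heuristic, and the exact tie-break order among rank, row-buffer hit, and age are assumptions for illustration, not the paper's precise specification.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Request:
    thread_id: int
    bank: int
    row: int
    arrival_time: int
    marked: bool = False          # True if the request belongs to the current batch

class ParallelismAwareBatchScheduler:
    """Illustrative PAR-BS-style scheduler: batch, rank threads, prioritize."""

    def __init__(self, marking_cap=5):          # cap value is an assumed parameter
        self.marking_cap = marking_cap          # max marked requests per (thread, bank)
        self.queue = []                         # outstanding requests
        self.thread_rank = {}                   # lower value = higher priority

    def form_batch(self):
        """Mark up to `marking_cap` oldest requests per (thread, bank) as the new batch."""
        per_thread_bank = defaultdict(int)
        for req in sorted(self.queue, key=lambda r: r.arrival_time):
            key = (req.thread_id, req.bank)
            if per_thread_bank[key] < self.marking_cap:
                req.marked = True
                per_thread_bank[key] += 1
        self._rank_threads()

    def _rank_threads(self):
        """Shortest-job-first ranking: threads with fewer marked requests in their
        busiest bank (then fewer total marked requests) get higher rank, so their
        requests can be serviced in parallel across banks and they stall less."""
        load = defaultdict(lambda: defaultdict(int))     # thread -> bank -> marked count
        for req in self.queue:
            if req.marked:
                load[req.thread_id][req.bank] += 1
        def job_length(tid):
            banks = load[tid]
            return (max(banks.values(), default=0), sum(banks.values()))
        threads = {r.thread_id for r in self.queue}
        for rank, tid in enumerate(sorted(threads, key=job_length)):
            self.thread_rank[tid] = rank

    def pick_next(self, open_row_of_bank):
        """Select the next request: batched (marked) requests first, then requests
        from higher-ranked threads, then row-buffer hits, then the oldest request."""
        if self.queue and not any(r.marked for r in self.queue):
            self.form_batch()                   # previous batch drained; start a new one
        if not self.queue:
            return None
        def priority(r):
            return (
                not r.marked,                                  # marked over unmarked
                self.thread_rank.get(r.thread_id, 1 << 30),    # higher-ranked thread first
                r.row != open_row_of_bank.get(r.bank),         # row-buffer hits first
                r.arrival_time,                                # oldest first
            )
        return min(self.queue, key=priority)
```

Because all requests of the current batch are guaranteed service before any newer request, no thread can be starved indefinitely; the within-batch ranking is what recovers each thread's bank-level parallelism, which is the throughput side of the design.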
Keywords:
DRAM systems; memory systems; memory scheduling; memory-level parallelism; chip multiprocessors; multi-core systems; processor scheduling; shared memory systems; fairness; quality of service