Title :
Fast context switching by hierarchical task allocation and reconfigurable cache
Author :
Tanaka, Kiyofumi
Author_Institution :
Sch. of Inf. Sci., Japan Adv. Inst. of Sci. & Technol., Ishikawa, Japan
Abstract :
A multithreaded processor architecture enables fast context switching for tolerating memory access latency and bridging the synchronization gap, and thus enables efficient utilization of execution pipelines. However, it cannot prevent all pipeline stalls; stalls still occur when all processor built-in threads are in a wait state or when a task does not have enough threads to fill all available context slots, since the mechanism for switching active threads is effective only for processor built-in contexts. In this paper, we propose an architecture that increases the virtual number of built-in contexts and enables seamless task switching by allocating and swapping task contexts hierarchically between the processor and memory in a multitasking environment. At the same time, we aim to support real-time applications through hierarchical task allocation based on task priority and through fast response mechanisms for interrupt requests that exploit the multiple-context architecture. Moreover, we propose two reconfigurable cache applications, a priority-based partitioning cache and a FIFO buffer, together with their implementation methods. We have extended a general-purpose RISC processor architecture and are developing a new RISC core that implements the seamless task switching, fast response to interrupt requests, and reconfigurable caches, to support real-time processing in a multitasking environment. This paper describes the design of the RISC core.
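The abstract describes allocating task contexts hierarchically by priority, keeping the highest-priority tasks in processor built-in context slots and spilling the rest to memory. The following is a minimal software sketch of that allocation policy, not the paper's hardware mechanism; the names (NUM_HW_CONTEXTS, struct task, allocate_contexts) and the slot count are illustrative assumptions.

```c
/* Sketch: priority-based hierarchical context allocation.
 * The top NUM_HW_CONTEXTS ready tasks are bound to processor context
 * slots; the remainder stay as memory-resident contexts (assumption). */
#include <stdio.h>
#include <stdlib.h>

#define NUM_HW_CONTEXTS 4   /* processor built-in context slots (assumed) */
#define NUM_TASKS       8   /* ready tasks in this example */

struct task {
    int id;
    int priority;           /* larger value = higher priority (assumed) */
    int in_hw_context;      /* 1 if held in a processor context slot */
};

/* Sort by descending priority so the highest-priority tasks come first. */
static int by_priority_desc(const void *a, const void *b)
{
    const struct task *ta = a, *tb = b;
    return tb->priority - ta->priority;
}

static void allocate_contexts(struct task *tasks, int n)
{
    qsort(tasks, n, sizeof *tasks, by_priority_desc);
    for (int i = 0; i < n; i++)
        tasks[i].in_hw_context = (i < NUM_HW_CONTEXTS);
}

int main(void)
{
    struct task tasks[NUM_TASKS];
    for (int i = 0; i < NUM_TASKS; i++)
        tasks[i] = (struct task){ .id = i, .priority = rand() % 10 };

    allocate_contexts(tasks, NUM_TASKS);

    for (int i = 0; i < NUM_TASKS; i++)
        printf("task %d (prio %d) -> %s\n", tasks[i].id, tasks[i].priority,
               tasks[i].in_hw_context ? "HW context slot" : "memory context");
    return 0;
}
```

In the proposed architecture this decision would be made in hardware when task contexts are swapped between the processor and memory; the sketch only illustrates the priority ordering that drives the allocation.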
Keywords :
cache storage; multi-threading; multiprocessing systems; pipeline processing; processor scheduling; real-time systems; reconfigurable architectures; reduced instruction set computing; FIFO buffer; RISC core; RISC processor architecture; execution pipelines; fast context switching; fast response mechanisms; hierarchical task allocation; memory access latency; multiple-context architecture; multitasking environment; multithreaded processor architecture; partitioning cache; pipeline stalls; processor built-in contexts; processor built-in threads; real-time applications; real-time processing; reconfigurable cache; synchronization gap; task priority; task switching; Delay; Multitasking; Pipelines; Reduced instruction set computing; Yarn;
Conference_Title :
Innovative Architecture for Future Generation High-Performance Processors and Systems, 2003
Print_ISBN :
0-7695-2019-7
DOI :
10.1109/IWIA.2003.1262779