Title of article :
CUDA’s Mapped Memory to Support I/O Functions on GPU
Author/Authors :
Wu, Wei (Jiangnan Institute of Computing Technology, China); Qi, Fengbin (Jiangnan Institute of Computing Technology, China); He, Wangquan (Jiangnan Institute of Computing Technology, China); Wang, Shanshan (Jiangnan Institute of Computing Technology, China)
From page :
588
To page :
598
Abstract :
The APIs provided by CUDA help programmers obtain high-performance CUDA applications on the GPU, but they cannot support most I/O operations in device code. This work uses the characteristics of CUDA’s mapped memory to build a dynamic polling service model on the host that supports most I/O functions, such as file read/write and “printf”. Implementing these I/O functions has some influence on the performance of the original applications, yet the functions respond quickly to users’ I/O requests, and the “printf” implementation performs better than CUDA’s. Based on these I/O functions, an easy and effective real-time method is given for users to debug their programs. The functions improve the productivity of porting legacy C/C++ code to CUDA and broaden CUDA’s capabilities.
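The abstract describes the mechanism only at a high level. The following is a minimal CUDA sketch of the underlying idea, host-side polling over mapped (zero-copy) memory to service a device-issued “printf”-style request; it is not the authors’ implementation, and the structure io_request_t, the buffer size, and the single-slot handshake are illustrative assumptions.

// Sketch: host polls a mapped-memory slot while a kernel is running and
// performs the requested I/O on the device's behalf. Requires a GPU that
// supports mapped host memory (canMapHostMemory).
#include <cstdio>
#include <cstring>
#include <cuda_runtime.h>

#define MSG_LEN 128

struct io_request_t {
    volatile int ready;     // 0 = slot empty, 1 = device filled a request
    char msg[MSG_LEN];      // payload the device wants printed
};

__global__ void kernel(io_request_t *req)   // req is the device alias of host memory
{
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        const char s[] = "hello from the GPU\n";
        for (int i = 0; i < (int)sizeof(s); ++i) req->msg[i] = s[i];
        __threadfence_system();             // make the message visible to the host
        req->ready = 1;                     // then raise the flag
        while (req->ready) ;                // wait until the host has serviced it
    }
}

int main()
{
    cudaSetDeviceFlags(cudaDeviceMapHost); // enable mapped (zero-copy) host memory

    io_request_t *h_req, *d_req;
    cudaHostAlloc((void **)&h_req, sizeof(*h_req), cudaHostAllocMapped);
    memset((void *)h_req, 0, sizeof(*h_req));
    cudaHostGetDevicePointer((void **)&d_req, h_req, 0);

    kernel<<<1, 32>>>(d_req);               // asynchronous launch

    // Host-side polling loop: service requests while the kernel is running.
    while (cudaStreamQuery(0) == cudaErrorNotReady) {
        if (h_req->ready) {
            fputs(h_req->msg, stdout);      // perform the I/O on the host
            h_req->ready = 0;               // hand the slot back to the device
        }
    }
    cudaDeviceSynchronize();
    cudaFreeHost(h_req);
    return 0;
}

In the paper’s model the host runs a dynamic polling service over such mapped buffers and also handles file read/write requests; the sketch shows only a single request slot serviced by the launching thread.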
Keywords :
CUDA, I/O functions, mapped memory, dynamic polling service model
Journal title :
Tsinghua Science and Technology
Record number :
2535573
Link To Document :