Abstract:
The author models the internal structure of memory as a tree whose nodes represent memory modules (such as caches and disks) and whose edges represent the buses between them. Modules closer to the root have smaller access time, capacity, and block size, and all buses may transmit blocks of data in parallel. The author gives a deterministic sorting algorithm based on greed sort and shows that its running time is optimal up to a constant factor. The bound implies how many parallel modules are needed at each level of the hierarchy to overcome the I/O bottleneck of sorting. The proposed algorithm also applies to the less general UMH (uniform memory hierarchy) and P-UMH models.
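As a rough illustration only (the class and field names below are assumptions for exposition, not the paper's notation), the memory model can be pictured as a tree of modules whose access time, capacity, and block size grow with distance from the root, connected by buses that can each move a block concurrently. A minimal Python sketch of that structure:

# Illustrative sketch, not the paper's formal model: a memory hierarchy as a
# tree of modules connected by buses, with parameters growing away from the root.
from dataclasses import dataclass, field
from typing import List

@dataclass
class MemoryModule:
    name: str
    access_time: float      # time to move one block over the bus to the parent
    capacity: int           # number of records the module can hold
    block_size: int         # records transferred per bus operation
    children: List["MemoryModule"] = field(default_factory=list)

    def add_child(self, child: "MemoryModule") -> "MemoryModule":
        # A child lies farther from the root, so it is larger and slower.
        assert child.access_time >= self.access_time
        assert child.capacity >= self.capacity
        assert child.block_size >= self.block_size
        self.children.append(child)
        return child

# Hypothetical example: a fast root module with two parallel disk-like children.
root = MemoryModule("cache", access_time=1.0, capacity=1 << 10, block_size=8)
root.add_child(MemoryModule("disk0", access_time=100.0, capacity=1 << 30, block_size=1 << 12))
root.add_child(MemoryModule("disk1", access_time=100.0, capacity=1 << 30, block_size=1 << 12))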
Keywords:
sorting; storage management; I/O bottlenecks; P-UMH; access time; block size; capacity; deterministic sorting algorithm; fast sort; internal structure of memory; memory modules; parallel modules; parallelism; tree; uniform memory hierarchies; Concurrent computing; Costs; Disk drives; Parallel processing; Registers