Title :
An analysis of scatter decomposition
Author :
Nicol, David M. ; Saltz, Joel H.
Author_Institution :
Dept. of Comput. Sci., Coll. of William & Mary, Williamsburg, VA, USA
Date :
11/1/1990
Abstract :
A formal analysis of a powerful mapping technique known as scatter decomposition is provided. Scatter decomposition divides an irregular computational domain into a large number of equally sized pieces and distributes them modularly among processors. A probabilistic model of workload in one dimension is used to formally explain why and when scatter decomposition works. The first result shows that if the workload correlation is a convex function of distance, then scattering a more finely decomposed domain yields a lower average processor workload variance. The second result shows that if the workload is a stationary Gaussian process and the correlation function decreases linearly in distance until it becomes zero and then remains zero, scattering a more finely decomposed domain yields a lower expected maximum processor workload. It is also shown that if the correlation function decreases linearly across the entire domain, then among all mappings that assign an equal number of domain pieces to each processor, scatter decomposition minimizes the average processor workload variance. The dependence of these results on the assumption of decreasing correlation is illustrated with situations where a coarser granularity actually achieves better load balance.
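The modular (cyclic) assignment at the heart of scatter decomposition can be sketched as follows. This is a minimal Python illustration under assumed parameters (a synthetic 1-D workload with distance-decaying correlation, 8 processors, piece counts chosen for convenience), not the authors' model or implementation: piece i is assigned to processor i mod P, and progressively finer decompositions are compared by the variance and maximum of the resulting processor loads.

import numpy as np

def scatter_map(num_pieces, num_procs):
    # Scatter (cyclic) decomposition: piece i goes to processor i mod P.
    return np.arange(num_pieces) % num_procs

def processor_loads(piece_loads, num_procs):
    # Total workload landing on each processor under the scatter mapping.
    owners = scatter_map(len(piece_loads), num_procs)
    return np.bincount(owners, weights=piece_loads, minlength=num_procs)

# Synthetic 1-D workload with positive, distance-decaying correlation
# (a smoothed white-noise process -- an illustrative assumption only).
rng = np.random.default_rng(0)
workload = np.convolve(rng.normal(1.0, 0.3, 4096), np.ones(64) / 64, mode="same")

P = 8
for pieces in (P, 4 * P, 32 * P):  # coarser to finer decompositions
    piece_loads = workload.reshape(pieces, -1).sum(axis=1)
    loads = processor_loads(piece_loads, P)
    print(f"{pieces:4d} pieces: var={loads.var():.4f}  max={loads.max():.2f}")

With a workload whose correlation decays with distance, the finer decompositions in this sketch typically show lower processor-load variance and maximum, in line with the abstract's first two results; with a different correlation structure the ordering need not hold.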
Keywords :
parallel processing; performance evaluation; probability; coarser granularity; computational domain; convex function; correlation function; formal analysis; mapping technique; probabilistic model; scatter decomposition; stationary Gaussian; Distributed computing; Geometry; NASA; Numerical simulation; Parallel processing; Partial differential equations; Performance analysis; Processor scheduling; Scattering; Strips;
Journal_Title :
Computers, IEEE Transactions on