Jun 8, 2015
Parallel Programming
Amdahl's law
In parallel computing, the speedup of a program using multiple processors is limited by the time needed for the program's sequential fraction. For example, suppose a program needs 20 hours on a single processor core, and the portion that takes one hour to execute cannot be parallelized, while the remaining 19 hours (95%) of execution time can be. Then no matter how many processors are devoted to the parallelized execution, the minimum running time cannot drop below that critical one hour, so the speedup is limited to at most 20×.
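In formula form (the standard statement of the law, not spelled out in the original post; $p$ is the parallelizable fraction and $n$ the number of processors):

$$S(n) = \frac{1}{(1 - p) + p/n}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{1 - p}$$

Plugging in the example above, $p = 0.95$ gives $1 / (1 - 0.95) = 1 / 0.05 = 20$, the 20× ceiling quoted.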
Gustafson–Barsis' law
States that computations involving arbitrarily large data sets can be efficiently parallelized: if the problem size is scaled up along with the number of processors, the sequential fraction shrinks relative to the total work, so the achievable (scaled) speedup keeps growing with processor count rather than hitting Amdahl's fixed ceiling.
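In formula form (again the standard statement; $s$ is the serial fraction of the execution time measured on the parallel system, $p = 1 - s$ the parallel fraction, and $N$ the number of processors):

$$S(N) = s + p \cdot N = N - (N - 1)\,s$$

Because $s$ is measured on the scaled-up problem, the speedup grows nearly linearly in $N$ as long as the serial share of the work stays small.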
Labels: design patterns, laws, parallel programming, principles