Growing Returns: Gustafson’s Law

Based on Amdahl’s work, the viability of massive parallelism was
questioned for a number of years. Then, in the late 1980s, at the Sandia
National Lab, impressive near-linear speedups in three practical
applications were observed on a 1,024-processor hypercube. The results
(Gustafson 1988) demonstrated that near-linear speedup was possible in
many practical cases, even when Amdahl’s Law predicted otherwise.


    Built into Amdahl’s Law are several assumptions that may not hold true
    in real-world implementations.
First, Amdahl’s Law assumes that the best-performing
serial algorithm is strictly limited by the availability of CPU
cycles. This may not be the case. A multi-core processor may implement a
separate cache on each core, so more of the problem’s data set can be
held in cache, reducing memory latency. The second flaw is that
    Amdahl’s Law assumes that the serial algorithm is the best possible
    solution for a given problem. However, some problems lend themselves to
    a more efficient parallel solution. The number of computational steps may
    be significantly less in the parallel implementation.

    Perhaps the biggest weakness, however, is the assumption that
    Amdahl’s Law makes about the problem size. Amdahl’s Law assumes that
    as the number of processor cores increases, the problem size stays the
same. In most cases, this is not valid. When given more computing
resources, the problem tends to grow to meet the resources available.
It is more often the case that the run time of the application is held
constant while the problem size scales with the number of cores.

Based on the work at Sandia, an alternative formulation for speedup,
referred to as scaled speedup, was developed by E. Barsis.

Scaled speedup = N + (1 − N) × s

where N is the number of processor cores and s is the ratio of the time
spent in the serial portion of the program to the total execution time.

Scaled speedup is commonly referred to as Gustafson’s Law. From this
equation, one can see that the speedup grows linearly with the number
of cores: for a fixed serial ratio s, it is a line in N with slope (1 − s).
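The contrast between the two formulations can be seen numerically. The following sketch (in Python; the 5% serial fraction is an illustrative assumption, not a figure from the text) evaluates both laws on a 1,024-core machine like the Sandia hypercube:

```python
def amdahl_speedup(n, s):
    """Amdahl's Law: fixed problem size.
    n = number of processor cores, s = serial fraction of the run time."""
    return 1.0 / (s + (1.0 - s) / n)

def scaled_speedup(n, s):
    """Gustafson's Law (scaled speedup): fixed run time, growing problem.
    s = ratio of time in the serial portion to total execution time."""
    return n + (1.0 - n) * s

# Illustrative values: 1,024 cores with an assumed 5% serial portion.
n, s = 1024, 0.05
print(f"Amdahl:    {amdahl_speedup(n, s):.1f}x")   # bounded above by 1/s = 20x
print(f"Gustafson: {scaled_speedup(n, s):.1f}x")   # close to linear in n
```

With the same 5% serial portion, Amdahl’s Law caps the speedup near 20×, while the scaled formulation predicts roughly 973×, reflecting its assumption that the problem grows with the machine.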


    Gustafson’s Law has been shown to be equivalent to Amdahl’s Law
    (Shi 1996). However, Gustafson’s Law offers a much more realistic look
at the potential of parallel computing on multi-core processors.