Modern high performance computers are built from a combination of resources: multi-core processors, many-core processors, large caches, high speed memory, a high bandwidth inter-processor communications fabric, and high speed I/O capabilities. High performance software needs to be designed to take full advantage of this wealth of resources. Whether re-architecting and/or tuning existing applications for maximum performance, or architecting new applications for existing or future machines, it is critical to be aware of the interplay between programming models and the efficient use of these resources. Consider this a starting point for information regarding Code Modernization. When it comes to performance, your code matters!
Building parallel versions of software can enable applications to run
a given data set in less time, run multiple data sets in a fixed amount
of time, or run large-scale data sets that are prohibitive with
un-optimized software. The success of parallelization is typically
quantified by measuring the speedup of the parallel version relative to
the serial version. Beyond that comparison, however, it is also
useful to compare the measured speedup against the upper limit of the
potential speedup. That upper limit can be estimated using Amdahl's Law and Gustafson's Law.
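As a concrete illustration, both limits can be computed directly from the parallel fraction of the workload and the number of workers (the function names below are ours, not from the original post):

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Amdahl's Law: speedup for a fixed-size problem.

    The serial fraction (1 - parallel_fraction) caps the achievable
    speedup no matter how many workers are added.
    """
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_workers)


def gustafson_speedup(parallel_fraction, n_workers):
    """Gustafson's Law: scaled speedup when the problem size grows
    with the number of workers, so the parallel part expands while
    the serial part stays fixed.
    """
    serial = 1.0 - parallel_fraction
    return serial + parallel_fraction * n_workers


# With a 95% parallel workload on 64 workers, Amdahl's fixed-size
# speedup is far below 64, while Gustafson's scaled speedup is close
# to it. The Amdahl limit as workers grow is 1 / (1 - p) = 20.
print(amdahl_speedup(0.95, 64))     # roughly 15.4
print(gustafson_speedup(0.95, 64))  # roughly 60.9
```

The gap between the two numbers shows why the choice of model matters: Amdahl's Law answers "how much faster can I run this data set?", while Gustafson's Law answers "how much more data can I run in the same time?"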
Good code design takes into consideration several levels of parallelism.