Article: Parallelism at HotPar 2010
By: hobold (hobold.delete@this.vectorizer.org), July 30, 2010 6:54 am
Room: Moderated Discussions
Mark Christiansen (aliasundercover@nospam.net) on 7/30/10 wrote:
---------------------------
[...]
>How much performance can the GPU give while allowing software to go on working
>and go on gaining performance with new generations for 15 years?
Given that GPUs have only just begun to adopt the kind of microarchitectural sophistication that CPUs have been harvesting for decades (Nvidia's GF104 is the first superscalar GPU), I would like to think that GPUs still have several promising avenues to follow. CPUs, in turn, are left with few options beyond more cores and wider SIMD.
It will be interesting to see how far the two converge. Software-wise, GPU programmers are forced to take the most difficult step first (making massive data parallelism explicit), while CPU programmers still tend to view parallelism as a late optimization step. I think that difference in mentality is the real reason why papers so often quote 100-fold performance gains for GPUs.
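To make the mentality difference concrete, here is a minimal sketch (plain Python, all names my own invention, not any real GPU API): the same SAXPY computation written CPU-style as an ordered serial loop, and GPU-style as a per-element "kernel" handed to a launcher that maps it over all indices. In the second form the data parallelism is explicit from the start; in the first it would have to be retrofitted later.

```python
def saxpy_serial(a, x, y):
    # CPU mentality: an ordered loop; any parallelism is a later
    # optimization bolted onto this sequential structure.
    out = []
    for i in range(len(x)):
        out.append(a * x[i] + y[i])
    return out

def saxpy_kernel(i, a, x, y):
    # GPU mentality: the computation is written for ONE element.
    # Independence between indices is stated up front, so a runtime
    # is free to run every i in parallel.
    return a * x[i] + y[i]

def launch(kernel, n, *args):
    # Stand-in for a GPU grid launch: apply the kernel to every
    # index; here it is just a list comprehension, but nothing in
    # the kernel prevents all n invocations from running at once.
    return [kernel(i, *args) for i in range(n)]

x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]
print(saxpy_serial(2.0, x, y))            # [6.0, 9.0, 12.0]
print(launch(saxpy_kernel, len(x), 2.0, x, y))  # same result
```

Both forms compute the same thing; the point is only where the parallelism lives, in the programmer's head or in the code's structure.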