Article: Parallelism at HotPar 2010
By: AM (myname4rwt.delete@this.jee-male.com), August 3, 2010 11:50 pm
Room: Moderated Discussions
none (none@none.com) on 8/3/10 wrote:
---------------------------
>AM (myname4rwt@jee-male.com) on 8/3/10 wrote:
>---------------------------
>[...]
>>Here is a very simple reality check for you (and David): get a machine with win7
>>on and check how fast warp can render, say, Crysis (use the benchmark tool). GTX
>>460 (available from $200 these days) cranks out over 30 fps in 1680x1050, VHD and
>>over 60 fps (GASP) in SLI, same mode. And very short of 30/60 fps in 1920x1080, VHD from the report I saw.
>>
>>How fast do you think CPU can handle this task (btw, a representative of a very
>>widespread class of workloads), even the Intel's 6 core you mentioned? And how many
>>Intel's 6-core crown jewels selling for $1k+ a pop will it take to get the same performance?
>>
>>Have a nice time reevaluating your claims (or better yet, running the test and reporting the results).
>>
>>PS Reportedly, Warp provides good scalability with core count and makes good use
>>even of SSE 4.1, so I suggest you should pull some evidence before you start talking about poorly-written code here.
>
>Funny... or perhaps not that much. Are you aware that
>GPU have graphic units that will certainly crush any
>general-purpose CPU? But that's not what is being discussed
>here, the subject is *GP*GPU.
Ugh, no.
The poster I replied to was not talking about the characteristics of GP-computing workloads. He was talking about bandwidth and FP-capacity advantages (apparently failing to notice or understand that those are just two factors, which may not even matter for certain codes), about CPU codes that don't use multiple cores or vector instructions, and about comparisons that pit different algorithms against each other. From that he pulled some math and claimed that, compared against well-optimized CPU code, the gap will shrink to 2.5x-5x.
Which is complete and utter BS for a general claim.
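To see why peak bandwidth and peak FLOP/s alone don't pin down the gap, consider a roofline-style sketch. All the peak numbers below are made-up illustrative values, not measurements of any real CPU or GPU; the point is only that the attainable ratio between two machines moves with the kernel's arithmetic intensity (FLOPs per byte of memory traffic), so no single "GPU is Nx faster" figure follows from the two peaks by themselves.

```python
# Roofline sketch: attainable performance is capped by the lower of
# the compute roof (peak GFLOP/s) and the memory roof
# (peak bandwidth in GB/s times arithmetic intensity in FLOPs/byte).
# The peak figures here are assumed for illustration only.

def attainable_gflops(peak_gflops, peak_gbps, intensity):
    """Return the roofline-limited performance for a kernel."""
    return min(peak_gflops, peak_gbps * intensity)

cpu = {"gflops": 100.0, "gbps": 25.0}    # hypothetical CPU peaks
gpu = {"gflops": 1000.0, "gbps": 150.0}  # hypothetical GPU peaks

for intensity in (0.5, 4.0, 32.0):  # FLOPs per byte moved
    c = attainable_gflops(cpu["gflops"], cpu["gbps"], intensity)
    g = attainable_gflops(gpu["gflops"], gpu["gbps"], intensity)
    print(f"intensity {intensity:5.1f} FLOPs/B: GPU/CPU = {g / c:.1f}x")
```

With these assumed peaks the ratio comes out 6x for the bandwidth-bound cases and 10x for the compute-bound one, and real codes also bring in factors (divergence, synchronization, transfer overhead, cache behavior) that this two-roof model ignores entirely.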