By: Joe Chang (jchang6.delete@this.Xyahoo.com), August 28, 2007 6:17 pm
Room: Moderated Discussions
This is why the Xeon 7140, with its old NetBurst cores, can beat Opteron in 4-socket TPC-C, even while giving up 64G to 128G of memory,
for MS SQL on Windows.
The Xeon 7140 scores 318K with 64G of memory vs 269K for the 2.8GHz Opteron with 128G.
Even if the Opteron were scaled to 3.2GHz, assuming a 0.8% gain per 1% of frequency, the quad 3.2GHz Opteron would be around 293K.
The 2.8GHz Opteron wins handily over the Xeon 7140 in TPC-H, which is not as cache-dependent, or rather is outright insensitive to cache size.
I suspect this is also a deliberate AMD decision not to compete on cache size:
even though a large-cache version could sell for a higher price, disposing of large-cache down-bins is a tricky matter,
and revenue per wafer could actually decrease.
Intel can play this game because they have the luxury of four big fabs on each of the last two processes.
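A back-of-envelope sketch of that frequency-scaling estimate, assuming the stated 0.8% throughput gain per 1% of frequency is applied linearly against the 2.8GHz baseline (the exact figure depends on how the per-percent gain is compounded and rounded, so this naive reading lands a bit above the 293K quoted):

```python
# Naive linear frequency scaling of the quad Opteron TPC-C result,
# per the assumption of ~0.8% throughput gain per 1% of frequency.
base_tpmc = 269_000          # quad Opteron 2.8GHz TPC-C score
f_base, f_new = 2.8, 3.2     # GHz

freq_gain = (f_new - f_base) / f_base      # ~14.3% more frequency
scaled = base_tpmc * (1 + 0.8 * freq_gain) # 0.8% perf per 1% freq

print(f"estimated: {scaled / 1000:.0f}K tpmC")
```

Either way, a hypothetical 3.2GHz quad Opteron still falls well short of the 318K Xeon 7140 result.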
>
>>I don't know how Opteron gets away with its L2 caches being small. Some of it is
>>probably declaring that it's a NUMA system to the OS so it takes CPU affinity into
>>account when making scheduling decisions.
>
>Low memory latency for local stuff is probably the answer. Although some cache-happy
>benchmarks like SPECjbb really do suffer.
>