Article: Harpertown Performance Preview
By: Joe Chang (jchang6@Xyahoo.com), November 11, 2007 6:15 pm
Room: Moderated Discussions
Well, a rather lame effort by Intel at generating performance results for a new product.
(Good product => no funding for performance work?)
One TPC-C result: 2x quad-core X5460 3.16GHz at 273,666 tpm-C
(Oracle/Linux), vs. 251,300 for 2x X5365 3.0GHz (SQL Server/Windows).
Some SPEC CPU rate results, no SPEC CPU speed, no TPC-H;
I didn't check the other results on Intel's web site.
This is the X5460 on the old 5000P chipset, not the new 5400 Seaburg. Even setting aside any IPC gains, I would have expected more than an 8.9% gain from the 5% frequency increase and the larger cache alone.
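A quick back-of-envelope check of that gap, using nothing beyond the figures already quoted above:

# ratio of the two published tpm-C results vs. the clock-speed ratio
x5460_tpmc = 273666   # 2x QC X5460 3.16GHz, Oracle/Linux
x5365_tpmc = 251300   # 2x QC X5365 3.0GHz, SQL Server/Windows
print((x5460_tpmc / x5365_tpmc - 1) * 100)   # ~8.9% more throughput
print((3.16 / 3.00 - 1) * 100)               # ~5.3% more frequency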
Joe Chang (jchang6@Xyahoo.com) on 11/11/07 wrote:
---------------------------
>Comparing the Xeon X5355 2.66GHz, the Xeon 7140 (3.4GHz NetBurst) and the dual-core Opteron 2.8GHz:
>the Core 2 based X5355 has the best SPEC CPU 2006 integer score;
>for 8-core TPC-C & TPC-H results (2x X5355, 4x 7140, and 4x Opteron),
>the 7140 has the best TPC-C (30% over Core 2), and
>the Opteron has the best TPC-H, by about 10% over Core 2.
>
>Considering that TPC-C results aimed at best performance (as opposed to price-performance)
>typically run at 175-200 IOPS per data disk, the 528 disks in the X5355 241K
>tpm-C result probably amount to on the order of 100K IOPS; at 8KB per IO, that makes about 800MB/s,
>which is not really heavy for the 5000P chipset.
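(A rough check of that arithmetic, assuming the midpoint of the 175-200 IOPS range and the 8KB transfer size stated above:)

# estimated disk I/O load behind the 528-disk X5355 TPC-C configuration
data_disks = 528
iops_per_disk = 190                        # assumed midpoint of 175-200
total_iops = data_disks * iops_per_disk    # ~100,000 IOPS
bandwidth_mb = total_iops * 8 / 1024       # 8KB per IO -> ~780 MB/s
print(total_iops, round(bandwidth_mb))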
>
>I am not sure what is limiting in TPC-H, as I do not have a recent generation perfmon log,
>but look at the HP ML370G5 result with X5355,
>6 P800 SAS RAID controllers, 25 SAS disks on each controller.
>You can get about 800MB/sec from a single first-generation SAS RAID controller. There
>was an Intel/LSI Logic press announcement claiming 3GB/s IO from a system with the
>5000P chipset. I think the reference system has 1x8 and 3x4 PCI-E slots. The ML370G5
>with 6x4 PCI-E might be able to do more.
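(To put those slot counts in rough perspective: the ~250 MB/s per lane figure below is just the theoretical PCI-E gen 1 rate per direction, not a number from the announcement.)

pcie1_lane_mb = 250                        # theoretical PCI-E gen 1 bandwidth per lane, per direction
ref_system = (1*8 + 3*4) * pcie1_lane_mb   # 1 x8 + 3 x4 slots -> ~5,000 MB/s ceiling
ml370g5 = (6*4) * pcie1_lane_mb            # 6 x4 slots        -> ~6,000 MB/s ceiling
print(ref_system, ml370g5)                 # both comfortably above the claimed 3GB/s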
>
>The new Seaburg chipset is listed as capable of 44 PCI-E (gen 1) lanes, which could allow for 10 x4 PCI-E slots,
>so I think this could benefit TPC-H.
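(Same lane arithmetic applied to Seaburg, using the 10 x4 slot split suggested above:)

seaburg_ceiling_gb = 10 * 4 * 250 / 1000   # 10 x4 slots at gen 1 rates -> ~10 GB/s aggregate ceiling
print(seaburg_ceiling_gb)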
>
>It is clear that TPC-C is more influenced by cache size.
>Hence the weakest CPU (in terms of SPEC CPU int) had the best TPC-C result by a huge margin, with the help of its 16M L3.
>TPC-C is also helped by HT, which is not a factor in TPC-H or SPEC CPU.
>
>Anyway, hopefully we will have published TPC-C & TPC-H results to scrutinize this coming week.
>
>Marcin Dalecki (martin@dalecki.de) on 11/11/07 wrote:
>---------------------------
>>For a DB load the cache snoop handling should have no really noticeable impact.
>>Those loads tend to pin tasks to particular CPUs anyway and tend to be IO bound
>>- and there you will gain the most from the huge improvement in PCI throughput availability.
>>Depending on how you stuff your system with IO subsystems, this could have an overall impact
>>even on the order of hundreds of percent. And then of course remember that you can stuff more RAM into the system.
>