By: David Kanter (dkanter.delete@this.realworldtech.com), August 17, 2014 9:23 am
Room: Moderated Discussions
> From the Microprocessor Report article (bold is mine):
>
> The rest of the chip is where ThunderX shows its advantages. Xeon E5 offers up to 40 lanes
> of PCI Express Gen3, but for the server to have networking and storage connections, these lanes
> must connect to external Ethernet and SATA adapters. In contrast, ThunderX integrates these
> important I/O connections, as Figure 2 shows, eliminating the extra adapter cost.
If you want to engage in a productive discussion, it would be helpful if you could try to understand the context. Since I actually write and edit MPR, I have access to the articles...just to review some facts:
1. ThunderX is expected to reach production in 2H15, which means actual sales to customers in 2016.
2. Xeon E5-2470 v2 is a low-end crippled IVB-EP (only 24 PCIe 3.0 lanes, 1 QPI link, 3 memory interfaces)
3. The E5-24xx line launched in 1Q14, and the E5-26xx line launched in 3Q13
4. The highest-end Xeon E5-2697 v2 has an estimated SPECint_rate2006 of around 430 (after derating by 10% to account for ICC)
So while I agree that Cavium will have some really impressive integrated networking (supposedly 300 Gbps total!), the reality is that the compute capabilities are about 20% behind one of the best shipping Xeons. By the time 2016 rolls around, Intel will have upgraded their server line twice (HSW-EP later this year, and BDW-EP next year).
And even if Cavium gets to 80% of a BDW-EP, it will probably only be against a low-end model...not the mid-range or top-end ones.
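To spell out the arithmetic behind those numbers (assuming the ~20% gap is taken against the derated 430, which is implied above but not stated explicitly):

Raw E5-2697 v2 score with ICC: ~430 / 0.9 = ~478
ThunderX estimate, ~20% behind: ~0.8 x 430 = ~344 SPECint_rate2006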
> And about total power consumption and efficiency:
>
> Compared with Xeon, ThunderX could deliver 50% to 100% more performance per watt and per dollar, particularly
> when considering the additional chips that Intel needs to complete the server design.
>
> Therein they are mentioning the advantage of an SoC vs. a CPU. As I said before, the 80W is for
> the whole SoC. The 95W is only for the Xeon CPU; add the TDP of the rest of the components of the
> Intel platform and you will need up to double the power to do the same work as the ARM SoC.
Your calculations are wrong, but your point is good.
Intel does not (currently) integrate networking, and that leaves an opportunity for a competitor to offer a differentiated system architecture. That being said, it only costs about 10-20W to add 4x10G Ethernet MACs. I'm not sure about the actual $ cost, but Intel would simply need to lower their prices.
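As a back-of-the-envelope check using the numbers above (taking ~15W as the midpoint of my 10-20W MAC estimate, an assumption rather than a measured figure):

Xeon platform: 95W CPU + ~15W for 4x10G MACs = ~110W
ThunderX SoC: 80W
Ratio: ~110W / 80W = ~1.4x

That is nowhere near the 2x power gap claimed above.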
> > > Also for the SPECint
> > > scores they used a non-biased benchmark. Not sure that compiler was used for your scores.
> > >
> >
> > That's a crap argument. It won't lead us to a fruitful discussion. Either we believe in SPECint_rate
> > or we don't. You can't believe Cavium's numbers and at the same time disbelieve Intel's.
>
> I was referring to the 10% gap (mentioned on page 1 of the report) for the compilers.
I'm fine with derating Intel's SPEC scores by 10%, but to be honest, I'd prefer to just compare the GCC subtest (which doesn't need any derating).
David