By: wumpus (lost.delete@this.in-a.cave.net), January 26, 2017 4:25 pm
Room: Moderated Discussions
David Kanter (dkanter.delete@this.realworldtech.com) on January 26, 2017 7:33 am wrote:
> Gabriele Svelto (gabriele.svelto.delete@this.gmail.com) on January 26, 2017 3:15 am wrote:
> > David Kanter (dkanter.delete@this.realworldtech.com) on January 25, 2017 6:10 pm wrote:
> > > SATA controllers aren't exactly magical or high value. PCIe is often
> > > quite tricky, especially new versions. 10GbE is pretty easy today.
> >
> > Plain 10GbE might be easy, but there's quite a bit of difference between a fully featured 10GbE
> > NIC - with proper DCB, virtualization support, offloads, traffic steering and possibly a form
> > of RDMA support such as iWARP or RoCE - and a baseline implementation. Additionally if you're
> > integrating the controller on your SoC you probably want it to be able to tap into the processor
> > caches directly with all the associated requirements on the interconnect.
> >
> > In short, I'm sure that IP for a 10GbE implementation is readily available, I'm
> > not sure if it's on par with what's required for a proper server-side deployment.
>
> That's very much true, so thank you for pointing that out. To be a bit more explicit,
> those are all features that are not useful in phones or client devices. So yet again,
> the "we get high volumes from phones" fails to carryover into the data center.
>
> Also, a lot of that requires different cache controllers that are more intelligent than normal.
>
> David
Isn't Intel the biggest server CPU supplier by both revenue and volume? It seems pretty weird to claim its cores aren't "server cores". Even then I'd expect the overlap to be pretty minimal for a server (although don't underestimate the ability of fast single-threaded code to do well in situations where Amdahl's law is enforced more rigorously than you might expect). It would take a surprisingly strong core to bridge the gap that Intel so far can't bridge.
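
To put rough numbers on the Amdahl's law point, here's a minimal back-of-the-envelope sketch in Python; the 90% parallel fraction and 32-core figures are illustrative assumptions of mine, not anything measured:

# Amdahl's law: overall speedup = 1 / ((1 - p) + p / s),
# where p is the parallel fraction and s is the speedup of the parallel part.
def amdahl_speedup(p: float, s: float) -> float:
    return 1.0 / ((1.0 - p) + p / s)

# Illustrative assumption: 90% of the work spreads across 32 cores;
# the 10% serial remainder caps the overall speedup well below 32x.
print(amdahl_speedup(0.90, 32.0))        # ~7.8x
# A core that is uniformly 30% faster scales both the serial and
# parallel parts, lifting the whole curve by the same 30%.
print(1.3 * amdahl_speedup(0.90, 32.0))  # ~10.1x

The serial term dominates quickly, which is the usual argument for strong single-threaded cores in servers even at high core counts.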
I'm easily convinced that the design practices are too far apart for phones and servers to share much in either direction.