By: Doug S (foo.delete@this.bar.bar), October 12, 2018 2:35 pm
Room: Moderated Discussions
Wes Felter (wmf.delete@this.felter.org) on October 12, 2018 2:30 pm wrote:
> Doug S (foo.delete@this.bar.bar) on October 12, 2018 2:01 am wrote:
>
> > I see no reason why a cloud computing company would EVER want multi socket servers in today's
> > world of dozens of cores per socket. It adds additional cost while providing no additional
> > benefit. In order to succeed an ARM server vendor does not need to support more than one socket
> > per system. There's a ton of market to exploit that has no need for multi socket.
>
> Single socket is easier to deal with because of the lack of NUMA (except AMD), but most of the
> cost factors favor 2S, such as needing half as many BMCs, NICs, etc. Ultimately the price of
> processors is artificial so vendors can make 1S or 2S cheaper; Intel is steering customers towards
> 2S while AMD is trying to make 1S and 2S equally competitive. And of course you can always buy
> a 2S server and only populate one socket as long as you watch out for the I/O gotchas.
Most of those cost differences could be ameliorated by using an SoC that has most of that stuff built in (i.e., you aren't paying to package all those separate components, etc.). Less board space too, so theoretically you could install multiple SoCs on a single board that act as multiple independent PCs. Some resources could be shared, like a single 100GbE interface on the board that connects to all the "PCs" via an internal switch.
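To make the "one shared NIC, internal switch" idea concrete, here's a rough software analogue (not the actual board design) using a Linux bridge in the role of the internal switch. The interface names (eth0, br0, pc0-*) are assumptions for illustration; a real board would do this in silicon.

```shell
# Sketch only, requires root. One physical uplink (assumed name: eth0)
# feeds a Linux bridge acting as the "internal switch"; each independent
# "PC" would attach through its own bridge port.
ip link add br0 type bridge            # the internal switch
ip link set br0 up
ip link set eth0 master br0            # shared 100GbE uplink
# one example node attaching via a veth pair:
ip link add pc0-port type veth peer name pc0-eth
ip link set pc0-port master br0
ip link set pc0-port up
```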
I think the main reason an ARM vendor trying to sell servers might prefer this approach over the current default of supporting multiple sockets in a single system image is reduced complexity. Once you go off-chip, coherence and a lot of other stuff gets much harder - look how long it took Intel to become good at this. Reduced validation would help with time to market and cost, too.
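The NUMA point above is visible from software, for what it's worth: a minimal sketch (assuming the standard Linux sysfs layout) that counts the NUMA nodes the OS exposes. A single-socket Intel box reports 1; a 2S box, or an EPYC with multiple dies per the caveat above, reports more, and then placement starts to matter.

```python
import glob

def numa_node_count() -> int:
    """Count NUMA nodes exposed under Linux sysfs; fall back to 1
    if the path is absent (non-Linux, or sysfs not mounted)."""
    nodes = glob.glob("/sys/devices/system/node/node[0-9]*")
    return max(len(nodes), 1)

print(numa_node_count())
```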