By: anon (spam.delete.delete@this.this.spam.com), October 12, 2018 8:55 am
Room: Moderated Discussions
rwessel (robertwessel.delete@this.yahoo.com) on October 12, 2018 8:29 am wrote:
> anon (spam.delete.delete@this.this.spam.com) on October 12, 2018 4:04 am wrote:
> > Doug S (foo.delete@this.bar.bar) on October 12, 2018 2:01 am wrote:
> > > Paul A. Clayton (paaronclayton.delete@this.gmail.com) on October 11, 2018 7:49 pm wrote:
> > > > anon (spam.delete.delete@this.this.spam.com) on October 11, 2018 7:02 am wrote:
> > > > > Michael S (already5chosen.delete@this.yahoo.com) on October 11, 2018 4:06 am wrote:
> > > > [snip]
> > > > > > What prevents Cloudflare from ordering 1S servers with 1.5x or 2x the # of nodes
> > > > > > per unit of volume relative to their current setup (4 nodes per 2U)?
> > > > > >
> > > > >
> > > > > They'd have to completely redesign the sleds.
> > > > > It's doable, half width, DIMM slots and socket in a row instead of next to each other,
> > > > > PSU configuration will change, but doable. Considering the extra components it might
> > > > > work out once you factor in the lower power consumption, but it's not great.
> > > > >
> > > > > The problem is right now they're using some standard Quanta
> > > > > 2S boards and they don't offer anything like the
> > > > > half width 1S boards they'd need. There's no market for it. No one in their right mind would pay for twice
> > > > > the boards and PSUs they need since everyone except QC supports 2S, so all the high density options are 2S.
> > > >
> > > > Is there some fundamental reason why two computers could not share a single motherboard, power supply, etc.?
> > > >
> > > > Obviously such an arrangement would not be as useful as being able to share memory contents
> > > > (even without cache coherence) and capacity as well as network (and other I/O) interfaces,
> > > > but it would appear to address the density/form factor and PSU count issues.
> > >
> > >
> > > No, there's no reason at all and I don't know why anon makes it sound as if it would be difficult.
> > > This is the whole point of blade servers, after all - the servers slot in vertically rather
> > > than horizontally so aren't restricted by the width of the rack. Then it is down to how tall
> > > the blades need to be (how many rack units) and how closely you can space them together within
> > > the standard rack width and still achieve the necessary cooling.
> > >
> >
> > I said it's doable. The same layout has been used before. There's just nothing available that would
> > just slot in. The width is more or less fixed though. If you want the same density per volume you
> > have to change at least one out of width, height and length. If you don't want 1U or shorter blades
> > the width is set. Vertical with 1.5U "height" (now width) and corresponding width works too.
> >
> > Either way it's either completely incompatible with their current servers or a custom design.
> >
> > > I see no reason why a cloud computing company would EVER want multi socket servers in today's
> > > world of dozens of cores per socket. It adds additional cost while providing no additional
> > > benefit. In order to succeed an ARM server vendor does not need to support more than one socket
> > > per system. There's a ton of market to exploit that has no need for multi socket.
> >
> > Strangely enough they all seem to be using 2S anyway.
>
>
> There's little, if any, cost disadvantage compared to two separate systems on a board, and
> having a bigger pie for the virtualization to slice up inherently makes management easier.

Didn't you read? There's "no reason why a cloud computing company would EVER want multi socket servers".
How dare you suggest that free 2S support, which reduces the component count, doesn't add a lot of extra cost.
And more cores don't make management easier. Didn't you read? "Dozens of cores per socket". Surely anything more than one socket would be too confusing for anyone to handle.
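
To put rough numbers on the density point above, here's a back-of-the-envelope sketch. The per-socket core count and the 8-sleds-per-2U figure are my own assumptions for illustration; only the 4 nodes per 2U comes from the thread.

    # Rough sketch: cores per 2U chassis, current 2S sleds vs. hypothetical
    # half-width 1S sleds. All constants are illustrative assumptions except
    # the 4-nodes-per-2U figure quoted earlier in the thread.
    CORES_PER_SOCKET = 24   # assumed cores per socket
    SLEDS_2S = 4            # current setup: 4 two-socket nodes per 2U
    SLEDS_1S = 8            # hypothetical: 8 half-width one-socket nodes per 2U

    cores_2s = SLEDS_2S * 2 * CORES_PER_SOCKET   # two sockets per sled
    cores_1s = SLEDS_1S * 1 * CORES_PER_SOCKET   # one socket per sled

    print(f"2S: {cores_2s} cores per 2U on {SLEDS_2S} boards")
    print(f"1S: {cores_1s} cores per 2U on {SLEDS_1S} boards")
    print(f"Boards/PSUs needed for the same cores: {SLEDS_1S / SLEDS_2S:.1f}x")

Same cores per unit of volume either way, but the 1S layout needs twice the boards and power supplies, which is exactly the component-count point being argued about.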