By: Richard Cownie (tich.delete@this.pobox.com), January 30, 2013 2:39 pm
Room: Moderated Discussions
rwessel (robertwessel.delete@this.yahoo.com) on January 29, 2013 10:15 pm wrote:
> Right, but unless you think that the microserver vendors are likely to be going head to
> head with Intel 2S and larger systems, the relevant datum is the $252 ASP for Xeon UPs.
It's not exactly "head-to-head" in the sense that one ARM-based server node would
exactly match the capability of one 2S Xeon node. The interesting use case is where
you have hundreds or thousands of servers. With Intel's current products, you have
a bunch of "system cost" such as the chipset, power supply, and board, and it might well
turn out that the cheapest way to pack compute power into X square feet of datacenter
is to put 2 CPU sockets in each system (even if those CPUs are relatively expensive,
you get better amortization of the board/PSU/network costs). Isn't that what Google's
systems look like?
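The amortization argument can be sketched with some back-of-the-envelope arithmetic. All the numbers below (CPU price, board/PSU/NIC cost, per-socket "compute units") are made up purely for illustration, not taken from any real price list:

```python
# Compare cost per unit of compute when a fixed per-board "system cost"
# (board, PSU, NIC, chassis share) is amortized over 1 vs 2 CPU sockets.

def cost_per_compute(cpu_price, sockets, board_cost, compute_per_socket):
    """Total system cost divided by total compute delivered."""
    total_cost = cpu_price * sockets + board_cost
    total_compute = compute_per_socket * sockets
    return total_cost / total_compute

# Hypothetical figures: $500 per CPU, $400 of shared system cost,
# 100 arbitrary compute units per socket.
one_socket = cost_per_compute(500, sockets=1, board_cost=400, compute_per_socket=100)
two_socket = cost_per_compute(500, sockets=2, board_cost=400, compute_per_socket=100)

print(one_socket)  # 9.0 dollars per compute unit
print(two_socket)  # 7.0 dollars per compute unit
```

With these assumed numbers the 2-socket box is about 22% cheaper per unit of compute, purely because the fixed costs are shared, which is the whole argument for fat nodes. A cheap-enough ARM board flips the comparison by shrinking the `board_cost` term rather than the CPU term.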
It's quite possible that a competing ARM-based solution would look very different,
with different numbers of cores, sockets, and nodes, and possibly a much lower
per-board cost.
Obviously this only works if you have a cheap, scalable way of running many systems
with few sysadmins. But that's exactly what cloud software provides. So the constraints
are very different from what they were 5 years ago, or even 3 years ago.