By: Aaron Spink (aaronspink.delete@this.notearthlink.net), January 24, 2017 8:00 pm
Room: Moderated Discussions
Gabriele Svelto (gabriele.svelto.delete@this.gmail.com) on January 24, 2017 12:40 pm wrote:
> Aaron Spink (aaronspink.delete@this.notearthlink.net) on January 24, 2017 7:20 am wrote:
> > Also, the networking at 10k nodes gets very expensive, you aren't going to do that for $100 per
> > node. A 48 port 10G switch will set you back $5k+ easy and you are going to need a lot of them,
> > a whole, whole lot of them depending on topology. At a minimum, >1 for every 48 nodes and likely
> > >2 per 48 nodes. Realistically you are looking at ~210 48p 10G + 6p 40G switches and then another
> > 100 48p 40G switches at $10-20k per. Total would be ~$2-3M for the switches. Most supercomputers
> > end up spending roughly the same on networking as they do on the nodes.
>
> Very good point. Most proponents of the flock-of-chickens approach seem to forget that
> you need to wire the whole thing together if you want it to do anything useful.
Pretty much. If you don't care about network performance, and you don't care about CPU performance, you can already build that system cheaply today using something like Xeon-D. Given Xeon-D's price points, the cost floor to get into the market is extremely low. As an example, you can buy a top-end Xeon-D board for less than any ARM-based server board, and that Xeon-D board will give you comparable or better performance than ANY ARM SoC coming to market.
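
For what it's worth, the quoted switch math reproduces the ~$2-3M figure with very simple assumptions. A quick sketch (the per-switch prices and the 2:1 oversubscription are my assumptions; only the switch counts come from the estimate quoted above):

    # Ballpark switch cost for a ~10k node two-tier (leaf/spine) network.
    # Assumed: 48-port 10G leaves with 6x40G uplinks (2:1 oversubscribed,
    # 480G down vs 240G up) at ~$5k each, plus 100 48-port 40G spine
    # switches at $10-20k each, per the estimate quoted above.
    NODES = 10_000
    LEAF_PORTS = 48                   # 10G node-facing ports per leaf
    LEAF_COST = 5_000                 # USD per 48p 10G + 6p 40G switch
    SPINE_COUNT = 100                 # from the quoted estimate
    SPINE_COST = (10_000, 20_000)     # USD per 48p 40G switch, low/high

    leaves = -(-NODES // LEAF_PORTS)  # ceiling division -> 209 leaves
    leaf_total = leaves * LEAF_COST   # ~$1.05M of leaf switches

    low = leaf_total + SPINE_COUNT * SPINE_COST[0]
    high = leaf_total + SPINE_COUNT * SPINE_COST[1]
    print(f"{leaves} leaves, total ${low/1e6:.1f}M - ${high/1e6:.1f}M")
    # -> 209 leaves, total $2.0M - $3.0M

Which lands right on the ~$2-3M quoted above, for the switches alone.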