By: RichardC (tich.delete@this.pobox.com), January 22, 2017 10:45 pm
Room: Moderated Discussions
Aaron Spink (aaronspink.delete@this.notearthlink.net) on January 22, 2017 12:03 pm wrote:
> Problem with the phone/tablet SoCs is you are going to have to run at least twice. I'm not
> aware of a single phone soc with even minimal support for ECC memory. Beyond that, you have
> the myriad issues of memory capacity, interconnect, etc. Not really a viable direction.
> The needs for an HPC server simply add cost and power to the phone/tablet SoC market.
I didn't mean to suggest that you would use an existing phone SoC with no modification, but
that large parts of the design (CPU, GPU, caches, maybe power management) could be the same,
and that borrowing those in a form already optimized for high-yield low-cost production on
foundry processes would be a huge step towards making it cheap.
And I'm also assuming that such a machine would be optimized for embarrassingly-parallel
apps which work ok with a large number of small(ish)-DRAM nodes, e.g. 8-16GB per node.
If your problem has an unfavorable communication/compute ratio when split across many small
nodes, a smaller number of large-DRAM x86's is better. But I think CFD is a niche where
the flock-of-chickens approach can work.
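As a rough back-of-the-envelope sketch of that ratio (my own illustration, not anything from the original post): for a 3D structured-grid CFD code with halo exchange, per-node compute scales with the subdomain volume while communication scales with its surface area, so the ratio improves as the local subdomain grows. The flop counts, byte counts, and bandwidth figures below are made-up placeholder values, not measurements of any real SoC.

    # Sketch only: estimate comm/compute per timestep for one node of a
    # 3D structured-grid stencil code with halo exchange. All numeric
    # defaults are assumed illustrative values, not real hardware specs.
    def comm_to_compute_ratio(cells_per_side, flops_per_cell=200.0,
                              bytes_per_cell_face=8.0,
                              node_gflops=50.0, link_gbytes=5.0):
        volume = cells_per_side ** 3               # local cells: compute scales with this
        surface = 6 * cells_per_side ** 2          # halo cells on the six faces
        compute_s = volume * flops_per_cell / (node_gflops * 1e9)
        comm_s = surface * bytes_per_cell_face / (link_gbytes * 1e9)
        return comm_s / compute_s

    # Bigger local subdomains hide communication better; an 8-16GB node can
    # still hold a subdomain large enough to keep the ratio small.
    for n in (64, 128, 256):
        print(f"{n}^3 cells/node: comm/compute ~ {comm_to_compute_ratio(n):.3f}")

With those (assumed) numbers the ratio is already around 2% at 128^3 cells per node, which is the kind of surface-to-volume behavior that lets many smallish-DRAM nodes work for CFD.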