By: Patrick Chase (patrickjchase.delete@this.gmail.com), July 2, 2013 7:43 am
Room: Moderated Discussions
anon (anon.delete@this.anon.com) on June 30, 2013 10:41 am wrote:
> No it doesn't. GPU can do more flops/watt than a CPU, and more flops/area. Just put a
> little A7 core in one corner to run the OS, and dedicate the rest to a GPGPU array.
As I noted in another post, this is an optimal solution only for workloads that map well onto the GPU's stream-processor idiom (read the Stanford Imagine papers if you don't know what I'm talking about - http://cva.stanford.edu/publications/2002/imagine-overview-iccd/. Bill Dally is now CTO of Nvidia and John Owens is a "GPU luminary" in academia).
The architecture you suggest above is viable for some workloads (including ones I deal with professionally) but isn't applicable to all FP-heavy loads. Not even close...