By: Salvatore De Dominicis (no.thanks.delete@this.example.com), December 9, 2014 8:51 am
Room: Moderated Discussions
Linus Torvalds (torvalds.delete@this.linux-foundation.org) on December 8, 2014 1:34 pm wrote:
> It does not necessarily make sense elsewhere. Even in completely new areas that we don't
> do today because you can't afford it. If you want to do low-power ubiquitous computer vision
> etc, I can pretty much guarantee that you're not going to do it with code on a GP CPU. You're
> likely not even going to do it on a GPU because even that is too expensive (power wise),
> but with specialized hardware, probably based on some neural network model.
There is one aspect regarding computer vision that I think is not apparent from your post.
Designing, building, and testing new hardware is expensive, while (comparatively) powerful GPUs are becoming more and more common in the mobile market.
A GPU implementation of some computer vision algorithm could be "Good Enough"© to run on existing hardware, avoiding the need for specialized silicon: you already have to design (or license) and validate the GPU and everything related to it anyway.
I guess it really depends on the workload and on how low "low power" really needs to be.
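To make the "GPU implementation of a CV algorithm" point concrete, here is a minimal sketch of a Sobel edge filter, a classic vision primitive. Every output pixel is computed independently from a small neighborhood, which is exactly the dense data-parallel pattern that maps one-thread-per-pixel onto GPU shader cores. This is illustrative NumPy only, not a claim about any particular mobile GPU API; a real implementation would live in an OpenCL/GLSL/Metal kernel.

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2D grayscale image (valid region only)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)  # horizontal-gradient kernel
    ky = kx.T                                 # vertical-gradient kernel
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    # Each output pixel depends only on its own 3x3 neighborhood,
    # so all iterations of this loop could run as parallel GPU threads.
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx = np.sum(patch * kx)
            gy = np.sum(patch * ky)
            out[y, x] = np.hypot(gx, gy)
    return out

# A vertical step edge: the gradient is strongest at the discontinuity.
img = np.zeros((5, 5))
img[:, 3:] = 1.0
edges = sobel_magnitude(img)
```

Because the per-pixel work is a fixed small stencil with no data-dependent branching, this kind of kernel tends to run efficiently even on modest mobile GPUs.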
> Give it up. The whole "parallel computing is the future" is a bunch of crock.
>
> Linus
For the foreseeable future in the general case, absolutely.
Salvatore