By: Jukka Larja (roskakori2006.delete@this.gmail.com), October 31, 2015 9:21 am
Room: Moderated Discussions
dmcq (dmcq.delete@this.fano.co.uk) on October 31, 2015 8:19 am wrote:
> Well I know games often just use floats in the GPUs and some AI people say 8-bit integers are enough
> for any useful AI problem - but it is amazing how fast a sequence of float operations can start to give
> obviously wrong results. If one wants half a chance of something approximating a reasonable result and
> isn't an expert at error analysis, there's nothing to beat just doing the work using doubles.
In games you usually either don't need much accuracy (floats) or you need bit-for-bit accuracy (integers). Doubles would give more leeway, but you still need to use algorithms that can handle inaccuracies. People who can't deal with floats most likely can't deal with doubles either.
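To give a concrete illustration of what I mean by "algorithms that can handle inaccuracies" (this is just a made-up C++ sketch, not code from any real game): naively summing 1.0f into a float accumulator stalls once the sum reaches 2^24, because adding 1 no longer changes the value, while compensated (Kahan) summation gets the exact answer without switching to doubles.

    #include <cstdio>

    // Naive single-precision sum: once the accumulator reaches 2^24
    // (16777216), adding 1.0f rounds back to the same value, so the
    // result stalls far below the true total.
    float naive_sum(int n) {
        float s = 0.0f;
        for (int i = 0; i < n; ++i)
            s += 1.0f;
        return s;
    }

    // Kahan (compensated) summation: an algorithm that tolerates float
    // rounding by carrying the lost low-order bits in a correction term.
    float kahan_sum(int n) {
        float s = 0.0f, c = 0.0f;
        for (int i = 0; i < n; ++i) {
            float y = 1.0f - c;
            float t = s + y;
            c = (t - s) - y;
            s = t;
        }
        return s;
    }

    int main() {
        const int n = 20000000;
        std::printf("naive: %.1f\n", naive_sum(n));  // 16777216.0, far off
        std::printf("kahan: %.1f\n", kahan_sum(n));  // 20000000.0, exact here
        return 0;
    }

Switching the accumulator to double hides this particular failure, but it only pushes the cliff further out; the algorithmic fix works at any width.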
Add to that the lower double-precision performance on CPUs (especially on the last-generation consoles) and the much lower performance on GPUs, and it gets hard to come up with any significant[1] situation where doubles look like a good idea.
Also, you don't usually care too much about the correctness of any particular calculation; it's good enough if the incorrectness is mostly unnoticeable. The only problem with denormals is that you might forget to turn on flush-to-zero.
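For reference, turning flush-to-zero (and denormals-are-zero) on looks something like this on x86 with SSE. This is just my sketch, not from any particular engine:

    #include <xmmintrin.h>  // _MM_SET_FLUSH_ZERO_MODE (SSE)
    #include <pmmintrin.h>  // _MM_SET_DENORMALS_ZERO_MODE (SSE3)

    int main() {
        // Results that would be denormal are flushed to zero...
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
        // ...and denormal inputs are treated as zero. Both settings live
        // in MXCSR, so they are per-thread and have to be set on every
        // worker thread that does float work.
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

        // Game code runs here with FTZ/DAZ in effect for this thread.
        return 0;
    }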
HPC, at least as far as I understand, is quite different.
[1] There is, of course, lots of code that isn't performance critical. It could be a good idea to use doubles there, just so you don't need to think about whether floats are accurate enough. I haven't run into any such code, though.
-JLarja