By: Jouni Osmala (a.delete@this.b.com), June 24, 2020 9:23 am
Room: Moderated Discussions
> > > > You realize Apple's graphics are Apple-developed and totally
> > > > different from the GPUs in any Android system, yes?
> > >
> > > Doesn't matter as the performance of Apple tested in that
> > > article was far lower than even standard AMD iGPUs
> > > of more than a few generations ago, even if the other ARM GPUs are 2-5 times slower than that. For years
> > > Intel iGPUs were laughed at because they either pulled shenanigans to look faster or were too slow.
> > > >
> > > > Also, care to offer some evidence for your claim of using 16-bit ints?
> > >
> > > It seems like the Apple performance article I read was wrong in certain details. However,
> > > the Apple GPU uses 16-bit floating point where others use 32-bit SP floats:
> > >
> > > https://www.realworldtech.com/apple-custom-gpu/
> >
> > PowerVR has had unified shaders since MBX/SGX.
> > Historically they could run arbitrary code on the shader units, even firmware (the SGX Micro Kernel).
> > http://cdn.imgtec.com/sdk-documentation/PowerVR+Series5.Architecture+Guide+for+Developers.pdf
> > Shader cores have multiple ALUs with different precision - both FP32 and FP16.
> >
> > This is for Series 6:
> > https://www.anandtech.com/show/7793/imaginations-powervr-rogue-architecture-exposed/2
> >
> > > This of course would bite Apple in the ass if image quality standards were applied
> > > to them the way they are and have been applied to AMD, nVidia, and Intel GPUs.
> >
> > You're totally misguided. You think that FP16 rendering is bad, but in reality it is fast and
> > power efficient. This is why FP16 ALUs were reintroduced in both AMD and Nvidia cards.
> > FP16 is used in places where limited range does not cause artifacts.
> >
>
> You had better learn something about this before shooting your mouth off. 16-bit HP FP on
> both nVidia and Radeon GPUs is for AI, not graphics. Graphics IQ with 16-bit rendering is
> worse than with 32-bit. Banding and distance are some of the areas that show differences between
> the methods. Software rendering, which uses 32-bit SP on AMD64 CPUs, was the reference for comparisons
> in the old IQ wars, and 16-bit FP isn't even available on those CPUs. AI is moving to 8-bit FP or
> even 4-bit integers for more performance, so those GPUs are adding those too.
>
> You sound like the people who declare good enough to be great. 640K was one such claim that has been
> shown to be ridiculous. 320x240 was good enough (NOT!). It has been shown time and time again that
> "good enough" because of hardware limitations fails at some point, while "good enough" because of physical
> attributes endures. The latter, based on human eye properties, is why AA and AF hold up: eye properties
> don't change much over time (except for getting worse with age). The former is like 16-bit FP being
> good enough for 8-bit displays. Displays sit at 8 or 10 bits today due to the limitations
> of LCDs, but with OLEDs they can go to 12, 14, or even 16 bits. Then you get banding with 16-bit floating
> point because eyes will see it. So good enough becomes not good at all.
It is an optimization that is already used in limited amounts on PCs: AMD got about a 20% improvement from it in the places where they could apply it, and precision issues are what limited where those places were. On the consumer side, Nvidia only had a compatibility-level FP16 implementation with very limited hardware behind it before Turing, so it wasn't used in games on Nvidia cards. Using 16 bits for rendering isn't all or nothing; it's more a matter of using it in the places where precision doesn't matter and taking the performance boost there.
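To put a rough number on where the precision runs out: FP16 has a 10-bit mantissa, so values in the top half of a 0-to-1 color range are spaced about 1/2048 apart. Here is a back-of-the-envelope NumPy sketch (my own illustration, assuming a plain linear ramp with no dithering or tone mapping) counting how many display codes near white survive a round trip through half precision:

import numpy as np

# Back-of-the-envelope: how many codes of a linear N-bit ramp near white
# (0.5 .. 1.0) stay distinct after a round trip through FP16?
# Assumes a plain linear ramp with no dithering or tone mapping.
for bits in (8, 10, 12):
    ramp = np.arange(2**bits, dtype=np.float64) / (2**bits - 1)
    top = ramp[ramp >= 0.5]                       # brightest half of the range
    survivors = np.unique(top.astype(np.float16)) # collapse to representable halves
    print(f"{bits}-bit ramp: {top.size} codes -> {survivors.size} distinct FP16 values")

The 8-bit and 10-bit ramps keep every code distinct, but on a 12-bit ramp roughly every other step near white collapses. That is the banding concern from the post above, and it is exactly why the places where FP16 gets used are the ones where that loss doesn't show.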
Graphics is all about approximating something so computationally intensive that an exact calculation would keep a supercomputer busy for a very long time. So 16-bit precision is just another tool in the toolbox for getting good-enough performance by approximating some parts of the result. Doing twice as many computations with the same execution units and memory bandwidth is not something you should dismiss as a possible tool in that toolbox.
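On the bandwidth half of that argument the arithmetic is simple: the same number of values takes half the bytes in FP16, so the same memory bus moves twice as many of them, and packed-half hardware (AMD's rapid packed math, for example) can likewise issue two FP16 operations per 32-bit lane. A deliberately trivial NumPy sketch of the storage side, with a made-up framebuffer-sized array:

import numpy as np

# Same element count, half the bytes moved: the memory-bandwidth side of
# the trade. The buffer size is made up (a 1080p RGBA-sized array).
values = 1920 * 1080 * 4
print(np.zeros(values, dtype=np.float32).nbytes / 2**20, "MiB as FP32")  # ~31.6 MiB
print(np.zeros(values, dtype=np.float16).nbytes / 2**20, "MiB as FP16")  # ~15.8 MiB

The ALU side of the doubling can't be shown this simply, since it depends on the GPU actually having packed FP16 execution, but the memory-footprint half alone is a real lever rather than a hand-wave.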