By: Gionatan Danti (g.danti.delete@this.assyoma.it), August 1, 2016 3:16 am
Room: Moderated Discussions
David Kanter (dkanter.delete@this.realworldtech.com) on August 1, 2016 12:01 am wrote:
> This is my first new article in a while, but it's a treat. It's the first video I've done for the site.
>
> Starting with the Maxwell and Pascal architectures, Nvidia high-performance GPUs use tile-based
> immediate-mode rasterizers, instead of conventional full-screen immediate-mode rasterizers. Using
> simple DirectX shaders, we demonstrate the tile-based rasterization in Nvidia's Maxwell and Pascal
> GPUs and contrast this behavior to the immediate-mode rasterizer used by AMD.
>
> http://www.realworldtech.com/tile-based-rasterization-nvidia-gpus/
>
> I look forward to the discussion.
>
> David
Very interesting find, David!
In the article, you talk about how PowerVR chips were (and are) tile-based deferred rendering devices. In fact, they went so far in exploiting tile rendering that they performed complete on-tile overdraw rejection and did not use a DRAM-stored z-buffer at all, greatly reducing the need for DRAM bandwidth.
Do Nvidia's Maxwell/Pascal chips do the same? Or do they "simply" use tile-based rendering to exploit spatial/temporal data locality? I strongly suspect the latter, as both Nvidia and AMD have other techniques to deal with overdraw and the z-buffer (namely, early-z rejection and z-buffer compression).
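To make the distinction in my question concrete, here is a rough toy sketch of where the depth-test working set lives in the two schemes. This is my own illustration, not anything from the article or from actual driver/hardware code: the tile size, the overdraw factor, and the way "DRAM transactions" are counted are all made-up assumptions, just to show why keeping z on-tile saves bandwidth.

// Toy model (my own assumptions, not vendor behavior): immediate mode
// reads/writes a full-screen z-buffer in DRAM per fragment; a PowerVR-style
// TBDR keeps z in on-chip tile memory and only writes final color per tile.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Fragment { int x, y; float z; };          // screen position + depth

int main() {
    const int W = 256, H = 256, TILE = 16;
    // Fake workload: every pixel covered twice (overdraw factor 2),
    // the second layer closer than the first.
    std::vector<Fragment> frags;
    for (int pass = 0; pass < 2; ++pass)
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x)
                frags.push_back({x, y, pass ? 0.25f : 0.75f});

    // --- Immediate mode: z-buffer lives in DRAM -------------------------
    std::vector<float> zbuf(W * H, 1.0f);
    long dram_imm = 0;
    for (const Fragment& f : frags) {
        ++dram_imm;                               // read z from DRAM
        if (f.z < zbuf[f.y * W + f.x]) {
            zbuf[f.y * W + f.x] = f.z;
            ++dram_imm;                           // write z back to DRAM
            ++dram_imm;                           // write color to DRAM
        }
    }

    // --- TBDR style: z stays in on-chip tile memory ---------------------
    long dram_tbdr = 0;
    for (int ty = 0; ty < H; ty += TILE)
        for (int tx = 0; tx < W; tx += TILE) {
            std::vector<float> tile_z(TILE * TILE, 1.0f);   // on-chip storage
            for (const Fragment& f : frags)                 // binned fragments
                if (f.x >= tx && f.x < tx + TILE && f.y >= ty && f.y < ty + TILE) {
                    int i = (f.y - ty) * TILE + (f.x - tx);
                    if (f.z < tile_z[i]) tile_z[i] = f.z;   // no DRAM traffic
                }
            dram_tbdr += TILE * TILE;             // final color once per pixel
        }

    std::printf("DRAM transactions  immediate: %ld   tile-deferred: %ld\n",
                dram_imm, dram_tbdr);
}

My question is essentially whether Maxwell/Pascal behave like the second loop (z resolved on chip, nothing spilled) or like the first loop with tile-sized binning on top of it.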
Thanks.