By: Peter McGuinness (firstname.lastname@example.org), August 2, 2016 9:05 am
Room: Moderated Discussions
> The distinction here is that they are keeping the tile data in
> on-chip buffers. Normally, that would be streamed out to DRAM.
They are not. That's why you can see partly rendered tiles; they ARE writing out the results pixel by pixel as soon as each triangle is rasterised, not waiting for tile completion. This is characteristic of immediate mode, and it is also the reason for the flickering someone mentioned: your counter is only approximately synchronised with the actual rasterisation. Incidentally, try marking each triangle as opaque and re-running your test. You'll see a big difference as the hierarchical Z kicks in (you might need to make the front triangle a few pixels larger).
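To see why opacity matters, here's a toy sketch (not real GPU behaviour; hardware hierarchical Z works on coarse blocks, this does a simple per-pixel early-Z test, and the rectangles, function name, and counts are all invented for illustration). The point is just that with an opaque triangle in front, fragments behind it can be rejected before the shader ever runs:

```python
def shade_count(prims, width=8, height=8, early_z=True):
    """Rasterise axis-aligned rects (stand-ins for triangles) in draw order.
    Each primitive is (x0, y0, x1, y1, depth); smaller depth = closer.
    Returns the number of fragment-shader invocations."""
    depth_buf = [[float("inf")] * width for _ in range(height)]
    shaded = 0
    for (x0, y0, x1, y1, z) in prims:
        for y in range(y0, y1):
            for x in range(x0, x1):
                if early_z and z >= depth_buf[y][x]:
                    continue  # rejected before shading (early/hierarchical Z)
                shaded += 1  # fragment shader runs here
                if z < depth_buf[y][x]:
                    depth_buf[y][x] = z
    return shaded

# Opaque front quad (drawn first) fully covers the back one:
prims = [(0, 0, 8, 8, 0.1),   # front, 64 pixels
         (1, 1, 7, 7, 0.9)]   # back, 36 pixels, all hidden
print(shade_count(prims, early_z=True))   # 64: back fragments all rejected
print(shade_count(prims, early_z=False))  # 100: every covered pixel shaded
```

With transparency (blending) the hardware can't do this rejection, because the hidden fragments still contribute to the final colour, which is why the opaque case behaves so differently.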
> While this is probably common for mobile, it's not for desktop. That's what
> makes it interesting! We are seeing mobile techniques moving upwards.
It would be interesting but your test does not show that.
> You are welcome to call it what you wish, but I chose the term that seemed most appropriate
> to me. How would you distinguish between a TBDR and a TBR in your mind?
There is an established taxonomy of GPU architectures. AMD and Nvidia both use immediate mode; Intel has used both immediate mode and TBR in various GPUs, and currently seems to use immediate mode. IMG uses TBDR, with the 'deferred' designator indicating that pixel shading is delayed until visibility determination is complete (where possible), and ARM uses TBIR, with the 'immediate' designator indicating that they don't use that optimisation. So the term 'tile based immediate mode' is already taken.