By: vvid (no.delete@this.thanks.com), August 5, 2016 3:30 am
Room: Moderated Discussions
VertexMaster (nope.delete@this.nope.com) on August 4, 2016 4:38 pm wrote:
> > What do you mean by "stepping over scan lines"?
> > Scanline as 1 pixel line?
>
> I should have just said "stepping over pixels". Historically tile-rendering is clipping triangles
> to a tile, rendering the complete tile etc. However I have no idea how GCN renders blocks ("tiles"),
> so I just meant for all we know, they step thru (scanning) every triangle intersecting the block
> and just skips pixels outside of the block. Not suggesting they do, but my point is that what appears
> in the frame buffer doesn't tell us much about how the rasterization process works.
A tile is just a 2D block of pixels. You're thinking of tile-based deferred rendering (TBDR),
but that is just a rendering method built on top of a tiled memory layout.
The coarse rasterizer steps at tile granularity and feeds the results to the fine rasterizer.
After triangle setup, all you need is to map tile screen coordinates to barycentric coordinates for the shader.
AMD's coarse rasterizer samples an 8x8 pixel grid == HSR block resolution == SIMD width.
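To illustrate the coarse/fine split described above, here is a minimal sketch: the coarse pass steps the screen at 8x8-tile granularity and trivially rejects tiles fully outside any triangle edge; the fine pass evaluates per-pixel edge functions, which normalized by the triangle area are exactly the barycentric coordinates handed to the shader. This is illustrative only, not AMD's actual hardware algorithm; the names and structure are my assumptions.

```python
# Sketch of coarse/fine rasterization over 8x8 tiles -- illustrative only,
# not AMD's actual hardware implementation.

TILE = 8  # coarse rasterizer granularity (matches the 8x8 block / 64-wide SIMD)

def edge(ax, ay, bx, by, px, py):
    """Signed edge function: >= 0 when (px, py) is on or inside edge a->b
    for a triangle wound with positive area."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def rasterize(tri, width, height):
    (x0, y0), (x1, y1), (x2, y2) = tri
    area = edge(x0, y0, x1, y1, x2, y2)  # twice the triangle's signed area
    edges = (((x0, y0), (x1, y1)), ((x1, y1), (x2, y2)), ((x2, y2), (x0, y0)))
    covered = []
    # Coarse pass: step over the screen at tile granularity.
    for ty in range(0, height, TILE):
        for tx in range(0, width, TILE):
            corners = ((tx, ty), (tx + TILE, ty),
                       (tx, ty + TILE), (tx + TILE, ty + TILE))
            # Trivial reject: all four corners outside one edge => tile empty.
            if any(all(edge(ax, ay, bx, by, cx, cy) < 0 for cx, cy in corners)
                   for (ax, ay), (bx, by) in edges):
                continue
            # Fine pass: per-pixel edge tests inside the tile; edge weights
            # divided by area are the barycentric coordinates for the shader.
            for py in range(ty, min(ty + TILE, height)):
                for px in range(tx, min(tx + TILE, width)):
                    cx, cy = px + 0.5, py + 0.5  # sample at pixel center
                    w0 = edge(x1, y1, x2, y2, cx, cy)
                    w1 = edge(x2, y2, x0, y0, cx, cy)
                    w2 = edge(x0, y0, x1, y1, cx, cy)
                    if w0 >= 0 and w1 >= 0 and w2 >= 0:
                        covered.append((px, py, w0 / area, w1 / area, w2 / area))
    return covered
```

Note that nothing about this requires deferring shading: the tile stepping is purely a traversal order, which is the point being made.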
> > In 21 century GPUs always render tiles. Also depth buffer tiles on GCN are already compressed.
>
> Um, the TeraScale (VLIW) card Kanter tested appeared to be doing scanline rendering (more
> or less) and it's not that old (2009), so that's decidedly "21st century". I imagine the
> Cayman (V series (6000) was the same (2010). And yes, the z-buffer compression started back
> in the 90s with the Radeon 256 (7000 series), although I don't know if it was block-level
> ("tiling") until "Hyper-Z", or they were just handling clear-all cases like zero/one.
Look closer: it is the same 8x8 tile (actually a 4x4 arrangement of 2x2 quads).

Tiles seem to be placed sequentially in memory in the case of David's card.
GCN tiling is much more complex than that (as seen in my images).
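For reference, a "tiles placed sequentially" framebuffer layout can be expressed as a simple address calculation: whole 8x8 tiles stored row-major, pixels row-major within each tile. This is an assumption about what David's card appears to do, not GCN's actual (much more complex) swizzle.

```python
def tiled_offset(x, y, width, tile=8, bpp=4):
    """Byte offset of pixel (x, y) in a linearly tiled framebuffer:
    row-major order of whole 8x8 tiles, row-major pixels within a tile.
    Assumes width is a multiple of the tile size and 4 bytes per pixel."""
    tiles_per_row = width // tile
    tile_index = (y // tile) * tiles_per_row + (x // tile)
    within = (y % tile) * tile + (x % tile)
    return (tile_index * tile * tile + within) * bpp
```

With this layout each 8x8 tile occupies one contiguous 256-byte run, which is why the tile structure is directly visible when you dump the framebuffer linearly.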