By: RichardC (tich.delete@this.pobox.com), January 26, 2017 3:24 pm
Room: Moderated Discussions
Ireland (boh.delete@this.outlook.ie) on January 26, 2017 12:49 pm wrote:
> But I still take your point, that for basic 'HD' 1080p, the bandwidth
> is not the issue. 4K is a different beast though, isn't it?
Not down on the rendering machines, because the compute-to-communication ratio -
and hence the network bandwidth needed by a single machine - is just about the
same whether you're rendering a 1920x1080 frame in 1 second or a 3840x2160 frame
in 4 seconds (doubtless there are second-order effects from different cache hit
rates and so on, but to first order 4x more pixels needs 4x more rendering time).
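To put rough numbers on that (a back-of-the-envelope sketch in Python, assuming
uncompressed 24-bit RGB frames and the 1 s / 4 s render times above; a real
pipeline would compress the frames, which only lowers the bandwidth further):

# hypothetical sketch of per-node output bandwidth, not measured figures
hd_bytes  = 1920 * 1080 * 3      # ~6.2 MB per 1080p frame
uhd_bytes = 3840 * 2160 * 3      # ~24.9 MB per 4K frame
print(hd_bytes  / 1.0 / 1e6)     # ~6.2 MB/s leaving the node at 1080p
print(uhd_bytes / 4.0 / 1e6)     # ~6.2 MB/s at 4K - same per-node bandwidth

Either way each node emits a few MB/s, which is nothing by cluster-interconnect
standards.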
And that's really where this discussion started out: the claim was that all
"supercomputer" apps required a reasonably high network bandwidth/compute ratio.
And I suggested high-quality 3D rendering as a significant app which can use
a huge amount of compute, is easily distributable across a cluster of shared-nothing
machines (even a cluster of unreliable machines), and requires only very low
network bandwidth to/from each machine (though very possibly high bandwidth for
some of the ways you might want to use the rendered frames).
I rest my case.