By: Ireland (boh.delete@this.outlook.ie), January 25, 2017 9:33 am
Room: Moderated Discussions
Gabriele Svelto (gabriele.svelto.delete@this.gmail.com) on January 25, 2017 8:16 am wrote:
> RichardC (tich.delete@this.pobox.com) on January 25, 2017 4:26 am wrote:
> > And googling around, I found a description of Pixar's render farm network from 2010 which mentioned
> > 300 10Gbit ports and 1500 1Gbit ports, which sounds very much like 1Gbit ports for most of the
> > rendering boxes. Maybe they have some shared data on fileserver boxes which need the 10Gbit ?
> > Or maybe those are just for the higher-level interconnect between switches. Anyhow,
> > this is the creme de la creme, and it is (or recently was) predominantly 1Gbit.
>
> Without knowing how many boxes are involved it's hard to tell. In 2010 10GbE hardware was quite
> expensive. Two or even four ganged 1GbE links were commonplace when connecting machines.
I'd hazard a guess, in the case of Pixar, that they weren't even ganged up that much. It was literally about providing as many sockets for rendering computation as possible. Rendering was always that way, back in the day, and an awful lot of it will have moved up to the cloud since 2010. It was always about the sheer number of processors thrown at the problem, rather than connection speeds. That's just the nature of the job: 30 frames per second, 60 seconds per minute, 60 minutes per hour - it all adds up to 'feature' length.
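Just as a rough back-of-envelope sketch of how quickly that piles up - the 30 fps figure is my own hand-waving above, and the 90-minute runtime and ten hours per frame are assumptions for illustration, not Pixar's actual numbers:

# Back-of-envelope: how many frames a 'feature' adds up to.
# Assumed figures: 30 fps (as hand-waved above) and a 90-minute feature.
fps = 30
runtime_minutes = 90

total_frames = fps * 60 * runtime_minutes
print(f"Total frames to render: {total_frames:,}")        # 162,000 frames

# At, say, ten hours of machine time per final-quality frame,
# the total compute works out to:
hours_per_frame = 10
print(f"Total render-hours: {total_frames * hours_per_frame:,}")  # 1,620,000 hours

With numbers like that, you can see why the number of sockets matters far more than the speed of any one link.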
What tended to happen, too, is that they would introduce faster processors into the system gradually - and gradually phase out older kit at the other end. So the entire rendering 'farm' is constantly going through a refresh cycle. One takes advantage of buying some new processors at one end, for a price, while getting rid of processors at the other end that are burning up too much electricity relative to the work they do. One could almost build an Excel model, or the like, to show how that signal - when to buy new processors and when to throw away old ones - would work, so as to make the 'rendering' cost per frame as low as possible.
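Purely as an illustration of what that Excel-style model might look like - every figure below (throughput, wattage, prices, the 1.5x threshold) is invented for the sketch, not anything from Pixar:

# Hypothetical refresh-cycle model: for each generation of render node,
# work out the cost per rendered frame and flag the ones worth retiring.
# All figures are made up for illustration.

ELECTRICITY_PER_KWH = 0.15   # assumed electricity price, dollars

# Each node generation: frames it can render per day, power draw in watts,
# and the amortised daily hardware cost of keeping a new one in the rack
# (zero for kit that is already paid off).
generations = [
    {"name": "old-2006", "frames_per_day": 2.0, "watts": 400, "hw_cost_per_day": 0.0},
    {"name": "mid-2008", "frames_per_day": 4.0, "watts": 350, "hw_cost_per_day": 1.0},
    {"name": "new-2010", "frames_per_day": 9.0, "watts": 300, "hw_cost_per_day": 3.0},
]

def cost_per_frame(gen):
    """Daily electricity plus amortised hardware, divided by daily output."""
    power_cost = gen["watts"] / 1000.0 * 24 * ELECTRICITY_PER_KWH
    return (power_cost + gen["hw_cost_per_day"]) / gen["frames_per_day"]

best = min(cost_per_frame(g) for g in generations)
for g in generations:
    c = cost_per_frame(g)
    # The 'signal': once an old node costs notably more per frame than the
    # newest kit (even counting the new kit's purchase price), retire it.
    verdict = "retire" if c > 1.5 * best else "keep"
    print(f"{g['name']}: ${c:.2f}/frame -> {verdict}")

Run that and the old, power-hungry boxes fall out of the bottom of the table, which is exactly the rolling refresh I'm describing.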
I do know one thing about rendering at Pixar, though. Back in the 1990s it took ten hours, or whatever it took, to render a final production-quality frame. In the present day, it still takes ten hours or whatever it takes. Not because the processors haven't gotten faster - they have - but because the animators and movie directors keep adding more effects and extra layers to the rendering job. Which means the rendering time per frame at Pixar, over a whole swathe of years and decades, has remained more or less constant. I do know that much for a fact.