By: Ireland (boh.delete@this.outlook.ie), January 26, 2017 1:49 pm
Room: Moderated Discussions
RichardC (tich.delete@this.pobox.com) on January 26, 2017 11:39 am wrote:
> You need the higher bandwidths elsewhere - where you put together the output of
> several/many rendering machines to serve full-speed uncompressed 24fps video to many
> desktops simultaneously.
>
Okay, so if they wanted to serve even final production-quality frames at 24 frames per second, maybe they could still do that over 1Gbit networking. But I don't think they leave it at that any longer. They render final production-quality frames, and then they don't stop working. They keep re-working on top of already-rendered final-quality footage.
So you're probably getting into some part of that process that needs something above 1Gbit.
From what I can gather from their interviews, they're previewing additional rendering effects on top of previously finished rendered footage, at that 24fps. They seem to have found a lot more intermediary stages between rendering the frames initially and putting the finishing touches on them. My guess is that they render a large amount of final footage, and then keep re-rendering and developing parts of those finished frames, making many more passes over parts of the image, for a long time.
That could get pretty intensive.
There was that old, old expression in traditional film-making - it's 'in the can'. I suspect that nowadays they still get it into the can, but they go back inside the can a lot more times and play around with what's contained in there. And I don't mean the old two-dimensional 'post-production', which was basically about using Photoshop on images frame by frame. I mean actual three-dimensional adjustment and rendering on top of pre-existing rendered footage.
The other thing is that when you build up a lot of separate 'channels' on top of those pre-rendered frames, to adjust certain things, you're pushing a lot more uncompressed data through at that twenty-four frames per second rate - in addition to the frame images themselves.
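To put a rough number on that, here's a back-of-the-envelope sketch of how extra per-frame channels multiply an uncompressed 24fps stream. The resolution, bit depth and pass count are my own illustrative assumptions, not figures from Pixar:

```python
# Back-of-the-envelope: extra per-frame 'channels' (render passes)
# multiply the uncompressed bandwidth of a 24fps stream.
# Resolution, bit depth and pass count are illustrative assumptions.

def stream_gbit_per_s(width, height, bytes_per_pixel, passes, fps=24):
    """Aggregate bandwidth for the frame image plus extra passes."""
    bytes_per_s = width * height * bytes_per_pixel * passes * fps
    return bytes_per_s * 8 / 1e9  # bytes/s -> Gbit/s

# 1080p, 8-bit RGB: the frame image alone is ~1.2 Gbit/s...
print(stream_gbit_per_s(1920, 1080, 3, passes=1))  # ~1.19
# ...and four extra channels (depth, mattes, etc.) push it near 6 Gbit/s.
print(stream_gbit_per_s(1920, 1080, 3, passes=5))  # ~5.97
```

So a handful of extra channels is enough to swamp a 1Gbit link, even at plain HD.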
> So it seems clear to me - the description of Pixar's render farm says most of the links
> are/were 1Gbit/s, and a reasonable analysis shows that under reasonable assumptions,
> 1Gbit/s is enough for the vast majority of rendering problems.
>
> The numbers don't support your argument.
>
> As for the idea that story-telling is critical, well, sure. But that's just the same as
> live-action movies: you need a good script with a good story and good characters, and good
> direction and acting, and all of those are hard to put together. But once you've spent
> $100M or more on all that stuff and you've got the raw footage, then you're in a heck of a hurry
> to do the editing and the score and the soundtrack so that you get it in the theaters
> and start the payback on your huge investment.
I've listened to movie directors who often describe the editing process as a script re-write. The extreme example of that was Adrien Brody, who showed up to the opening night of a Terrence Malick movie expecting to be in the starring role - and he was hardly in the movie at all. Ridley Scott talks about having to 'make the movie' no less than three times - once as a script, once during the production shooting stage, and a third time in the editing suite.
Listening to some of 'The Future of RenderMan' (on Pixar's RenderMan channel on Vimeo), and similar PR talk from Pixar, what I'm noticing is that they're beginning to play around with the 'final footage', so to speak, later and later in the process. I.e. they take something that is rendered to final production quality, and they keep working on re-rendering those images. Maybe that does account for the increase in bandwidth appetite.
But I still take your point that for basic 'HD' 1080p, the bandwidth is not the issue. 4K is a different beast though, isn't it?
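For anyone who wants the arithmetic, here's a quick comparison of the two, assuming 8-bit RGB (production formats are often deeper, which only widens the gap):

```python
# Uncompressed 24fps stream rates versus common link speeds,
# assuming 8-bit RGB (3 bytes per pixel).

def gbit_per_s(w, h, bytes_per_pixel=3, fps=24):
    return w * h * bytes_per_pixel * fps * 8 / 1e9

print(f"1080p: {gbit_per_s(1920, 1080):.2f} Gbit/s")  # ~1.19 - borderline on 1Gbit
print(f"4K:    {gbit_per_s(3840, 2160):.2f} Gbit/s")  # ~4.78 - wants a 10Gbit link
```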
Some other things of interest that Ed Catmull had to say concerned his desire to maintain a separation between what Pixar is doing, what Disney is doing, what Lucas is doing, and several other branches of the enterprise, including the RenderMan division, which stands on its own too. Even within the Disney part there are separate divisions, doing research on the technology that Aaron mentioned - Hyperion, etc.
It makes a lot of sense to me that Ed Catmull would see the future of their industry as supporting different workstreams and a wealth of different ideas and approaches to technology, and to creative direction too. I mean, it's easier said than done: having so many different creative projects joined together, while each still explores different avenues. But these guys are good at supporting a creative process, and backing it up with technology in a way that blends it all together intelligently.
As I say, we've got something to learn from them, on that level.
One other thing that Ed Catmull emphasized in the PR video from two years ago is that the people who license Pixar's RenderMan engine - lots and lots of other production companies - are throwing a heck of a lot more processors at the task of rendering now. What it means from Pixar's point of view, he explained, is that they're having to re-think the way they license out RenderMan, because there are so many more processors getting thrown at the job than there were before.
It would seem to suggest that more computation connected by modest bandwidth is the key. He and others have mentioned something called bi-directional ray tracing too. I haven't looked into what that is, but it sounds painful (ray tracing by itself is horrible on computation). Ed Catmull talked about global illumination and a lot of other stuff besides. There are about ten short videos by Catmull, if you search through the whole collection on that channel.
The technical expert from Disney, Andy Hendrickson, described the Hyperion technology in a dumbed-down way - he explained a little about how they tried to 'contain' the ray tracing within quite a confined part of the world/scene, and not have too many rays chasing off into broader space, so as to conserve computation. He hinted at the ways they had found to divide up the job of rendering so that it runs better on hardware. But given it was an interview for the Adam Savage tech channel, they didn't get into too much detail.
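Reading between the lines, it sounds like the kind of thing where you sort rays into spatially coherent batches, so each batch only touches a small region of the scene at a time. Here's a toy sketch of that general idea - my own illustration of the concept, not Disney's actual Hyperion code:

```python
# Toy sketch: bin rays into coarse scene regions and trace each batch
# together, so a region's geometry is loaded once per batch instead of
# once per ray. My illustration of the concept, not Disney's code.
from collections import defaultdict

def region_of(origin, cell=64.0):
    """Coarse spatial bin for a ray origin (a stand-in for a scene region)."""
    x, y, z = origin
    return (int(x // cell), int(y // cell), int(z // cell))

def trace_in_batches(rays):
    batches = defaultdict(list)
    for origin, direction in rays:
        batches[region_of(origin)].append((origin, direction))
    for region, batch in sorted(batches.items()):
        # A real renderer would load this region's geometry here and
        # intersect the whole batch against it before moving on.
        print(f"region {region}: tracing {len(batch)} coherent rays")

rays = [((10.0, 5.0, 3.0), (0.0, 0.0, 1.0)),
        ((12.0, 7.0, 2.0), (0.0, 0.0, 1.0)),
        ((200.0, 5.0, 3.0), (0.0, 1.0, 0.0))]
trace_in_batches(rays)
```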
> Doing the final rendering in a reasonably
> short elapsed time is financially valuable. But having 10Gbit vs 1Gbit on each rendering
> machine makes precisely no difference to that, because it is completely compute-bound
> - much better to spend the money on a few more machines than on an expensive
> network infrastructure which will have ridiculously low utilization.
>
> Heck, a whole 90-minute movie in 4K resolution uncompressed is about 3800GB, and a single
> 10Gbit link could transfer that in about 1 hour. That can't possibly be critical.
>
I.e. faster than real time, at uncompressed image quality. Yeah.
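The quoted figure checks out, roughly:

```python
# Sanity-check of the figure quoted above: ~3800GB of uncompressed 4K
# pushed over a single 10Gbit/s link.
movie_bytes = 3800e9            # ~3800 GB for a 90-minute movie
link_bits_per_s = 10e9          # 10Gbit/s
minutes = movie_bytes * 8 / link_bits_per_s / 60
print(minutes)                  # ~51 minutes - under the 90-minute runtime
```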