By: David Kanter (dkanter.delete@this.realworldtech.com), April 23, 2013 3:04 pm
Room: Moderated Discussions
Formula350 (burban502.delete@this.gmail.com) on April 23, 2013 2:58 pm wrote:
> David Kanter (dkanter.delete@this.realworldtech.com) on April 23, 2013 8:14 am wrote:
>
> > As Intel's integrated graphics becomes more capable and takes more of the market, DRAM consumption
> > will shift from companies like Nvidia and AMD (which buy from Samsung, Hynix, Micron, etc.) to Intel.
> >
> > To put this in perspective, Intel has compared the Haswell
> > GT3e performance to the discrete Nvidia GT 650M...
>
> Eh, I don't quite think that's how it will pan out. Let's assume that by some freak occurrence the GT3e
> actually is faster than a GT650M; that doesn't mean by any stretch of the imagination that it is as
> capable as one.
I think it will probably depend on the software.
> That wouldn't happen until you see Intel cramming 1GB of that eDRAM into their chips,
> because what that memory "unlocks" on a discrete GPU is the ability to store and/or quickly process textures.
But it's not necessary to access all of that memory that fast; only the working set needs the high bandwidth.
> I can only presume this might work similarly in cases of video editing where the GPU is being leveraged
> for rendering or compute work, but I'll refer more specifically to gaming. When you run a graphics card
> of higher performance with less RAM than that of a slightly lower-performance card that is equipped with
> more RAM, the latter will actually pull ahead in instances where there is a call for texture storage space,
It depends on the working set. If the working set fits in the eDRAM, you get its full dedicated bandwidth: 64GB/s, maybe 75 or 80GB/s.
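
As a rough sketch (assuming 4x MSAA and standard 32-bit color and depth surfaces; the counts are illustrative, not measured), the per-frame render targets at 1080p fit comfortably in 128MB:

    # Back-of-the-envelope working-set check: do the per-frame render
    # targets at 1080p fit in 128MB of eDRAM? Surface counts and the 4x
    # MSAA factor are illustrative assumptions.
    width, height, bytes_per_pixel = 1920, 1080, 4
    surface = width * height * bytes_per_pixel   # one 32bpp surface, ~7.9 MiB
    msaa = 4                                     # assume 4x multisampling
    color = surface * msaa                       # ~31.6 MiB
    depth = surface * msaa                       # ~31.6 MiB
    print(f"{(color + depth) / 2**20:.1f} MiB of 128 MiB")   # ~63.3 MiB
    # The hot surfaces fit; bulk texture data can stream from system DRAM.
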
> such as when running high resolutions or with anti-aliasing enabled. I believe it was the GTX480
> going up against the HD5870 where this occurred, though it might have been GTX580 vs HD6970, but nevertheless...
> The nVidia card came equipped with 1.5GB, whereas AMD was outfitting theirs with 2GB. Despite nV having
> a quicker product most of the time, when multi-screen (high resolution) gaming or increased levels of AA
> at even 1080p were tested, that "mere" extra 512MB was allowing the underdog to achieve higher frame rates. Or
> rather, more importantly, it was able to achieve playable frame rates!
That's right. If your working set spills out of the eDRAM, performance will drop from ~60-80GB/s of dedicated bandwidth to something lower, since misses have to go out to system memory. But it also depends on how well the eDRAM acts as a cache: even with an 80% hit rate, that would translate into an effective bandwidth of ~57GB/s.
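
That ~57GB/s figure falls out of a simple hit-rate-weighted average, assuming ~64GB/s for the eDRAM and ~25.6GB/s for dual-channel DDR3-1600 behind it (both assumed peaks, chosen to match the ballpark numbers above):

    # Effective bandwidth as a hit-rate-weighted blend of eDRAM and system
    # DRAM. The 64 and 25.6 GB/s peaks are assumptions (eDRAM, dual-channel
    # DDR3-1600), not figures from the post.
    def effective_bandwidth(hit_rate, edram_gbs=64.0, dram_gbs=25.6):
        return hit_rate * edram_gbs + (1.0 - hit_rate) * dram_gbs

    print(round(effective_bandwidth(0.8), 1))   # 56.3 -> the ~57GB/s cited
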
> As such, I just don't see it taking anything away from the discrete card market (and the aforementioned
> memory makers), not unless Intel lets you supplement that 128MB eDRAM with a configurable amount of system
> DRAM, similar to what AMD had done with their 128MB of Side-port memory (DDR3) that the Northbridge IGPs
> used.
How do you think the integrated GPU works today? It already uses system memory, and that use is controlled by the driver.
> Yet that again is imparting a parasitic loss on system memory performance, which I suspect is why
> AMD ditched the Side-port when they came out with the APUs, reworking how their GPU interacts with the
> system memory.
What is a 'parasitic loss'?
David
David