By: Formula350 (burban502.delete@this.gmail.com), April 23, 2013 1:58 pm
Room: Moderated Discussions
David Kanter (dkanter.delete@this.realworldtech.com) on April 23, 2013 8:14 am wrote:
As Intel’s integrated graphics becomes more capable and takes more of the market, DRAM consumption will shift from companies like Nvidia and AMD (which buy from Samsung, Hynix, Micron, etc.) to Intel.
To put this in perspective, Intel has compared the Haswell GT3e performance to the discrete Nvidia GT 650M...
Eh, I don't quite think that's how it will pan out. Let's assume that by some freak occurrence the GT3e actually is faster than a GT 650M; that still doesn't mean by any stretch of the imagination that it is as capable as one. That wouldn't happen until you see Intel cramming 1GB of that eDRAM into their chips, because what that memory "unlocks" on a discrete GPU is the ability to store and/or quickly process textures. I can only presume this might work similarly in cases of video editing where the GPU is being leveraged for rendering or compute work, but I'll refer more specifically to gaming.

When you run a graphics card of higher performance with less RAM than a slightly lower-performance card that is equipped with more RAM, the latter will actually pull ahead in instances where there is a call for texture storage space, such as when running high resolutions or with anti-aliasing enabled. I believe it was the GTX 480 going up against the HD 5870 where this occurred, though it might have been the GTX 580 vs the HD 6970, but nevertheless... The nVidia card came equipped with 1.5GB, whereas AMD was outfitting theirs with 2GB. Despite nV having the quicker product most of the time, when multi-screen (high resolution) gaming or increased levels of AA at even 1080p were tested, that "mere" extra 512MB was allowing the underdog to achieve higher frame rates. Or rather, more importantly, it was able to achieve playable frame rates!
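To put some rough numbers behind that: render targets alone grow with resolution and MSAA sample count, before a single texture is counted. The sketch below is a simplified back-of-the-envelope estimate (it ignores driver overhead, buffer compression, and double/triple buffering, so treat the figures as illustrative only):

```python
def render_target_mb(width, height, msaa=1, bytes_color=4, bytes_depth=4):
    """Rough memory for one multisampled color + depth/stencil target,
    plus a resolved (single-sample) color buffer for display.
    Simplified: real drivers tile and compress these buffers."""
    samples = width * height * msaa
    msaa_buffers = samples * (bytes_color + bytes_depth)  # color + depth per sample
    resolved = width * height * bytes_color               # resolve target
    return (msaa_buffers + resolved) / (1024 ** 2)

for (w, h), aa in [((1920, 1080), 4), ((1920, 1080), 8), ((2560, 1600), 8)]:
    print(f"{w}x{h} {aa}xMSAA: ~{render_target_mb(w, h, aa):.0f} MB before any textures")
```

Even under these conservative assumptions, 2560x1600 with 8xAA eats a few hundred MB in render targets alone, which is exactly the regime where an extra 512MB of VRAM separates "playable" from "not."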
As such, I just don't see it taking anything away from the discrete card market (or the aforementioned memory makers), not unless Intel lets you supplement that 128MB of eDRAM with a configurable amount of system DRAM, similar to what AMD did with the 128MB of SidePort memory (DDR3) that their northbridge IGPs used. Yet that again imparts a parasitic loss on system memory performance, which I suspect is why AMD ditched SidePort when they came out with the APUs, reworking how their GPU interacts with system memory. As it stands now, when the APU's graphics processor is enabled, system memory performance is unaltered (unlike with SidePort, when you assigned it additional texture memory via system RAM). At least it's that way with Llano, on two different motherboards with two different chipsets (A75 and A55), but I can't say whether that continues with their Steamroller APU variants.
At the end of the day though, I'm sure this is a stepping stone for Intel, and it'll be interesting to see how things play out. Unfortunately, however, most consumers are ignorant of that level of a computer's workings and will simply see "IT HAZ MOAR PURFORMENCE!", so they'll eat it up like they always have. But, oh well, that's sheeple for ya!