By: Exophase (exophase.delete@this.gmail.com), April 20, 2012 6:11 pm
Room: Moderated Discussions
Personally I don't see the shared front end as being the major problem with the BD design. Intel shares the front end between two threads too, and at broadly similar width. If you're using one thread significantly more than the other (or dedicating the module to one thread exclusively), the sharing shouldn't hurt you, so long as the scheduling between the two favors the more important one. Of course it doesn't help that AMD thinks it's better not to schedule threads together on a module if they can help it, and then has to change that later..
In other words, that 10-20% or whatever is the cost of running two threads on one module vs two modules, not the cost of running one thread on the module. And since AMD is delivering just as many "full" cores and threads as Intel at the same market segments (more, really, if you consider that Intel doesn't enable HT in the mid-range), this isn't really the problem. Fairly little software right now benefits from keeping more than 4 cores busy full time, and the workloads that do are the ones AMD actually does well at.
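For what it's worth, that cost is easy enough to measure directly. Here's a rough sketch (Linux, GCC, link with -lpthread) that pins two busy integer threads either to the two cores of one module or to cores in separate modules; the assumption that logical CPUs 0/1 share a module while 0 and 2 don't is just the typical enumeration, so check /proc/cpuinfo on the actual box, and the loop counts are arbitrary:

```c
/* A rough sketch, not a rigorous benchmark: pin two integer-heavy threads to
 * either the two cores of one module or to cores in different modules and
 * compare wall time. The CPU numbering (0/1 sharing a module, 0/2 not) is an
 * assumption about the typical enumeration -- verify with /proc/cpuinfo. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static void pin_to_cpu(int cpu)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

static void *worker(void *arg)
{
    pin_to_cpu((int)(intptr_t)arg);
    volatile uint64_t acc = 0;
    for (uint64_t i = 0; i < 300000000ULL; i++)   /* integer-heavy busy loop */
        acc += i ^ (acc >> 3);
    return NULL;
}

static double run_pair(int cpu_a, int cpu_b)
{
    struct timespec t0, t1;
    pthread_t a, b;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&a, NULL, worker, (void *)(intptr_t)cpu_a);
    pthread_create(&b, NULL, worker, (void *)(intptr_t)cpu_b);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
}

int main(void)
{
    printf("same module (cpus 0,1): %.2f s\n", run_pair(0, 1));
    printf("two modules (cpus 0,2): %.2f s\n", run_pair(0, 2));
    return 0;
}
```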
That's not to say the front end doesn't have weaknesses. The instruction cache should have more associativity for sure; 2-way vs 8-way for Intel is going to be a big problem on some workloads. And the decoders can be tripped up by big flaws too, like only being able to handle one double-path instruction per cycle instead of the two you would expect. That makes the decode rate choke on a stream of AVX instructions, for instance - and since decode is in-order, this is an easy place for fragility to be exposed. It's the same reason Intel naturally improved a lot on its 3-1-1 style decode patterns, once fused uops made the simple decoders much more powerful.
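To see the double-path limit you mostly just need a stream of back-to-back 256-bit ops. Something like this sketch (GCC, compile with -O2 -mavx) is what I'd time against the same work written with 128-bit ops; whether decode or the 128-bit FP pipes ends up being the wall is exactly what you'd be measuring, and the 8-way unroll and loop count are arbitrary choices:

```c
/* A minimal sketch of the kind of stream that runs into the
 * one-double-path-instruction-per-cycle decode limit: back-to-back
 * independent 256-bit ops. Compare against the same work done with
 * 128-bit (xmm) ops to separate decode from execution limits. */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    __m256 acc[8];
    const __m256 k = _mm256_set1_ps(1.000001f);
    for (int i = 0; i < 8; i++)
        acc[i] = _mm256_set1_ps((float)i);

    for (long n = 0; n < 50000000L; n++) {
        /* eight independent 256-bit adds; each is a double-path decode on BD */
        acc[0] = _mm256_add_ps(acc[0], k);
        acc[1] = _mm256_add_ps(acc[1], k);
        acc[2] = _mm256_add_ps(acc[2], k);
        acc[3] = _mm256_add_ps(acc[3], k);
        acc[4] = _mm256_add_ps(acc[4], k);
        acc[5] = _mm256_add_ps(acc[5], k);
        acc[6] = _mm256_add_ps(acc[6], k);
        acc[7] = _mm256_add_ps(acc[7], k);
    }

    /* keep the accumulators live so the loop isn't optimized away */
    float out[8];
    __m256 sum = acc[0];
    for (int i = 1; i < 8; i++)
        sum = _mm256_add_ps(sum, acc[i]);
    _mm256_storeu_ps(out, sum);
    printf("%f\n", out[0]);
    return 0;
}
```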
But I think the real problem with BD is in the cores themselves and the memory hierarchy. Saying that it's merely "two ALUs" obscures the real execution width. If it could only do two simple ALU operations per cycle but could do other types of operations alongside them, it'd probably be a lot better off. But it doesn't just shove ALU operations through the two EX ports: multiplies, branches, and stores must take one of them too. The AG ports don't look like they handle much besides address generation (including loads) and inc/dec.
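To make that concrete, here's a toy loop (GCC extended asm, x86-64) where every op in the body - the two adds, the multiply, and the store - gets charged to the two EX ports per the description above, so the 4-wide front end buys nothing past roughly two of them per cycle. The instruction mix and iteration count are purely illustrative:

```c
/* A rough sketch of EX-port pressure: per the description above, the adds,
 * the multiply, and the store all need one of the two EX ports, so this body
 * can't sustain more than ~2 of those ops per cycle no matter how wide the
 * front end is. Only loads/address generation on the AG ports overlap free. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint64_t a = 1, b = 2, c = 3, d = 0;
    uint64_t slot = 0;

    for (long i = 0; i < 100000000L; i++) {
        __asm__ volatile(
            "add  $1, %[a]        \n\t"   /* EX port */
            "add  $2, %[b]        \n\t"   /* EX port */
            "imul $3, %[c], %[d]  \n\t"   /* EX port (integer multiply) */
            "mov  %[a], %[m]      \n\t"   /* store -- also charged to EX above */
            : [a] "+r"(a), [b] "+r"(b), [d] "=&r"(d), [m] "=m"(slot)
            : [c] "r"(c)
            : "cc");
    }
    printf("%llu %llu %llu %llu\n", (unsigned long long)a,
           (unsigned long long)b, (unsigned long long)c, (unsigned long long)d);
    return 0;
}
```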
This creates a balance problem that's really astonishing.. I don't know much about chip design, so someone please tell me if I'm saying something stupid, but it seems like a lot of circuitry goes into the register file and forwarding network to be able to feed 4 execution units simultaneously. Isn't it then a huge waste to make a couple of them capable of almost nothing? It seems like those AGLU ports should at least be capable of basic ALU operations and moves, considering their name. I wonder if that was the intention and it somehow got axed. At least Piledriver is supposed to extend them to handle simple reg/reg moves, as well as some obscure instructions (XADD and BEXTR.. seriously? No idea what they're thinking here)
And the write-through L1 doesn't hurt them in principle, but they're probably being burned by it because of the low L2 bandwidth. The WCC would presumably help a lot here if you're dealing with a lot of consecutive small (or at least 32- or 64-bit) stores, but the L2 bandwidth may really be so low that even that isn't enough.
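If anyone wants to poke at this, a dumb store loop is enough: consecutive 64-bit stores into a buffer that fits in the 16KB L1D still have to drain through the WCC to L2, since L1 is write-through, so the bandwidth you actually get says a lot about whether the L2 write path is the wall. Rough sketch, with arbitrary sizes and counts:

```c
/* A rough sketch: a tight stream of consecutive 64-bit stores into a buffer
 * small enough to live in the 16 KB L1D. Because L1 is write-through, every
 * line still drains to L2 through the WCC, so achieved bandwidth here hints
 * at whether the L2 write path is the limiter. Sizes are arbitrary. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define QWORDS (8192 / 8)      /* 8 KB: comfortably inside the 16 KB L1D */
#define REPS   (1L << 18)

int main(void)
{
    /* volatile keeps the stores from being optimized away */
    volatile uint64_t *buf = malloc(QWORDS * sizeof *buf);
    if (!buf) return 1;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long r = 0; r < REPS; r++)
        for (size_t i = 0; i < QWORDS; i++)
            buf[i] = i;                         /* consecutive 64-bit stores */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double bytes = (double)REPS * QWORDS * sizeof(uint64_t);
    printf("store bandwidth: %.2f GB/s\n", bytes / sec / 1e9);
    return 0;
}
```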
The L2 latency does look high, and we know that even if you halve the size to 1MB the design only allows taking it down to 18 cycles, so it's not dominated by size. I think the problem here is that AMD wants to accommodate huge frequencies. Aside from the fact that they don't have the power/thermal budget for those frequencies, it suggests conflicting design goals. On the one hand AMD wants it to be able to turbo up for high single-threaded performance - but they also want to cram a bunch of cores into a chip and let them run at full tilt. And since it's optimized for server workloads (lots of cache), it's also optimized for running a lot of cores without much room left over for turbo most of the time - so it ends up running at low clocks while paying the design price for high clocks. Same with their goals in mobile, which is probably going to be their biggest market in terms of chip volume - do you think those 17W-35W Trinity chips are going to benefit from a design that can turbo to 5GHz? Probably not very often.
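For reference, the sort of pointer chase people use to get these latency numbers looks something like this: a random cycle over a working set bigger than the L1D but smaller than the 2MB L2, walked serially so every load depends on the previous one. Multiply the ns per load by the clock to get cycles; the sizes and step count here are arbitrary:

```c
/* A rough sketch of a dependent-load latency chase: build a single random
 * cycle (Sattolo's algorithm) over a 512 KB working set -- bigger than L1D,
 * smaller than the 2 MB L2 -- then walk it serially. Sizes are arbitrary. */
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (512 * 1024 / sizeof(size_t))   /* 512 KB working set */

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    if (!next) return 1;

    /* identity, then Sattolo shuffle -> one big cycle through all N slots */
    for (size_t i = 0; i < N; i++)
        next[i] = i;
    srand(1);
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (size_t)rand() % i;
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    /* chase: each load's address depends on the previous load's result */
    const long steps = 100000000L;
    size_t p = 0;
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (long s = 0; s < steps; s++)
        p = next[p];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / steps;
    printf("%.1f ns per dependent load (p=%zu)\n", ns, p);
    return 0;
}
```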
So the whole high-clock goal seems like an effort to land a respectable halo product, but without actually coming close enough to Intel in single-threaded performance to look respectable. They should have cut their losses here and focused on throughput and low power, IMO.