By: David Kanter (dkanter.delete@this.realworldtech.com), April 20, 2012 4:32 pm
Room: Moderated Discussions
Joel (joel.hruska@gmail.com) on 4/20/12 wrote:
---------------------------
>EduardoS (no@spam.com) on 4/20/12 wrote:
>---------------------------
>>Kira (kirsc@aeterna.ru) on 4/20/12 wrote:
>>---------------------------
>>>What was the purpose of using a shared decoder even supposed to be? Is the size/power
>>>overhead of a pair of 4-wide decoders really that large in a modern desktop/server CPU?
>>
>>If you look, it's the biggest shared block after L2, at almost twice the size of the shared FPU,
>>
>>>Perhaps a single beefy 4-issue or 6-issue core with SMT would have been a smarter move.
>>
>>That would sacrifice clock speed; for workloads with low instruction-level parallelism,
>>a higher clock speed is preferred over a wider core.
>>
>>But apparently the target chosen was wrong, and SB busted (except in a few workloads) the old rule "average IPC < 1".
>>
>
>EduardoS,
>
>The first step in understanding Bulldozer is realizing that the chip doesn't make much sense. ;)
>AMD's stated reason for sharing so much of the front end was to reduce die space
>and offer most of the benefit of a traditional dual-core part in a fraction of the
>die space. This made a lot of sense at the time, particularly since Intel was already
>leading them by 12-18 months when it came to moving to new nodes.
>
>Both my own tests and those done elsewhere have indicated that sharing the front-end
>as it does "cost" Bulldozer between 10-20% of its theoretical performance. In and
>of itself, that's not bad -- compared to Thuban, they saved more than 10-20% die space (assuming all else equal).
It's very hard to measure that without using any performance analysis tools. I don't disagree that it's a significant hit in performance, but quantifying that is challenging.
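For what it's worth, here's the sort of experiment I'd use to put a rough number on it: run the same integer-heavy loop on two threads, once pinned to the two cores of a single module and once to cores in different modules, and compare the wall-clock time. This is only a minimal sketch; it assumes Linux, and it assumes the usual BD enumeration where logical CPUs 0 and 1 share a module while 0 and 2 do not, so check your topology before trusting the core numbers.

```c
/* Minimal sketch: estimate the penalty of sharing one Bulldozer module's
 * front end by timing two busy threads pinned either to the same module
 * (CPUs 0,1 - an assumption) or to different modules (CPUs 0,2). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdint.h>
#include <time.h>

static void *worker(void *arg)
{
    int cpu = (int)(intptr_t)arg;
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(cpu, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    /* Several independent integer ops per iteration so decode/issue
     * bandwidth matters, not just a single dependency chain. */
    uint64_t a = 1, b = 2, c = 3, d = 4;
    volatile uint64_t sink;
    for (uint64_t i = 0; i < 1000000000ULL; i++) {
        a += i; b ^= i; c += a; d ^= b;
    }
    sink = a + b + c + d;
    (void)sink;
    return NULL;
}

static double run_pair(int cpu_a, int cpu_b)
{
    struct timespec t0, t1;
    pthread_t ta, tb;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    pthread_create(&ta, NULL, worker, (void *)(intptr_t)cpu_a);
    pthread_create(&tb, NULL, worker, (void *)(intptr_t)cpu_b);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    printf("same module (cpus 0,1):       %.2f s\n", run_pair(0, 1));
    printf("different modules (cpus 0,2): %.2f s\n", run_pair(0, 2));
    return 0;
}
```

The ratio of the two times gives a crude upper bound on the sharing penalty for that particular instruction mix; real workloads will land somewhere below it.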
>The problem is, all else *isn't* equal. AMD stuffed Bulldozer with cache (an eight-core
>BD has something like 16MB of cache compared to 10MB of cache for a 6-core Thuban).
Having lots of cache is good for server workloads.
>That blows the die-size savings apart...which might still be ok, if the caches were
>fast. They aren't. In fact, they're painfully slow. Because every L1 write is duplicated
>in L2, L1 write latency is effectively pinned to L2 write latency.
You are correct that the caches are painfully slow, but that's not the reason why. Frankly, I don't understand why the L1 is 4 cycles instead of 3. I REALLY don't understand why the L2 cache is so slow (20 cycles, really??), because size alone doesn't account for it. 12-14 cycles sounds much more reasonable.
The L3 cache is also quite slow, in part because of the slow L2 and in part because it runs asynchronously to the cores. If you look at those two factors together and assume a 14 cycle L2, you can probably cut the L3 latency down by ~10 cycles.
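If anyone wants to reproduce those latency numbers, the standard trick is pointer chasing: fill a buffer sized to fit in a given cache level with a random cycle of pointers and time a long chain of dependent loads. A minimal sketch follows; the buffer sizes are my assumptions, chosen to roughly target BD's 16KB L1D, 2MB L2 and 8MB L3.

```c
/* Minimal pointer-chasing sketch for measuring load-to-use latency. */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <time.h>

static double chase(size_t bytes, uint64_t iters)
{
    size_t n = bytes / sizeof(void *);
    void **buf = malloc(n * sizeof(void *));
    size_t *idx = malloc(n * sizeof(size_t));

    /* Shuffle the indices, then link them into one random cycle so every
     * load depends on the previous one and prefetchers can't help. */
    for (size_t i = 0; i < n; i++) idx[i] = i;
    for (size_t i = n - 1; i > 0; i--) {
        size_t j = rand() % (i + 1);
        size_t t = idx[i]; idx[i] = idx[j]; idx[j] = t;
    }
    for (size_t i = 0; i < n; i++)
        buf[idx[i]] = &buf[idx[(i + 1) % n]];

    void **p = &buf[idx[0]];
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < iters; i++)
        p = (void **)*p;                    /* serially dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    if (p == NULL) puts("unreachable");     /* keep p live */
    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    free(buf); free(idx);
    return ns / iters;                      /* average ns per load */
}

int main(void)
{
    /* Roughly L1D-, L2- and L3-sized buffers (assumed BD capacities). */
    size_t sizes[] = { 8 << 10, 512 << 10, 6 << 20 };
    for (int i = 0; i < 3; i++)
        printf("%8zu bytes: %.2f ns/load\n",
               sizes[i], chase(sizes[i], 50000000ULL));
    return 0;
}
```

Multiply the ns/load figure by the core clock in GHz to convert to cycles; the 512KB and 6MB points will also include TLB effects unless you use huge pages.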
The fact that the L1 is write-through is totally irrelevant to latency and should actually improve things, because AMD got rid of ECC on the L1. The stores go directly to the 4KB write-combining cache and are then written back to the L2 on a deferred basis.
http://www.realworldtech.com/page.cfm?ArticleID=RWT082610181333&p=9
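To illustrate why write-through doesn't pin store latency to the L2: hammer stores into a small buffer and look at the time per store. If every store had to wait out a ~20 cycle L2 you'd see several nanoseconds per store; in practice stores retire at roughly one per cycle because they sit in the store buffer / write-combining cache and drain to the L2 later. A minimal sketch follows; the 4KB buffer just mirrors the WCC size mentioned above and this does not measure that structure directly.

```c
/* Minimal sketch: average time per store into a small (4KB) buffer.
 * At 4GHz, a 20-cycle stall per store would be ~5ns/store; buffered
 * stores should come out near the core clock period instead. */
#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    static volatile uint64_t buf[512];      /* 4KB */
    const uint64_t iters = 1000000000ULL;

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (uint64_t i = 0; i < iters; i++)
        buf[i & 511] = i;                   /* independent stores */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("%.3f ns per store\n", ns / iters);
    return 0;
}
```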
>I still believe BD's biggest problem is its cache latencies, but the chip as it
>shipped last year is a badly flawed piece of work.
I think the cache hierarchy overall is definitely one of the biggest culprits. Hopefully they will fix things in the future.
DK