By: Doug S (foo.delete@this.bar.bar), July 30, 2021 11:01 am
Room: Moderated Discussions
Heikki Kultala (heikki.kult.ala.delete@this.gmail.com) on July 29, 2021 11:18 pm wrote:
> Doug S (foo.delete@this.bar.bar) on July 29, 2021 5:44 pm wrote:
> > None of it really matters, since the process names have nothing to do with a physical dimension
> > anywhere in the design. It is just a placeholder for "2x the transistors in the next generation"
> > but we aren't even seeing that lately as TSMC only got 1.8x scaling on N5 and 1.7x on N3
> > - but TSMC wasn't calling those 5nm and 3nm, it is mostly outsiders doing so (maybe TSMC
> > does as well, but probably only because outsiders referred to them that way)
> >
> > Who knows what TSMC will call the stuff below N2, will it be N1.4 or P1400 or just
> > choose another letter at random, multiply by 10, so X14 then X10 and so on.
>
> TSMC did not get 1.8x scaling on N5. In reality (synthesizing any
> reasonable piece of logic that does something) it's much worse.
>
> Or, let's say that TSMC might have gotten 1.8x for a single best-case standard cell
> component type for their marketing materials, but TSMC's customers get MUCH LESS
> than 1.8x for their real-world designs that actually do something useful.
Cache scaling is not as good, so customers like Apple, who added a lot of cache when going to N5, see worse overall scaling. For N5 to N3, TSMC states that logic scales at 1.7x, cache at 1.2x, and I/O at 1.1x. I don't recall seeing them report cache scaling figures for N7 to N5.
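As a back-of-the-envelope illustration of why whole-chip scaling lags the headline logic number, here's a minimal sketch. The 1.7x / 1.2x / 1.1x factors are TSMC's stated N5-to-N3 numbers above; the 60/30/10 logic/SRAM/I-O area split is only an assumed example, not any real chip's mix.

    # Minimal sketch: effective whole-chip density gain from per-block gains.
    # Factors are TSMC's stated N5->N3 numbers; the area mix is assumed.

    def effective_scaling(mix, factors):
        # Area after the shrink = sum of (old area fraction / per-block gain);
        # effective whole-chip density gain = old area / new area.
        new_area = sum(mix[block] / factors[block] for block in mix)
        return 1.0 / new_area

    mix = {"logic": 0.60, "sram": 0.30, "io": 0.10}
    factors = {"logic": 1.7, "sram": 1.2, "io": 1.1}

    print(round(effective_scaling(mix, factors), 2))  # ~1.44x, well short of 1.7x

Even with that assumed mix, the chip-level gain comes out around 1.44x - the SRAM and I/O that barely shrink drag the average well below the logic figure.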
Anyone know why cache scaling is becoming a problem? Might it have to do with congestion in the metal layers? If they can do something like Intel's PowerVia and have metal sandwiching the logic, the metal routing will become easier - especially for parts that will be stacked, which is more and more common.
Also, Apple taped out the A14 months earlier than they typically tape out a new core. Perhaps that had to do with the A14 design also being used for the M1, and with wanting some extra test time / rework cycles before announcing the new Macs. The early tapeout would leave a shorter window between completion of the A13 and A14 designs, which may have meant cutting a few corners on optimizing for area.
Apple's A12 on N7 had a transistor density of 83 Mtr/mm^2. The A14 on N5 was 134. That's 1.6x scaling, which isn't all that far from TSMC's claim of 1.8x once you take into consideration that cache sizes were increased and Apple may not have worked quite as hard on minimizing area. TSMC's quoted peak transistor density for N5 is 171 Mtr/mm^2; the Kirin at 145 is not far behind. Go look at Intel's claimed Mtr/mm^2 versus the density of actual Intel CPUs and you'll find there is a much bigger gap.
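A quick arithmetic check on those figures (all numbers as quoted above):

    # Density figures quoted above, in Mtr/mm^2
    a12_n7 = 83        # Apple A12 on N7
    a14_n5 = 134       # Apple A14 on N5
    n5_peak = 171      # TSMC's quoted peak density for N5
    kirin_n5 = 145     # Kirin on N5

    print(round(a14_n5 / a12_n7, 2))     # ~1.61x actual A12 -> A14 scaling
    print(round(a14_n5 / n5_peak, 2))    # A14 reaches ~78% of TSMC's quoted peak
    print(round(kirin_n5 / n5_peak, 2))  # Kirin reaches ~85% of the quoted peak

So real N5 products land within roughly 80-85% of TSMC's quoted peak density, which is a far smaller gap than you see between Intel's claimed figures and its shipping CPUs.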