Old school (~1995-2003) frequency scaling vs today within a single node?

By: John H (john.delete@this.not.com), May 23, 2020 5:32 am
Room: Moderated Discussions
Just curious --

During much of the '90s and early 2000s, we would see substantial frequency scaling* from the same processor within a period of 6-18 months, while today chips basically launch at "max margin," even on a new manufacturing process (e.g., Zen 2).

Are these the two main causes of "today we see the full potential out of the box":

- Manufacturing nodes are very expensive, and midpoint corrections are designed to improve yield but not necessarily frequency
- Processor modeling is much better today, so critical speedpaths are basically all identified up front?

Thanks,
John

*I'm thinking of how Coppermine had ~4-5 revisions, some of which appeared to be revisions of the die itself and others of the manufacturing process. K7, Willamette, and Northwood had similar constant increments. K6 also had a long history of frequency bumps on the same node.