By: Adrian (a.delete@this.acm.org), June 2, 2022 1:47 am
Room: Moderated Discussions
Doug S (foo.delete@this.bar.bar) on June 1, 2022 9:50 pm wrote:
> Peter Lewis (peter.delete@this.notyahoo.com) on June 1, 2022 3:55 pm wrote:
> > >> I think x86 will eventually be killed by variable length instruction decode, Moore’s law slowing
> > >> down, availability of software binary translation from x86 to something else and most low-performance
> > >> software running on top of JavaScript. The x86 instruction sets will eventually have the same market
> > >> significance as the IBM 360 instruction set. I own Intel stock and I’m not selling because I think
> > >> it will take more than 20 years for x86 to be displaced from the dominant position it has today.
> > >
> > > Why? What are the market forces that you believe will displace
> > > x86? What do you think will replace it, RISC-V?
> >
> > My guess is the higher complexity and higher power consumption
> > of x86 will eventually allow ARM implementations
> > to outperform x86 implementations. Apple’s M1 P-cores currently
> > decode 8 instructions per clock, while Intel’s
> > Golden Cove cores in Alder Lake and Sapphire Rapids decode
> > 6 instructions per clock. When ARM implementations
> > are decoding 32 instructions per clock, it will be very difficult for x86 implementations to keep up.
> >
>
>
> People have been predicting this ever since the first superscalar RISCs appeared over 30
> years ago, and it still hasn't happened. A couple years ago people were saying 3/4 wide was
> the best x86 could manage, now they've beat it. Are you saying where they are now is the
> limit, or will the goalposts shift again in 5 years when they take the next step wider?
>
Using an ISA that requires a more complex decoder, especially when decoding many instructions per cycle, is just one of many factors that determine the trade-offs between speed, power consumption, manufacturing cost and design cost.
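To make the decoder point concrete, here is a minimal sketch of mine (not from the thread); the one-byte length code is an invented toy, not the real x86 encoding, which determines length from prefixes and opcode bytes. With a fixed 4-byte encoding, every decoder in a wide front end knows up front where its instruction starts; with variable lengths, finding the start of the Nth instruction is a serial dependency chain that wide decoders typically break with extra hardware (e.g. instruction-length predecode bits or length speculation).

/* Sketch: fixed-length vs. variable-length instruction start computation.
 * The toy length function is invented for illustration only. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

/* Fixed-length ISA (AArch64-style): instruction n starts at byte 4*n,
 * so n parallel decoders can each grab their own bytes in the same cycle. */
static size_t fixed_start(size_t n) { return 4 * n; }

/* Toy variable-length "ISA": the first byte encodes the length (1..15 bytes),
 * loosely mimicking the fact that x86 length is only known after inspecting
 * prefixes and opcode bytes. */
static size_t toy_length(const uint8_t *p) { return (*p % 15) + 1; }

/* Finding where instruction n starts requires walking all earlier
 * instructions: the serial chain that wide decoders must hide. */
static size_t variable_start(const uint8_t *code, size_t n) {
    size_t off = 0;
    for (size_t i = 0; i < n; i++)
        off += toy_length(code + off);
    return off;
}

int main(void) {
    uint8_t window[64] = { 3, 0, 0, 0, 7, 1, 2, 3, 4, 5, 6, 0, 2, 9 };
    for (size_t n = 0; n < 4; n++)
        printf("insn %zu: fixed start %zu, variable start %zu\n",
               n, fixed_start(n), variable_start(window, n));
    return 0;
}

The hardware needed to break that chain is one concrete form of the decoder complexity cost mentioned above, and its cost grows with the decode width.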
A disadvantage in ISA can be easily compensated by advantages in manufacturing process or just by having more competent designers.
The same happens in, for example, a fighting sport. A heavier fighter has an advantage, but a more skilled opponent can easily beat him despite the weight difference.
Nevertheless, when everything else is the same, i.e. the two opponents are equally skilled, the heavier one will win, which is why combat-sport competitors are separated into weight classes.
The same holds for CPU design and manufacturing. When everything else is equal, the CPU using an ISA that needs a simpler decoder will be cheaper and/or faster.
As long as Intel had other more important advantages, a less efficient ISA did not matter.
Now, when everybody capable of an up-to-date CPU design has access to the same manufacturing processes, and can either spend amounts of money comparable to Intel's on a design or benefit from reduced design costs, e.g. by licensing cores or other components, the more efficient ISA begins to matter again.
Before the last few years, there was no credible chance that x86-64 could be replaced by AArch64, despite the frequently repeated claims and the various failed attempts to design competitive CPUs.
Now the situation has changed. Intel has had serious difficulties in remaining competitive in semiconductor manufacturing, and for a decade it has had no clear direction for the evolution of the x86-64 ISA: the 2022/2023 Intel non-server CPUs are expected to have approximately the same ISA as the 2013/2014 Intel CPUs. ARM, in contrast, has evolved from Armv7 to Armv8 to Armv9, with decent improvements at each ISA revision. As a result, the chances of the Intel/AMD ISA being eventually replaced during the next decade have become non-negligible.