LEA

By: Brett (ggtgp.delete@this.yahoo.com), June 16, 2022 2:52 pm
Room: Moderated Discussions
Mark Roulo (nothanks.delete@this.xxx.com) on June 16, 2022 1:57 pm wrote:
> Paul A. Clayton (paaronclayton.delete@this.gmail.com) on June 16, 2022 1:13 pm wrote:
> > Doug S (foo.delete@this.bar.bar) on June 16, 2022 9:39 am wrote:
> > > hobold (hobold.delete@this.vectorizer.org) on June 16, 2022 5:12 am wrote:
> > [snip]
> > >> I was wondering for a while if maybe Apple designed a processor that can sometimes
> > >> execute two serially dependent instructions within one longer clock cycle.
> > >>
> > >> How much % of cycle time is latch overhead these days? What if instead of the usual beat
> > >> "latch work latch work latch" you built for "latch work work latch work work latch"?
> > >>
> > >> One probably wouldn't even try to make this work for arbitrary sequences of two dependent instructions.
> > >> But maybe a small subset of dependent pairs is statistically dominant enough to focus on?
> > >
> > > It sounds like you're suggesting something like Pentium
> > > 4's "double pumped ALU" (aka "Rapid Execution Engine")
> >
> > The proposal was to remove latch overhead (width-pipelined/staggered ALUs do not remove
> > the latches between operations). A cascaded ALU does perform one operation after another
> > (plausibly without latches). I think Sun's SuperSPARC implemented something like this
>
> My understanding of SuperSPARC is that code like this:
>
> r1 = r2 + r3
> r4 = r1 + 7
>
> Could be scheduled such that both instructions ran in a single clock cycle even though
> the second add needed the result of the first add.
>
> [It could also do one more instruction if everything worked out ...]
>

The bulk of the x86 performance advantage over dumb early RISC is LEA, which is a shifted double add:

LEA Rt, [Rs1+a*Rs2+b] => Rt = Rs1 + a*Rs2 + b

A three-source add only costs a gate delay or so on top of the dozen needed for a two-source add, and the short shift costs 2 or so gate delays.
An ALU cycle gives you 20 or so gate delays in which to complete your work.
Someone will correct me on x86 64-bit clock speeds. ;)

Since the backend is 10 wide, you can bundle consecutive dependent adds, performing both the first add and a double add that completes in the same cycle. This only requires one more read port, as the fused op borrows from the other add. The bigger problem is that another result means another write port, and those are expensive. Ideally you would add a triple-add instruction and relatives to get better performance.

If the intermediate result is immediately overwritten by the second add, you can drop the first add entirely, saving a write port and a tracking slot.

You could play checkpoint tricks where you delay, and perhaps never commit, the intermediate result as long as no interrupt occurs. I don't know whether that is feasible, but generalized, this would give a huge performance boost by saving write ports, enabling you to execute wider.

The future is wide, really really wide.