CMPXCHG latency

By: Linus Torvalds (torvalds.delete@this.linux-foundation.org), April 2, 2008 8:04 am
Room: Moderated Discussions
Zan (zan@hate.spam.net) on 4/2/08 wrote:
>
>What's needed from the atomic instruction in the spinlock()
>function is obviously some kind of memory barrier semantics.

Those semantics don't have to be very strong.

Even with traditional x86 locked ops, the only semantics
you need are that memory accesses appear to execute
in the right order.

That "appear" is important, because there are lots of cases
where it means "they don't execute in the right order at
all, but nobody can tell the difference".

For example, people are used to a CPU buffering stores, as
long as the store buffer is still checked on subsequent
loads so that the stores appear to happen before the
loads from the same CPU do. Everybody does that.

Similarly, in SMP environments, if you have strong ordering
guarantees (like Intel does), that doesn't mean you cannot
re-order memory operations - it just means that you must do
it invisibly to other cores.

And there's a really easy case where memory operations are
guaranteed to be 100% invisible to other cores: when they
are all done in the cache.

In other words, you can speculate and re-order any amount
of memory operations across even a memory barrier, as long
as they hit the local CPU cache and there are no cache
evictions or fills that could expose the fact that the core
actually did the operations out of order.

>In a safe but slow implementation you can flush all the
>pipelines and L/S buffers, then do a bus transaction to
>make sure you have local cache line in exclusive state.

Only insane people do that. I don't think Intel has ever
done it since they introduced data caches (others have: it
is essentially what the original Alpha did for LL/SC, ugh).

>Are you proposing to speculate past the atomic instruction
>and borrow a little bit from transaction memory?

Absolutely.

Intel already does "transactions". What do you think the
whole load order speculation is? They promote loads past
earlier stores even if those stores don't have a known
address - and if it turns out that the store aliased, the
CPU core will just end up restarting the instruction
sequence.

It's no different from a branch mis-prediction, really.

Well, it is different, not in the sense of what you
do when things go wrong, but in the sense that you obviously
need different structures to keep track of your predictions,
and check them.

So memory order speculation basically means that you have
to keep track of each memory operation that you did out of
order, and if there is any event that could show that order
to be wrong (like an alias between an earlier store and a
later load, but with SMP ordering also a violation of the
visible bus - or shared L3 - traffic) you just need to undo
and restart in the right order - exactly the same way you'd
undo and restart a mis-predicted branch.

There is nothing fundamentally new there, except for the
much more complicated tracking of dependencies. And we know
that Intel already tracks a subset of them, since
they already do the load-store reordering, and since I'm
pretty sure they do load-load re-ordering too internally.

So to do load-vs-locked reordering is really not a big
step. They probably didn't need to add a ton of new logic,
just extend their existing logic a bit.

So now, when you hit an atomic op, you don't need to flush
the pipeline at all. You do need to add the slightly
stricter ordering dependencies to the things you track, but
the thing is, since the Intel memory ordering is already so
strict, the only thing a locked op adds to it is really the
store buffer synchronization (ie normally the cpu allows
loads to pass stores in the buffer, even visibly to other
CPUs).

So afaik, if you already have the mechanism for doing all
the memory ordering tracking, you basically just need to
make the load of a locked op depend on the most recent
store in the store buffer, and you're done! And if they
all hit in the cache (stores of course need to hit in
exclusive lines) there will be no restart or flush of the
pipeline, and the locked op acted exactly the same way a
normal RMW operation would have, apart from the small
extra dependency tracking.

Hmm?

The "cheap spinlock" thing is an extension of this, since
doing the lock without requiring exclusive ownership of
the cacheline actually changes visible behavior, but only
with regard to the lock itself (and the ordering that
lock implies!). With a new instruction, that kind of
change is a non-issue, since it wouldn't change any old
and guaranteed behavior, it would just introduce a new model
of synchronization.

Linus