By: Linus Torvalds (torvalds.delete@this.linux-foundation.org), January 3, 2020 7:05 pm
Room: Moderated Discussions
Beastian (no.email.delete@this.aol.com) on January 3, 2020 11:46 am wrote:
> I'm usually on the other side of these primitives when I write code as a consumer of them,
> but it's very interesting to read about the nuances related to their implementations:
The whole post seems to be just wrong, and is measuring something completely different than what the author thinks and claims it is measuring.
First off, spinlocks can only be used if you actually know you're not being scheduled while using them. But the blog post author seems to be implementing his own spinlocks in user space with no regard for whether the lock user might be scheduled or not. And the code used for the claimed "lock not held" timing is complete garbage.
It basically reads the time before releasing the lock, and then it reads it after acquiring the lock again, and claims that the time difference is the time when no lock was held. Which is just inane and pointless and completely wrong.
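For concreteness, the pattern in question looks roughly like the sketch below. This is a paraphrase for illustration, not the blog's actual code; the lock and the variable names are made up, and the important part is the ordering of "read the clock" versus "release the lock":

```cpp
#include <atomic>
#include <chrono>
#include <cstdint>

// Illustrative user-space test-and-set spinlock -- a paraphrase, not the blog's exact code.
std::atomic<bool> locked{false};
void spin_lock()   { while (locked.exchange(true, std::memory_order_acquire)) { /* spin */ } }
void spin_unlock() { locked.store(false, std::memory_order_release); }

// Timestamp published by whoever releases the lock, read by the next acquirer.
std::atomic<std::int64_t> release_time_ns{0};

static std::int64_t now_ns() {
    using namespace std::chrono;
    return duration_cast<nanoseconds>(steady_clock::now().time_since_epoch()).count();
}

// The criticized measurement: "time the lock was free" is computed as
// (time right after I acquired it) - (time the previous holder read just before releasing).
void critical_section_iteration(std::int64_t& worst_gap_ns) {
    spin_lock();
    std::int64_t acquired_at = now_ns();
    std::int64_t gap = acquired_at - release_time_ns.load(std::memory_order_relaxed);
    if (gap > worst_gap_ns) worst_gap_ns = gap;

    // ... do the protected work ...

    // (1) read the clock, (2) then release.  If the scheduler preempts the thread
    // between (1) and (2), the lock stays held for a whole time slice while the
    // published timestamp claims it was released long ago -- which is exactly the
    // problem described in points (a)-(c) below.
    release_time_ns.store(now_ns(), std::memory_order_relaxed);
    spin_unlock();
}
```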
That's pure garbage. What happens is that
(a) since you're spinning, you're using CPU time
(b) at a random time, the scheduler will schedule you out
(c) that random time might be just after you read the "current time", but before you actually released the spinlock.
So now you still hold the lock, but you got scheduled away from the CPU, because you had used up your time slice. The "current time" you read is basically now stale, and has nothing to do with the (future) time when you are actually going to release the lock.
Somebody else comes in and wants that "spinlock", and that somebody will now spin for a long while, since nobody is releasing it - it's still held by that other thread that was just scheduled out. At some point, the scheduler says "ok, now you've used your time slice", and schedules the original thread, and now the lock is actually released. Then another thread comes in, gets the lock again, and then it looks at the time and says "oh, a long time passed without the lock being held at all".
And notice how the above is the good scenario. If you have more threads than CPU's (maybe because of other processes unrelated to your own test load), maybe the next thread that gets scheduled isn't the one that is going to release the lock. No, that one already used up its timeslice, so the next thread scheduled might be another thread that wants that lock that is still being held by the thread that isn't even running right now!
So the code in question is pure garbage. You can't do spinlocks like that. Or rather, you very much can do them like that, and when you do that you are measuring random latencies and getting nonsensical values, because what you are measuring is "I have a lot of busywork, where all the processes are CPU-bound, and I'm measuring random points of how long the scheduler kept the process in place".
And then you write a blog post blaming others, not understanding that it's your incorrect code that is garbage, and is giving random garbage values.
And then you test different schedulers, and you get different random values that you think are interesting, because you think they show something cool about the schedulers.
But no. You're just getting random values because different schedulers have different heuristics for "do I want to let CPU-bound processes use long time slices or not?". Particularly in a load where everybody is just spinning on the silly and buggy benchmark, so they all look like they are pure throughput benchmarks and aren't actually waiting on each other.
You might even see issues like "when I run this as a foreground UI process, I get different numbers than when I run it in the background as a batch process". Cool interesting numbers, aren't they?
No, they aren't cool and interesting at all, you've just created a particularly bad random number generator.
So what's the fix for this?
Use a lock where you tell the system that you're waiting for the lock, and where the unlocking thread will let you know when it's done, so that the scheduler can actually work with you, instead of (randomly) working against you.
Notice how, when the author uses an actual std::mutex, things just work fairly well, regardless of scheduler. Because now you're doing what you're supposed to do. Yeah, the timing values might still be off - bad luck is bad luck - but at least now the scheduler is aware that you're "spinning" on a lock.
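As a point of comparison, the "do what you're supposed to do" version is roughly the following sketch (again illustrative, not the author's code): the mutex puts contended waiters to sleep in the kernel and wakes them on unlock, so the scheduler knows exactly who is waiting for whom.

```cpp
#include <mutex>
#include <chrono>
#include <cstdint>

std::mutex m;                       // sleeping lock: contended waiters block in the kernel
std::int64_t release_time_ns = 0;   // protected by m, so no atomics needed

static std::int64_t now_ns() {
    using namespace std::chrono;
    return duration_cast<nanoseconds>(steady_clock::now().time_since_epoch()).count();
}

void critical_section_iteration(std::int64_t& worst_gap_ns) {
    std::lock_guard<std::mutex> guard(m);   // blocks (sleeps) instead of burning CPU
    std::int64_t gap = now_ns() - release_time_ns;
    if (gap > worst_gap_ns) worst_gap_ns = gap;

    // ... do the protected work ...

    release_time_ns = now_ns();
}   // unlock happens here; a blocked waiter is woken by the kernel
```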
Or, if you really want to use spinlocks (hint: you don't), make sure that while you hold the lock, you're not getting scheduled away. You need to use a realtime scheduler for that (or be the kernel: inside the kernel spinlocks are fine, because the kernel itself can say "hey, I'm doing a spinlock, you can't schedule me right now").
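For completeness, the "don't schedule me away" part on Linux would look roughly like the sketch below. This is illustrative only, assuming you have the required privileges (root, CAP_SYS_NICE, or a suitable RLIMIT_RTPRIO), and it comes with everything said in the next paragraph.

```cpp
#include <pthread.h>
#include <sched.h>
#include <cstdio>

// Ask for a real-time FIFO priority so the spinning thread is not preempted by
// ordinary SCHED_OTHER (CFS) threads while it holds the spinlock.  Needs privileges,
// and brings all the caveats below: unfairness, priority inversion, and the ability
// to hang the whole machine if you get it wrong.
bool make_thread_realtime(pthread_t thread, int priority = 1) {
    sched_param param{};
    param.sched_priority = priority;   // 1..99 for SCHED_FIFO
    int err = pthread_setschedparam(thread, SCHED_FIFO, &param);
    if (err != 0) {
        std::fprintf(stderr, "pthread_setschedparam failed: %d\n", err);
        return false;
    }
    return true;
}
```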
But if you use a realtime scheduler, you need to be aware of the other implications of that. There are many, and some of them are deadly. I would strongly suggest against trying. You'll likely get all the other issues wrong anyway, and now some of the mistakes (like unfairness or priority inversions) can literally hang your whole thing entirely, and things go from "slow because I did bad locking" to "not working at all, because I didn't think through a lot of other things".
Note that even OS kernels can have this issue - imagine what happens in virtualized environments with overcommitted physical CPU's scheduled by a hypervisor as virtual CPU's? Yeah - exactly. Don't do that. Or at least be aware of it, and have some virtualization-aware paravirtualized spinlock so that you can tell the hypervisor that "hey, don't do that to me right now, I'm in a critical region".
Because otherwise you're going to be scheduled away at some point while you're holding the lock (perhaps after you've done all the work, and you're just about to release it), and everybody else will be blocking on your incorrect locking while you're scheduled away and not making any progress. All spinning on CPU's.
Really, it's that simple.
This has absolutely nothing to do with cache coherence latencies or anything like that. It has everything to do with badly implemented locking.
I repeat: do not use spinlocks in user space, unless you actually know what you're doing. And be aware that the likelihood that you know what you are doing is basically nil.
There's a very real reason why you need to use sleeping locks (like pthread_mutex etc).
In fact, I'd go even further: don't ever make up your own locking routines. You will get them wrong, whether they are spinlocks or not. You'll get memory ordering wrong, or you'll get fairness wrong, or you'll get issues like the above "busy-looping while somebody else has been scheduled out".
And no, adding random "sched_yield()" calls while you're spinning on the spinlock will not really help. It will easily result in scheduling storms while people are yielding to all the wrong processes.
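For reference, the pattern being dismissed is roughly this (an illustrative sketch, not anybody's actual code):

```cpp
#include <atomic>
#include <sched.h>

std::atomic<bool> locked{false};

// Yield-while-spinning: sched_yield() only says "run somebody else for a moment",
// it does not say *who* you are waiting for.  The scheduler may keep bouncing
// between the waiters while the actual lock holder stays scheduled out, which is
// the "scheduling storm" described above.
void spin_lock_with_yield() {
    while (locked.exchange(true, std::memory_order_acquire)) {
        sched_yield();   // a hint, not a hand-off: the holder may not run any sooner
    }
}
```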
Sadly, even the system locking isn't necessarily wonderful. For a lot of benchmarks, for example, you want unfair locking, because it can improve throughput enormously. But that can cause bad latencies. And your standard system locking (eg pthread_mutex_lock()) may not have a flag to say "I care about fair locking because latency is more important than throughput".
So even if you get locking technically right and are avoiding the outright bugs, you may get the wrong kind of lock behavior for your load. Throughput and latency really do tend to have very antagonistic tendencies wrt locking. An unfair lock that keeps the lock with one single thread (or keeps it to one single CPU) can give much better cache locality behavior, and much better throughput numbers.
But that unfair lock that prefers local threads and cores might thus directly result in latency spikes when some other core would really want to get the lock, but keeping it core-local helps cache behavior. In contrast, a fair lock avoids the latency spikes, but will cause a lot of cross-CPU cache coherency, because now the locked region will be much more aggressively moving from one CPU to another.
In general, unfair locking can get so bad latency-wise that it ends up being entirely unacceptable for larger systems. For smaller systems the unfairness might not be as noticeable, but the performance advantage is, so the system vendor will pick that unfair but faster lock queueing algorithm.
(Pretty much every time we picked an unfair - but fast - locking model in the kernel, we ended up regretting it eventually, and had to add fairness).
So you might want to look into not the standard library implementation, but specific locking implementations for your particular needs. Which is admittedly very very annoying indeed. But don't write your own. Find somebody else that wrote one, and spent the decades actually tuning it and making it work.
Because you should never ever think that you're clever enough to write your own locking routines. Because the likelihood is that you aren't (and by that "you" I very much include myself - we've tweaked all the in-kernel locking over decades, and gone from simple test-and-set to ticket locks to cacheline-efficient queuing locks, and even people who know what they are doing tend to get it wrong several times).
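For context on that progression, a ticket lock is the "add fairness" step: waiters are served strictly in arrival order. A minimal sketch follows (illustrative only, and still a spinlock, so in user space it still has every problem described above):

```cpp
#include <atomic>
#include <cstdint>

// Minimal ticket lock: waiters are served in the order they arrived, which is the
// fairness property a plain test-and-set lock lacks.
class TicketLock {
    std::atomic<std::uint32_t> next_ticket{0};   // take-a-number dispenser
    std::atomic<std::uint32_t> now_serving{0};   // ticket currently allowed in

public:
    void lock() {
        std::uint32_t my_ticket = next_ticket.fetch_add(1, std::memory_order_relaxed);
        while (now_serving.load(std::memory_order_acquire) != my_ticket) {
            // spin; every waiter polls the same cache line, which is why the kernel
            // later moved on to queuing (MCS-style) locks that spin on local lines
        }
    }
    void unlock() {
        now_serving.fetch_add(1, std::memory_order_release);
    }
};
```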
There's a reason why you can find decades of academic papers on locking. Really. It's hard.
Linus