By: dmcq (dmcq.delete@this.fano.co.uk), January 5, 2020 7:33 am
Room: Moderated Discussions
Linus Torvalds (torvalds.delete@this.linux-foundation.org) on January 4, 2020 12:31 pm wrote:
> Malte Skarupke (malteskarupke.delete@this.web.de) on January 4, 2020 11:22 am wrote:
> > Case 3 is the same situation, thread C is still not being run
> > even though fifteen other threads are calling yield().
>
> The problem with that is "yield" is pretty much undefined. The definition of it is literally
> about single queue of a real-time behavior with a real-time scheduler with priorities.
>
> But that "definition" has almost nothing to do with actual usage. There are various random
> people who use it, and some might use it for locking, while some use it for other things.
>
> What you want to use it for is "schedule the right process". But you don't even
> know what the right process is, or if you do you don't tell the system (because sched_yield()
> literally doesn't have that interface), so the kernel has to guess.
>
> In some cases, "sched_yield()" is basically used by processes that say "I'm CPU-intensive, but I'm not important
> for latency, and I don't want to cause problems for others". You'll find various random GUI programs doing
> that because they are threaded, and one thread does things like update the screen, while another thread does
> calculations. The calculation loop (which is still important, just not latency-critical) might have "sched_yield()
> in it as a cheap way of saying "maybe there's a more important UI event going on".
>
> In other cases, it's a performance optimization, where somebody says "I have done my part of the work and
> written it out, now I'm yielding because there's another user that is likely to want to use it, and that other
> user is actually the more heavy CPU hog and should run before I start generating more data for it".
>
> And in others, it's because they are actually using real-time scheduling - perhaps on dedicated CPU doing the
> true Hard real-time kinds of things - and depend on the FIFO definition within their priority group.
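
For anyone who hasn't seen that "defined usage": under SCHED_FIFO, sched_yield() puts the caller at the tail of the run queue for its static priority, so the next runnable thread at the same priority gets the CPU. A minimal sketch of that setup (mine, not from the quoted post; Linux-specific, and it needs root or CAP_SYS_NICE):

    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        /* Put the calling thread in the real-time FIFO class at priority 10. */
        struct sched_param sp = { .sched_priority = 10 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");   /* typically needs CAP_SYS_NICE */
            return 1;
        }

        /* Only here does sched_yield() have a well-defined meaning: go to the
           tail of the run queue for priority 10, letting another runnable
           SCHED_FIFO thread of the same priority (if any) run first. */
        sched_yield();
        return 0;
    }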
>
> That's the defined usage, but even that is really defined only in the traditional UP sense
> of "there is one process running at a time, and we have one single queue of them".
>
> End result: sched_yield() is basically historical garbage. It's often literally wrong even for the defined
> usage because times have moved on from that whole "we run one thing at a time" long long ago. If your
> RT system actually has multiple concurrent threads (as opposed to being limited to a dedicated CPU) it's
> not really all that well-defined even there. But at least there you can still pretend it is.
>
> So what happens? Pretty much every single use of sched_yield() is some random person
> doing something wrong, and they have added it randomly to their load based on the random
> timing behavior on their machine. They may even have done extensive timings on a benchmark
> of their load in order to find exactly where they should yield.
>
> And then the system topology changes, and you have a hundred other sched_yield() users that
> use it for what they tuned things for, and their load behavior is very different indeed...
>
> In other words, if you think your locking should depend on 'sched_yield()', you're simply wrong.
>
> What do you think "sched_yield()" should do in a NUMA environment where you have 'N' NUMA
> domains, and each NUMA domain has 'M' cores? Perhaps totalling hundreds (or even thousands)
> of cores in total, some running your threads, some running something else entirely?
>
> And the process you want it to wake up is on another core entirely, possibly in a different
> NUMA domain, but that other core is right now running something else? Most people who use "sched_yield()"
> expect it to be a very light-weight operation. You said so yourself in your blog post, since
> you timed it. They expect it to be a light-weight operation exactly because they (incorrectly)
> think it's trivial, and they have literally tuned their load for that case.
>
> In fact, the most probable historical use of "sched_yield()" is because they wrote the
> code twenty years ago, really only did have one single CPU, and they had latency issues
> and did that whole "I want to make sure that if there's a UI process, it gets to run".
>
> But even a regular desktop machine today might have 8 cores and 16 threads, and now one of them has a thread
> that says "Hmm, I have nothing to do, but I can't sleep, so I want to randomly yield to somebody else".
>
> Do you think, for example, that the system should do a very expensive "check every single CPU thread to
> see if one of them has a runnable thread but is running something else, and break the CPU affinity of that
> thread and bring it to this CPU because I have a thread that says 'I'm not important' right now".
>
> So yes, that other thread isn't running right now, but bringing it to this
> CPU might slow it down enormously because now all the caches are gone because
> all the historical data it was working on is in another NUMA domain.
>
> But that would actually be the optimal thing for your locking case. You literally want to find that
> special other runnable thread (but not really any thread - you want to magically find the one that isn't
> in a loop doing "yield" and wasting CPU time) that might have run on another CPU, and say "I want 'yield'
> to waste resources to move that thread to my core now, because my core thinks it's done".
>
> And you want to do that regardless of what anybody else used sched_yield() for?
>
> See how simplistic - and self-centered - your expectation is?
>
> And all because you did locking fundamentally wrong.
>
> Yes, it turns out that certain simple schedulers get exactly the behavior you want. The best way
> to get exactly your behavior is to have a single run-queue for the whole system, and make 'sched_yield()'
> always put the thread at the back of that run-queue, and pick the front one instead.
>
> IOW, for your bad, simplistic, and incorrect locking, the optimal scheduler is a stupid one
> that does not try to take any kind of CPU cache placement into account, does not try to at all
> optimize the run-queues to be thread-local, and just basically treats the scheduling decision
> as if we were still running one single CPU core, and that CPU had no cache locality issues.
>
> Guess what? The scheduler that your benchmark thinks is "optimal" is likely the worst of the bunch
> in a lot of other circumstances. But you have a simple benchmark, you have a clear and simple world-view,
> and you think that because of that, your benchmark is meaningful and not giving random numbers, but
> meaningful numbers where low numbers of your benchmark mean that the scheduler is "good".
>
> When pretty much exactly the reverse is the case, because your world-view was
> not "simple", it was "simplistic" - to the point of being actively incorrect.
>
> See what I'm trying to explain here?
>
> The fact is, doing your own locking is hard. You need to really understand the issues, and you need to not
> over-simplify your model of the world to the point where it isn't actually describing reality any more.
>
> And no, any locking model that uses "sched_yield()" is simply garbage. Really. If you use
> "sched_yield()" you are basically doing something random. Imagine what happens if you use
> your "sched_yield()" for locking in a game, and somebody has a background task that does
> virus scanning, updates some system DB, or does pretty much anything else at the time?
>
> Yeah, you potentially didn't just yield cross-CPU, you were yielding
> to something else entirely that has nothing to do with your locking.
>
> sched_yield() is not acceptable for locking. EVER. Not unless you're
> an embedded system running a single load on a single core.
>
> If I haven't convinced you of that by now, I don't know what I can say.
>
> Good locking simply needs to be more directed than what "sched_yield()" can ever give you outside of a UP
> system without caches. It needs to actively tell the system what you're yielding to (and optimally it would
> also tell the system about whether you care about fairness/latency or not - a lot of loads don't).
>
> But that's not "sched_yield()" - that's something different. It's generally something like std::mutex,
> pthread_mutex_lock(), or perhaps a tuned thing that uses an OS-specific facility like "futex", where
> you do the nonblocking (and non-contended) case in user space using a shared memory location, but when
> you get contention you tell the OS what you're waiting for (and what you're waking up).
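
To make that concrete, here is roughly what that futex pattern looks like (my sketch, essentially the classic "Futexes Are Tricky" mutex, not anything from the quoted post). The point is that the fast path is one atomic operation on a shared word, and the kernel only hears about the wait/wake when there is real contention:

    #include <linux/futex.h>
    #include <stdatomic.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    /* 0 = unlocked, 1 = locked, 2 = locked with (possible) waiters */
    static atomic_int lock_word;

    /* glibc has no futex() wrapper, so call the syscall directly. */
    static long futex(atomic_int *uaddr, int op, int val)
    {
        return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
    }

    static void mutex_lock(void)
    {
        int c = 0;
        /* Fast path: uncontended, a single atomic op, no system call. */
        if (atomic_compare_exchange_strong(&lock_word, &c, 1))
            return;
        /* Slow path: mark the lock contended and tell the kernel exactly
           which word we are waiting on. */
        if (c != 2)
            c = atomic_exchange(&lock_word, 2);
        while (c != 0) {
            futex(&lock_word, FUTEX_WAIT, 2);
            c = atomic_exchange(&lock_word, 2);
        }
    }

    static void mutex_unlock(void)
    {
        /* If someone might be waiting, wake exactly one waiter. */
        if (atomic_exchange(&lock_word, 0) == 2)
            futex(&lock_word, FUTEX_WAKE, 1);
    }

The three states (0 free, 1 locked, 2 locked-with-waiters) are what let an uncontended unlock skip the FUTEX_WAKE system call entirely; a production mutex would add spinning and robustness on top of this.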
>
> Btw, locking can be simple too. If you do a lot of work, and the locking is something "occasional",
> you can use things like just passing a token over a pipe (or even a network connection) as your locking
> mechanism. Yes, the individual locking events end up being somewhat expensive, and it doesn't work
> very well at all for what is one very common case - no locking really needed at all because there's
> really only one active thread, but it can actually be a very simple and effective model.
>
> It turns out people get even that simple model wrong (passing tokens around in a pipe
> is exactly what the GNU make "jobserver" code does as a kind of "counting semaphore"
> implementation), and we found a bug in that user-space jobserver locking just last
> month because it got exposed when we tried to make the kernel more efficient.
>
> But at least that was a conceptually very simple model for doing locking: you create what is
> basically a counting semaphore by initializing a pipe with N characters (where N is your parallelism
> for the semaphore), and then anybody who wants to get the lock does a one-byte read() call
> and anybody who wants to release the lock writes a single byte back to the pipe.
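
Just to spell that model out (my sketch of the idea, not the actual GNU make jobserver code): the pipe itself is the counting semaphore, and acquire/release are a one-byte read() and write():

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static int sem_fd[2];   /* sem_fd[0] = read end, sem_fd[1] = write end */

    /* "Create" the semaphore: a pipe pre-loaded with n one-byte tokens. */
    static void pipe_sem_init(int n)
    {
        if (pipe(sem_fd) != 0) {
            perror("pipe");
            exit(1);
        }
        for (int i = 0; i < n; i++)
            while (write(sem_fd[1], "x", 1) != 1)
                ;   /* retry on EINTR */
    }

    /* Acquire: block until one token can be read out of the pipe. */
    static void pipe_sem_acquire(void)
    {
        char c;
        while (read(sem_fd[0], &c, 1) != 1)
            ;   /* retry on EINTR */
    }

    /* Release: put one token back. */
    static void pipe_sem_release(void)
    {
        while (write(sem_fd[1], "x", 1) != 1)
            ;   /* retry on EINTR; N bytes never fill the pipe buffer */
    }

Each blocked acquirer just sleeps in read(); the kernel, not a yield loop, decides who gets the next token.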
>
> Simple, effective, and works even across processes. It doesn't even perform all that horribly
> because read/write is pretty simple in the end - but it's a system call for each lock/unlock,
> which is totally unacceptable if you compare to something like a shared memory location
> that you can just use a single atomic CPU operation on, of course.
>
> But again, depending on what your locking requirements are, that pipe (or even TCP
> socket) may actually be the right thing. It has advantages in that it can work in
> situations where that "we're all threads that share the same memory" isn't true.
>
> And it can actually out-perform your sched_yield() model.
>
> (Side note: the optimization I did in the kernel was to make the wakeup for a pipe write-to-read operation
> really be directed to just one of the waiting readers. And it turned out that the GNU make jobserver
> had a race that means that sometimes it would lose tokens for a while. And the kernel doing that nice
> targeted wakeup ended up making that race trigger all the time. So what currently happens is that if
> you have one writer and ten readers of the pipe, that one writer will wake up all the readers. But
> that will still work perfectly well for your benchmark, because you didn't overcommit a lot of people
> locking, I think, so you basically don't much see the worst-case thundering-herd issues).
>
> So sadly, I can report that the pipe-based locking isn't fair, and won't
> have great latencies for that reason, because we're working around a bug
> in user space where that fairness caused horrendous performance problems.
>
> Dealing with reality is hard. It sometimes means that you need to make your mental model for how locking
> needs to work a lot more complicated. And sometimes it means that you need to keep your OS kernel doing
> stupid things because people inadvertently depended on the timing of said stupid things.
>
> Which, as mentioned, is a problem for sched_yield() too. Lots of users, all of which are basically buggy-by-definition,
> and all you can do is a bad half-arsed job of trying to make it "kind of work".
>
> Reality is messy.
>
> Linus
Gotta agree with all that. In fact, having worked for a while on a TP (transaction processing) system, I tend to the fascist side on enforcing proper handling of locks. You want the system to know when a process is holding a lock and which other processes are waiting; that lets it raise the holder's priority if needed and cope with errors better. The only pure yield-like operation that should ever be done is deep in the kernel, when the only worthwhile thing left to do is idle while waiting for an event.
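
POSIX already exposes a slice of that, for what it's worth: create the mutex with the priority-inheritance protocol and the kernel knows who holds it, so a blocked high-priority waiter temporarily boosts the holder instead of everyone guessing with yields. A minimal sketch, assuming the platform supports PTHREAD_PRIO_INHERIT:

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock;

    int main(void)
    {
        pthread_mutexattr_t attr;

        pthread_mutexattr_init(&attr);
        /* Priority inheritance: while a thread holds this mutex, it runs at
           the priority of the highest-priority thread blocked on it, so the
           scheduler sees the holder/waiter relationship instead of guessing
           (as it must with sched_yield()). */
        if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT) != 0) {
            fprintf(stderr, "PTHREAD_PRIO_INHERIT not supported here\n");
            return 1;
        }
        pthread_mutex_init(&lock, &attr);
        pthread_mutexattr_destroy(&attr);

        pthread_mutex_lock(&lock);
        /* ... critical section ... */
        pthread_mutex_unlock(&lock);

        pthread_mutex_destroy(&lock);
        return 0;
    }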