By: Brendan (btrotter.delete@this.gmail.com), January 14, 2020 8:48 am
Room: Moderated Discussions
Hi,
Jukka Larja (roskakori2006.delete@this.gmail.com) on January 14, 2020 6:37 am wrote:
> Brendan (btrotter.delete@this.gmail.com) on January 13, 2020 12:02 pm wrote:
> > Jukka Larja (roskakori2006.delete@this.gmail.com) on January 13, 2020 8:43 am wrote:
> > > Brendan (btrotter.delete@this.gmail.com) on January 13, 2020 7:21 am wrote:
> > > > Jukka Larja (roskakori2006.delete@this.gmail.com) on January 13, 2020 6:41 am wrote:
> > > > > Brendan (btrotter.delete@this.gmail.com) on January 12, 2020 9:19 pm wrote:
> > > > > > anon (anon.delete@this.b.c) on January 12, 2020 11:30 am wrote:
> > > > > > >
> > > > > > > Even if you write your own code extremely carefully, unless you are programming in an embedded context
> > > > > > > where you wrote all the code running on the platform, you can be pretty damn certain there are a
> > > > > > > lot of other parts in your system that are not able to gracefully handle OOM. There are just no
> > > > > > > server or desktop platforms in widespread use that can deal with OOM. It's entirely possible that
> > > > > > > your program sucks up all the memory, and then OOM is triggered by some crucial background service
> > > > > > > that nothing can live without, and which does not have viable alternatives to allocating.
> > > > > > >
> > > > > > > It's because of this that the sane option for desktop or server is to just ignore OOM and
> > > > > > > pretend it doesn't exist. Just hardening your own software is the worst kind of idiocy
> > > > > > > -- it's a massive waste of time, and it doesn't actually protect you from anything.
> > > > > >
> > > > > > This kind of thinking is the reason why the world sucks ("Some software is shit, therefore all software
> > > > > > should be forced to be shit forever and nobody should ever try to make software better").
> > > > >
> > > > > This assumes there's an easy way to "make software better". Sure, if the system runs out of
> > > > > memory during a large parallel compile, the compile can be restarted with less parallelism. And
> > > > > when our custom game asset builder runs out of memory, just restart from some previous known
> > > > > good state. Oh wait, that already happens when we just hit the "Build" button again (or someone
> > > > > makes an SVN commit and the build system automatically starts making a new build).
> > > > >
> > > > > OOMs are practically never a problem for us. Running with not-quite-enough-physical-memory is sometimes
> > > > > a problem, but gets handled nicely by virtual memory. On average, we have plenty of RAM. Sometimes
> > > > > various independent processes with large memory footprints just happen to run at the same time.
> > > > >
> > > > > If I had to come up with some manual system to handle the problem, I don't really see what I could
> > > > > do to improve it. There could be some heuristics about available memory that would affect how many
> > > > > parallel processes are launched. The problem is coming up with good heuristics. I have no idea how
> > > > > much the shader compiler will need this time, or if it needs to run at all. There are hundreds of variables
> > > > > to consider, each only making sense to a couple of programmers in our thirty-ish member team.
> > > > >
> > > > > Virtual memory is by no means necessary to solve this problem easily. We have build
> > > > > servers mostly configured with static page files, so as long as it is possible to
> > > > > just add an equivalent amount of RAM, the OOM problem will be solved just as well.
> > > >
> > > > Maybe the only thing you're thinking about is your build system,
> > > > and the only person who cares about your build system is you?
> > > >
> > > > Consider an HTTP server - do you want to (e.g.) drop one connection or abort
> > > > one request, or do you want to get killed and drop all current connections?
> > > >
> > > > Consider a word processor - do you want to (e.g.) free memory from the "undo buffer" and retry
> > > > (and if that doesn't work display a dialog box informing the user and save the current document
> > > > and shut down gracefully); or do you want to get killed and lose all the user's unsaved work;
> > > > or do you want to kill X11 and screw up every single app that's currently running?
> > > >
> > > > How about a database management engine with 8 GiB of cached data it can easily discard - would you
> > > > want everything that depends on it to suffer "sudden database unavailability" for no reason?
> > >
> > > In my mind, it is "do I want those developers to fix some other bugs, or spend
> > > their time on an already well handled OOM problem?"
> >
> > My perspective is more like: do I trust software written
> > by people who are so incompetent they don't care if
> > it randomly "crashes" (due to OOM) without warning and without
> > any safeguards (and possibly for no sane reason
> > at all)? If the developers are this stupid, how many other things have they completely screwed up/ignored?
>
> There are always compromises to make. Since OOM practically never happens during normal running (and would
> equally seldom be avoidable with any sensible tricks), my view of preparing for it is the same as with any
> (premature) micro-optimization. It doesn't hurt, but I wouldn't want to pay people much for doing it.
Some things I'm just unwilling to accept. For example, I can accept software failing due to OOM, but I can't accept software failing ungracefully (losing data, not informing the user of the problem in the same way it provides feedback for other error conditions, etc.).
> I'm very familiar with OOM on consoles (where there is no virtual memory), and we have thought
> about possibly doing something last-resortish in that case (Unreal Engine, for example, has by default
> a 32 MB extra OOM buffer for tight spots). That, however, is on a platform where one has a promise
> of a certain amount of memory. We haven't actually done anything, because doing it right is hard
> (consider that the memory manager may run out of memory while several threads are requesting more;
> freeing some cache or other unimportant data in such an environment is asking for deadlocks and random
> crashes) and the benefits compared to just reducing overall memory usage would be too small.
>
> > > Personally, I can't see what
> > > benefits of not having virtual memory could offset the benefits of having it.
> >
> > I'm not sure if there's some kind of misunderstanding here,
> > or where your "not having virtual memory" came from.
>
> Sorry, I must have mixed up your argument with someone else's at some point.
>
> > I'm mostly advocating "virtual memory, without overcommit (and with swap)" and thought you are
> > advocating "virtual memory, with overcommit (and therefore with OOM killer preventing any software
> > from doing anything correctly even when it's entirely possible and very beneficial)".
>
> I mostly don't care, since I just increase page file size until OOMs don't happen anymore. I think
> Windows doesn't overcommit and that's the only platform I'm really familiar with (from those platforms
> that have virtual memory). An OOM-induced crash on our build servers basically means we should reboot
> anyway, because it's too much trouble to check if something important has crashed.
Yeah, Windows does a lot of things right, and I suspect that at least part of the problem on *nix operating systems is "fork()". Specifically: if you have a large process with many GiB of data, "fork()" means the OS would have to commit to being able to provide copies of most of that data (every writable mapping becomes "potentially copied on write"), even though those copies are very unlikely to be needed (the child typically modifies very little before it calls "exec()"), and this makes overcommit a lot more attractive.
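As a rough illustration (a minimal sketch in C; the 512 MiB figure is made up and error handling is trimmed): under strict commit accounting, the fork() below must reserve commit for a second copy of the parent's heap, and can fail with ENOMEM, even though the child immediately replaces its whole address space.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        /* Parent holds a large writable heap (many GiB in a real program). */
        size_t big = 512UL * 1024 * 1024;   /* 512 MiB for the sketch */
        char *data = malloc(big);
        if (data == NULL) { perror("malloc"); return 1; }
        memset(data, 1, big);               /* touch it so it is really committed */

        /* With strict commit accounting (no overcommit), fork() must be able
         * to back a copy-on-write duplicate of that heap, so it can fail with
         * ENOMEM here -- even though the child never touches "data". */
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); return 1; }
        if (pid == 0) {
            execlp("ls", "ls", "-l", (char *)NULL);  /* replaces the address space */
            _exit(127);
        }
        waitpid(pid, NULL, 0);
        free(data);
        return 0;
    }

(This is also part of why vfork() and posix_spawn() exist: they let a process spawn a child without duplicating the parent's commit charge.)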
> > > If we have page-file-backed virtual memory, the examples you cite above won't happen that way. Instead
> > > you'll see a gradual decrease in performance, which may get really bad before total failure (whether that's
> > > OOM or slowdown so bad that it disrupts the service, depends on virtual memory configuration).
> >
> > No (or not necessarily). For example, if there's 2 GiB of
> > RAM and 20 GiB of swap space; and software frequently
> > uses 1 GiB of data and rarely uses 21 GiB of data; then it will not be slow at all. Note that this is very
> > common for desktop users - e.g. multiple applications and/or multiple browser tabs, with large amounts of
> > data, that are left "running" (sometimes for days) where all the data isn't actually accessed.
>
> I have a hard time seeing that as anything but a memory leak. In the case of a memory leak it doesn't
> matter what you do. You'll run out of any amount of physical and virtual memory, with or without
> overcommit (except that with overcommit one can leak huge amounts of untouched memory).
I have a hard time seeing how anyone can consider it a memory leak. If the user opens 10 applications and leaves 9 of them "running/idle" while they use one of them, then none of them are leaking any memory (the 9 "running/idle" applications just aren't actually touching their memory).
You said you work on game consoles. If I'm playing a game that consumes 6 GiB of memory, but I pause the game for a few hours while I go shopping, has that 6 GiB of memory been leaked? Does the memory become "unleaked" again when I unpause and continue playing the game later?
> > > It's a nice idea that some process which holds large amounts
> > > of memory for caching purposes could release it
> > > when the server starts to run out. But that doesn't really work
> > > on a general purpose server. What if there are several
> > > independent processes, but they have different ideas of what
> > > "the server starting to run out of memory" means?
> > > The one with the more relaxed limit won't free any of its cache before the other one has freed all of its.
> >
> > A process releasing its own memory when (e.g.) it tries to allocate more and "mmap()" returns ENOMEM and
> > retrying (and succeeding); or cancelling whatever it wanted the extra memory for (and not anything else);
> > or providing feedback directly to user/s (e.g. a dialog box so the user immediately knows what happened
> > and doesn't have to go hunting for obscure logs); or giving the process a chance to save important data
> > to disk so that it's not lost; are all important "first steps" to making software suck less.
>
> You can't presume any of those actions will succeed if the original memory allocation failed,
> unless you somehow know that the OS can do everything you ask without using any extra memory itself.
> Even if you free a significant amount yourself, another process may grab it immediately. Actually,
> unless you are very careful, another thread in your own process may grab it.
>
> So basically you are doing complicated things (that will hardly ever get tested
> and will most likely be buggy anyway) and still can't guarantee a good outcome.
You've implied that you're a software developer, and yet you think this is "complicated" and "untestable"?
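To be concrete, here's a minimal sketch (in C, with a hypothetical "release_some_cache()" hook standing in for whatever discardable data the program already has) of the kind of retry-on-failure wrapper I mean:

    #include <stdlib.h>

    /* Hypothetical hook: ask the program to drop some discardable data
     * (undo history, caches, ...). Returns nonzero if anything was freed. */
    extern int release_some_cache(void);

    /* Allocate; on failure, release caches and retry instead of dying. */
    void *malloc_with_fallback(size_t size) {
        for (;;) {
            void *p = malloc(size);
            if (p != NULL)
                return p;
            if (!release_some_cache())
                return NULL;  /* nothing left to free: the caller cancels the
                                 operation, informs the user, saves data, ... */
        }
    }

Testing it doesn't require actually exhausting memory, either: build with a wrapper allocator that is told to fail the Nth allocation and you can exercise every path deterministically.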
> > They are not the last steps by any means. For example; the system can be augmented by
> > per process quotas, or cgroups, and/or by some kind of "low memory notifications" sent
> > out by OS (to ask process/es to release some amount of memory if they're able).
>
> I'm pretty sure all OSes offer APIs to ask for the amount of available memory. I don't believe
> it matters that it's "pull" instead of "push".
Having to constantly monitor the OS (rather than having the OS notify you) would increase the hassle, especially for "otherwise idle" processes.
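For what it's worth, Windows already has a "push"-style mechanism. A minimal sketch (real API names, but error handling and the actual cache trimming are elided):

    #include <windows.h>
    #include <stdio.h>

    int main(void) {
        /* The OS signals this handle when physical memory runs low, so an
         * otherwise idle process can just block on it instead of polling. */
        HANDLE low = CreateMemoryResourceNotification(LowMemoryResourceNotification);
        if (low == NULL) return 1;

        for (;;) {
            WaitForSingleObject(low, INFINITE);
            printf("low memory: releasing discardable data\n");
            /* ... drop caches, trim undo buffers, etc. ... */
            Sleep(1000);  /* avoid spinning while the condition persists */
        }
    }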
> Most projects have better things to do.
If most people were incompetent, would you make incompetence mandatory?
- Brendan