By: Brendan (btrotter.delete@this.gmail.com), January 16, 2020 7:16 am
Room: Moderated Discussions
Hi,
Gabriele Svelto (gabriele.svelto.delete@this.gmail.com) on January 15, 2020 5:56 am wrote:
> Brendan (btrotter.delete@this.gmail.com) on January 15, 2020 5:20 am wrote:
> > No, you're assuming overcommit exists and it's leading you to false assumptions.
> >
> > If an allocation fails and there is overcommit; yes, you're probably screwed regardless.
> >
> > If an allocation fails and there is no overcommit; then it's not a problem at all -
> > you're able to do anything you like to handle that (without needing any notifications)
> > without worrying about "OOM killer" preventing you from doing anything sane.
>
> No, you assume that you can't detect the OOM killer killing you. You can, you just need a handler
> for SIGBUS. The only scenario in which you can't detect the OOM killer is IIRC when the kernel
> needs memory and then it will take it no matter what. I don't know how the Windows kernel deals
> with that but I doubt it will fall over and panic instead of killing a user process.
Oh, that sounds easy. When my process receives a new TCP/IP connection request it creates a thread for it (memory for thread stack, etc), then allocates thread local data I use to keep track of the connection and some buffers, and because all that "succeeded" I send a "You connected, everything is fine, here's RSA public key info for login details" packet back to the client. Then, because of what I allocated, a completely different process that has nothing to do with me gets a SIGBUS signal and (because apparently every process anyone will ever write will immediately know about my thread's memory usage) somehow will inform my process (without using any extra memory) that the kernel is an idiotic piece of shit. Then my process travels back in time so that it can tell the client "Sorry, not enough resources for a new connection at this time" so that the end user (who is using a different computer) can see a nice dialog box telling them why they won't be able to log in.
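For contrast, here's roughly what that connection setup looks like when allocation failure is reported where it happens (a rough sketch only, not my actual server code - conn_state, start_connection() and the exact reply text are made up for illustration): the client can be refused at the point of failure, no time travel required.

// Rough sketch: per-connection-thread server, strict commit assumed.
#include <memory>
#include <new>
#include <system_error>
#include <thread>
#include <unistd.h>
#include <vector>

struct conn_state {
    int fd;
    std::vector<char> rx_buf;
    std::vector<char> tx_buf;
};

static void handle_connection(std::unique_ptr<conn_state> cs)
{
    // ... RSA key exchange, login, request handling ...
    close(cs->fd);
}   // conn_state (and its buffers) are freed here, easing pressure on other threads

// Called from the accept() loop with the freshly accepted socket.
static bool start_connection(int fd)
{
    try {
        auto cs = std::make_unique<conn_state>();
        cs->fd = fd;
        cs->rx_buf.resize(64 * 1024);
        cs->tx_buf.resize(64 * 1024);
        std::thread(handle_connection, std::move(cs)).detach();
        return true;    // only now send the "you connected, here's the key info" packet
    } catch (const std::bad_alloc&) {
        // connection state or buffer allocation failed
    } catch (const std::system_error&) {
        // thread (stack) creation failed
    }
    // Without overcommit, reaching this point genuinely means "out of resources",
    // so this client can be refused politely - instead of some unrelated process
    // being killed later, when pages that were only promised are first touched.
    static const char msg[] = "Sorry, not enough resources for a new connection\r\n";
    (void)write(fd, msg, sizeof msg - 1);
    close(fd);
    return false;
}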
> > No, it's easy to pre-allocate whatever you need to handle errors like this (e.g. during process
> > startup) to ensure you don't need to allocate when you can't. Sometimes that's not even necessary
> > - in my case (server with one thread per connection) I can output to log without allocating
> > and then terminate the thread/connection (which also causes its "thread local data" to be
> > freed and helps other threads/connections avoid future allocation failures).
>
> OK, your assumption of "handling OOM situations correctly" is:
> - You can recover from the failure of every allocation
> - You will have error handling for every allocation
>
> "log and abort". That is not what I call handling an OOM condition. We have plenty of fallible
> allocation points in Firefox where we can actually fully recover from an allocation failure
> and go on, but it's not possible to do that for every single allocation, especially if you
> don't control all your dependencies (and that's practically impossible nowadays).
You control which dependencies you depend on; if you choose to depend on a hideous design failure (e.g. a library whose functions need to allocate memory but don't tell you when they've failed, forcing you to assume something succeeded when it didn't), then that is your fault for making bad choices.
> That also assumes that all allocation points (including indirect ones) have proper checking. Because if they don't
> you'll have NULL pointers spreading through your data possibly causing a random crash at a later point where you
> can do absolutely nothing about it. You can't just assume that software has perfect error detection.
Um, what?
If there's a library function that always returns "success" (but the docs don't say that success is guaranteed in all future versions of the library) and another programmer calls it without checking the return value (that is always "success"), then I tell the other programmer they're incompetent and refuse to work with that programmer ever again (because a future version of that library might return something else).
Do you honestly think I worry about "without proper checking you'll have NULL pointers spreading everywhere"?
> > Of course if you do have overcommit, then things like modifying data or writing to a log
> > file could cause "committed to but not actually allocated" pages to be allocated (and trigger
> > "OOM killer"); so trying to handle OOM conditions does become a "tip-toe through the minefield"
> > nightmare scenario that you've falsely assumed applies to "without overcommit".
>
> I'm not sure what you're talking about. The OOM killer will send you a signal which you can handle in a signal
> handler and you can even resume from it.
Have you ever tried to use signals before? You get a signal while you're in the middle of who-knows-what; then what? Cause a deadlock trying to acquire the lock you need in order to free memory you were using to cache something? Inform a thread that it needs to terminate to save other threads, and wait for it to also receive SIGBUS and cause more problems?
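To spell it out, here is about the most a handler like that is even allowed to do (a sketch, not code from anywhere in particular, and it takes the "you get a catchable signal under memory pressure" premise at face value). Only async-signal-safe calls are legal inside a signal handler - no malloc/free, no pthread_mutex_lock, no C++ allocation - so it cannot actually free a cache or take the lock that guards one:

#include <signal.h>
#include <unistd.h>

static volatile sig_atomic_t low_memory = 0;

extern "C" void on_sigbus(int)
{
    low_memory = 1;                                    // async-signal-safe
    static const char msg[] = "SIGBUS: memory pressure\n";
    (void)write(STDERR_FILENO, msg, sizeof msg - 1);   // write() is safe; fprintf() is not
    // Anything real - dropping caches, terminating a connection - has to be
    // deferred to a normal thread that notices the flag, *if* it gets to run
    // (and can make progress without allocating) before it's too late.
}

int main()
{
    struct sigaction sa = {};
    sa.sa_handler = on_sigbus;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGBUS, &sa, nullptr);
    // ... the rest of the program ...
}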
> The process doesn't disappear into thin air unless the kernel is running
> out of memory and then it's obviously better to kill a user process than the entire machine. As I pointed out
> the above applies to Windows as well. Also, you can control the OOM killer from user-space.
LOL. It's not possible for a sane person to read the first 2 paragraphs of that page without loud alarm bells screaming "WTF? WTF? WTF?" going off in their head. The entire history of "OOM killer" on Linux (all the complaints, all the "admin tunables/knobs", all the extra interfaces slapped on top) is like a long-running series of "The Three Stooges Attempt To Work Around Symptoms of a Batshit Insane Design Flaw".
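(For the record, most of that wonderful user-space "control" amounts to scribbling a priority hint into a proc file - /proc/<pid>/oom_score_adj, from -1000 to exempt the process up to +1000 to volunteer it as the preferred victim. Something like the following, as an illustrative sketch only; making a process less killable normally needs CAP_SYS_RESOURCE.)

#include <fstream>

// Hint the OOM killer about this process; adj must be in [-1000, 1000].
static bool set_oom_score_adj(int adj)
{
    std::ofstream f("/proc/self/oom_score_adj");
    f << adj;
    f.flush();
    return f.good();
}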
> > https://stackoverflow.com/questions/45605806/out-of-memory-allocating-bytes
> >
> > Note: "failed to allocate" was correctly handled by logging/reporting the error (instead
> > of letting a random and potentially unrelated process get terminated by "OOM killer").
>
> LOL. GCC is just quitting at that point possibly leaving behind a half-written
> object file. I don't see how that's better than dying at a random point.
>
> GCC is also not a really good example because it handles SIGBUS on Linux
> and when it does it tries to quit more gracefully (though not by much).
Yes; GCC was just the easiest example of "bare minimum handling of allocation failures".
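That "bare minimum" is nothing more exotic than the usual xmalloc()-style wrapper (loosely modelled on what many tools do - this isn't GCC's actual code): report which allocation failed and exit cleanly, instead of crashing somewhere unrelated later.

#include <cstdio>
#include <cstdlib>

static void* xmalloc(std::size_t size)
{
    void* p = std::malloc(size);
    if (!p) {
        std::fprintf(stderr, "out of memory allocating %zu bytes\n", size);
        std::exit(EXIT_FAILURE);   // at least the user (and the build log) learns why
    }
    return p;
}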
> > You're being disingenuous by assuming a system without overcommit
> > will be set up the same as a system with overcommit.
> >
> > If you have one system that overcommits, and another system that doesn't overcommit but is given
> > adequate swap space to ensure that it can handle the exact same load as the first system; then
> > the only difference will be that the first system has "OOM killer" and may terminate unrelated
> > processes (which is incredibly idiotic) and the second system will be able to handle OOM in
> > whatever manner the developers wanted to (which can be significantly less idiotic).
> >
> > The important thing is how OOM is handled, not how many cents
> > worth of extra swap space are needed to avoid idiocy.
>
> You're handwaving stuff away without presenting data to back your assumptions. I have crash data
> - from millions of Firefox users - that shows over two thirds of OOM crashes on Windows happening
> on machines with plenty of physical memory available. That means that Windows' lack of overcommit
> causes a very large amount of crashes that would simply not happen on a system with overcommit.
Heh, no. What you have is proof that a well-known resource hog is actually a resource hog. Look at the crash reports - they're all saying "90%+ of system memory used, available page file size reduced to zero". Tell me, how do you think Windows is supposed to hibernate properly when you've gobbled 100% of RAM and 100% of disk? The only problem I see here is that Windows isn't using per-process quotas to stop your memory leak sooner.
As far as I can tell (I'm "not great" at C++ and only looked at small scraps of a large project), the reason they're crashes (and not "reductions in the amount of data cached for later" or "dialog boxes telling the user one of their tabs needs to be closed" or anything else that could be considered acceptable) is that Firefox screws up operator new so that (despite the "noexcept(false)") it fails to throw an exception on OOM. That prevents the caller - who has the knowledge needed to figure out the best way to handle OOM, and/or can send the error up the call stack to whatever is most suited to handling it - from being involved, and execution just ends up at an incredibly stupid "We might proceed to a stage 2 in which an attempt is made to reclaim memory" comment in unfinished code that will never attempt to reclaim memory (because actually trying to reclaim memory would be horrifically complex when you don't have any of the caller's knowledge about the current context - e.g. which mutexes are currently held, etc).
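What I'd expect instead is the boring, standard behaviour (a sketch of the idea, not Firefox's allocator, and skipping the new_handler loop a real replacement would have): operator new reports failure by throwing std::bad_alloc, so the caller - the one that actually knows which mutexes it holds and what it can afford to shed - gets to decide what happens. Cache and Entry in the comment are hypothetical.

#include <cstdlib>
#include <new>

void* operator new(std::size_t size)
{
    if (void* p = std::malloc(size ? size : 1))
        return p;
    throw std::bad_alloc();        // propagate to whoever can actually handle it
}

void operator delete(void* p) noexcept
{
    std::free(p);
}

// A caller with context can then do something sensible, e.g.:
//
//   try {
//       cache.add(new Entry(args));
//   } catch (const std::bad_alloc&) {
//       cache.shrink();           // this caller knows no locks are held here
//   }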
- Brendan