By: Brendan (btrotter.delete@this.gmail.com), January 26, 2020 11:17 pm
Room: Moderated Discussions
Hi,
Jukka Larja (roskakori2006.delete@this.gmail.com) on January 25, 2020 7:29 am wrote:
> Brendan (btrotter.delete@this.gmail.com) on January 25, 2020 1:46 am wrote:
> > Jukka Larja (roskakori2006.delete@this.gmail.com) on January 24, 2020 9:42 pm wrote:
> > > Brendan (btrotter.delete@this.gmail.com) on January 24, 2020 6:27 pm wrote:
> > > > Jukka Larja (roskakori2006.delete@this.gmail.com) on January 23, 2020 6:36 am wrote:
> > > > > Brendan (btrotter.delete@this.gmail.com) on January 22, 2020 3:28 pm wrote:
> > > > > > Jukka Larja (roskakori2006.delete@this.gmail.com) on January 22, 2020 6:12 am wrote:
> > > > > > > Brendan (btrotter.delete@this.gmail.com) on January 22, 2020 3:32 am wrote:
> > > > > > >
> > > > > > > > The right way to think about the way memory management works in Windows (unless swap is
> > > > > > > > disabled by an idiot) is to think of physical memory as nothing more than a cache of the
> > > > > > > > page file; where "max. virtual memory size = page file size". Physical memory size and
> > > > > > > > physical memory availability are irrelevant - they have no effect on OOM whatsoever.
> > > > > > >
> > > > > > > No, that's not right (you may be right for some older Windows version, but I'm pretty sure even
> > > > > > > 2000 didn't work like that). By default, Windows allocates very small page file, especially on a
> > > > > > > system with large RAM. Like I wrote in the other post, on 64 GB system, page file was 9 GB[1].
> > > > > >
> > > > > > OK; my description was a little incomplete, and could've been more like "think of physical memory
> > > > > > as nothing more than a cache of the page file; where "max. virtual memory size = page file size",
> > > > > > and where "page file size" may not be a fixed size and may grow as needed (if possible)."
> > > > > >
> > > > > > I probably should've also said "swap size" instead of "page file size" to cover non-default configurations.
> > > > >
> > > > > Still wrong. Windows is quite happy to run with page file significantly smaller than physical memory
> > > > > (running completely without page file may be stupid for unrelated reasons, of which I'm not that
> > > > > familiar with). That's probably the normal mode of operation with current memory amounts.
> > > >
> > > > Why wrong?
> > > >
> > > > I'd expect that with page file significantly smaller than physical memory Windows
> > > > would only commit to "max. swap"; and then the rest of the physical memory (that can't
> > > > be used for "cache of swap") would be used for "cache of file data" (and anything
> > > > else Windows might cache that is not part of "memory processes committed to").
> > >
> > > I'm not sure I understand, but if I do, I should not be able to run a program that does "malloc(15GB)"
> > > on my system with 5 GB page file (that's not the case in case you're wondering).
> >
> > You shouldn't be able to "malloc(15GB)" on a system that can't increase the size of the
> > page file up to 15+ GiB later (if/when necessary). Of course if you don't use (modify)
> > all of the 15 GB you allocated it might never increase the size of the page file.
>
> Just to be extra sure (though there really wasn't any doubt really), before I wrote
> my last comment I wrote a program that allocated 15 GBs, wrote it full of stuff and
> exited. Commit went up by 15 GBs (at the time of allocation, it wasn't increasing gradually)
> just as expected and page file size didn't increase, just as expected.
>
> If you actually use Windows with normal out-of-the box settings,
> it is trivial to note that it doesn't work like you describe.
For default settings, you'd need to fill up your file system with normal files to determine how much "max. commit size" depends on "how much Windows could increase the page file size later if necessary". That is not as trivial.
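The allocate-and-touch experiment described above can be sketched portably. This is my own scaled-down illustration (not the original test program), using an anonymous mapping in place of `malloc`:

```python
import mmap

def allocate_and_touch(size_bytes, stride=4096):
    """Anonymously map `size_bytes` and write one byte every `stride` bytes.

    On Windows, creating the mapping is what raises the system commit charge
    (all at once, as observed above); the writes are what force pages to
    actually be backed by RAM or the page file."""
    buf = mmap.mmap(-1, size_bytes)  # anonymous, demand-zero memory
    for off in range(0, size_bytes, stride):
        buf[off] = 1  # touch one byte per page
    return buf

# Scaled down from the 15 GB experiment so it is safe to run anywhere.
demo = allocate_and_touch(16 * 1024 * 1024)
```

Run with a size near the commit limit and the same "commit jumps immediately, page file grows only when touched" behaviour should be observable in Task Manager.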
> > > > Note: All files (and parts of files) mapped as "read-only" wouldn't be included in
> > > > "how much OS has committed to" (those area can be backed by file system not swap).
> > >
> > > Yes, I mentioned this earlier (our game Trine 4 maps about 14 GBs of
> > > asset files on start and that doesn't add up against commit limits).
> > >
> > > > Oh my - I did some checking and it seems that Windows doesn't even support swap partitions
> > > > (it only supports "page file on normal file system"); so it always has to worry about
> > > > normal files and swap competing for the same space, can't avoid "file system meta-data"
> > > > overhead, and has to worry about file fragmentation for the page file.
> > >
> > > Windows doesn't start with zero size page file. It starts with some relatively large one (like five GBs
> > > on my 32 GB laptop, nine on that 64 GB system I mentioned earlier) and generally expects not to need
> > > to increase the size. Page file persists over boots, so fragmentation is only an issue if one fragments
> > > the disk before page file is created (or page file size is increased a lot on a fragmented disk).
> >
> > I'm more used to the old rule of thumb, which is "twice as much swap as you have RAM"
> > (e.g. 64 GiB of swap on a system with 32 GiB of RAM), and your idea of "relatively large"
> > (5 GiB vs. 64 GiB) sounds more like my idea of tiny (less than 10% of 64 GiB).
>
> I believe that rule of thumb hasn't been relevant for about 20 years, but I'm not
> 100 % sure. Could have been changed as late as some Windows XP service pack.
I doubt the rule of thumb was ever considered accurate for any specific case. If you assume that the user expects the system to cope with "RAM full of working set" (which is reasonable - more RAM is a waste of $$ and less RAM is a performance problem), then ideal swap size depends on the ratio of "working set size to memory OS committed to" which varies for different software.
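To make that ratio argument concrete, here is a toy calculation (my own illustrative model, not anything Windows or the old rule of thumb actually specifies): if software typically commits R times its working set, and RAM is sized to hold the working set, swap has to absorb the remaining (R - 1) × RAM.

```python
def suggested_swap_gib(ram_gib, commit_per_working_set):
    """Toy model: swap = (R - 1) * RAM, where R is the typical ratio of
    committed memory to working set size. R = 3 reproduces the old
    'twice as much swap as RAM' rule; R near 1 needs almost no swap."""
    return max(0.0, (commit_per_working_set - 1) * ram_gib)
```

The point is only that R varies wildly between workloads, so no single multiple of RAM was ever "right".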
> For the usual usage patterns on laptops and desktops, large page file doesn't make sense. I'm
> a very unusual case with hundreds of tabs open in Firefox and Chrome, couple of instant messengers
> (each hundreds of megabytes, because of course you need a browser to make a desktop app these
> days), Thunderbird (working set 600 MBs), LibreOffice Calc and some smaller things. All that
> nets commit of about 16 GBs. With common usage patterns (mostly running one or two apps at a
> time, closing the browser and all tabs after each session) you're at fraction of that.
>
> Then consider how small the SSDs are and it makes even less sense. For common usage patterns
> more memory means less swap space. If user does something special, then page file will grow,
> but it makes no sense to steal even a single gigabyte out of some 120-250 GB SSD.
"Usual usage patterns" are a myth. E.g. if you ask an accountant, an aerospace engineer, an architect, and an artist what software they use, you'll get 4 very different answers. I also find it odd that you (a game developer) didn't include games as part of your "common" usage.
For SSD sizes, a GiB is worth about 12 cents now, so 64 GiB of "SSD space" is worth less than a single meal at a decent restaurant. However, I still think "tiered swap" could have significantly better economics (e.g. $1 of SSD space plus $2 of "rotating disk" space), especially when you expect most of the swap to only hold "committed to but not modified" pages, so its performance would be irrelevant. The catch is that I'm still skeptical about "SSD longevity" claims (especially for consumer grade hardware, where I think they're all based on "disk not full" statistics), and would want more "unlikely to be used" SSD space (in the form of swap space that's not expected to actually be used) to give wear leveling more freedom to level wear (improving longevity and reducing replacement costs).
> > Yeah, I get the impression that the "auto-increase page file size' in Windows isn't great (I'd prefer
> > swap partitions; and can't see why it doesn't make processes wait while it increases page file size
> > and can't be better at leaving more of page file's disk space allocated across reboots).
>
> I think the algorithm tries to increase page file size beforehand, but when something
> like 32^32 compile processes get launched during a three second window or some shader
> compiler just decides it needs 28 GBs for something, it's hard to keep up.
>
> As long as I've used SSDs, I haven't actually seen the Windows warning popup on my own systems, even
> when page file size has been increased. I've only seen it on our build servers, which do experience
> those extreme cases of memory consumption increases. So maybe the page file increase algorithm is
> actually good enough for most users and MS just expects server people to set the size by hand.
>
> That said, my experiment with 15 GB memory allocation pushed the total commit to a bit over 34 GBs (was running
> Visual Studio at the time too), while the limit (currently, with that 5 GB page file still in place) is 38.
>
> Just for science I tried allocating 25 GBs. The allocation took a long time, but there was no warning popup from
> Windows. The program then ran quickly (it just touched every 2048th byte in allocation) and quit. To my surprise,
> the page file was now 8 GBs. I ran the program again and monitored page file more closely. It was actually 16
> GBs during program run but was immediately (within a second or two) reduced to 8 GBs after the run.
>
> So it seems Windows is actually pretty good in this runtime page file resizing. Not sure
> whether that's because the SSD in this laptop is quite a bit faster than anything in our
> servers or if there's been some recent change in how the size increases are handled.
>
> Out of interest, I also tried larger allocations. 75 GB one failed, even though there
> would have been about 13 GBs free space on disk after appropriate page file size increase.
> 70 GB one worked, and system commit went to about 90 GBs as expected.
>
> Since that was so fun, I decided to try allocating that 70 GBs 2 kBs at a time. It went fine close to the
> end. I was monitoring the page file size with "dir /Ahs". It was gradually being increased (in rather
> small increments) until I got: "Ei riittävästi muistiresursseja komennon käsittelyä varten."
>
> (That's "not enough memory resources to handle the command") and after another try simply "Out
> of memory." (no more localization, eh?). At that point, the screen went black (not sure what crashed),
> Visual Studio crashed, Firefox crashed, Thunderbird crashed, Process Explorer crashed and something
> called "Dell Client Management Service" crashed too (seen in Event Viewer; I don't even know what
> that is). And I still didn't get that popup from Windows, though according to Event Viewer, there
> was an Application Popup with the familiar text in it. Maybe it crashed too.
>
> Nice detail about Firefox: I didn't lose any of the text I had written here so far.
>
> > > As for "avoiding file system meta-data", why wouldn't Windows be able to do that?
> > > It's not like NTFS needs to be aware of what goes on inside a page file.
> >
> > When the OS wants to access data at "offset in page file" it'd have to convert that into "offset
> > in partition" (while taking into account page file fragmentation, etc) instead of just doing
> > a fast/simple addition ("offset in partition = start of partition + offset in swap").
>
> Each fragment could be treated as separate page file.
With separate file system meta-data (like "C://pagefile000.sys", "C://pagefile001.sys", ...) to keep track of each individual "unfragmented extent"?
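The translation-cost difference can be made concrete. This is a sketch with a made-up extent list; the real NTFS run-list format is more involved:

```python
def swap_partition_offset(partition_start, swap_offset):
    # Dedicated swap partition: one addition, no metadata lookup.
    return partition_start + swap_offset

def page_file_offset(extents, file_offset):
    """Page file on a file system: walk the file's fragment list.

    `extents` is a list of (disk_start, length) pairs in file order — a
    simplified stand-in for the file system's run list for the page file."""
    for disk_start, length in extents:
        if file_offset < length:
            return disk_start + file_offset
        file_offset -= length
    raise ValueError("offset past end of page file")
```

An unfragmented page file degenerates to the same single addition; the extra cost only appears once the file has multiple extents.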
> And why use offset in partition
> instead of offset on disk (if such optimization really makes any difference).
>
> > Also; increasing the page file size would involve finding free blocks of disk space and marking
> > them as "not free" in whatever NTFS uses to keep track, then adding them to whatever NTFS uses
> > to keep track of which parts of a disk are used by a file, plus updating the directory entry
> > (e.g. file size field) and any other accountancy/statistics (e.g. free space stats).
>
> You talk like the paging system needs to do all that without help from NTFS. It's not like MS needs
> to even support a page file on every file system, for example, on ReFS it's not supported.
>
> It's also pretty uncommon to need to increase page file size.
I was mostly thinking about Windows' "auto-manage page file size" where it frequently increases page file size. Also note that (in Windows 8) I could only find "initial size + max. size" settings, where Windows would still increase the page file size from the initial size up to the max. size if you don't set the initial size equal to the max. size to prevent it.
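As a rough mental model of "auto-manage" sizing — purely illustrative on my part, since Microsoft doesn't document the real policy in these terms — think of it as keeping some headroom above current commit, clamped between the initial and maximum sizes:

```python
def page_file_target_gib(committed_gib, initial_gib, max_gib, headroom=1.25):
    """Toy 'auto-manage' policy: aim for ~25% headroom over current commit,
    never shrinking below the initial size or growing past the maximum.
    The headroom factor is an assumption, not a documented Windows value."""
    want = committed_gib * headroom
    return max(initial_gib, min(max_gib, want))
```

Any policy of this shape explains both behaviours seen above: growth lags behind sudden commit spikes, and the file shrinks back toward the initial size soon after demand drops.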
> > My main problem with "page file" is what happens when you reach
> > an "almost full" state - it creates a bizarre/unintuitive
> > relationship where (e.g.) allocating memory reduces the space you have left for files, and creating files
> > reduces how much memory you can allocate. Of course this problem goes away if you have a fixed size page
> > file, but then what's the point of not having a fixed size swap partition instead?
>
> With file, you can decide later. You can also change the fixed size later. From my point of view,
> the question is why have any more partitions than necessary at all? Partitions are inflexible.
You can resize partitions now (if you want to avoid the "backup, re-partition, reinstall/restore" option).
There are multiple reasons for multiple partitions: enforcing quotas without actually having quotas (e.g. making sure "/home" can't gobble all disk space and prevent updates), performance reasons, etc.
I also think it's sad that there isn't any "common swap partition standard" (e.g. as part of UEFI's GPT): when you have multiple operating systems installed (e.g. dual boot between Windows and Linux), they can't share the same swap partition(s).
> I have a Linux box with 10 GB swap partition and 32 GBs of memory, because that computer started
> with just 8 and resizing partition is too much work for me. It's likely to be 10 + 64 or 10 +
> 128 in the future, when I update the hardware and just clone the disk or use it as is.
>
> > There'd also be work-arounds in various utilities (doing a back-up of the file system?
> > Better add a special case to avoid backing up the page file. Trying to restore from a
> > checkpoint? Better not try to restore the old page file. Want to search for all files
> > modified in the last 20 minutes? Better hide the page file in the search results.)
>
> If you want to backup Windows, or any OS for that matter, I'm sure there's more than one file to exclude.
> Why would restoring old page file be a problem anyway (unless you are talking about restoring over running
> system, which doesn't really make sense in this context, if you have any idea about how Windows works)?
For restoring in abnormal situations where the OS isn't running, restoring the page file is a waste of time. In normal situations (whether it's "system restore" or something else) restoring the obsolete page file is a disaster (in addition to being a waste of time).
> > Then there's things like RAID and whole disk encryption (do you really want "RAID-5" and encryption
> > for your page file, just because you did want those things for normal files on "C:/"?).
>
> No idea what Windows does with RAID 5 page file. No idea if your C: can be RAID
> 5 in Windows either. As for encryption, you'll probably want (all) your page
> file(s) encrypted, if you decide system drive encryption is your thing.
Windows does support RAID 5 (but not RAID 6?); and since RAID sits at a lower level than the file system, I'd expect the page file would also be using RAID.
For encryption, you might want swap encrypted but not the file system, or files encrypted but not swap, or different types of encryption for different cases. My main point here is that the artificial conflation of file system and swap prevents flexibility (for redundancy/RAID, encryption, and anything else).
> > I think ReadyBoost was to improve latency for normal files (especially
> > during boot where OS is trying to fetch all drivers, GUI, etc).
>
> I thought it worked for page file too, but I could be wrong. It hasn't been relevant since 2011 for me.
Maybe; but that (using an SSD to cache swap that lives on a slow drive, instead of just having two swap providers) would seem silly to me.
- Brendan
Jukka Larja (roskakori2006.delete@this.gmail.com) on January 25, 2020 7:29 am wrote:
> Brendan (btrotter.delete@this.gmail.com) on January 25, 2020 1:46 am wrote:
> > Jukka Larja (roskakori2006.delete@this.gmail.com) on January 24, 2020 9:42 pm wrote:
> > > Brendan (btrotter.delete@this.gmail.com) on January 24, 2020 6:27 pm wrote:
> > > > Jukka Larja (roskakori2006.delete@this.gmail.com) on January 23, 2020 6:36 am wrote:
> > > > > Brendan (btrotter.delete@this.gmail.com) on January 22, 2020 3:28 pm wrote:
> > > > > > Jukka Larja (roskakori2006.delete@this.gmail.com) on January 22, 2020 6:12 am wrote:
> > > > > > > Brendan (btrotter.delete@this.gmail.com) on January 22, 2020 3:32 am wrote:
> > > > > > >
> > > > > > > > The right way to think about the way memory management works in Windows (unless swap is
> > > > > > > > disabled by an idiot) is to think of physical memory as nothing more than a cache of the
> > > > > > > > page file; where "max. virtual memory size = page file size". Physical memory size and
> > > > > > > > physical memory availability are irrelevant - they have no effect on OOM whatsoever.
> > > > > > >
> > > > > > > No, that's not right (you may be right for some older Windows version, but I'm pretty sure even
> > > > > > > 2000 didn't work like that). By default, Windows allocates very small page file, especially on a
> > > > > > > system with large RAM. Like I wrote in the other post, on 64 GB system, page file was 9 GB[1].
> > > > > >
> > > > > > OK; my description was a little incomplete, and could've been more like "think of physical memory
> > > > > > as nothing more than a cache of the page file; where "max. virtual memory size = page file size",
> > > > > > and where "page file size" may not be a fixed size and may grow as needed (if possible)."
> > > > > >
> > > > > > I probably should've also said "swap size" instead of "page file size" to cover non-default configurations.
> > > > >
> > > > > Still wrong. Windows is quite happy to run with page file significantly smaller than physical memory
> > > > > (running completely without page file may be stupid for unrelated reasons, of which I'm not that
> > > > > familiar with). That's probably the normal mode of operation with current memory amounts.
> > > >
> > > > Why wrong?
> > > >
> > > > I'd expect that with page file significantly smaller than physical memory Windows
> > > > would only commit to "max. swap"; and then the rest of the physical memory (that can't
> > > > be used for "cache of swap") would be used for "cache of file data" (and anything
> > > > else Windows might cache that is not part of "memory processes committed to").
> > >
> > > I'm not sure I understand, but if I do, I should not be able to run a program that does "malloc(15GB)"
> > > on my system with 5 GB page file (that's not the case in case you're wondering).
> >
> > You shouldn't be able to "malloc(15GB)" on a system that can't increase the size of the
> > page file up to 15+ GiB later (if/when necessary). Of course if you don't use (modify)
> > all of the 15 GB you allocated it might never increase the size of the page file.
>
> Just to be extra sure (though there really wasn't any doubt really), before I wrote
> my last comment I wrote a program that allocated 15 GBs, wrote it full of stuff and
> exited. Commit went up by 15 GBs (at the time of allocation, it wasn't increasing gradually)
> just as expected and page file size didn't increase, just as expected.
>
> If you actually use Windows with normal out-of-the box settings,
> it is trivial to note that it doesn't work like you decribe.
For default settings; you'd need to fill up your file system with normal files to determine how much "max. commit size" depends on "how much Windows could increase the page file size later if necessary". That is not as trivial.
> > > > Note: All files (and parts of files) mapped as "read-only" wouldn't be included in
> > > > "how much OS has committed to" (those area can be backed by file system not swap).
> > >
> > > Yes, I mentioned this earlier (our game Trine 4 maps about 14 GBs of
> > > asset files on start and that doesn't add up against commit limits).
> > >
> > > > Oh my - I did some checking and it seems that Windows doesn't even support swap partitions
> > > > (it only supports "page file on normal file system"); so it always has to worry about
> > > > normal files and swap competing for the same space, can't avoid "file system meta-data"
> > > > overhead, and has to worry about file fragmentation for the page file.
> > >
> > > Windows doesn't start with zero size page file. It starts with some relatively large one (like five GBs
> > > on my 32 GB laptop, nine on that 64 GB system I mentioned earlier) and generally expects not to need
> > > to increase the size. Page file persists over boots, so fragmentation is only an issue if one fragments
> > > the disk before page file is create (or page file size is increased a lot on a fragmented disk).
> >
> > I'm more used to the old rule of thumb, which is "twice as much swap as you have RAM"
> > (e.g. 64 GiB of swap on a system with 32 GiB of RAM), and your idea of "relatively large"
> > (5 GiB vs. 64 GiB) sounds more like my idea of tiny (less than 10% of 64 GiB).
>
> I believe that rule of thumb hasn't been relevant for about 20 years, but I'm not
> 100 % sure. Could have been changed as late as some Windows XP service pack.
I doubt the rule of thumb was ever considered accurate for any specific case. If you assume that the user expects the system to cope with "RAM full of working set" (which is reasonable - more RAM is a waste of $$ and less RAM is a performance problem), then ideal swap size depends on the ratio of "working set size to memory OS committed to" which varies for different software.
> For the usual usage patterns on laptops and desktops, large page file doesn't make sense. I'm
> a very unusual case with hundreds of tabs open in Firefox and Chrome, couple of instant messengers
> (each hundreds of megabytes, because of course you need a browser to make a desktop app these
> days), Thunderbird (working set 600 MBs), LibreOffice Calc and some smaller things. All that
> nets commit of about 16 GBs. With common usage patterns (mostly running one or two apps at a
> time, closing the browser and all tabs after each session) you're at fraction of that.
>
> Then consider how small the SSDs are and it makes even less sense. For common usage patterns
> more memory means less swap space. If user does something special, then page file will grow,
> but it makes no sense to steal even a single gigabyte out of some 120-250 GB SSD.
"Usual usage patterns" are a myth. E.g. if you ask an accountant, an aerospace engineer, an architect, and artist what software they use you'll get 4 very different answers. I also find it odd that you (a game developer) didn't include games as part of your "common" usage.
For SSD sizes, a GiB is worth about 12 cents now, so 64 GiB of "SSD space" is worth less than a single meal at a decent restaurant. However; I still think that "tiered swap" could be significantly better economics (e.g. $1 of SSD space and $2 of "rotating disk" space; especially when you're expecting that most of the swap will only be for "committed to but not modified" and therefore expecting the performance will be irrelevant); except that I'm still skeptical about "SSD longevity" claims (especially for consumer grade hardware where I think it's all based on "disk not full" statistics) and would want more "unlikely to be used" SSD space (in the form of swap space that's not expected to actually be used) to give wear leveling more freedom to level wear (and improve longevity, and reduce replacement costs).
> > Yeah, I get the impression that the "auto-increase page file size' in Windows isn't great (I'd prefer
> > swap partitions; and can't see why it doesn't make processes wait while it increases page file size
> > and can't be better at leaving more of page file's disk space allocated across reboots).
>
> I think the algorithm tries to increase page file size beforehand, but when something
> like 32^32 compile processes get launched during a three second window or some shader
> compiler just decides it needs 28 GBs for something, it's hard to keep up.
>
> As long as I've used SSDs, I haven't actually seen the Windows warning popup on my own systems, even
> when page file size has been increased. I've only seen it on our build servers, which do experience
> those extreme cases of memory consumption increases. So maybe the page file increase algorithm is
> actually good enough for most users and MS just expects server people to set the size by hand.
>
> That said, my experiment with 15 GB memory allocation pushed the total commit to bit over 34 GBs (was running
> Visual Studio at the time too), while the limit (currently, with that 5 GB page file still in place) is 38.
>
> Just for science I tried allocating 25 GBs. The allocation took a long time, but there was no warning popup from
> Windows. The program then run quickly (it just touched every 2048th byte in allocation) and quit. To my surprise,
> the page file was now 8 GBs. I run the program again and monitored page file more closely. It was actually 16
> GBs during program run but was immediately (within a second or two) reduced to 8 GBs after the run.
>
> So it seems Windows is actually pretty good in this runtime page file resizing. Not sure
> whether that's because the SSD in this laptop is quite a bit faster than anything in our
> servers or if there's been some recent change in how the size increases are handled.
>
> Out of interest, I also tried larger allocations. 75 GB one failed, even though there
> would have been about 13 GBs free space on disk after appropriate page file size increase.
> 70 GB one worked, and system commit went to about 90 GBs as expected.
>
> Since that was so fun, I decided to try allocating that 70 GBs 2 kBs at a time. It went fine close to the
> end. I was monitoring the page file size with
dir /Ahs
and. It was gradually being increased (in rather > small increments) until I got: "
Ei riittävästi muistiresursseja komennon käsittelyä varten.
">
> (That's "not enough memory resources to handle the command") and after another try simply "
Out
> of memory.
" (no more localization, eh?). At that point, screen went black (not sure what crashed), > Visual Studio crashed, Firefox crashed, Thunderbird crashed, Process Explorer crashed and something
> called "Dell Client Management Service" crashed too (seen in Event Viewer. I don't even know what
> that is). And I still didn't get that popup from Windows, though according to Event Viewer, there
> was an Application Popup with the familiar text in it. Maybe it crashed too.
>
> Nice detail about Firefox: I didn't lose any of the text I had written here so far.
>
> > > As for "avoiding file system meta-data", why wouldn't Windows be able to do that?
> > > It's not like NTFS needs to be aware of what goes on inside a page file.
> >
> > When the OS wants to access data at "offset in page file" it'd have to convert that into "offset
> > in partition" (while taking into account page file fragmentation, etc) instead of just doing
> > a fast/simple addition ("offset in partition = start of partition + offset in swap").
>
> Each fragment could be treated as separate page file.
With separate file system meta-data (like "C://pagefile000.sys", "C://pagefile001.sys", ...) to keep track of each individual "unfragmented extent"?
> And why use offset in partition
> instead of offset on disk (if such optimization really makes any difference).
>
> > Also; increasing the page file size would involve finding free blocks of disk space and marking
> > them as "not free" in whatever NTFS uses to keep track, then adding them to whatever NTFS uses
> > to keep track of which parts of a disk are used by a file, plus updating the directory entry
> > (e.g. file size field) and any other accountancy/statistics (e.g. free space stats).
>
> You talk like the paging system needs to do all that without help from NTFS. It's not like MS needs
> to even support a page file on every file system, for example, on ReFS it's not supported.
>
> It's also pretty uncommon to need to increase page file size.
I was mostly thinking about Windows' "auto-manage page file size" where it frequently increases page file size. Also note that (in Windows 8) I could only find "initial size + max. size" settings, where Windows would still increase the page file size from the initial size up to the max. size if you don't set the initial size equal to the max. size to prevent it.
> > My main problem with "page file" is what happens when you reach
> > an "almost full" state - it creates a bizarre/unintuitive
> > relationship where (e.g.) allocating memory reduces the space you have left for files, and creating files
> > reduces how much memory you can allocate. Of course this problem goes away if you have a fixed size page
> > file, but then what's the point of not having a fixed size swap partition instead?
>
> With file, you can decide later. You can also change the fixed size later. From my point of view,
> the question is why have any more partitions than necessary at all? Partitions are unflexible.
You can resize partitions now (if you want to avoid the "backup, re-partition, reinstall/restore" option).
There's multiple reasons for multiple partitions - to enforce quotas without actually having quotas (e.g. to make sure "/home" can't gobble all disk space and prevent updates), for performance reasons, etc.
I also think it's sad that there isn't any "common swap partition standard" (e.g. as part of UEFI's GPT) so that when you have multiple operating systems installed (e.g. dual boot between Windows and Linux) they can't both use the same swap partition/s.
> I have a Linux box with 10 GB swap partition and 32 GBs of memory, because that computer started
> with just 8 and resizing partition is too much work for me. It's likely to be 10 + 64 or 10 +
> 128 in the future, when I update the hardware and just clone the disk or use it as is.
>
> > There'd also be work-arounds in various utilities (doing a back-up of the file system?
> > Better add a special case to avoid backing up the page file. Trying to restore from a
> > checkpoint? Better not try to restore the old page file. Want to search for all files
> > modified in the last 20 minutes? Better hide the page file in the search results.)
>
> If you want to backup Windows, or any OS for that matter, I'm sure there's more than one file to exclude.
> Why would restoring old page file be a problem anyway (unless you are talking about restoring over running
> system, which doesn't really make sense in this context, if you have any idea about how Windows works)?
For restoring in abnormal situations where the OS isn't running, restoring the page file is a waste of time. In normal situations (whether it's "system restore" or something else) restoring the obsolete page file is a disaster (in addition to being a waste of time).
> > Then there's things like RAID and whole disk encryption (do you really want "RAID-5" and encryption
> > for your page file, just because you did want those things for normal files on "C:/"?).
>
> No idea what Windows does with RAID 5 page file. No idea if your C: can be RAID
> 5 in Windows either. As for encryption, you'll probably want (all) your page
> file(s) encrypted, if you decide system drive encryption is your thing.
Windows does support RAID 5 (but not RAID 6?); and I'd expect that (as RAID is at a lower level than file system) your page file would also using RAID.
For encryption; you might want swap encrypted but not file system, or files encrypted but not swap, or you might want different types of encryption for different cases. My main point here is that "artificial conflation of file system and swap" prevents flexibility (for redundancy/RAID and encryption, and anything else).
> > I think ReadyBoost was to improve latency for normal files (especially
> > during boot where OS is trying to fetch all drivers, GUI, etc).
>
> I thought it worked for page file too, but I could be wrong. It hasn't been relevant since 2011 for me.
Maybe; but that (using an SSD to cache swap on a slow drive, instead of just having two swap providers) seems silly to me.
- Brendan