By: Andrey (andrey.semashev.delete@this.gmail.com), October 4, 2021 12:14 pm
Room: Moderated Discussions
Doug S (foo.delete@this.bar.bar) on October 4, 2021 10:23 am wrote:
> --- (---.delete@this.redheron.com) on October 4, 2021 9:41 am wrote:
> > > Why? Zeroing at the point of use means you can skip the DRAM step entirely.
> > > You save one store to DRAM, and possibly even one load from DRAM.
> >
> > True. But zero-ing at the point of use is basically a security bet that "this time,
> > trust us, there's absolutely no way anyone can break through our OS to construct a
> > mechanism by which they can read pages on the free, but not yet-laundered, list".
> > Is that a good bet?
> >
> > In principle (sure, in principle) it's no different from saying "trust us, there's absolutely no
> > way anyone can read pages in a different process", and if that fails, well, it's game over.
>
>
> Yeah given stuff like rowhammer, SPECTRE, and whatever comes next I'm as skeptical as you are. The question
> is, how much is gained by preventing an attacker from potentially reading invalidated pages? If they are
> zeroed immediately there are still plenty of in-use pages he could access. Unless the only thing of value
> to an attacker was highly transient (for example, a decrypted file that is used very briefly then the pages
> on which it resides returned to the free list) there is plenty of other data to steal.
>
> There's also the tradeoff of what happens if an attack allows writing values in pages that
> will be allocated and assumed to be zero? It would be complex since you wouldn't know what page
> will get allocated to the specific thing you're trying to attack, but if you have the ability to attempt
> the attack enough times it is only a matter of time and patience before you get lucky.
I'm not a security expert, but I think security-critical software has to launder sensitive data itself before freeing the memory (back to the memory allocator or to the system). This makes sense because, in general, you don't know whether freed memory (e.g. freed via free()) will be reused by a subsequent allocation, or end up in a core dump or in swap. Given that, there is no security reason for the kernel to zero pages when it reclaims them, other than when the process is forcefully terminated by a signal. There's also a lot less sensitive data around than non-sensitive data, so zeroing pages unconditionally would mean unnecessary overhead for everyone.
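For illustration, a minimal C sketch of that laundering step (free_sensitive is a hypothetical helper; it assumes explicit_bzero(), available in glibc 2.25+ and the BSDs, since a plain memset() right before free() may be removed by the compiler as a dead store):

#include <stdlib.h>
#include <string.h> /* explicit_bzero(), glibc >= 2.25 and the BSDs */

/* Hypothetical helper: wipe a sensitive buffer before returning it to the
   allocator, so its contents can't resurface via a later allocation, a core
   dump, or swap. */
static void free_sensitive(void *buf, size_t len)
{
    if (buf == NULL)
        return;
    /* A plain memset() here could be elided as a dead store;
       explicit_bzero() is guaranteed not to be optimized away. */
    explicit_bzero(buf, len);
    free(buf);
}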
Note that I'm not talking about allocating pages to a process. There, zeroing does need to happen, preferably lazily, to ensure no data is leaked between processes.
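To make that distinction concrete, a small POSIX sketch (assuming Linux-style MAP_ANONYMOUS): from the process's point of view a fresh anonymous mapping must read as zeros, but the kernel is free to provide that lazily, e.g. by mapping the shared zero page and only allocating and zeroing a real frame on the first write.

#include <assert.h>
#include <stddef.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = 4096;
    /* New anonymous pages are guaranteed to read as zeros... */
    unsigned char *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    assert(p != MAP_FAILED);
    for (size_t i = 0; i < len; i++)
        assert(p[i] == 0); /* ...but when the zeroing happens is up to the kernel */
    munmap(p, len);
    return 0;
}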