By: Jörn Engel (joern.delete@this.purestorage.com), June 30, 2022 5:34 pm
Room: Moderated Discussions
Adrian (a.delete@this.acm.org) on June 30, 2022 5:36 am wrote:
>
> With CHERI, the memory protection and dynamic relocation functions of paging would become redundant, so the
> only valid justification for the great hardware complexity added by the support for paging in 64-bit ISAs would
> remain the avoidance of memory fragmentation in the memory allocators used by the operating systems.
>
>
> In the memory allocators used by individual processes it is possible to completely avoid
> memory fragmentation, e.g. by using separate memory pools for different object sizes.
While I usually value what you write, I have to join the huge choir calling BS on these claims. You're essentially saying that slab allocators completely avoid fragmentation because they keep separate pools for different object sizes.
Let's run a simple thought experiment. We have 4kB slabs split into size classes of 64B, 128B, 256B, etc. The program allocates 1M 64B objects, then randomly frees 99.9% of them, holding on to just 1k.
In the common case, the 1k survivors are scattered across roughly 1k different slabs, so you end up with slabs containing a single object each. Theoretically each slab could hold 63 objects (64 minus metadata overhead). The difference between 63 and ~1 is what? Not fragmentation?
If the program then wants to allocate a 128B object, can it use any of those almost-free slabs for that purpose? You have nearly 4MB of free space available. If that isn't fragmentation, why would you have to allocate a new slab when you already have that much free space?
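Here's a quick back-of-the-envelope simulation of that scenario, a toy sketch only: it's not any real allocator's code, and the slab geometry and all names are assumptions for illustration. It scatters the ~1k surviving 64B objects across the slabs and counts how many slabs stay pinned and how much free space is trapped inside them:

/* Toy sketch of the scenario above -- not any real allocator's code;
 * slab geometry and every name here are assumptions for illustration. */
#include <stdio.h>
#include <stdlib.h>

#define SLAB_SIZE     4096
#define OBJ_SIZE      64
#define OBJS_PER_SLAB 63        /* one 64B slot assumed lost to slab metadata */
#define TOTAL_OBJS    1000000

int main(void)
{
    int nslabs = (TOTAL_OBJS + OBJS_PER_SLAB - 1) / OBJS_PER_SLAB;
    int *live = calloc(nslabs, sizeof(*live));   /* live objects per slab */
    long n, survivors = 0;

    srand(1);
    /* Free 99.9% of the objects at random, i.e. keep each with p = 1/1000.
     * Object n lives in slab n / OBJS_PER_SLAB. */
    for (n = 0; n < TOTAL_OBJS; n++) {
        if (rand() % 1000 == 0) {
            live[n / OBJS_PER_SLAB]++;
            survivors++;
        }
    }

    long pinned = 0;    /* slabs that cannot be returned to the system */
    long trapped = 0;   /* free bytes stuck inside those pinned slabs */
    for (n = 0; n < nslabs; n++) {
        if (live[n]) {
            pinned++;
            trapped += (long)(OBJS_PER_SLAB - live[n]) * OBJ_SIZE;
        }
    }
    printf("%ld survivors pin %ld slabs (%.1f MB), trapping %.1f MB of free space\n",
           survivors, pinned,
           pinned * (double)SLAB_SIZE / (1 << 20),
           trapped / (double)(1 << 20));
    free(live);
    return 0;
}

With ~1000 survivors scattered over ~15.9k slabs, the typical result is close to 1000 pinned slabs: roughly 4MB held by the 64B pool, of which only about 64KB is live data, and none of it usable for the 128B request.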