By: Adrian (a.delete@this.acm.org), June 30, 2022 8:31 am
Room: Moderated Discussions
Foo_ (foo.delete@this.nomail.com) on June 30, 2022 7:43 am wrote:
> Adrian (a.delete@this.acm.org) on June 30, 2022 5:36 am wrote:
> >
> > In the memory allocators used by individual processes it is possible to completely avoid
> > memory fragmentation, e.g. by using separate memory pools for different object sizes.
>
> That doesn't make sense. Memory fragmentation is typically caused by differing
> *lifetimes* of heap allocations, not by differing object sizes.
>
Heap allocations necessarily have differing lifetimes; otherwise there would be no need to use a heap for them at all, since a stack would suffice. Memory fragmentation is caused only by mixing allocations of different sizes in a single heap and freeing them in a different order than they were made.
When there is a dedicated memory pool for each possible requested allocation size (typically the requested sizes are rounded up to a power of two, to avoid an excessive number of pools), the list of free memory blocks can be managed as a stack rather than a heap, because all blocks in a pool are interchangeable and any of them can satisfy an allocation request.
In this case, the sequence of allocate and free requests over time, and their sizes, no longer matters: from the point of view of memory fragmentation, the state of the allocator after one year of continuous running is the same as at start-up.
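To make this concrete, here is a minimal sketch in C of such a size-class allocator (the names pool_alloc/pool_free and the class bounds are mine, purely illustrative, not any particular library's API). Each power-of-two class keeps its free blocks on an intrusive singly linked list used strictly as a stack: pop on allocate, push on free.

#include <stddef.h>
#include <stdlib.h>

/* One free list per power-of-two size class, 16 B .. 32 KiB.   */
#define MIN_SHIFT 4
#define MAX_SHIFT 15
#define NCLASSES  (MAX_SHIFT - MIN_SHIFT + 1)

/* A free block stores the link inside itself (intrusive list). */
struct free_block { struct free_block *next; };

static struct free_block *free_list[NCLASSES];  /* LIFO stacks  */

static int size_class(size_t n)   /* round up to 2^k;           */
{                                 /* assumes n <= 1<<MAX_SHIFT  */
    int k = MIN_SHIFT;
    while (((size_t)1 << k) < n)
        k++;
    return k - MIN_SHIFT;
}

void *pool_alloc(size_t n)
{
    int c = size_class(n);
    struct free_block *b = free_list[c];
    if (b) {                 /* pop: any block of the class fits */
        free_list[c] = b->next;
        return b;
    }
    /* Pool empty: get a fresh block of the rounded-up size; a  */
    /* real allocator would carve a large chunk into blocks.    */
    return malloc((size_t)1 << (c + MIN_SHIFT));
}

void pool_free(void *p, size_t n)
{
    int c = size_class(n);
    struct free_block *b = p; /* push: order of frees irrelevant */
    b->next = free_list[c];
    free_list[c] = b;
}

Because pool_free pushes onto and pool_alloc pops from the same stack, the free lists end up in the same state regardless of the order in which blocks were allocated and released, which is exactly why fragmentation cannot accumulate.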
When allocations of different sizes are satisfied from the same memory pool, there must be some method of splitting and coalescing the free memory blocks to approximately match the requested sizes. Depending on the strategy used, there is a risk that in a long-running program, i.e. a daemon or service, a large part of the memory pool becomes unusable due to fragmentation after many hours or days of uptime.
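As a toy demonstration of that risk (illustrative code with hypothetical arena_alloc/arena_free names): a first-fit allocator over a 1 KiB arena that splits blocks but, for brevity, never coalesces. After filling the arena with 128-byte objects and freeing every other one, half of the memory is free, yet a 256-byte request fails, because the free space consists of scattered 128-byte holes. Note that in this particular pattern even coalescing would not help, since no two free holes are adjacent.

#include <stdio.h>
#include <string.h>

#define ARENA 1024
static struct { int off, len, used; } blk[64] = { { 0, ARENA, 0 } };
static int nblk = 1;

static int arena_alloc(int n)          /* first fit, with split  */
{
    for (int i = 0; i < nblk; i++) {
        if (!blk[i].used && blk[i].len >= n) {
            if (blk[i].len > n) {      /* split off the tail     */
                memmove(&blk[i + 1], &blk[i],
                        (nblk - i) * sizeof blk[0]);
                nblk++;
                blk[i + 1].off += n;
                blk[i + 1].len -= n;
                blk[i].len = n;
            }
            blk[i].used = 1;
            return blk[i].off;
        }
    }
    return -1;                         /* no hole is big enough  */
}

static void arena_free(int off)
{
    for (int i = 0; i < nblk; i++)
        if (blk[i].off == off)
            blk[i].used = 0;
}

int main(void)
{
    int a[8];
    for (int i = 0; i < 8; i++)        /* fill the whole arena   */
        a[i] = arena_alloc(128);
    for (int i = 0; i < 8; i += 2)     /* free every other block */
        arena_free(a[i]);
    /* 512 bytes are free, but only as four 128-byte holes:      */
    printf("alloc(256) -> %d\n", arena_alloc(256)); /* prints -1 */
    return 0;
}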
The fragmentation risk is greatest for the memory allocator of the operating system, since it always has the longest running time between reboots.
Paging, by remapping addresses, makes all physical pages equivalent: it does not matter which pages are handed out for a given request. This reduces the OS memory allocation problem to the simple fixed-size case, where there is no danger of memory fragmentation.
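A physical page-frame allocator can therefore be as trivial as the single-size pool above; a minimal sketch, with hypothetical names and frame numbers standing in for physical addresses:

#include <stdint.h>

/* With the MMU remapping addresses, every 4 KiB frame is       */
/* interchangeable, so one LIFO stack of free frame numbers     */
/* suffices and can never fragment.                             */
#define NFRAMES (1u << 18)             /* 1 GiB of 4 KiB frames  */

static uint32_t frames[NFRAMES];
static uint32_t top;                   /* count of free frames   */

void frame_init(void)                  /* initially all free     */
{
    for (uint32_t f = 0; f < NFRAMES; f++)
        frames[top++] = f;
}

uint32_t frame_alloc(void)             /* any frame will do: pop */
{
    return top ? frames[--top] : UINT32_MAX; /* out of memory    */
}

void frame_free(uint32_t f)            /* push; order irrelevant */
{
    frames[top++] = f;
}

No sequence of allocations and frees can leave this pool fragmented, because every request is for the same unit and any free frame satisfies it; contiguity, where a process needs it, is provided by the page tables rather than by the physical layout.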
Without paging it would be hard to guarantee that no memory is lost after one year without reboots (not infrequent on servers).