By: ⚛ (0xe2.0x9a.0x9b.delete@this.gmail.com), June 30, 2022 10:09 am
Room: Moderated Discussions
Mark Roulo (nothanks.delete@this.xxx.com) on June 30, 2022 8:25 am wrote:
> Foo_ (foo.delete@this.nomail.com) on June 30, 2022 7:43 am wrote:
> > Adrian (a.delete@this.acm.org) on June 30, 2022 5:36 am wrote:
> > >
> > > In the memory allocators used by individual processes it is possible to completely avoid
> > > memory fragmentation, e.g. by using separate memory pools for different object sizes.
> >
> > That doesn't make sense. Memory fragmentation is typically caused by differing
> > *lifetimes* of heap allocations, not by differing object sizes.
> >
>
> It is both.
>
> If all the allocations are the same size (e.g. 1KB), there is no
> fragmentation: as long as you have enough memory for all your
> objects, there will always be a free slot. You won't have a situation
> where you need 1KB and have 5KB free, but the 5KB is unusable because
> it is split into 100 discontiguous 50B pieces.
>
> Alternatively, if one wants to allocate a large number of randomly
> sized objects and then free them all at once (and you know this ...)
> then you can avoid fragmentation by simply allocating "from the end"
> and freeing by resetting the 'next' pointer.
>
> The intersection of different sizes and different lifetimes (and
> not knowing them up front ...) creates the fragmentation.
Ok ... so, in my not-so-humble-this-time opinion, both of the explanations above for the "alleged" primary cause of memory fragmentation are invalid. The actual cause of memory fragmentation is the impossibility of computing the time-order of deallocations _before_ those deallocations are physically executed. In cases where it is possible - at the _allocation_ point - to compute the _deallocation_ order, memory fragmentation either does not occur or does not need to occur (unless, of course, the programmer deliberately makes it occur for some strange reason).
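To make this concrete, here is a minimal sketch in C of the "allocate from the end, free by resetting the 'next' pointer" arena scheme quoted above (the names arena_t, arena_alloc and arena_reset are mine, purely for illustration - a sketch, not production code). Every allocation is known, at allocation time, to be deallocated at one single reset point, so the deallocation order is fully computable in advance and no fragmentation can occur, no matter how the object sizes differ:

#include <stddef.h>
#include <stdint.h>

/* Minimal arena/bump allocator sketch. All allocations share one
   lifetime: they are all freed together by arena_reset(). */
typedef struct {
    uint8_t *base;   /* start of the arena's memory block */
    size_t   size;   /* total capacity in bytes           */
    size_t   next;   /* offset of the first free byte     */
} arena_t;

/* Allocate "from the end" of the used region; sizes may differ freely. */
void *arena_alloc(arena_t *a, size_t n)
{
    size_t aligned = (a->next + 15) & ~(size_t)15;  /* 16-byte alignment */
    if (aligned > a->size || n > a->size - aligned)
        return NULL;                                /* out of space */
    a->next = aligned + n;
    return a->base + aligned;
}

/* Free everything at once by resetting the 'next' pointer. */
void arena_reset(arena_t *a)
{
    a->next = 0;
}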
The viewpoint that "memory fragmentation is caused by differences in object sizes" and the viewpoint that "memory fragmentation is caused by differing object lifetimes" are both limited. The latter viewpoint is, in my opinion, slightly more accurate, but only slightly. The viewpoint that there is some "fundamental link" between memory fragmentation and having multiple allocation pools instead of a single pool (or, in reverse, a single pool instead of multiple pools) is a limited viewpoint as well.
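Seen this way, the "separate pools per object size" trick is just a special case of the ordering argument, not something fundamental: when every slot has the same size, any freed slot satisfies any future request, so the deallocation order simply stops mattering. A minimal free-list pool sketch in C (again with made-up names, and fixed 1KB slots as in the quoted example):

#include <stddef.h>

/* Fixed-size pool sketch: every slot is 1KB, so any free slot fits
   any request and the deallocation order is irrelevant. */
typedef union slot {
    union slot   *next;        /* link while the slot is free */
    unsigned char bytes[1024]; /* payload while it is in use  */
} slot_t;

typedef struct {
    slot_t *free_list;         /* singly linked list of free slots */
} pool_t;

/* Thread all slots of a caller-provided array onto the free list. */
void pool_init(pool_t *p, slot_t *slots, size_t count)
{
    p->free_list = NULL;
    for (size_t i = 0; i < count; i++) {
        slots[i].next = p->free_list;
        p->free_list = &slots[i];
    }
}

/* Pop a slot, or return NULL if the pool is exhausted. */
void *pool_alloc(pool_t *p)
{
    slot_t *s = p->free_list;
    if (s)
        p->free_list = s->next;
    return s;
}

/* Push the slot back; any order of frees leaves the pool unfragmented. */
void pool_free(pool_t *p, void *ptr)
{
    slot_t *s = ptr;
    s->next = p->free_list;
    p->free_list = s;
}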
In summary: if the _deallocation_ order were computable in advance in all cases, then it would always be possible to _allocate_ memory in a way that is optimal with respect to that ordering.
-atom