By: Foo_ (foo.delete@this.nomail.com), July 4, 2022 3:11 am
Room: Moderated Discussions
Adrian (a.delete@this.acm.org) on July 3, 2022 10:04 pm wrote:
>
> The reason why this is possible is that when a new "malloc" invocation happens (for
> a given size range), it will be possible to satisfy the request by pulling the last
> freed memory block from the list of free memory block, because it has the same size.
Sure. But that only happens *when* a new malloc invocation for the same size occurs. You are assuming that N frees for a given size are always followed, in very short order, by N mallocs for that size.
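To make that assumption concrete, here is a minimal sketch of the kind of per-size-class LIFO free list Adrian seems to be describing (this is not any particular allocator's implementation; the names, the 16..2048-byte size classes and the malloc fallback are made up for illustration). Note that the reuse fast path only fires if a malloc for the matching size class actually arrives after the free; until then the freed blocks just sit on the list:

/* Sketch only: a per-size-class LIFO free list.
   A freed block is pushed onto the list for its size class, and the
   next allocation of that class pops it back off. */
#include <stddef.h>
#include <stdlib.h>

#define NUM_CLASSES 8            /* hypothetical classes: 16, 32, ... 2048 bytes */

typedef struct free_block {
    struct free_block *next;     /* links blocks of the same size class */
} free_block;

static free_block *free_lists[NUM_CLASSES];

static int size_class(size_t n)  /* map a request size to its class index */
{
    int c = 0;
    size_t cap = 16;
    while (cap < n && c < NUM_CLASSES - 1) { cap <<= 1; c++; }
    return c;
}

void *my_alloc(size_t n)
{
    if (n > 2048)                        /* oversized requests bypass the lists */
        return malloc(n);
    int c = size_class(n);
    if (free_lists[c]) {                 /* fast path: reuse the last freed block */
        free_block *b = free_lists[c];
        free_lists[c] = b->next;
        return b;
    }
    return malloc((size_t)16 << c);      /* slow path: get fresh memory */
}

void my_free(void *p, size_t n)
{
    if (n > 2048) { free(p); return; }
    int c = size_class(n);
    free_block *b = p;
    b->next = free_lists[c];             /* push onto the class's LIFO list */
    free_lists[c] = b;
}

Nothing in such a scheme guarantees that the pop happens shortly after the push - and that temporal coupling is exactly what I'm questioning.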
That assumption is an idealistic view of long-running applications or servers; it may match the behaviour of some of them, but it is certainly not the norm. Most systems responding to outside requests face bursts of activity followed by quiet periods; they are also faced - more or less sporadically - with unusually long-running requests due to degraded conditions (e.g. network connectivity blips), which further complicate the temporal distribution of memory activity.
On top of that, in many systems the allocation sizes depend on the requests coming in, which makes their statistical distribution vary over time (the mix of requests can change with the time of day, for example).
There's a reason efficient memory allocation (whether manual or GC-based) is an advanced and delicate topic, and why work on better/different general-purpose allocators is always ongoing.