By: Eric Fink (eric.delete@this.anon.com), June 30, 2022 1:43 am
Room: Moderated Discussions
Kester L (nobody.delete@this.nothing.com) on June 29, 2022 1:49 pm wrote:
> https://queue.acm.org/detail.cfm?id=3534854
>
> Your thoughts on this article? I was under the impression that a lot of the 80s attempts
> at capability machines (or really, anything that wasn't trying to be a glorified PDP-11)
> floundered because of performance and cost issues (i.e. the Intel i432).
>
I must say that I am a bit confused by the article. First, the title suggests that this is about safety, but the article itself doesn't really talk about safety that much — just that addressing a lot of memory is challenging. And then there are some bits that just strike me as odd. For example:
> Why do we even have linear physical and virtual addresses in the first place,
> when pretty much everything today is object-oriented?
Is it? I suppose it kind of depends on what one means by "object-oriented". It's all about structs, hashtables and pointer chasing, sure. Do I want to create a different hardware descriptor for every entry in my hashtable? Probably not.
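To make the hashtable point concrete, here is a quick C sketch (the names and the empty-slot convention are made up by me) of what a lookup looks like with plain linear memory: one base pointer and offset arithmetic, no per-entry descriptor anywhere.

#include <stddef.h>
#include <stdint.h>

struct entry {
    uint64_t key;
    uint64_t value;
};

/* Conventional linear-memory hashtable probe: the "object" is just a
 * base pointer, and every slot is reached by offset arithmetic.
 * No descriptor is created or checked per entry. */
static struct entry *ht_find(struct entry *table, size_t nslots, uint64_t key)
{
    size_t i = key % nslots;             /* trivial hash, enough for the sketch */
    for (size_t probes = 0; probes < nslots; probes++) {
        struct entry *e = &table[i];     /* = table + i * sizeof(struct entry) */
        if (e->key == key)
            return e;
        if (e->key == 0)                 /* assume key 0 means "unused slot" */
            return NULL;
        i = (i + 1) % nslots;            /* linear probing */
    }
    return NULL;
}

In the descriptor-per-entry world, every one of those &table[i] accesses would presumably have to go through its own hardware descriptor first, which is exactly the overhead I am skeptical about.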
> The global 64-bit address space is not linear; it is an object cache addressed with an
> (object + offset) tuple, and if that page of the object is not cached, a microcode trap
> will bring it in from disk.
Maybe I am misunderstanding something, but how is this fundamentally different from modern linear memory? You already have the "object" (page id) + offset. Besides, how is this scheme better than what we have now? You still need some mechanism for translating from this "cache address" to a physical memory address, and you still need to be able to deal with vast amounts of memory. And then, what's the maximum offset an object supports? What if I need a larger object? How do you deal with lots of "small" objects? You would probably need to support objects of different size classes, handled differently, if you want everything to run at acceptable speed, and then you are back to a system with multiple page sizes. Not to mention that 30 years ago the amount of available memory was so small that practically any scheme would have worked; not so today, where you face completely different scalability issues.
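Just to spell out the "page id + offset" parallel, here is a trivial C sketch (the 4 KiB page size and the field split are the usual x86-64-style values, my choice, not anything from the article):

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12                        /* 4 KiB pages */
#define PAGE_SIZE  (1ULL << PAGE_SHIFT)

int main(void)
{
    uint64_t va = 0x00007f3a12345678ULL;     /* arbitrary example address */

    /* A "linear" virtual address is already a (page, offset) tuple:
     * the MMU translates the page part and passes the offset through,
     * much like an (object + offset) lookup would have to. */
    uint64_t page_id = va >> PAGE_SHIFT;
    uint64_t offset  = va & (PAGE_SIZE - 1);

    printf("page id: %#llx, offset: %#llx\n",
           (unsigned long long)page_id, (unsigned long long)offset);
    return 0;
}

The (object + offset) tuple still has to go through exactly this kind of translation step for the "object" half, so I don't see where the fundamental win is.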
Overall, it sounds to me like the author is suggesting a hardware memory allocator with policy enforcement on every single "object" (whatever that means). I am quite sympathetic to the idea, but I just can't imagine how it could be done without killing performance. Unless I fundamentally misunderstand something, one would be replacing page walks and TLBs with some other kind of non-trivial, and larger, structure like the object "cache".
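As a back-of-the-envelope illustration of what I mean by "larger structure" (the fields below are entirely my own guess, not anything from the article): a TLB entry only caches a page-number mapping plus a few bits, while a per-object descriptor cache would at minimum need a base, a length and per-object policy, and it has to cover your live object count rather than your page count.

#include <stdint.h>

/* Roughly what a TLB entry has to hold: a virtual page number,
 * a physical frame number, and a handful of permission/attribute bits. */
struct tlb_entry {
    uint64_t vpn;        /* virtual page number (tag)        */
    uint64_t pfn;        /* physical frame number            */
    uint16_t flags;      /* present, writable, user, NX, ... */
};

/* My guess at what a per-object descriptor would need at minimum:
 * translation *and* bounds *and* policy, per object rather than per page. */
struct object_descriptor {
    uint64_t object_id;  /* tag for the object "cache"          */
    uint64_t base;       /* where the object's storage lives    */
    uint64_t length;     /* bounds check on every access        */
    uint32_t perms;      /* per-object access policy            */
    uint32_t owner;      /* whatever the policy engine keys on  */
};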
Frankly, if safety is a primary concern and performance is very low on the priority list, it does seem like the way to go is to use a managed language running on a highly opinionated, validated VM.