By: rwessel (rwessel.delete@this.yahoo.com), October 1, 2021 6:19 am
Room: Moderated Discussions
sr (nobody.delete@this.nowhere.com) on October 1, 2021 12:22 am wrote:
> rwessel (rwessel.delete@this.yahoo.com) on September 25, 2021 3:29 pm wrote:
>
> > That's missing the whole point. You can't do any addressing without dragging the segment
> > IDs ("selectors") along. So you end up with 48-bit pointers with the 386 segmented model,
> > or you go back to a mixed memory model where some pointers are short ("near"), with an implied
> > segment, and others long ("far") with an explicit selector dragged along.
>
> I might be wrong again, but as I recall the 386 didn't have a 48-bit virtual address space but
> a 32-bit one. So segmentation wasn't used to increase the virtual address space as it was on 16-bit systems.
> So a near pointer can cover the whole address space, as it should, and segmentation is only used to
> slice that address space into smaller subsets where addressing starts from zero.
The 386 had a 48(-ish*) bit virtual address space, but that was mapped into a 32-bit linear address space. Increasing the size of the linear address space would have been fairly trivial, and essentially invisible from the application layer (given the era, it's obvious why they wouldn't have initially defined the linear address space bigger than 32 bits). Not really unlike how the 286 had a 32(-ish*) bit virtual address space mapped into a 24-bit physical address space (and how a 16-bit OS could have used the 386 enhancements to address more than 24 bits' worth of memory, without using any "32-bit" feature of the CPU - just extra, previously unused, bits in the descriptor tables).
I don't think anyone ever actually did it, but ideas for mapping more than 4GB worth of segments into a process using PAE hacks were tossed about in the late x86-32 era.
*A bit in the selector to pick the GDT or LDT, 13 bits to pick a segment descriptor within one of those tables, and a 32 (16 for 286) bit offset within the segment. So 46 or 30 bits. Practically somewhat less, assuming some inefficiency in using the structured address.
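To make the footnote concrete, here's a rough C sketch (a model, not real system code) of the translation the 386 does on every access through a segment: the TI bit plus the 13-bit index pick a descriptor, and the descriptor's base plus the 32-bit offset produces the linear address, with the bounds check coming along for free. That's the 1 + 13 + 32 = 46 bits. Real descriptors scatter the base and limit across several bit-fields and carry type/privilege/granularity bits, all ignored here.

    #include <stdint.h>
    #include <stdlib.h>

    /* Simplified segment descriptor: a real 386 descriptor splits the
       base and limit across several bit-fields and adds type, privilege
       and granularity bits, but this captures the addressing math. */
    struct descriptor {
        uint32_t base;   /* where the segment starts in the linear space */
        uint32_t limit;  /* highest valid offset (byte granularity assumed) */
    };

    /* Two descriptor tables of 8K entries each, as on the 386. */
    static struct descriptor gdt[8192], ldt[8192];

    static uint32_t logical_to_linear(uint16_t selector, uint32_t offset)
    {
        unsigned index   = selector >> 3;    /* 13-bit table index */
        int      use_ldt = selector & 0x4;   /* TI bit: GDT vs. LDT */
        /* The low two bits (RPL) matter for protection, not addressing. */

        const struct descriptor *d = use_ldt ? &ldt[index] : &gdt[index];

        if (offset > d->limit)
            abort();                 /* the hardware raises #GP instead */

        return d->base + offset;     /* lands in the 32-bit linear space */
    }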
> > There are other imaginable ways to do this (most obviously packing the selector into the high bits of a
> > 64-bit pointer), but that doesn't solve all the problems, and is not anything you can actually do on x86.
> > On the other hand, if you're just using the high bits of the pointer as a selector, you can do almost as
> > well with a sparse-ish address space and 4KB pages (IOW, any allocated object has guard pages around it).
> > Heck, we could tweak C malloc() to do that in about 15 minutes - storage allocation would just get a fair
> > bit slower (although using that allocated storage would not).
> > Neither approach really gets you to a capabilities-like
> > model where you can pass a *protected* subset of an allocated object to someone.
>
>
> Don't confuse simple segmentation with more complex software ways of protecting memory regions.
> With segmentation you can segment your data structures and let the hardware do the boundary checking.
> Like with that Linus null pointer chaser example: with software, every round of the loop needs to
> be checked against the table boundaries, but if that table were segmented into its own space that wouldn't
> be necessary. The algorithm can't escape its segment if for some reason the NULL check fails.
Sure, and that way lies capabilities.
A problem with x86 segments in that role is that establishing "sub" segments is going to be fairly painful, and the limited size (8K entries) of the descriptor tables puts a pretty hard limit on how much of that you can actually do. Even if you don't subset segments, and just assign segments to memory allocations, you don't really have enough. And as I mentioned, if you're just trying to catch overruns of allocated areas, you can do almost as well just by assigning guard pages around allocations. The quite high overhead of loading segment registers would have been an issue on all actual implementations (clearly at least some optimizations would have been possible for that problem).
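And just to back up the "15 minutes" claim from the quoted bit, here's a minimal POSIX sketch of that guard-page allocator (guarded_malloc/guarded_free are made-up names, error handling is minimal, and Linux/BSD-style MAP_ANONYMOUS is assumed). Allocation takes a couple of system calls and wastes up to three pages per object, but using the allocated storage costs nothing extra:

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/mman.h>

    #define PAGE 4096u  /* assume 4KB pages; real code would ask sysconf() */

    /* Give every allocation its own pages, with an inaccessible guard
       page on each side, so running off either end faults immediately. */
    void *guarded_malloc(size_t size)
    {
        size_t body  = (size + PAGE - 1) & ~(size_t)(PAGE - 1);
        size_t total = body + 2 * PAGE;        /* guards below and above */

        unsigned char *p = mmap(NULL, total, PROT_NONE,
                                MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED)
            return NULL;

        /* Open up only the middle; the first and last page stay PROT_NONE. */
        if (mprotect(p + PAGE, body, PROT_READ | PROT_WRITE) != 0) {
            munmap(p, total);
            return NULL;
        }
        /* Push the object against the upper guard so an overrun of even
           one byte past the end faults (at the cost of alignment). */
        return p + PAGE + (body - size);
    }

    void guarded_free(void *q, size_t size)
    {
        size_t    body = (size + PAGE - 1) & ~(size_t)(PAGE - 1);
        uintptr_t base = ((uintptr_t)q & ~(uintptr_t)(PAGE - 1)) - PAGE;
        munmap((void *)base, body + 2 * PAGE);
    }

Pushing the object against the upper guard means the returned pointer is only as aligned as the requested size happens to be - the same tradeoff debugging allocators like Electric Fence make.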