By: , August 11, 2013 9:20 am
Room: Moderated Discussions
rwessel (robertwessel.delete@this.yahoo.com) on August 8, 2013 1:03 pm wrote:
> If you had a fully associative cache, the lookup would search all the tags (one for each cache
> entry) for one matching the desired address. If you had a direct-mapped cache, the index developed
> from the address would be used to select and read a single cache entry, and then that one entry's
> tag would be compared. For a set-associative cache, one entire line of cache entries would be
> selected (and read), and then the tag for each of those cache entries would be compared.
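If I've got this right, the three lookup styles could be sketched roughly like this (a toy Python model with made-up sizes, not real hardware - `tags` is just a list of the stored tags, with `None` meaning an invalid entry):

```python
# Toy sketch of the three lookup styles described above.
# A "tag" here is whatever part of the address gets stored and compared.

def fully_associative_lookup(tags, want_tag):
    # Compare the tag of *every* entry in the cache.
    return any(t == want_tag for t in tags)

def direct_mapped_lookup(tags, index, want_tag):
    # The index picks exactly one entry; compare just that one tag.
    return tags[index] == want_tag

def set_associative_lookup(sets, index, want_tag):
    # The index picks one set (one "line" of entries);
    # compare the tag of each entry in that set.
    return any(t == want_tag for t in sets[index])
```

So the difference is just how many tag comparisons happen per lookup: all of them, exactly one, or one per way.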
>
>
> > So basically, the AGU converts the calculations for the proposed virtual
> > (or physical with DAT off?) and hands the tag off to the LSU?
>
>
> No, the tag is in the cache entry - it says what part of memory is held by the cache line. It is
> compared to part of the address being looked up. For example, with 64 byte cache entries, you don't
> need to compare the low six bits (since those would just select a particular byte within the cache
> entry). With a direct-mapped or set-associative cache, the index does not need to be stored in
> the tag or compared either, since that part of the comparison happens as a side effect of the hashing
> process - IOW, all addresses with 100111 in the second six bits (from the low end) are going to
> go into cache line 39, so if you're looking for an address with 100111 in those bits, you could
> only ever find it in cache line 39 - so you don't have to store those bits in the tag.
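So with 64-byte lines and (say, hypothetically) 64 sets, the slicing would look like this - two addresses with the same middle six bits always land in the same set, which is why those bits never need to be stored in the tag:

```python
# Bit-slicing for 64-byte lines and 64 sets (sizes are hypothetical).
def split(addr):
    offset = addr & 0x3F          # bits 0..5: byte within the line
    index = (addr >> 6) & 0x3F    # bits 6..11: picks the set
    tag = addr >> 12              # everything above: stored and compared
    return tag, index, offset

# Two different addresses whose bits 6..11 equal 39 both map to set 39;
# only their tags (and offsets) differ.
a = (5 << 12) | (39 << 6) | 17
b = (9 << 12) | (39 << 6) | 3
```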
>
> The AGU generates a logical address, which is then either treated as a physical address,
> and used as is, or as a virtual address, and then translated to a physical address.
>
>
> > So basically, cache lines are a list of cache entries (the tags at least) consisting of one
> > of each entry from however large the associativity is? That would assume that a directly
> > mapped cache has a cache line with only one entry in it at a time and a fully associative
> > cache would have the tag of all the entries of the cache in it at one time, correct?
>
>
> A cache line is (almost always) several complete cache entries. If you had a four way set associative
> cache with 64 byte entries and 40 bits (five bytes) of additional info (tag, validity flags,
> protection bits, and whatever else the implementation decided to toss in there), a read of one
> cache line would actually read some 276* bytes out of the cache, all at once. It's reasonable
> to consider that four-way set associative cache to be a 2208 bit wide memory.
>
> It is technically correct to consider a fully associative cache as having one line (although you would
> use a different physical implementation - there is not a reason to ever "read" the whole "line" in
> a fully associative cache, it's effectively already available), and a direct mapped cache as having
> cache lines with a single cache entry in them, but no one ever uses the terms like that. Similarly
> no one ever calls sailplanes "zero engine airplanes" (although that's certainly true).
>
>
> *276 = 4*(64+5)
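Just to check the width arithmetic for that four-way example:

```python
# Width of one cache "row" in the hypothetical 4-way example above:
# 4 ways, 64-byte lines, 5 extra bytes (tag, valid, protection, etc.) per entry.
ways, line_bytes, extra_bytes = 4, 64, 5
row_bytes = ways * (line_bytes + extra_bytes)  # bytes read per line access
row_bits = row_bytes * 8                       # same thing in bits
```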
>
>
> > So the page walker uses the virtual address to locate the physical address of the needed cache entry?
>
>
> The page table walker finds the physical address associated with the virtual
> address. It's used only when the TLB does not already know that. In either
> case the resulting physical* address is used to search the cache.
>
>
> *Again, ignoring the existence/possibility of virtual addressing in caches.
>
Thanks for your reply again! Things are coming together!
- Ah, I understand now how the tags are read with regard to associativity!
- So the page walker finds the physical address associated with a virtual address... But how is this association "known" to the page walker?
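My current guess, sketched as a toy model (made-up page size, two made-up table levels - not any real architecture): the OS builds translation tables in memory, and the walker just reads them when the TLB misses?

```python
# Toy "TLB first, walk only on a miss" translation.
PAGE = 4096

tlb = {}            # virtual page number -> physical page number
page_tables = {}    # what the walker reads from memory: a top-level dict
                    # of second-level dicts, filled in by the OS

def translate(vaddr):
    vpn, offset = vaddr // PAGE, vaddr % PAGE
    if vpn in tlb:                       # TLB hit: no walk needed
        return tlb[vpn] * PAGE + offset
    # TLB miss: walk the in-memory tables.
    top = page_tables.get(vpn >> 10)
    ppn = top.get(vpn & 0x3FF) if top else None
    if ppn is None:
        raise MemoryError("page fault")  # the OS would handle this
    tlb[vpn] = ppn                       # remember the translation
    return ppn * PAGE + offset
```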
Thanks, this reply really tied up a lot of loose ends! I appreciate everyone's help!