Floorplan images show physical locations

Article: Silvermont, Intel's Low Power Architecture
By: rwessel (robertwessel.delete@this.yahoo.com), August 3, 2013 10:16 pm
Room: Moderated Discussions
Sebastian Soeiro (sebastian_2896.delete@this.hotmail.com) on August 3, 2013 8:23 am wrote:
> Stubabe (Stubabe.delete@this.nospam.com) on August 2, 2013 7:49 pm wrote:
> > Sebastian Soeiro (sebastian_2896.delete@this.hotmail.com) on August 2, 2013 12:58 pm wrote:
> >
> > > Thank you very much for your explanation! However, there still seems to be some
> > > unclear things that you didnt answer or I don't understand the answer to;
> > >
> > > - It is still unclear as to whether there are multiple TLBs in CPUs or not; with one TLB being
> > > tightly tied to the L1 cache and the other one being used as a "general purpose" TLB.
> > >
> > > - If ALL page tables are in memory, does that mean the CPU must go to the RAM to check the
> > > page table of the L2 cache? That seems INCREDIBLY inefficient... Shouldnt the L2 cache's
> > > page table be close to the L2 cache itself? That seems most logical and efficient...
> > >
> > > Looking forward to your answer as always!
> >
> > You seem to be confused by the interaction between caching and TLBs. While many modern CPUs
> > have virtually indexed L1 data caches (as an optimisation) it is probably not helpful to your
> > understanding here. So consider a hypothetical CPU with only physically indexed caches, a
> > single level of TLB, a 32 bit address space, 4KiB pages and a hardware pagewalker:
> >
> > The CPU decodes an instruction LOAD [0x12345 (74565 decimal)]. Paging is enabled so this is a virtual
> > address, i.e. the load unit must translate it to a physical address first. It queries the TLB (let's say it's
> > a miss) so the hardware page walker kicks in. It subdivides the address into three parts:
> >
> > The top 10 bits (0000000000) are all zero -> Page Directory Index
> > The next 10 bits(0000010010) equal 0x12 (18 decimal) -> Page Table Index
> > The low 12 bits (001101000101) equal 0x345 (837 decimal) -> Page offset
> >
> > The system register "SYSEXAMPLE1" points to the 4KiB aligned page directory (let's say it's in
> > page 5), so to find the page table we need to load the 4-byte (32-bit) entry from address:
> > (4096*5) + (4*0) i.e. the 1st (zero) Page Directory Index (see above) in the 4KiB page directory in page 5.
> >
> > This (as all subsequent memory accesses) may or may not be
> > cached so could miss to RAM - but this is irrelevant
> > to our discussion so for this example just assume all cache levels have just been flushed and we are always
> > accessing RAM. Next we extract the relevant 20 bits from this
> > PDE (the other 12 can be used for access/permission
> > flags etc.) to obtain the 4KiB aligned page table (let's say it's in page 50 decimal), so we then read the
> > 4-byte entry starting from address (given by Page Table Index 18 from above):
> > (4096*50) + (4*18) and again use 20 bits as a page address (this time for the data
> > page itself). Let's say the PTE says the data page is at page 100 decimal. So the
> > final physical address is (4096*100) + (837 i.e. the page offset) = 410437.
> >
> > So virtual address 0x12345 (74565 decimal) is actually at physical address 0x64345 (410437).
> > We had to hit memory twice JUST TO CALCULATE THE ADDRESS! This is very slow, so to avoid
> > doing it again we cache the result in our TLB (Entry 1 : 0x12000 -> 0x64000; note the
> > bottom 12 bits are not needed as we are working with 4KiB aligned, 4KiB sized pages).
> > Now we can start to actually issue the read to address 0x64345 to the cache memory hierarchy.
> >
> > What did you notice about the role of the caches in this process? Apart from possibly caching a few page
> > tables, they had no role whatsoever! Also, until the AGU/load unit has completed the translation, the caches
> > are not even aware of what we are asking for (they are indexed by the physical address just like RAM) so
> > the TLBs have not had a direct bearing on the caches either (although the page walker issues PTE loads
> > via them i.e. it is a client of the caches itself). The
> > TLBs and caches are two totally unrelated structures
> > on our chip. So if you are asking what TLBs cache level 1/2/whatever has, the answer is ALWAYS NONE, as caches
> > don't own the TLBs; the load/store unit does. I.e. TLB access is upstream of cache access in the memory
> > pipeline. All the caches in this chip are physically indexed so do not care what goes into the TLB at all,
> > only the resulting physical address is used in cache indexing, memory addressing etc. That means the caches
> > have no need of the TLB data to track their contents or read/flush to RAM.
> >
> > ------------------------------------------------------------------------------------
> > Now if you actually get that (and yes, page walking is incredibly inefficient - that's why we have TLBs) then
> > we can consider a virtually indexed, physically tagged L1 cache. Caches are composed of fixed-size cache
> > lines arranged in sets of ways. Consider a virtually indexed, 64-byte line, 32KiB, 8-way cache.
> >
> > There are 32768/64 = 512 cache lines in our cache
> > The are 512/8 = 64 sets in our cache of 8 ways each
> >
> > So the low 6 bits (0-5) of an address are just the byte offset in the 64-byte cache line.
> > The next 6 bits (6-11) give us our set. Each set has 8 possible cache lines for us to go in, and these
> > are managed by a pseudo-LRU type algorithm. Notice we need only the low 12 bits of the address (same
> > as the 4KiB page offset above - this is not an accident) to find our place in the cache. To determine
> > the correct way (since many different addresses have the same bits 6-11) we need to check the TAG held
> > against the cache line, as this holds the full physical address (minus the 12 bits covered above), so
> > we can see which way (if any) holds our data by checking the 8 TAGs in our set in parallel.
> >
> > TLBs are also often arranged either as an N-way CAM type memory or in sets and ways like a cache, and can
> > often take nearly as long to look up as an L1 cache fetch. So rather than index the cache with the physical
> > address from the TLB we can index the L1 with the pre-translated virtual address so we can start it in
> > parallel with the TLB lookup. Note we still need to get the physical address back to do the final compare
> > with the TAG but this is an optimisation that can save a few clocks load to use latency (if we get a TLB
> > hit). However, it only makes sense for the L1 since we already have the physical address before we need
> > to query the L2 (i.e. a miss in L1) so there is no advantage in making the L2/3 virtually indexed. So
> > please don't confuse this clever "cheat" with some kind of TLB-cache association; it isn't.
> >
> > Ideally a CPU should have enough TLBs to completely cover all its caches but this is not always the case
> > since as you make the TLB bigger it gets slower. So often you have a second larger and slower (L2) TLB that
> > is queried on a miss to the L1 TLB (probably in parallel
> > to the page walker starting up). Also you may choose
> > to cache PDE entries (i.e. page table addresses) in your
> > TLB or even in a special cache for the page walker.
> > But all of this is implementation detail and doesn't really change the core algorithm.
> >
>
> Oh boy... Well that is certainly a LOT to take in... Let's see if I can recite
> what you guys said in my own words and understanding. There are still some
> "blanks" though... Perhaps you guys can help fill them in with me?


You're continuing to *way* overcomplicate the relationship between paging (and the TLB) and the caches.


> So an instruction asks the LSU to get ahold of "#12345678", and it asks the LSU to translate the virtual address
> into a physical address (here is a blank. How does it do this? What unit translates the virtual address into
> a physical address, and how?), but at the same time, the LSU can also try loading a line (a line? Here's one of
> my other blanks; (I have 4 in total. A lot I know, sorry; I'm not the sharpest tool in the shed, but I try to
> be ':) ) Why is taking one line and comparing it to the translated physical address the equivalent of comparing
> it to the entire L1 cache? Isn't the L1 cache larger than one line?) and then compares the physical address to
> the cache line, and if they match, then the data is found and the translation goes to the TLB.
>
> If it is not found, then it moves to the L2 cache and uses the physical address there (here's where
> two of my other blanks come in. What does it use to look through the L2 and L3 cache? The page
> walker? Also, if the physical address is already found, why does the LSU have to bother going from
> L1>L2>L3>RAM when it can just go directly to where the physical address is since it already has
> it? Or does the physical address have no notion of where it is in the physical memory hierarchy?)
> and if it finds the info, then the data is found; if not, it moves onto the L3.
>
> The process is repeated and if the data is not found, it moves to RAM.
>
> And here's my last blank. It doesn't do the same process in the RAM, does it? Does it use the
> page tables to look up what's in RAM? I'm now confused as to what the page tables do...
>
> Sorry for probably missing a lot of what you guys said; I'm a little
> slow, so please bear with me here. Thanks for the great help!


Not exactly. The instruction is a load or a store, and so is sent by the CPU scheduler to one of the LSUs for execution. How exactly that happens is rather outside the scope of this discussion.

The load or store instruction has some parameters which the LSU uses to compute the address the program wishes to reference. On x86, for example, you might have "mov eax,25(ebx,4*edx)" as the instruction, and the LSU would multiply the contents of register edx by four, then add the contents of ebx and the constant 25 to produce the address of interest, and then move the word at that location into register eax. For the sake of discussion, we're going to assume that calculation comes out to 0x12345678. Other ISAs have different levels of capability in what you can specify. There are ISAs that allow *only* the contents of a single register to flow into the address calculation (in which case it's not much of a calculation!), and others that permit even more complex schemes than x86 does. None of that is relevant, however; the important thing is that the LSU comes up with an address.
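
As a minimal C sketch of that address arithmetic (the ebx/edx values here are made up, chosen only so the sum comes out to 0x12345678):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Made-up register contents, chosen so the result matches the example. */
    uint32_t ebx = 0x0DA740C3;
    uint32_t edx = 0x01234567;

    /* base + scale*index + displacement, as the LSU would compute it. */
    uint32_t address = ebx + 4 * edx + 25;

    printf("effective address = 0x%08X\n", address);   /* 0x12345678 */
    return 0;
}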

That address is sometimes called a logical* address (although the terminology varies by ISA). As far as the LSU is concerned it's the address in memory it's going to want to read. As far as the running program is concerned, it’s the address in the address space it lives in.

Now many processors can be run with address translation on *or* off; some even allow the mode to switch on a regular basis.

Let's assume address translation (DAT) is off. So the logical address is directly a physical address, and can be used as is.

On processors without cache, the physical address is used directly to address the system RAM. There are some complications. For example, you might have several CPU sockets in the system, each with several memory channels, each with several DIMMs. Somewhere there's a mapping of what goes where, so that the system knows to fetch the word at 0x12345678 from the third DIMM on the second memory channel of the first socket. Again, that's not really relevant, that mapping happens down in the bowels of the hardware, and we don't really see it.

If there is a cache, then the process becomes two-step: first, search the cache to see if the requested word (at the given physical address) is in the cache, and then, if it isn't (a "cache miss"), go fetch the word from memory, just as in the no-cache scenario. Quite likely the newly fetched word will be put in the cache, on the assumption that you might need it again later, so a subsequent access can come from the (much faster) cache, rather than going to (much slower) main memory.

The crucial question, then, is how do you search a cache? That's actually two questions: how do you find a candidate entry, and how do you identify it once you've found it? Taking the second question first, the answer is that each cache entry has a "tag" associated with it. That tag contains the physical** address of what the cache entry has in it. In our example, the cache line we're looking for, assuming a fairly typical 64B cache line, would be the one with address 0x12345640***, which is the beginning of the 64B block of memory containing 0x12345678. So once we have a candidate cache entry, we look at the tag, compare it to the physical address we're looking for, and if they match we've got a cache hit.
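
A quick C sketch of that tag arithmetic - chopping the address down to the start of its 64B line and comparing it against a candidate entry's tag:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t paddr = 0x12345678;         /* physical address being accessed */
    uint32_t line  = paddr & ~0x3Fu;     /* start of its 64B cache line     */

    printf("line address = 0x%08X\n", line);    /* 0x12345640 */

    /* A hit means a candidate entry's tag matches this line address. */
    uint32_t candidate_tag = 0x12345640;
    printf("hit = %d\n", candidate_tag == line);   /* 1 */
    return 0;
}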

Big caches can hold literally millions of cache entries. You obviously cannot go searching through the cache sequentially - it would take forever. Even modest sized caches have far too many entries to search sequentially.

You *can* build a cache where there is a hardware comparator on each cache entry, so you can search all the cache entries in parallel. That would be called a fully associative cache. There are three downsides to that. First, it's not particularly fast: the network for broadcasting the tag comparand and collecting the results is large. Second, it's expensive in die area - rather than just storing the address in the tag (which is just additional bits of memory in the cache), you need a comparator per entry (plus the distribution/collection stuff). That greatly increases the required die area for a given sized cache. Third, doing all those comparisons in parallel consumes a great deal of power. Still, some (especially smaller) cache-like structures are implemented as fully associative arrays, as they provide the best hit rates, because there are no conflict misses.

Alternatively, you could use a hash of the address to select a cache entry. Just like a hash table in software, this is fairly quick, and leads to a candidate entry. But since many addresses will hash to the same cache entry, you have to see if the entry you found is the one you're actually interested in. Again, that's the tag comparison. Now we don't really use a complicated hash function (as we would in a software hash table); rather, we just use some of the lower bits of the address. Since the lowest bits address *within* a cache entry, you can't use those, so you use the ones just left of those. We call that a direct mapped cache. So if you had a cache of 1024 64B cache entries, we'd use the lowest 6 bits to find the data within the cache entry, the next 10 bits to select the particular cache entry (remember we have 2**10 or 1024 of those), and the left-most 16 bits (assuming 32-bit addresses) are the tag. So attempting to find the data for 0x12345678, we'd pull off the low 6 bits for later, then take the next 10 bits (0x159) and use those to select cache entry 0x159, and then compare the tag (0x1234) to see if we have the data we're interested in. Direct mapped caches are admirably simple, fast and dense. The problem is that they suffer poor performance due to conflicts. Let's say the program wants to address both 0x12345678 and 0x99995678. The hash of both of those locations will point to the same cache entry, and since the cache entry can only hold one item, the processor will continually be thrashing those back and forth, resulting in poor performance.
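
Here's a small C sketch of that offset/index/tag split for the hypothetical direct mapped cache (1024 entries of 64B, 32-bit addresses):

#include <stdint.h>
#include <stdio.h>

/* 1024 entries of 64 bytes: 6 offset bits, 10 index bits, 16 tag bits. */
int main(void)
{
    uint32_t paddr  = 0x12345678;
    uint32_t offset =  paddr        & 0x3F;   /* low 6 bits: byte within the entry */
    uint32_t index  = (paddr >> 6)  & 0x3FF;  /* next 10 bits: which cache entry   */
    uint32_t tag    =  paddr >> 16;           /* remaining 16 bits: the tag        */

    printf("offset=0x%02X index=0x%03X tag=0x%04X\n", offset, index, tag);
    /* prints offset=0x38 index=0x159 tag=0x1234 */
    return 0;
}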

The solution we've settled on for most cases is to create a hybrid between a direct mapped and a fully associative cache. We call that a set-associative cache. We still use the next few bits (past the sub-entry bits) to select a cache *line*, but now we allow the cache line to contain several cache entries. So what we do is read an entire cache line out of the cache, and then compare the tags for all of the cache entries in that line in parallel. IOW, we do a fully associative lookup *within* the cache line. So for example, if we had a four-way set-associative cache with 1024 lines, we'd still use 0x159 as the index into the cache, but then we'd have *four* entries in that line, which we would search (in parallel) to see if any of them contained the data of interest. So not only can we store the words at 0x12345678 and 0x99995678 in the cache at the same time, there's room for two more words in that line as well. So for a modest performance hit, that allows a vast reduction in conflict misses, and so generally provides almost all of the hit-rate advantages of a fully associative cache, with the speed, power and size advantages of a direct mapped cache.
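
And a toy C sketch of the set-associative lookup itself; the sizes follow the four-way, 1024-line example above, and only the tags are modeled:

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define WAYS  4
#define SETS  1024           /* still indexed by 10 bits */

struct way { bool valid; uint32_t tag; };
static struct way cache[SETS][WAYS];   /* tags only; data omitted */

/* Look up a physical address: the hash (index) picks the line, then all
   tags in that line are compared "in parallel" (here, a small loop).   */
static bool lookup(uint32_t paddr)
{
    uint32_t index = (paddr >> 6) & (SETS - 1);
    uint32_t tag   =  paddr >> 16;
    for (int w = 0; w < WAYS; w++)
        if (cache[index][w].valid && cache[index][w].tag == tag)
            return true;     /* hit in way w */
    return false;            /* miss: go to the next level / memory */
}

int main(void)
{
    /* 0x12345678 and 0x99995678 map to the same line (0x159) but can
       live in different ways, so they no longer thrash each other.   */
    cache[0x159][0] = (struct way){ true, 0x12345678u >> 16 };
    cache[0x159][1] = (struct way){ true, 0x99995678u >> 16 };
    printf("%d %d\n", lookup(0x12345678), lookup(0x99995678));  /* 1 1 */
    return 0;
}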

Now if we have multiple levels of cache, things don't really change, except that we repeat the process as many times as we have cache levels. So if we're looking for physical address 0x12345678, we look in the L1. If we don't find it there, we look in the L2, which uses the same sort of set-associative design to look up potential cache entries, and then compares their tags to the physical address. And again for the L3, L4, etc. And finally, if we miss in all the caches, it's off to main memory as usual.

You’ll notice that the above didn’t say a word about virtual addresses. There’s a reason for that – for the most part the caches don’t care at all.

Now go back to the LSU, but this time we're running with DAT on. The LSU generates logical address 0x12345678, and since we're running DAT on, it's actually a virtual address. So the LSU passes the virtual address to the address translation unit, which translates it to a physical address. Let's say it ends up as 0x88888678. Then that physical address is passed to the L1, and the above process completes exactly as it did before, except using 0x88888678 instead of 0x12345678. Again, the caches don't care.
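
A minimal C sketch of that ordering - translate first, then hand only the physical address to the cache lookup; the one-entry mapping inside translate() is invented purely to reproduce the 0x12345678 -> 0x88888678 example:

#include <stdint.h>
#include <stdio.h>

/* Invented one-entry mapping, standing in for the translation unit:
   virtual page 0x12345 -> physical page 0x88888 (4KB pages).        */
static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> 12;
    uint32_t ppn = (vpn == 0x12345) ? 0x88888 : vpn;  /* identity otherwise */
    return (ppn << 12) | (vaddr & 0xFFF);
}

int main(void)
{
    uint32_t vaddr = 0x12345678;
    uint32_t paddr = translate(vaddr);            /* DAT on: translate first */
    printf("0x%08X -> 0x%08X\n", vaddr, paddr);   /* 0x12345678 -> 0x88888678 */
    /* Only paddr is used to index the caches and compare tags,
       exactly as in the DAT-off case.                            */
    return 0;
}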

You'll also notice that the above made no mention of anything called a TLB. That's because it's an implementation detail which doesn't matter to either the LSU (which just generates logical addresses) or the caches (which use physical addresses).

Now TLBs are hugely important for performance, but they're just caches for the translation unit. Without well-performing TLBs (and we're usually looking for hit rates on the order of 99.9%), the translation unit would have to walk the page tables for every memory access, which would be horrendous. Depending on the ISA, you may have hardware walking the page tables, or software, or some combination of the two. In either case the walker feeds the TLB. A software walker is called by the hardware via an exception/interrupt, and uses ordinary memory access instructions to walk the page tables (usually a software walker is entered in DAT-off**** mode, so those "ordinary" memory accesses are not themselves subject to address translation; IOW, they're physical addresses). So page table entries cache just like any other data in memory. A hardware-based walker *also* generates normal memory references, although it's done internally to the processor. A hardware page walker may have a back door to one of the "normal" LSUs, or may have its own dedicated LSU-like hardware for generating those memory accesses. Again, page table accesses from the walker are usually fairly normal memory accesses (usually via physical addresses), and those cache as per the normal system caching policy.
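
For the walk itself, here's a toy C sketch of a two-level walk, the same shape as the 32-bit example quoted above; the flat ram[] array and the page numbers (directory in page 5, table in page 50, data in page 100) are made up just to show the two dependent memory reads a walker performs:

#include <stdint.h>
#include <stdio.h>

#define PAGE  4096u

/* Toy "physical memory" and a page-directory base, standing in for RAM
   and CR3-style state.                                                  */
static uint32_t ram[64 * PAGE / 4];
static uint32_t pd_base = 5 * PAGE;

static uint32_t phys_read32(uint32_t paddr) { return ram[paddr / 4]; }

/* One page walk: two memory reads, like a two-level x86-style walk.
   A real walker also checks present/permission bits in the low 12 bits. */
static uint32_t walk(uint32_t vaddr)
{
    uint32_t pde = phys_read32(pd_base + 4 * (vaddr >> 22));                    /* level 1 */
    uint32_t pte = phys_read32((pde & ~0xFFFu) + 4 * ((vaddr >> 12) & 0x3FF));  /* level 2 */
    return (pte & ~0xFFFu) | (vaddr & 0xFFFu);
}

int main(void)
{
    ram[(5 * PAGE + 4 * 0) / 4]   = 50 * PAGE;   /* PDE 0  -> page table in page 50 */
    ram[(50 * PAGE + 4 * 18) / 4] = 100 * PAGE;  /* PTE 18 -> data page 100         */

    printf("0x12345 -> 0x%X\n", walk(0x12345));  /* 0x64345, i.e. 410437 */
    return 0;
}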

TLBs cache completed translations. Physically they are implemented very similarly to caches (although the search criteria, and hence what's in the tags, are a bit different). Fully associative TLBs used to be common, but as TLBs have grown, set-associative has become more common. TLBs can have multiple levels as well, just as ordinary caches can. So a translation unit handling a translation request might check the level-1 TLB, then the level-2 TLB (neither of which should be read as implying any relationship to the L1 or L2 caches), and then punt to the page walker to build the translation by walking the page tables via fairly conventional memory accesses. Because these are ordinary memory accesses, the data in the page tables tends to cache, and thus the page walker can take advantage of the page table data being in the caches, although there's very little special about that - the walker just issues an access via the LSU it's using, and that goes through the caches and memory pretty much like any other memory access does.
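
As a sketch of that lookup order (level-1 TLB, then level-2 TLB, then the walker), here's some toy C; the tiny direct-mapped arrays and the walk_vpn() fallback are stand-ins, not how a real TLB is organized:

#include <stdint.h>
#include <stdio.h>

/* Tiny direct-mapped stand-ins for a two-level TLB; real TLBs are fully
   or set associative, but the lookup order is the point here.           */
struct tlbe { uint32_t vpn, ppn; int valid; };
static struct tlbe l1tlb[16], l2tlb[256];

/* Invented fallback, standing in for the page walker of the earlier sketch. */
static uint32_t walk_vpn(uint32_t vpn) { return vpn + 0x1000; }

static uint32_t translate(uint32_t vaddr)
{
    uint32_t vpn = vaddr >> 12, ppn;

    if (l1tlb[vpn % 16].valid && l1tlb[vpn % 16].vpn == vpn)
        ppn = l1tlb[vpn % 16].ppn;                    /* level-1 TLB hit  */
    else if (l2tlb[vpn % 256].valid && l2tlb[vpn % 256].vpn == vpn)
        ppn = l2tlb[vpn % 256].ppn;                   /* level-2 TLB hit  */
    else
        ppn = walk_vpn(vpn);                          /* walk page tables */

    /* Completed translations are cached for next time. */
    l1tlb[vpn % 16]  = (struct tlbe){ vpn, ppn, 1 };
    l2tlb[vpn % 256] = (struct tlbe){ vpn, ppn, 1 };
    return (ppn << 12) | (vaddr & 0xFFF);
}

int main(void)
{
    printf("0x%08X\n", translate(0x12345678));  /* misses, walks, fills the TLBs */
    printf("0x%08X\n", translate(0x12345678));  /* now a level-1 TLB hit         */
    return 0;
}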

So there’s really no logical connection between address translation and the caches. Translation to a physical address happens before the caches (or memory) are accessed. TLBs make that translation faster.

OK, so that's a (very slight) lie (actually two - the second is the performance improvement that the page walker sees from the ordinary caches, as described above). There is a trick you can do to allow the TLB lookup and the L1 lookup to overlap significantly. If you consider our example access to 0x12345678, on a system with 4KB pages, it's obvious that the low 12 bits of the address are not modified by translation. So if you limit yourself to those 12 bits, you can index into the cache and read the cache line *before* the translation gets done. The problem is that there are a limited number of bits there. Again, assuming 64B cache entries (which use six address bits), you only have six bits left to index into your cache. So you can have at most 64 cache lines. Now with a direct mapped cache your cache capacity would be 64*64 bytes (or 4KB), the size of a page, since each cache line***** only contains a single cache entry. But if you have a set-associative cache, each cache line can have several cache entries. So with a 4-way set-associative L1, you have 64 cache lines (since we're still stuck with only six available bits for indexing), but each line now contains four 64B cache entries, for a total of 16KB. With that organization you can read the cache line using only parts of the address that do not depend on translation. But once you've read that line, you still need to compare the tags - those comparisons *do* need the translated address. So you start the TLB access and the L1 cache lookup at the same time, and then the translation (hopefully there will be a hit in the TLB) will complete around the time the read of the cache line completes, and then you have the translated (aka physical) address to compare to the tags for the cache entries in the cache line. So even here there is not much actual association between the TLB and the L1, rather a trick to let the first part of the L1 lookup proceed before you have the translation, coming simply from limiting the index to the low 12 bits of the address, which are not altered by translation.
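
A last little C sketch of why that trick works: with 64 lines of 64B entries, the index bits (6-11) sit entirely inside the 4KB page offset, so the virtual and physical addresses agree on them; only the tag compare needs the translated page number (the 0x88888678 value is the example translation from above):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 16KB, 4-way, 64B entries: 64 lines, so the index is bits 6..11 --
       entirely inside the 4KB page offset (bits 0..11).                 */
    uint32_t vaddr = 0x12345678;   /* virtual address from the LSU   */
    uint32_t paddr = 0x88888678;   /* after translation (example)    */

    uint32_t vindex = (vaddr >> 6) & 0x3F;
    uint32_t pindex = (paddr >> 6) & 0x3F;

    /* Same index either way, so the line can be read before the TLB
       answers; the tag compare still needs paddr's page number.      */
    printf("index from vaddr=0x%02X, from paddr=0x%02X, tag=0x%05X\n",
           vindex, pindex, paddr >> 12);   /* 0x19, 0x19, 0x88888 */
    return 0;
}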

In systems where there are multiple caches at a given level (for example, it's common to have separate instruction and data L1 caches, but shared lower-level caches), there is usually a TLB associated with each access path, namely one on the instruction fetch path and one on the data access path (the LSUs).

Now there are a bunch of additional complications in real systems, especially when you have multiple cores, or have multiple caches at the same level and need to keep their contents coherent, or in how translation relates to protection, but those are all embellishments on the above.




*Ignoring that many ISAs provide instructions that address specific address spaces - usually those are for the convenience of the operating system, and not available to applications.

**You can build caches that are accessed via virtual addresses, but they introduce a number of issues to the system, and are, in any event, rare beyond the first level of cache (where being able to search the cache without doing address translation provides a performance advantage). The limited size L1 approach described above effectively lets you do the first part of the L1 access with the “virtual” address, because you’re only using address bits that are the same in the virtual and physical address.

***In practice, you don’t need quite that many bits in the tag, since the bits used to index the cache have effectively already been compared (which only applies to non-fully associative caches). So on a cache with 64 lines and 64B entries, the actual tag contents would be 0x12345 with none of the low 12 bits actually stored.

****That’s not universally true, but it introduces complications not relevant here.

*****Conventionally you don’t distinguish between cache lines and entries on a direct map cache, since they’re one and the same.