Article: ARM Goes 64-bit
By: Wilco (Wilco.Dijkstra.delete@this.ntlworld.com), November 21, 2012 6:44 am
Room: Moderated Discussions
EduardoS (no.delete@this.spam.com) on November 19, 2012 11:41 am wrote:
> Wilco (Wilco.Dijkstra.delete@this.ntlworld.com) on November 19, 2012 9:31 am wrote:
> > That certainly helps, but you're still doing it multiple times on targets which may not have a lot of
> > resources. What is more efficient - every phone compiling every bit of software you download, or just
> > run a highly optimized binary which was compiled once with optimal settings on fast hardware?
>
> Compiling on target seems a reasonable compromise; it allows for new hardware/new hardware features,
> cross-binary optimizations, library updates and compiler updates as well. Still, the compiling
> process will happen only during installation and updates, not at every run.
Install-time compilation would be far better than using a JIT indeed. But you still may not have enough memory/performance for link-time whole program optimization with profile feedback.
> > Cache locality is certainly important, but GC doesn't solve that - in reality it actually makes it worse.
>
> Well... Cache locality improvements with GC are measurable... No need to discuss.
Rubbish. GC is memory inefficient by definition, so claiming it is better for locality is just wishful thinking. Compacting GCs typically need 2-3 times more memory than a non-compacting GC, so are worse on average. Also the much higher memory allocation rate and the resulting collections are bad for locality.
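To make the allocation-rate point concrete, here is a minimal Java sketch (illustrative only; class and method names are mine, and a modern JIT may scalar-replace the temporary via escape analysis, so treat it as a pattern, not a benchmark):

```java
public class AllocationChurn {
    // Allocates a fresh temporary on every iteration: high allocation rate,
    // frequent young-generation collections, and new cache lines touched
    // each time around the loop.
    static long churn(int iterations) {
        long total = 0;
        for (int i = 0; i < iterations; i++) {
            int[] tmp = new int[64];   // fresh object each iteration
            tmp[0] = i;
            total += tmp[0];
        }
        return total;
    }

    // Same computation reusing one buffer: no per-iteration allocation,
    // and the same cache lines are touched on every pass.
    static long reuse(int iterations) {
        long total = 0;
        int[] tmp = new int[64];
        for (int i = 0; i < iterations; i++) {
            tmp[0] = i;
            total += tmp[0];
        }
        return total;
    }

    public static void main(String[] args) {
        // Both compute the same sum; only the allocation behaviour differs.
        System.out.println(churn(1000) == reuse(1000)); // prints "true"
    }
}
```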
> > And that is before we consider the actual overhead of the
> > collection itself, often having to stop all threads
> > for long periods.
>
> No,
>
> 1) A background GC can run on another thread, especially useful for single-threaded software;
Concurrent GC has even larger overheads. It stops threads for shorter periods but stops them more often, so takes far longer overall. And then we haven't considered the far higher overheads on the generated code.
> 2) A full stop-the-world GC is still the least resource-hungry GC, but don't look at it without
> considering that, thanks to this GC, allocations and deallocations are much faster
> and the heap is compacted periodically; in the end, often it is a win.
>
> > Then there is the optimization overhead and extra tables causing code bloat.
>
> And C++ allocators waste space to avoid memory fragmentation... But doing so also hits locality.
No, no space is wasted, unlike GC which requires descriptors for every object.
> > For example arithmetic with overflow,
>
> By default, there is no overflow check; IIRC there is no way to even specify
> overflow checking in Java, while in C# it is optional but disabled by default.
>
> It is more a CPU design fault than a language design fault; more often than not
> an overflow exception is more useful than a mod 2^32 result, but few CPUs
> provide a fast way of checking for overflow, MIPS being a notable exception.
Instructions that can trap are bad. That's why you see modern FPUs implement IEEE so you never need traps, not even for denormals.
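As an aside, Java did later grow an opt-in checked-arithmetic API, which makes the trade-off above easy to see in source: the default `+` wraps mod 2^32, while `Math.addExact` raises `ArithmeticException` on overflow (a small illustrative sketch, not a claim about its compiled cost on any particular CPU):

```java
public class OverflowCheck {
    public static void main(String[] args) {
        // Default Java arithmetic: silent wraparound, the mod 2^32 result.
        int wrapped = Integer.MAX_VALUE + 1;
        System.out.println(wrapped == Integer.MIN_VALUE); // prints "true"

        // Opt-in checked arithmetic: throws instead of wrapping.
        try {
            Math.addExact(Integer.MAX_VALUE, 1);
            System.out.println("no overflow");
        } catch (ArithmeticException e) {
            System.out.println("overflow detected"); // prints "overflow detected"
        }
    }
}
```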
> > array bounds checks,
>
> I was thinking about this one when you mentioned "other features"; sometimes the
> compiler is able to optimize it away, and when not, see the last part of the post.
>
> > null pointer checks, assuming
> > any pointer access may cause an exception,
>
> In x86 .Net this check is "cmp eax, [eax]" with the pointer in eax, on field access there
> is no check at all since it will raise an exception anyway in the case of a null pointer.
>
> Since null pointer checks are so cheap it is not clear which optimizations are disabled
> by them; just put the check where it is needed to keep the correct order.
Since when is a memory access cheap? Every unnecessary instruction has a cost.
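For readers following along: in Java source the null check is invisible, and HotSpot-style JVMs typically emit no explicit compare on the hot path at all; they let a null dereference fault on the unmapped page at address zero and convert the signal into an exception. A sketch of what the language guarantees (class names are mine):

```java
public class NullCheck {
    static class Node { int value = 42; }

    // Reading a field through a possibly-null reference. The semantics
    // require a NullPointerException on null; whether that costs an
    // explicit compare or a hardware trap is up to the JIT.
    static int read(Node n) {
        return n.value;
    }

    public static void main(String[] args) {
        System.out.println(read(new Node()));     // prints "42"
        try {
            read(null);
        } catch (NullPointerException e) {
            System.out.println("null trapped");   // prints "null trapped"
        }
    }
}
```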
> > multithreading support etc etc.
>
> How exactly this lowers performance?
The barriers and other checks for concurrent GC or multithreaded access to fields are not exactly zero-cost and block many optimizations.
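One visible form of this in Java is `volatile`: the memory-ordering guarantee blocks exactly the optimizations a C compiler would apply freely, such as hoisting the load out of the loop or caching it in a register. A minimal sketch (field names are mine):

```java
public class Barriers {
    // volatile forces ordering: the JIT may not hoist the 'ready' load out
    // of the spin loop, and the store to 'payload' must become visible
    // before the store to 'ready'.
    static volatile boolean ready = false;
    static int payload = 0;

    public static void main(String[] args) {
        Thread writer = new Thread(() -> {
            payload = 42;   // ordinary store, ordered before the volatile store
            ready = true;   // volatile store: release semantics
        });
        writer.start();
        while (!ready) { }  // volatile load on every iteration: acquire semantics
        System.out.println(payload); // prints "42", guaranteed by happens-before
    }
}
```

Without `volatile`, the reader loop could legally be compiled to test `ready` once and spin forever; that lost optimization is precisely the cost being discussed.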
> > It's not like languages like Java or C# are new.
>
> New or not doesn't matter; B is older than C++ but didn't have as much effort put into building compilers as C++ has.
By your logic, C++ would be much slower than C as C++ is quite new. Clearly that's not the case - and the simple reason is that much of what goes on inside is unrelated to the source language or the target.
> What Java or C# doesn't have is a SpecCPU for which compilers with ridiculously aggressive
> optimizations are written; just look at the two most popular compilers, Visual C++ and
> GCC, and compare them to the ones used in SpecCPU (ICC, PGI, Open64 and SunStudio).
>
> Visual C++ in particular is very conservative, still, in some workloads
> (of course, other than SpecCPU) it outperforms the others.
>
> Oh, yes, there is SpecJBB, and JVMs there are much more aggressive;
> try comparing their performance there to something else.
Not sure what your point is here. Yes VC++ does extremely well as it is optimized to do well on the millions of lines of code that people actually write (such as Windows and its apps) - as opposed to showing huge gains on a few small benchmarks.
> > It is significantly harder to write a good compiler
> > for them, and even then you can never get close to C++ performance. Many optimizations have to be
> > disabled or turned extremely conservative as an exception or GC may occur at any time.
>
> Strictly speaking, GC may occur during memory allocations; in C++ an exception may happen on a division,
> on pointer dereference and wherever the spec says "undefined behaviour",
Wrong. Exceptions can only happen explicitly with throw in C++.
> managed languages are usually
> more strict about ordering, and it is not obvious weak ordering improves performance by that much.
The problem is not just the ordering, but the fact that more operations can cause exceptions. That alone creates a lot of overhead as you need to model flows from every possible exception to all possible exception handlers. Local variable values need to be preserved for example, severely limiting optimizations.
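A small Java sketch of why that matters (method names are mine): because an array access can throw, the compiler must guarantee that `sum` holds the exact partial result at the faulting access, so it cannot freely reorder or batch the additions past a potential exception point.

```java
public class ExceptionFlow {
    // The catch block observes 'sum', so its value must be precise at the
    // moment a[i] faults; that constraint limits reordering/vectorization.
    static int sumPrefix(int[] a, int n) {
        int sum = 0;
        try {
            for (int i = 0; i < n; i++) sum += a[i];
        } catch (ArrayIndexOutOfBoundsException e) {
            return sum;   // partial sum up to the faulting index
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        System.out.println(sumPrefix(a, 3)); // prints "6"
        System.out.println(sumPrefix(a, 5)); // faults at i=3, prints "6"
    }
}
```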
> > Obviously you could argue all the overhead is worth it as some of the features allow programmers to
> > write code faster. Whether that is a good or a bad thing is a different discussion altogether...
>
> And finally, back to array checks: yes, they reduce performance, and yes, it is a different discussion,
> but frankly, the performance reduction is pretty small and a lot of security bugs would be avoided by
> array bounds checks; it is not something I would leave behind even if performance was a big concern.
The performance cost is high if you happen to use arrays a lot. Even if you think it is worth it, and dismiss each overhead as small enough, many such costs add up to something quite large.
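For reference, here is the case both sides have in mind, sketched in Java (illustrative names): in the canonical loop shape the JIT can prove `0 <= i < a.length` and eliminate the per-access check; with an arbitrary index the check stays, and a bad index raises an exception instead of silently corrupting memory.

```java
public class BoundsCheck {
    // Canonical loop: the bounds check is provably redundant and a good
    // JIT drops it (range-check elimination).
    static long sumFast(int[] a) {
        long s = 0;
        for (int i = 0; i < a.length; i++) s += a[i];
        return s;
    }

    // Index from elsewhere: the check must remain.
    static int get(int[] a, int i) {
        return a[i];
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        System.out.println(sumFast(a));           // prints "6"
        try {
            get(a, 5);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("caught");         // prints "caught"
        }
    }
}
```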
Wilco