By: bakaneko (nyan.delete@this.hyan.wan), July 12, 2013 10:12 am
Room: Moderated Discussions
Linus Torvalds (torvalds.delete@this.linux-foundation.org) on July 12, 2013 9:11 am wrote:
> bakaneko (nyan.delete@this.hyan.wan) on July 12, 2013 3:28 am wrote:
> >
> > Then just turn the JIT/virtual machine/whatever
> > into a full blown compiler backend already. GPU
> > drivers have one built in, no excuse for the
> > java sandbox not to do it.
>
> The latency concerns make that impossible in practice.
>
> Also, the one advantage of JIT's is that they can take dynamic behavior into account, so you actually want
> the ability to recompile code on the fly. That can help vectorization efforts: you can say "I'm going to assume
> there are no aliases, and trap if they ever happen" and recompile without vectorization on the trap.
>
> But both of these issues very much mean that you want to only JIT fairly small sections of code
> at a time ((a) latency: because you cannot afford the non-linear effects of big code and (b) recompiling:
> because you want to have many small "chunks" that you can re-JIT independently).
>
> In fact, you generally don't want to JIT run-once (or run-few-times) instructions
> at all, because the JIT overhead (even if you don't do any optimizations at
> all you have to manage the memory for the translations) is too big.
>
> So JIT's don't generally want to be anything like a "real compiler". But they do have advantages that
> can make vectorization easier due to the whole "we can try to be optimistic" approach, it's just that
> generally the JIT will have to be pretty quick and simple, so you'd only catch the fairly trivial and
> easy cases. But for those, you might be able to do a better job than a static compiler would.
>
> The thing to really look out for with JIT's are benchmarks, though. Particularly for small benchmarks with
> high repeat-counts, a JIT may do things that are completely unrealistic in the real world. You can get
> some totally unrealistic results that make a JIT look really stunningly good, even when it is complete crap.
> Even more so than with the kinds of tricks static compilers do (as discussed in this whole thread).
>
> Linus
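The "assume no aliases, trap if they ever happen" trick above can be sketched in plain C (purely as an illustration, not JITted code): a cheap runtime overlap check plays the role of the JIT's guard, and the scalar version plays the role of the deoptimized recompile.

```c
#include <stddef.h>
#include <stdint.h>

/* Fallback path: correct even when dst and src overlap,
 * the way the JIT's recompiled-without-vectorization code would be. */
static void add_scalar(float *dst, const float *src, size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];
}

/* Optimistic path: 'restrict' promises no aliasing, so the
 * compiler is free to emit SIMD loads and stores for the loop. */
static void add_noalias(float *restrict dst, const float *restrict src,
                        size_t n) {
    for (size_t i = 0; i < n; i++)
        dst[i] += src[i];
}

/* The guard: if the arrays are disjoint, take the vector-friendly
 * path; on "alias detected", fall back, like a JIT deopt. */
void add_arrays(float *dst, const float *src, size_t n) {
    uintptr_t d = (uintptr_t)dst, s = (uintptr_t)src;
    if (d + n * sizeof *dst <= s || s + n * sizeof *src <= d)
        add_noalias(dst, src, n);
    else
        add_scalar(dst, src, n);
}
```

The difference is that a JIT can skip the guard entirely and install it only if profiling says it is needed; static code has to pay for the check on every call.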
You might only be able to flip a few switches at runtime, but adding information to the software package at compile time, on the developer machine, and letting Android use it to vectorize code at install time would be a big step forward: it lets the OS select the right SIMD opcodes for the device the code actually runs on.
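A minimal sketch of what that selection boils down to, written as a one-time dispatch in C (the feature test here is x86/GCC-specific purely for illustration; Android/ARM would key on NEON and friends, and the decision would happen at install time rather than at startup):

```c
#include <stddef.h>

/* Two builds of the same kernel. In the scheme described above, the
 * package would carry metadata letting the installer make this choice
 * once per device instead of shipping every variant as native code. */
static void scale_generic(float *v, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        v[i] *= k;
}

#if defined(__GNUC__) && defined(__x86_64__)
__attribute__((target("avx2")))  /* compiler may vectorize with AVX2 */
static void scale_avx2(float *v, float k, size_t n) {
    for (size_t i = 0; i < n; i++)
        v[i] *= k;
}
#endif

typedef void (*scale_fn)(float *, float, size_t);

/* "Install-time" selection: resolve once, store the function pointer. */
scale_fn select_scale(void) {
#if defined(__GNUC__) && defined(__x86_64__)
    if (__builtin_cpu_supports("avx2"))
        return scale_avx2;
#endif
    return scale_generic;
}
```

Whichever variant is selected, the call site stays the same, which is exactly what lets the packaged bytecode remain architecture-neutral.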
That kind of information/data format would be a prerequisite anyway for testing out more "dynamic" approaches (whatever those are worth).

The jump from C/assembler to staying inside the virtual machine is a big step up on its own, and it would be necessary to avoid maintaining a lot of code hand-optimized for each SIMD extension and CPU architecture. Even catching only the simple cases is worth it: people have done far worse things in the past to get at SIMD, so this wouldn't change anything for the worse.

Maybe Dalvik already uses vector extensions and I just didn't know about it?