By: anon (anon.delete@this.anon.com), July 15, 2013 4:43 pm
Room: Moderated Discussions
David Kanter (dkanter.delete@this.realworldtech.com) on July 15, 2013 1:57 pm wrote:
> Steve (sberens.Throwaway.delete@this.gmail.com) on July 14, 2013 8:16 pm wrote:
> > Wilco (Wilco.Dijkstra.delete@this.ntlworld.com) on July 12, 2013 11:59 am wrote:
> > > AnTuTu apparently fixed their benchmark: http://www.eetimes.com/author.asp?section_id=36&doc_id=1318894&
> > >
> > > The RAM score halves on Atom but ARM scores remain the same. Apparently they
> > > still use ICC, but hopefully AnTuTu will review this given this debacle.
> > >
> > > Wilco
> > >
> >
> > Funny how it is only the ICC compiler from Intel that dropped code that tested the RAM.
> >
> > It was an Intel problem all along not AnTuTu.
> >
> > Intel: oops, got caught cheating again, so now direct the flak at AnTuTu
>
> Newsflash: That's a perfectly legal, intelligent and reasonable optimization. Get used to it.
>
> It's 100% a benchmark problem that they have crap code lying around.
It is true that Intel did not "drop code that tested the RAM" as such; that would be a compiler bug. Rather, the compiler found some transformation, permitted by the semantics of the high-level language, that lets the emitted code execute more quickly. So in fact Intel "_optimized_ the code that tested the RAM" (to the point where it no longer did what the author expected!).
Optimizations are what a compiler is all about, after all.
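To make that concrete, here is a minimal sketch in C (my own illustration, not AnTuTu's actual code) of how a "RAM test" can be legally transformed into something that barely touches RAM:

    #include <stdio.h>

    #define N (1 << 20)
    static int buf[N];

    int main(void)
    {
        long long sum = 0;
        int i;

        /* Intended: stream N writes and then N reads through memory. */
        for (i = 0; i < N; i++)
            buf[i] = i;
        for (i = 0; i < N; i++)
            sum += buf[i];

        /* Under the "as-if" rule the compiler may fuse the two loops,
           forward the stores straight to the loads, or even reduce the
           whole thing to sum = (long long)N * (N - 1) / 2 at compile
           time. All of that is legal, and none of it measures memory
           bandwidth. Only this observable output must be preserved. */
        printf("%lld\n", sum);
        return 0;
    }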
However, there are two issues with these benchmark numbers.
The first is that the x86 code was compiled with a special compiler and tuned flags that did not represent the actual build process, while the ARM build used a generic compiler and poor generic flags. Unless those are Android's out-of-the-box build settings, this is already dishonest IMO.
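For illustration only (these are hypothetical command lines, not the flags AnTuTu actually used), the kind of disparity I mean looks like this:

    # x86 build: vendor compiler, aggressive arch-specific tuning
    icc -O3 -xSSE4.2 -ipo bench.c -o bench_x86

    # ARM build: stock NDK gcc with conservative generic flags
    arm-linux-androideabi-gcc -O2 -march=armv5te bench.c -o bench_arm

A fair comparison would either use each platform's production toolchain and flags, or the same compiler family at the same optimization level on both.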
The second is the code transformations and the big speedups. If these came about through the general optimization process, the compiler did nothing wrong. Optimizations change from being "legal, intelligent and reasonable" to "cheating" when the compiler recognizes a specific pattern in the benchmark and applies transformations that are not generalized (because they would be too expensive to compute for a more general pattern), or that rely on specific knowledge of that particular benchmark (e.g. "if we see this loop in one place, we know the subsequent data accesses will be sequential, because that is what the benchmark does").
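The line between the two is easier to see in code. Here is a sketch (again my own, in C) of general idiom recognition, which is fine, next to what a benchmark-specific hack would amount to:

    /* General optimization: this copy loop matches an idiom that
       fires on *any* loop of this shape, so a compiler replacing
       it with a call to memcpy() (or memmove() when overlap is
       possible) is legitimate. */
    void copy(int *dst, const int *src, int n)
    {
        int i;
        for (i = 0; i < n; i++)
            dst[i] = src[i];
    }

    /* Cheating: the compiler matches this exact benchmark routine
       (by structure, or even by name) and substitutes a special
       code sequence that is only valid because the compiler
       writers know what this particular benchmark does next. */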