By: Paul A. Clayton (paaronclayton.delete@this.gmail.com), January 30, 2013 9:08 pm
Room: Moderated Discussions
Patrick Chase (patrickjchase.delete@this.gmail.com) on January 30, 2013 7:25 pm wrote:
[snip]
> Most people in academia and industry who do this sort of thing use cycle-accurate simulators.
Unfortunately, cycle-accurate system simulation (including a realistic memory controller [one not especially old presentation claimed that some "cycle-accurate" simulators use a single fixed value for main memory latency] and I/O) is relatively slow. Of course, that slowness would probably not be a big deal for evaluating the effect of a single parameter at four values.
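As a toy illustration of why a single fixed main-memory latency can mislead (a minimal sketch of my own, not anything from the article; all cycle counts and the row-index split are made-up values), compare a fixed-latency model against even a crude open-row DRAM model:

```python
# Contrast a single fixed main-memory latency with a crude open-row
# DRAM model. Latencies below are illustrative, not from any datasheet.

FIXED_LATENCY = 100      # cycles: the "single value" approach
ROW_HIT_LATENCY = 40     # access to the currently open row
ROW_MISS_LATENCY = 140   # precharge + activate + access

def fixed_model(addresses):
    """Charge every access the same latency."""
    return FIXED_LATENCY * len(addresses)

def open_row_model(addresses, row_bits=12):
    """Track one open row; hits are cheap, misses are expensive."""
    open_row, total = None, 0
    for addr in addresses:
        row = addr >> row_bits
        if row == open_row:
            total += ROW_HIT_LATENCY
        else:
            total += ROW_MISS_LATENCY
            open_row = row
    return total

# A row-friendly sequential stream vs. a row-hostile scattered stream:
seq = list(range(0, 64 * 64, 64))           # 64 accesses within one row
scattered = [i * 4096 for i in range(64)]   # 64 accesses, new row each time

# The fixed model charges both streams identically (6400 cycles);
# the open-row model separates them (2660 vs. 8960 cycles).
```

The fixed-latency simulator reports the same memory time for both access patterns, while the open-row model shows a greater than 3x spread, which is exactly the kind of effect that matters when evaluating memory-sensitive parameters.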
The wording of the article also gave me the impression that actual hardware was used, but in hindsight you are most likely correct that it was simulation.
[snip]
> The simulator is actually preferred, because you can precisely control things like bus/memory
> latencies and posted transaction counts. That makes them inherently more repeatable (and more
> representative, if you know what you're doing) than running on, say, an eval board.
From the little I have read, early revisions of eval boards not infrequently have performance bugs (or configuration documentation poor enough to have a similar effect).
The ability to tweak parameters and repeat runs can certainly be helpful. However, repeatability also has a disadvantage: it does not expose "random" factors such as physical page placement or the relative timing of execution phases in multiprocessing.
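To make the page-placement point concrete (a toy sketch with made-up cache parameters, not from the article): with a physically indexed cache, which physical frames the OS happens to hand out can by itself decide whether two hot pages alias and thrash.

```python
# Toy 16 KiB direct-mapped, physically indexed cache: the same access
# pattern sees wildly different miss counts depending purely on which
# physical page frames the OS assigned. Sizes are illustrative only.

SETS, LINE, PAGE = 256, 64, 4096   # 256 sets * 64 B lines = 16 KiB

def misses(phys_addrs):
    """Count misses in a direct-mapped cache (set index -> stored tag)."""
    cache, m = {}, 0
    for a in phys_addrs:
        s, t = (a // LINE) % SETS, a // (LINE * SETS)
        if cache.get(s) != t:
            m += 1
            cache[s] = t
    return m

def run(frame_a, frame_b, iters=1000):
    """Alternate accesses to the same offset in two physical page frames."""
    trace = []
    for _ in range(iters):
        trace.append(frame_a * PAGE)
        trace.append(frame_b * PAGE)
    return misses(trace)

# Frames 0 and 1 land in different sets; frames 0 and 4 alias (the cache
# spans four page frames), so the identical program suffers 2 misses or
# 2000 misses depending only on page placement.
lucky = run(0, 1)
unlucky = run(0, 4)
```

A perfectly repeatable simulator with one fixed placement would report only one of these two outcomes, hiding the run-to-run variance real hardware would show.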
I think the ideal would be hardware with limited configurability for doing gross evaluations (particularly of software). Analytical models, functional simulation, and cycle-accurate simulation all have their places (in my ignorant opinion), but being able to adjust cache sizes, issue queue sizes, etc. on real hardware would seem to allow very fast exploration of certain factors. There might even be a place for FPGA-based evaluation methods, which could be facilitated by the availability of a "cloud" service: universities already have compute clusters and commercial compute services are available, but comparable FPGA-based services seem not to be well established. In theory, such a service could resemble the platform-evaluation access vendors offer to high-end machines.
Anyway, thanks for the heads-up.