By: --- (---.delete@this.redheron.com), August 23, 2022 10:00 am
Room: Moderated Discussions
Andrey (andrey.semashev.delete@this.gmail.com) on August 23, 2022 6:29 am wrote:
> --- (---.delete@this.redheron.com) on August 21, 2022 9:27 pm wrote:
> > Andrey (andrey.semashev.delete@this.gmail.com) on August 21, 2022 6:39 pm wrote:
> > >
> > > The key advantage of transactional memory is atomicity of *multiple* memory accesses,
> > > at potentially distant memory locations. No predictor will give you that.
> >
> > I'm no expert, but this seems to me too strong a claim.
> > I make no claims as to whether it's a good use of transistors, but I could imagine a two
> > level system that starts by detecting patterns of atomics that occur close to each other in
> > time, and that then predicts an overall outcome, all held as speculative in the same
> > way as HTM (i.e. via special "don't propagate this" bits in each cache line)...
>
> Architecturally, two atomic operations are distinct and are not atomic in combination. This is regardless
> of whether the particular hardware manages to somehow commit the two operations as one atomic operation.
> Being architecturally atomic is what is important here because that is what software relies on.
>
> HTM, on the other hand, is architectural (i.e. not speculative). That is, the architecture guarantees,
> within set limits, that a certain sequence of operations will execute atomically.
I understand this, Andrey, but you apparently did not understand the distinction I was trying to make.
What is it that people want from HTM?
- PERFORMANCE (which can, I think, be achieved by speculation, as I suggested), OR
- EASIER writing of code (which can, I think, be achieved by language+compiler, with any theoretical performance left on the table made up by speculation)?
HTM is a means to an end, not an end in itself. But is that end
- performance, OR
- making it easier to write reliable parallel code?