By: Paul A. Clayton (paaronclayton.delete@this.gmail.com), April 5, 2021 12:53 pm
Room: Moderated Discussions
[I am posting this here since it applies to the thread more generally.]
The posts here about hardware transactional memory seem reminiscent of discussions about garbage collection. Just as GC may be defined as simply automated memory management or as a more specific mark-and-sweep style of automated memory management (not reference counting, not compiler scope tracking, etc.), so HTM may mean anything from atomic memory operations broadly exploiting optimistic concurrency to particular combinations of software use and hardware support. As GC arguments often focus on correctness, programming ease, and performance without agreement about their relative importance — which will vary by programming context — so posters here seem to talk past each other on these aspects.
Failing to agree on definitions and intent facilitates not only miscommunication between people but also miscommunication within a person (imprecise conceptualization fosters both creativity and miscommunication, internally and externally).
As GC was often presented as a panacea for memory management errors and programming difficulty, so HTM has often been presented as a cure-all for scalability with limited programmer effort and skill. As GC has often been despised by C-oriented programmers for removing control and ruining performance, so HTM seems to be dismissed as useless from the perspective of more manually tuned cases.
I am excessively optimistic about HTM — a consequence of my love of the new and my theoretical orientation — but I think some posters are probably a little overly pessimistic, in part because of practical orientations ("what can I benefit from now") and an emphasis on skilled programmers. (Being repeatedly disappointed by something that seemed promising also does not promote optimism.)
A scientific perspective would love this conflict between theory and actuality; such conflicts often open interesting research opportunities. Some of the conflict is somewhat well known (e.g., software performance counters were mentioned as a difficulty for Azul Systems' HTM), but even there the knowledge seems not to have been adopted into best current practices and a strategy for future development. (Transaction-escaping operations have been proposed for logging and the like, but even there the engineering seems immature.)
Some of the slowness in advancing the state of the art seems to come from weak measurement tools; real-world measurement for multithreaded programming is complex, and optimistic concurrency increases the complexity. Some seems to come from lack of communication (sadly, "la la la la la — I don't hear you" exists, and talking past each other can be mistaken for it); researchers may be ignorant of the nature, scale, or scope of real-world issues, and practitioners may not be positioned (means, motive, and opportunity) to communicate with researchers, both to express concerns and to receive information.
I have some thoughts on HTM, as architecture and microarchitecture, and on how such might be integrated into a system design exploiting speculation/"as if" operation, but my thoughts are not well formed, and the topic is complex and extensive.