By: Ganon (anon.delete@this.gmail.com), March 31, 2021 7:52 pm
Room: Moderated Discussions
First, TSX is being used by SAP HANA; here is a more recent (2017) paper from SAP folks directly mentioning it:
SAP HANA – The Evolution of an In-Memory DBMS from Pure OLAP Processing Towards Mixed Workloads
"
As a last optimization HANA uses Intel TSX operations to increase scalability. These operations are especially important in the B-tree used for the lookup of value IDs in the delta fragment because the delta fragment is accessed both for read operations and also updated concurrently by any update operation. In [KDR+14] it is shown that using TSX results in far better scalability with the number of concurrent threads. This is especially important on systems with a growing number of available cores as we see them already today in deployments of SAP HANA.
"
The claim that this improvement only shows up in toy benchmarks is false.
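To make the pattern concrete, the read side of such a lookup typically looks something like the sketch below. This is just an illustrative outline, not SAP HANA's code: Node, Key, lookup_impl and tree_lock are made-up names, and the abort handling is simplified. The _xbegin/_xend/_xabort intrinsics are Intel's RTM interface from immintrin.h (build with e.g. g++ -mrtm):

    #include <immintrin.h>   // _xbegin/_xend/_xabort (Intel RTM)
    #include <atomic>

    struct Node { /* B-tree node; details don't matter for the sketch */ };
    using Key = int;
    int lookup_impl(const Node* root, Key key);  // ordinary sequential lookup

    std::atomic<bool> tree_lock{false};   // fallback lock, rarely taken

    int lookup_value_id(const Node* root, Key key) {
        for (int attempt = 0; attempt < 3; ++attempt) {
            unsigned status = _xbegin();
            if (status == _XBEGIN_STARTED) {
                // Reading the lock word puts it into the transaction's read
                // set, so a writer grabbing the fallback lock aborts us
                // automatically instead of racing with us.
                if (tree_lock.load(std::memory_order_relaxed))
                    _xabort(0xff);
                int id = lookup_impl(root, key);  // plain sequential code
                _xend();
                return id;
            }
            if (!(status & _XABORT_RETRY))  // hopeless abort (e.g. capacity)
                break;
        }
        // Fallback path: classic spinlock, correct but serialized.
        while (tree_lock.exchange(true, std::memory_order_acquire))
            _mm_pause();
        int id = lookup_impl(root, key);
        tree_lock.store(false, std::memory_order_release);
        return id;
    }

The attraction is that lookup_impl is just the plain sequential code; the lock word sitting in the read set is what keeps transactional readers coherent with fallback-path writers.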
In fact, in
https://static.sched.com/hosted_files/bkk19/3c/BKK19-202_New-Technologies-in-Arm-Architecture.pdf
ARM mentions
"Improved competitiveness with other architectures that support HTM."
as a reason for introducing TME. How can anyone interpret this as anything other than the lack of HTM having been a blocker keeping large clients who already use Intel RTM/HLE from adopting ARM?
Second, regarding the straw-man characterization that transactional memory was supposed to solve all concurrent programming challenges: I don't think anybody sane actually believed that. Hardware transactional memory is merely meant to simplify things under a limited set of conditions (compared to doing lock-free programming, hierarchical/striped locking, etc.):
- You are transacting over no more than a small number of cache lines (say, fewer than 8)
- You specifically designed the data structure and its access patterns with transactional memory and this cache-line limit in mind
- You are targeting a controlled system environment that is known in advance (no separate code path needed for hardware without transactional memory)
Even within these restrictions, the code simplification provides enough value to keep hardware transactional memory around; a sketch of such a case follows.
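Here is what that restricted-but-valuable case can look like: atomically moving an entry between two small buckets touches only a couple of cache lines, and a single RTM transaction replaces the two-lock, deadlock-prone alternative. Bucket and move_entry are made-up names for illustration, and note that a retry limit and a caller-side lock fallback are still required, since RTM is best-effort and a transaction may always abort (the third bullet above is about not needing a separate path for CPUs without TSX, not about skipping abort handling):

    #include <immintrin.h>   // _xbegin/_xend (Intel RTM), build with -mrtm

    struct Bucket {
        int items[7];   // deliberately small: both buckets must fit in the
        int count = 0;  // transaction's read/write capacity (second bullet)
    };

    // Atomically move items[idx] from one bucket to another. With two plain
    // mutexes this needs a lock-ordering rule to avoid deadlock; with RTM
    // both buckets are read and written inside one transaction and no
    // intermediate state is ever visible to other threads.
    bool move_entry(Bucket& from, Bucket& to, int idx) {
        for (int attempt = 0; attempt < 5; ++attempt) {
            unsigned status = _xbegin();
            if (status == _XBEGIN_STARTED) {
                if (idx >= from.count || to.count == 7) {
                    _xend();       // commit the empty transaction
                    return false;  // nothing to move / destination full
                }
                int v = from.items[idx];
                from.items[idx] = from.items[--from.count];  // swap-remove
                to.items[to.count++] = v;
                _xend();
                return true;
            }
            if (!(status & _XABORT_RETRY))
                break;  // persistent abort: give up on the transactional path
        }
        return false;   // caller falls back to its locking path
    }

The body of the transaction is exactly the code you would write single-threaded; that is the whole point.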