By: Linus Torvalds (torvalds.delete@this.linux-foundation.org), April 3, 2021 12:14 pm
Room: Moderated Discussions
sr (nobody.delete@this.nowhere.com) on April 3, 2021 11:30 am wrote:
>
> But whole idea of transactional memory isn't saved cycles from locking - main point
> is to let other threads to use all cachelines that aren't modified by transaction.
No.
The main idea of transactional memory is to improve performance.
Yes, it does so by avoiding bouncing cachelines (and that is mainly by keeping them shared).
But one is fundamental (performance) and one is just a tool to get there (avoid dirtying cachelines).
See?
There is absolutely zero point in avoiding dirtying cachelines in itself. If using an actual honest-to-goodness lock (or incrementing and then decrementing a reference count - another example of something that using a transaction could possibly avoid) and dirtying the cacheline performs better, then that's by definition better than trying to desperately use a (slower) transaction that avoids it.
So the basic and truly fundamental issue is purely about performance. If transactions don't perform better than locking (or atomics), you have entirely missed the whole point. No amount of "but but at least you avoided a dirty cacheline" matters one whit if those dirty cache lines got you better performance.
Which gets us back to my original argument: locking is not necessarily hugely expensive in the common case with little contention. And in a not insignificant number of the cases where lock contention is a real thing, trying to do the same with transactions will fail due to capacity and/or conflict issues.
If your transaction hardware doesn't handle those cases well, your transactional memory hardware is useless garbage and has failed.
Case in point: TSX.
Really. I'm not making some theoretical argument here. I'm making arguments based on undeniable facts. TSX has been around. It hasn't performed.
Go through this thread - yes, despite the fact that there are a lot of variations of "anonymous" here - and read the posts that seem to be by people who have actually tried this out in real life, rather than the posts by people who argue from some armchair theory.
Do you see a pattern? A pattern of "transactional memory didn't work out"? A pattern of reality?
Honestly, after all these failures - on multiple architectures - the burden of proof is not on me, or on the other people who are arguing that transactional memory hasn't been a great success. The burden of proof is on the people who still argue for it, despite all the actual implementation reality, and despite all the history of vendors actually trying.
Linus