By: Linus Torvalds (torvalds.delete@this.linux-foundation.org), July 8, 2015 8:23 am
Room: Moderated Discussions
dmcq (dmcq.delete@this.fano.co.uk) on July 7, 2015 2:38 pm wrote:
>
> I think ARM would have been better off without those rules at all and just depended on memory barrier instructions.
> I guess they were left in because of having to cope with systems like Linux which were written for x86 and
> don't properly describe for the hardware what is really required.
Bzzt, wrong.
Linux actually copes with weak memory ordering fine. Even the lack of data dependencies. We support alpha, after all, which has exactly that "only memory barriers" model.
And that model is pure and utter garbage. All it results in is software having to add insane memory barriers that don't actually matter 99% of the time (because it's actually hard for hardware to screw up a dependency ordering), which in turn just makes normal code go slower - because the piece of crap hardware that doesn't normally do memory orderings usually goes much slower when you say "uhhuh, now you need to be careful".
Plus it results in bugs, because code that works on one microarchitecture may not work on another.
Now, the weak memory ordering people go "but that code was buggy to begin with", which just shows that they don't understand software. Show me a piece of bugfree software, and I will show you a piece of useless software. Seriously. Bugs happen. Even when you're careful. The way those bugs get noticed is by testing. If you cannot accept that, then you should take up another vocation. Maybe farming.
If your architecture model is designed so that testing is basically meaningless because your architecture under-defines behavior, then your architecture is objectively inferior. Really. It's that simple. It's not a "balance between software and hardware" or a matter of opinion. It's a cold hard fact.
The whole objectively inferior thing has some really nasty interaction at a high level too. The broken ARM and Power memory model causes pain right now for the C standards committee that is trying to come up with ways to do efficient threading. Those people are sadly bending over backwards to make things architecture-neutral, resulting in an objectively worse standard. Oh well.
Anyway, one big reason why the whole "data dependency is an implicit ordering" happens is that (a) it's fairly hard for hardware to screw that up (although it's been done, see alpha), and (b) it turns out that languages like Java actually required some memory ordering. You screw up too badly, and you screw Java (object construction in particular).
But there are other reasons, and debugging is a major one.
Weak memory ordering really is bad.
Linus