By: anon (anon.delete@this.anon.com), July 13, 2015 9:01 pm
Room: Moderated Discussions
EduardoS (no.delete@this.spam.com) on July 13, 2015 4:48 pm wrote:
> dmcq (dmcq.delete@this.fano.co.uk) on July 13, 2015 3:24 pm wrote:
> > Strict and easy is very good for high level languages.
>
> At such high-level languages, if everything below is so f***ed up, the language only
> has two options: give that complexity to the programmer, which is against the "simple",
> or put barriers everywhere, which will slow things down to unacceptable levels.
>
> Guess what, everybody opted for the first, so Linus is arguing in favor of a sane world where
> things happen in the order the programmer asked for, hardware makers f***ed up and shuffled
> the order completely, compiler makers said: "since everything is already a mess we will not try
> to fix anything, instead we will f*** up a little more and give the problem to the programmer",
> so what you want for high level languages is not going to happen, neither is it for Linus; you
> kind of agree with him but you think this should be accomplished by impossible means.
>
Nah. Guaranteed, 100%, the "average programmer" will fuck things up in a gargantuan way if they try to write concurrent code without languages or libraries to mediate access to shared data. They will not understand x86 memory ordering at all, and it would hardly help much even if they *did* have sequential consistency.
Even when they do use such languages or libraries, they will fuck things up anyway: deadlocks, missed wakeups, use-after-free/refcounting bugs, cross-stack access, missed locking, unscalable locking, etc. etc.
Actually, even the best programmers will write bugs in those cases too, but the difference is that they will be able to understand, debug, and fix the problems.
I see the weak vs. x86 memory-ordering debate as pissing in the wind. Sure, weak ordering might result in *slightly* more bugs, specifically more memory-ordering bugs, but in proportion to the number of concurrency bugs in general that's a pretty damn insignificant difference.

Even most Linux kernel code uses well-defined locking APIs that hide such details away in arch-specific code, and much of the very clever lockless stuff also sits in synchronization libraries or is confined to well-defined data structure libraries. Most *kernel* programmers don't need to know about it, let alone userspace programmers. So I don't get why it's considered such a problem (you might as well claim that SCSI programming sucks because it's complex to write data to the disk, ignoring the fact that nobody outside the few who write that part of the OS needs to know or care about it).