By: dmcq (dmcq.delete@this.fano.co.uk), July 13, 2015 3:24 pm
Room: Moderated Discussions
sylt (no.delete@this.thanks.com) on July 13, 2015 2:19 pm wrote:
> dmcq (dmcq.delete@this.fano.co.uk) on July 13, 2015 12:20 pm wrote:
> > You simply don't seem to be able to acknowledge that supporting that is causing more problems,
> > that Linux is part of a feedback loop leading to more buggy programs being produced, and that your
> > attitude is part of the problem, not the solution. Yes, supporting old buggy programs is nice, and
> > that is what Intel has done; it can disavow all responsibility for the problems it encourages
> > by saying it has put in very strong support for the sort of thing you want.
> >
>
> I'm not sure I understand what you are arguing here. If people write software for systems with
> strict memory models and take advantage of that, it can hardly be called a bug, right?
Provided they do it right - and that is a very big assumption that doesn't work out in practice.
> Is it that you think that if everybody wrote software for weak memory models and needed to annotate all loads/stores
> or insert barriers then we would somehow have fewer bugs? I fail to see why people would be better at correctly
> inserting barriers (which to me seems like a fairly hard problem in general) than at all the other things people
> fail to do correctly. It seems like we are just adding one more thing that can go wrong.
I am not talking about taking an idea that is developed at too low a level, as with a strict memory model, and then making it even more likely to fail by annotating it. That's like translating an assembler program into C complete with all the flag updates and then complaining that it is more complicated than the original assembler.
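To make that concrete, here is a minimal sketch of what the annotation looks like in C11 (my own example, not anything from a real codebase). Leave out either memory_order argument and the code still appears to work on x86 while being silently broken on ARM or POWER - exactly the kind of mistake people find hard to avoid by hand:

    #include <stdatomic.h>
    #include <stdbool.h>

    int payload;                /* ordinary data, written before publication */
    atomic_bool ready = false;  /* the annotated shared flag */

    /* producer */
    void publish(void) {
        payload = 42;
        /* release: all earlier writes become visible before the flag does */
        atomic_store_explicit(&ready, true, memory_order_release);
    }

    /* consumer */
    int consume(void) {
        /* acquire: pairs with the release store above */
        while (!atomic_load_explicit(&ready, memory_order_acquire))
            ;
        return payload;         /* guaranteed to read 42 */
    }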
> Or is it that you would like to have weak memory models that detect and fault any violation of ordering
> rules by software that would traditionally result in unpredictable behavior on a weakly ordered system?
> Without thinking too much I could see this being a credible alternative to a strictly ordered model
> and helping to expose some types of bugs in multi-threaded programs etc. However this seems like it
> could very well be harder to do in HW than actually doing strict memory ordering.
There are developments in checking programs like that: programs written in high-level languages that actually say which bits are shared, rather than treating C as an assembler language and depending on outside knowledge about its implementation on a particular machine.
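One such development, to pick a concrete example of my own choosing: ThreadSanitizer (-fsanitize=thread in gcc and clang) will flag the program below at run time, precisely because nothing in the source says counter is shared:

    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;   /* shared between threads, but the source never says so */

    void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            counter++;          /* unannotated read-modify-write: a data race */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("%d\n", counter); /* typically well short of 200000 */
        return 0;
    }

Build it with cc -fsanitize=thread -g and the race is reported even on a run where the count happens to come out right.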
> In any case, my experience with building low-end and very simple processors supports Linus' point of
> view that as the complexity and sophistication of the system increases, the drive for stricter and
> simpler-to-reason-about rules also increases. I have not worked directly on memory ordering issues, but
> for similar issues where it was deemed easiest for the HW to punt the problem to software for simple
> designs, it soon became clear that defining simple intuitive rules and obeying them in HW gave the best
> overall trade-off of performance, complexity and ease of design as the system evolved.
I fully agree with simpler and stricter - at the correct level. Assembler and machine code are not that level.
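A sketch of what I mean by the correct level (ordinary pthreads, nothing exotic): the strict rule lives in the language construct, and the compiler and library emit whatever barriers the particular machine needs:

    #include <pthread.h>

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    long total = 0;

    void add(long x) {
        pthread_mutex_lock(&lock);   /* the strict rule is here: everything  */
        total += x;                  /* between lock and unlock is ordered   */
        pthread_mutex_unlock(&lock); /* and exclusive, on x86 and ARM alike  */
    }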
> Sure, sometimes it feels rotten when some "unnecessary niceties" pop up in the critical paths, but
> after a few generations you are doing much more complicated things anyway for pure performance reasons
> and the "unnecessary niceties" are a side show. After seeing this play out a few times I find it hard
> not to go with strict and easy. This is especially true for us since we deliver in-house and want to
> optimize the productivity of the HW, compiler and firmware teams combined. It's no good shipping some
> blazing fast HW if you have no firmware because nobody can figure out how to program it.
Strict and easy is very good for high-level languages.
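For example (again C11, my illustration): the plain atomic operations default to memory_order_seq_cst, the strictest ordering, so the easy-to-reason-about model is what you get unless you deliberately ask for less:

    #include <stdatomic.h>

    atomic_int x = 0;

    /* atomic_store/atomic_load without an explicit ordering are
       sequentially consistent; on x86 the store becomes roughly an
       XCHG (or MOV plus MFENCE), on ARMv8 an STLR. The strictness
       is in the language, not in the programmer's head. */
    void set(void) { atomic_store(&x, 1); }
    int  get(void) { return atomic_load(&x); }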