By: Brendan (btrotter.delete@this.gmail.com), May 17, 2013 3:03 am
Room: Moderated Discussions
Hi,
Brendan (btrotter.delete@this.gmail.com) on May 16, 2013 11:54 am wrote:
> RichardC (tich.delete@this.pobox.com) on May 16, 2013 6:57 am wrote:
> > Brendan (btrotter.delete@this.gmail.com) on May 16, 2013 12:29 am wrote:
> > > Is it reasonable to expect competent developers to be able to handle that extra complexity when
> > > it's beneficial? I guess this depends on how you define "competent". I'd say "it's definitely
> > > reasonable" (it's not the 20th century anymore) but other people may have lower standards.
> >
> > See this paper http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-1.pdf
> >
> > Key quote from the conclusion: "non-trivial multi-threaded programs are incomprehensible
> > to humans".
>
> Here's another quote: "An early objective was to permit modification of concurrent programs
> via a graphical user interface while those concurrent programs were executing."
>
> I'd be shocked if anyone thought this would be a "risk free" endeavour from the outset.
> The fact that it actually worked with only one deadlock in 4 years is amazing and shows
> that even academics can get an extremely convoluted example of threading 99% right.
OK; I should admit that when it comes to papers like this, I've learnt to be very cynical. Normally it takes ten minutes to see what they want you to think, then several hours of research to figure out how biased they are, in which direction, and why, before you can determine how much of what they're saying is true and how much is hype.
For this paper, what they want you to think is: "If we expect concurrent programming to be mainstream, and if we demand reliability and predictability from programs, then we must discard threads as a programming model."
They treat "a deadlock after 4 years" as a massive failure. I say that's fewer bugs than any sane person would expect from any code, single-threaded or multi-threaded; and their framing indicates a huge bias.
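To make concrete the kind of bug we're arguing about, here's a minimal sketch (my own illustration, not from the paper or their system) of the classic lock-ordering deadlock: two workers take the same two locks in opposite order, so the program only hangs under an unlucky interleaving - exactly the sort of thing that can hide for years.

import threading

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker_1():
    with lock_a:            # holds A, then tries to take B
        with lock_b:
            pass

def worker_2():
    with lock_b:            # holds B, then tries to take A
        with lock_a:
            pass

# Most runs complete; the interleaving "worker_1 takes A, worker_2 takes B"
# leaves both threads waiting on each other forever.
threads = [threading.Thread(target=worker_1), threading.Thread(target=worker_2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

Whether one such latent bug in four years of a deliberately risky experiment counts as damning evidence against threads, or as a pretty good track record, is exactly where we differ with the authors.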
I'm not sure what motivates this bias (I haven't done the several hours of research needed to assess it properly). It could be hype intended to promote their aspect-oriented programming model, or an attempt to make their work seem more important than it actually is (researchers need hype for funding), or other reasons, or a combination of several.
All I'm saying is that one paper that seems heavily biased (for unknown reasons) should not be considered proof of its conclusion.
In addition, even if you do believe their conclusion, they aren't saying that concurrency (multiple cores or SMT) is bad - they're only saying that we need "better" languages and/or tools to deal with concurrency.
Note: I should admit that I'm biased too. My bias is towards the actor model - more specifically, "shared nothing" actors communicating via asynchronous messaging (where N actors are mapped onto M threads/CPUs). Basically, I'm inclined to accept their "discard threads as a programming model" argument, but despite my own bias I still think their paper exaggerates and misrepresents the difficulty of threads.
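As a rough illustration of what I mean (a minimal sketch with made-up names like ActorSystem and Counter - not any particular library, and not claiming this is how anyone's runtime actually works): each actor owns its own state, all communication is done by enqueuing messages, and the actors are pinned round-robin onto a small pool of worker threads.

import queue
import threading

class Actor:
    def receive(self, message):         # override in subclasses
        raise NotImplementedError

class ActorSystem:
    def __init__(self, num_threads):
        # one inbox per worker thread; each actor is pinned to one worker,
        # so its receive() never runs on two threads at once
        self._inboxes = [queue.Queue() for _ in range(num_threads)]
        self._next = 0
        for q in self._inboxes:
            threading.Thread(target=self._run, args=(q,), daemon=True).start()

    def spawn(self, actor):
        # round-robin: pin the new actor to one of the M worker threads
        actor._inbox = self._inboxes[self._next % len(self._inboxes)]
        self._next += 1
        return actor

    def send(self, actor, message):
        # asynchronous: enqueue and return immediately
        actor._inbox.put((actor, message))

    @staticmethod
    def _run(inbox):
        while True:
            actor, message = inbox.get()
            actor.receive(message)      # actor touches only its own state

class Counter(Actor):
    def __init__(self):
        self.count = 0                  # private to this actor, never shared
    def receive(self, message):
        self.count += message

system = ActorSystem(num_threads=2)                       # M = 2 threads
counters = [system.spawn(Counter()) for _ in range(8)]    # N = 8 actors
for c in counters:
    system.send(c, 1)

Because every message for a given actor goes through the inbox of the single worker thread it's pinned to, the actor code itself never needs locks and never touches shared state - the locking lives in the runtime's queues, not in the application.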
- Brendan