By: Maynard Handley (name99.delete@this.name99.org), December 9, 2014 6:56 pm
Room: Moderated Discussions
Patrick Chase (patrickjchase.delete@this.gmail.com) on December 9, 2014 1:54 pm wrote:
> Cherry-picking one bit out of a long post...
>
> Maynard Handley (name99.delete@this.name99.org) on December 9, 2014 11:33 am wrote:
> > Linus Torvalds (torvalds.delete@this.linux-foundation.org) on December 8, 2014 8:08
> > There may well be severe limits to what we can do if we insist on writing every
> > parallel program in K&R C with pthreads. But we are slowly fumbling our way to
> > better abstractions. Blocks/lambdas/futures are still only a few years old, and
> > where they have been retrofitted to existing languages the edges are still pretty
> > obvious (horribly so in the case of C++, just ugly in the case of Objective C or C#).
>
> I think you're grossly overstating the pace of progress here. Blocks/lambdas/futures are very,
> very, VERY old. Lambda calculus was contemporaneous with Shannon's MS thesis and therefore
> as old as the concept of digital computing, and they were added to Lisp in the 50s.
>
> The fact that they are just now seeing mainstream use therefore constitutes extremely powerful evidence that
> progress on the language/algorithm front is agonizingly slow, i.e. exactly the opposite of what you claim.
>
Uhh, wot???
What I claimed was precisely that the reason we (apparently) don't see progress in parallel programming is not that it's largely impossible for most programmers (the Linus et al. claim), but that progress in the field, while steady, is very slow.
But slow is NOT the same thing as stationary...
Moreover, at the same time that there are large old codebases, there are also completely new codebases, some of them substantial, some of them based on very different language paradigms from C+pthreads. Facebook is an interesting example. Twitter started in Ruby and was willing (and required) to move the whole thing to a different (but still managed) language. I'm aware of one prominent Silicon Valley company that's in the process of moving its codebase from JS to Scala.
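To make the "retrofitted edges" point concrete, here is a minimal sketch of my own (not taken from any of the codebases mentioned above) of a lambda plus future in C++11, splitting a reduction across two tasks. The abstraction works, but the capture lists, launch policies, and .get() plumbing show the seams:

  // Minimal C++11 sketch: sum a vector in two asynchronous halves.
  // Each lambda captures by reference; std::async returns a future
  // whose .get() blocks until that half of the work is done.
  #include <future>
  #include <iostream>
  #include <numeric>
  #include <vector>

  int main() {
      std::vector<int> data(1000);
      std::iota(data.begin(), data.end(), 1);   // fill with 1, 2, ..., 1000

      auto mid = data.begin() + data.size() / 2;
      std::future<long> lo = std::async(std::launch::async,
          [&] { return std::accumulate(data.begin(), mid, 0L); });
      std::future<long> hi = std::async(std::launch::async,
          [&] { return std::accumulate(mid, data.end(), 0L); });

      std::cout << lo.get() + hi.get() << "\n"; // prints 500500
      return 0;
  }

Compare that boilerplate with how the same two parallel tasks read in a language where futures are native rather than bolted on, and the slow-but-real progress I'm talking about is exactly the gap between the two.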