By: dmcq (dmcq.delete@this.fano.co.uk), February 6, 2017 11:07 am
Room: Moderated Discussions
RichardC (tich.delete@this.pobox.com) on February 6, 2017 8:36 am wrote:
> etudiant (etudiant.delete@this.msn.com) on February 4, 2017 2:40 pm wrote:
>
> > Definitely a bot, there is no coherent thought here.
> > Pretty impressive for a bot though, well formed sentences and good grammar, generally vaguely
> > connected to the thread at hand.
>
> I think you're overestimating the normal level of "coherent thought" among the non-bot
> population. In just the same way that many people object to automated driving systems,
> while ignoring the copious statistical and anecdotal evidence that
> humans are really terrible drivers who make many serious mistakes.
I'm surprised there aren't more accidents. There are loads of idiots driving around recklessly at high speed, so it makes sense to spend as little time driving on the road as one can. That's why I always drive as fast as possible ;-)
> It also reminds me a little of the https://en.wikipedia.org/wiki/Sokal_affair, where a
> deliberately nonsensical pseudo-academic article "Transgressing the Boundaries: Towards a
> Transformative Hermeneutics of Quantum Gravity" was accepted and published in the journal
> "Social Text", suggesting that even in some branches of academia, experienced and qualified academic
> editors may accept work devoid of "coherent thought" as being within the acceptable standards of
> their discipline, provided it conforms to the expected grammar, syntax, and vocabulary.
>
> From that perspective, it's a little entertaining to consider a thought experiment where
> the content of Ireland's posts was put into a style close to the RWT norm - fewer words,
> shorter posts, a bit more quantitative data (even if bogus), and also with less polite
> responses to criticism :-). I think there is a genuine - though possibly more sociological than
> technical - point lurking in there, about how the increasing use of complicated and fine-grain
> computer-based models, interacting in dynamic and unpredictable ways, may be exposing us
> to new risks (in the same way that Perrow's "Normal Accidents" did for earlier
> complex technologies such as nuclear reactors and airliners). The part played by
> automated-trading-bots in the 2008 financial crash might be one example.
I think we have a big enough corpus for that experiment now!
> But ... well, this *is* a forum where the short and pithy statement, preferably backed
> by hard numbers and hardcore technical detail, is the preferred style.