By: Rob Thorpe (rthorpe.delete@this.realworldtech.com), October 27, 2006 9:06 am
Room: Moderated Discussions
Linus Torvalds (torvalds@osdl.org) on 10/27/06 wrote:
---------------------------
>Rob Thorpe (rthorpe@realworldtech.com) on 10/27/06 wrote:
>>
>>Well, however you look at it it is a technical issue.
>>I completely agree with the outlook of hardware guys: if
>>it's rare then there's no reason not to let it be slow.
>
>Wrong..
>
>>Of-course this should change if usage changes,
>>which is really what we're talking about here.
>
>The issue is not that "usage changes" (even though that is
>also true), but simply that "different people have
>different usage"!
>
>So even if usage doesn't change, the fact is, what is
>rare for me and you is not rare for somebody else.
Yes. That's why execution traces should be taken over a large set of applications. Those traces should also take note of which applications people will actually pay to have made faster: it's no good speeding up X just because it is common if everyone is already happy with it and won't pay for the extra speed.
>This is why you should not have special cases, and
>instead say: everything we do is fast.
You can't avoid special cases. Everything a microprocessor does is a special case, tailored to some particular set of applications.
>Another way of saying the exact same thing: if it's worth
>doing at all, it's worth doing well.
Hmm, you wouldn't go far in communications electronics.
It's a matter of economics: how well something is worth doing depends on how many people who are prepared to buy the product want it to be good.
>The problem with a lot of RISC stuff was that it took a
>statistical approach to something that in the end isn't
>even all that statistical - it doesn't matter one whit
>how fast something is "on average", what matters is how
>fast something is for me (for any arbitrary value
>of "me").
? It's obviously a statistical problem! There are applications, there are machines that run them, there is their performance, and so on; all of these things are measurable.
Of course, no one person experiences the statistical mean. But the point is that everyone approaches it.
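To make that concrete, here is a toy calculation of how an instruction-mix-weighted average works, and why someone with a different mix sees a very different number. All the numbers are made up purely for illustration, they are not measurements of any real machine:

/* Toy illustration (all numbers hypothetical): the cost of a "rare"
 * slow operation under two different instruction mixes.
 * Build with any C compiler, e.g. gcc -o mix mix.c */
#include <stdio.h>

/* weighted cycles-per-instruction for a two-class mix */
static double cpi(double frac_slow, double slow_cycles, double fast_cycles)
{
    return frac_slow * slow_cycles + (1.0 - frac_slow) * fast_cycles;
}

int main(void)
{
    double fast = 1.0;    /* cycles for an ordinary instruction */
    double slow = 100.0;  /* cycles for the "rare" operation    */

    /* "average" workload: 0.01% of instructions hit the slow case */
    double avg  = cpi(0.0001, slow, fast);
    /* a lock-heavy kernel path: 2% of instructions hit it         */
    double mine = cpi(0.02, slow, fast);

    printf("average mix CPI: %.3f\n", avg);   /* about 1.01 */
    printf("my mix CPI:      %.3f\n", mine);  /* about 2.98 */
    return 0;
}

The statistical mean barely notices the slow case; a user whose mix leans on it pays nearly a factor of three. Whether that user matters is then the economic question.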
>I've talked about "glass jaws" before. We all know them.
>They're a problem, and when you hit them (even just fairly
>occasionally) it really doesn't matter if something works
>well "on average".
>
>So x86 avoided this glass jaw (and others), by virtue of
>not being "too designed". The "designed" architectures
>all ended up deciding that it didn't matter, and screwing
>over the people for whom it did matter.
Rubbish. x86 had loads of glass jaws in the past, such as its FPU (which to some extent is still there). They were sorted out because there was an economic motivation to do so.
Had one of the "more designed" architectures succeeded, the same thing would have happened, and if it had been a RISC architecture it would have been easier.
>For example, it's still true that synchronization doesn't
>matter "on average". It was even more true ten years ago.
>But guess what turned me off alpha? The locked instructions
>(that "nobody" uses, except us system people) were slow as
>hell, and on a (at the time) speed-deamon core, they went
>out to the bus.
>
>Why? Because "they were rare".
>
>Not for me, they weren't. Not now, not back then.
>
>See?
Maybe the designers of those particular Alphas made a mistake in the case of the machine you used, and there was in fact sufficient interest in locks to warrant making them faster. I don't know.
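For readers following along, a minimal sketch of the kind of operation being argued about is below. It uses modern GCC atomic builtins rather than the Alpha ldl_l/stl_c (load-locked/store-conditional) sequences the kernel actually used back then; it is only meant to show why system code executes such operations constantly, so their latency matters to "us system people" even if they are rare in an average trace:

/* Sketch of a test-and-set spinlock built on GCC's atomic builtins.
 * NOT the Linux/Alpha implementation of the era; just the shape of
 * the "locked" operation whose cost is being discussed. */
typedef struct { volatile int locked; } spinlock_t;

static void spin_lock(spinlock_t *l)
{
    /* the atomic exchange is the part that was "slow as hell" when
     * it had to go out to the bus instead of completing in the cache */
    while (__sync_lock_test_and_set(&l->locked, 1))
        while (l->locked)
            ;   /* spin on the cached value until it looks free */
}

static void spin_unlock(spinlock_t *l)
{
    __sync_lock_release(&l->locked);
}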
I understand that specific people suffer from certain decisions. But that is inevitable and will happen anyway. I'd love some instructions to do FDTD analysis, but I'm not going to get them.
Trying to satisfy everyone is futile and only results in satisfying no-one. The sensible thing to do is to set priorities, as in any other form of engineering: find out what code is actually run on machines, in what mixes, and who will pay for speed, then use that information to work out the priorities for speed in the various parts of the machine.
Ultimately, this is what most architects have tried to do over time. The limitations of modern machines mostly reflect the cases where they failed, or where old decisions had unintended consequences.