By: Rob Thorpe (rthorpe@realworldtech.com), November 14, 2006 11:18 am
Room: Moderated Discussions
Linus Torvalds (torvalds@osdl.org) on 11/14/06 wrote:
---------------------------
>Rob Thorpe (rthorpe@realworldtech.com) on 11/14/06 wrote:
>>
>>What you're describing isn't much of a problem.
>
>I disagree. It becomes a huge logistical problem.
>
>>There are generally two types of legacy software. The
>>first is that where speed does not matter because when
>>the software was written processors were much slower than
>>they are today, so current machines are more than adequate
>>to run it.
>
>This argument is just bogus. You're saying it's ok to run
>fairly slowly, because your previous machine ran even
>worse. That's simply not true, and it's also a totally
>inane argument to make from a hardware design perspective.
>
>Think of this from the perspective of the hardware
>designer, who also is obviously going to sell that
>hardware. Are you better off with an architecture that
>might initially not be any faster (example: do the
>unaligned thing as a micro-trap, no faster than doing it
>with two separate instructions), but that you can speed up
>existing software for in the future?
>
>I say yes. It's much worse for everybody if old
>software stays slower. Users don't want to recompile.
?? Since when do users recompile?
> And
>hardware vendors sure as hell don't want to have users
>decide not to upgrade because it doesn't help the 99% of
>what they do - legacy applications.
>
>So your argument is crazy. Legacy binaries are simply too
>important to dismiss like that.
No they're not. Tell me, how much legacy software do you have on your machine whose performance characteristics you actually care about?
Speaking for myself, I have zero. Granted, I have loads of legacy software - some Windows, some Linux, some even ancient DOS software - but performance is not important for any of it.
Apart from geeks, I've met very few people who have any legacy software at all on their machines. Most users use the software that was installed on the system when they bought it, only occasionally buying new packages. When they buy a new machine they start again, getting a new version of Windows, Office, etc.
>>The second is software that is old, but is still being
>>maintained. The vast majority of this software is written
>>in high level languages.
>
>That's another totally idiotic argument.
>
>People simply do not want to recompile. In many cases
>they even cannot recompile themselves, and they sure
>as hell don't want to pay for a software upgrade for all
>their critical software. If a new CPU needs a recompile,
>that new CPU is largely broken, as far as 99% of all users
>are concerned.
We're not talking about a CPU needing a recompile. We're talking about one benefiting from it. Let's say you have a machine with instruction set X, on which a couple of operations must be synthesised from other instructions. Later the instruction set is upgraded to X+1, and synthesis is no longer necessary. If the CPU that executes X+1 is faster than the previous one, the application will still be much faster: every part of it gets faster, even the synthesised operations. Certainly the old binary won't get the bonus of the added instructions, but that doesn't make the added instructions useless, or the CPU useless.
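To make "synthesised" concrete: on an ISA with no hardware multiply, the compiler emits a short shift-and-add (multiply-step) sequence that does roughly what this C loop does, while on the upgraded ISA the same source line becomes a single multiply instruction. This is only a sketch of the idea, not any particular compiler's output:

    /* what a multiply synthesised from shifts and adds computes */
    static unsigned long synth_mul(unsigned long a, unsigned long b)
    {
        unsigned long result = 0;
        while (b != 0) {
            if (b & 1)        /* low bit of b set: add the shifted multiplicand */
                result += a;
            a <<= 1;          /* next bit position */
            b >>= 1;
        }
        return result;        /* the product of the original a and b, modulo the word size */
    }

Either way the old binary keeps working; on a faster core its shift-and-add loop simply runs faster too, which is the point.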
It is important that legacy binaries work, and work fast. But they need not be given special privileges so that they always get faster at the same rate as new code.
A CPU that *needs* a recompile, because its ISA is new or because it has totally different performance characteristics for common operations, is a different matter.
>Dammit, didn't people learn anything from the 90's
>and the failure of RISC? "Just recompile" is not the
>answer. It wasn't then, it wasn't now, it never will be.
Who, outside geeks, even knows how to upgrade their CPU? I only became brave enough to upgrade a motherboard in 2003. People buy systems and they buy new software with them that's optimized, if not for their system, then for a system quite like it.
Even IT guys mostly work this way. When they set up a server they use whatever version of the OS is current at the time. They then install the latest stable versions of the necessary apps and leave the whole system alone until it needs replacing.
>>In the old days the compiler issued multiply-step code,
>>then when the new processor comes out it is changed to
>>issue proper multiplies.
>
>You're ignoring another huge problem, which is the
>logistical side of things. You say "just recompile", but
>what you choose not to mention is that the answer is a lot
>more complex than that: it's actually "just recompile with
>the new architecture flag that enables the new instructions,
>and make sure that you upgrade all your machines at the
>same time, or that they all have their binaries maintained
>separately and that your MIS department isn't actually under
>any pressure already".
>
>See the problem? The scenario you describe ("just do a
>simple recompile") isn't a real-life scenario AT ALL.
>
>In most settings, you end up with mixtures of old and new
>hardware, and you end up with those mixes for a loong
>time. You give new hardware on a priority basis, and the
>old hardware ends up doing something else - but may well
>need to run the same binaries (just more slowly).
You certainly do end up with a mixture of systems of different ages, I agree. Recompiling etc. has little to do with this, though: if your application was fast enough to run on old system A then, so long as new system B is not totally weird, it will be faster on that system. People in this situation should just ship for the lowest common denominator. This is exactly what Linux distro developers do, for example: they mainly optimize for P6-class machines.
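Roughly, in GCC terms (a sketch only - exact flags vary with compiler and version), the whole choice comes down to which target the build asks for:

    gcc -O2 -march=i686 -o app app.c          # baseline: runs on any P6-class or later x86
    gcc -O2 -march=pentium4 -o app-p4 app.c   # free to use newer instructions; faster on a P4, may fault on older chips

The distro ships the first kind of binary; the second is the sort of optimized build I'm about to mention.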
It's for apps that require speed that new versions should come out optimized for new architectures. (And they do - I remember one simulation-software package that came out P4-optimized only.)
In many cases, for the instructions we've been mentioning, it would be irrelevant anyway. A great many apps, even performance-sensitive ones, would work similarly on machines with or without multiply-step and/or unaligned-access nasties.
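The unaligned-access case in particular tends to wash out because portable code usually reads potentially unaligned data through memcpy and lets the compiler pick whatever the target does best: a single unaligned load where that's cheap, a short byte-by-byte sequence where it isn't. A minimal sketch, not tied to any particular compiler or ISA:

    #include <stdint.h>
    #include <string.h>

    /* read a 32-bit value from a possibly-unaligned pointer */
    static uint32_t load_u32(const void *p)
    {
        uint32_t v;
        memcpy(&v, p, sizeof v);   /* compiles to the best load sequence the target allows */
        return v;
    }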
>Also, as an ISV, you don't want to sell multiple versions
>of the software, you don't want to test it that way, and
>your customers don't want to buy it that way. So suddenly
>you're in this nightmare situation that the people who
>want the highest performance and paid for a faster machine
>(who are likely also the people who might pay you
>more as an ISV) need to be supported, but you can't just
>do it by ignoring the old architecture that doesn't do the
>new instructions.
>
>So as an ISV, you either have to go to a lot of
>trouble to have parallel versions of the binaries and
>automagically choose the right one at run-time, or you can
>do what actually happens in a lot of cases: stay with the
>old architecture for half a decade or more, and use the
>new features only when you can afford to tell your customers
>that the new version will only work on "new" machines (as
>in "more recent than five years").
I agree, maintaining two binary versions is a colossal pain. (Imagine what the need for separate drivers for Windows Vista and Vista 64-bit is doing to some people!)
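For what it's worth, the "automagically choose the right one at run-time" route usually amounts to compiling the hot routine twice and picking one at start-up from a CPUID feature bit. A minimal sketch, assuming a GCC-style x86 toolchain with <cpuid.h>; the transform_* routines are hypothetical stand-ins for whatever the hot code actually is:

    #include <cpuid.h>

    /* two builds of the same hot routine (hypothetical stand-ins, bodies reduced to stubs) */
    static void transform_generic(float *data, int n) { (void)data; (void)n; }
    static void transform_sse2(float *data, int n)    { (void)data; (void)n; }

    /* call through this pointer; defaults to the version every CPU can run */
    static void (*transform)(float *, int) = transform_generic;

    static void pick_transform(void)
    {
        unsigned int eax, ebx, ecx, edx;
        if (__get_cpuid(1, &eax, &ebx, &ecx, &edx) && (edx & bit_SSE2))
            transform = transform_sse2;   /* the newer instructions are available */
    }

It works, but it's only worth the trouble for the handful of routines that actually matter, which is rather the point.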
>Don't tell me this doesn't happen. It happens. It happens
>a hell of a lot more commonly than your crazy "just do a
>simple recompile" scenario.
>
>This is a fact. Ask any MIS person. Ask any ISV. Your
>scenario is unrealistic, except in small niche markets.
>
>People don't buy a "single machine". They don't maintain
>their machines "one by one". And they don't upgrade all
>their hardware at the same time.
Yes they do. Ask people (real people), ask IT departments.
And the few legacy programs people use are generally not performance critical.
(It's true of hardware too, BTW: do you know what percentage of users ever fill the empty PCI slots in their machines?)
>So stop with the "just recompile" already!
>
>It has been shown in the market to not work, and you're
>glossing over all the real problems.
There aren't really many problems. The things we're talking about rarely affect code in a major way as it is. For years x86 Linux distributions shipped code compiled for the P5, which performed poorly on the P6s most people were actually using; in practice very few people complained or even noticed. Similarly, x87 performance has recently been sacrificed in aid of SSE, even though few old programs (and few programs in general) use SSE. No one is complaining about this or arguing that it is bad.
For it to be a true problem, all of the following must occur:
* The software must be performance critical.
* The change/improvement in the ISA must actually affect the program. Many, many programs are bound by memory, disk or network anyway.
* The customer must be interested in running the program on old hardware.
* The customer must be interested in the performance of that old program on said old hardware.
Overall I think that the chances of this happening are pretty small.