By: Doug Siebert (foo.delete@this.bar.bar), November 14, 2006 9:50 pm
Room: Moderated Discussions
Rob Thorpe (rthorpe@realworldtech.com) on 11/14/06 wrote:
---------------------------
>No they're not. Tell me, how much legacy software do you have on your machine which
>has performance characteristics you actually care about.
>
>Speaking for myself, I have zero. Granted I have loads of legacy software, some
>Windows, some Linux, some even ancient DOS software. But performance is not important for any of it.
>
>Apart from geeks I've met very few people who have any legacy software at all on
>their machines. Most users use the software that was installed on the system when
>they bought it, only occasionally buying new packages. When they buy a new machine
>they start again, getting a new version of Windows, Office etc.
>
---------------------------
You have an incredibly narrow point of view. Maybe you should quit thinking about yourself and your friends, and give some thought to people in large corporate settings, especially those who adopt new technologies as they become available (e.g., mainframes in the 60s, minis in the 70s, PCs in the 80s) and have no reason to change them once they have a working solution.
Sure, if you have some old bridge game that ran under Windows 3.1 on a 386, it can run just as slowly on a Conroe and you may not care. I wouldn't care if the latest and greatest Linux used an 'ls' from the earliest Slackware and emulated its 386 instruction set in Java; on the latest and greatest hardware it would probably still be more than adequate for its purpose. But just because examples like that exist doesn't give you the right to make a blanket statement that essentially amounts to "what works for me should work for everyone else" because you are too dense to see other points of view.
In the real world, you may have some Windows 3.1 (or even DOS!!) program that was written in house, for which the source code (or at least the current source code) has been lost, and upon which some small but vital part of your business depends. I remember reading that a surprisingly high percentage of the total cycles on IBM mainframes worldwide were spent executing code over 20 years old, and an even higher percentage executing code for which NO ONE had source code (the older stuff is assembly, so the source wouldn't be much help anyway since they probably didn't comment it usefully :)) If such a program had to do the exact same thing with the exact same amount of data as it did originally, no problem. But as available storage, memory and, yes, performance go up, more is often demanded, and a short-term hack or 'just buy faster hardware' often turns out to be a cheaper and quicker solution than a rewrite from scratch. Compromises that reduce or eliminate the performance gains from new hardware limit your options.
You think I'm making this up? I've seen it, at more than one Fortune 500 corporation. You've obviously never been part of an SAP rollout. It is amazing how much crap like this turns up: I've seen stuff supposedly dating from the early 70s, still running on mainframes, that either gets rewritten from scratch or gets an integration layer bolted on to move data to/from SAP, both at great expense and difficulty. This is one of the primary reasons SAP rollouts go over budget and over schedule: no company has any clue what weird shit its factory floors in Japan or sales offices in Brazil have been doing for the past couple of decades to get their jobs done. They can survey, request, cajole, threaten, whatever, while doing due diligence in an attempt to find out about this stuff, but it isn't until you try to ramrod all the company's operations through SAP that people poke their heads up and start screaming about how what you are asking them to do is impossible, and why can't they just do things the way they always have (and, by the way, can we have more money to upgrade our VAX running this important application to a faster model?).
You can imagine that something which worked on a 1 MIPS machine 25 years ago can run at a comparative snail's pace on today's hardware and still be fine, and there are surely many cases where you'd be right. But there are plenty of cases where you'd be wrong, because the volume of data passing through it has increased by a few orders of magnitude over the decades, and/or new requirements for just-in-time ordering and manufacturing have changed your timescale from "need it within five business days" to "need it in five minutes".
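To put some very rough numbers on that last point: suppose today's machine is a thousand times faster than that old 1 MIPS box, but running the old binary under some compatibility or emulation layer costs you a factor of ten. A quick back-of-the-envelope sketch in Python (every factor below is an illustrative assumption, not a measurement from any real system) shows how easily the gain gets eaten once data volume and deadlines change too:

    # Rough sanity check: does a big hardware speedup survive a few orders of
    # magnitude of data growth plus a much tighter deadline?  All numbers here
    # are illustrative guesses, not measurements.

    hw_speedup = 1000.0        # assume: today's machine ~1000x the old 1 MIPS box
    emulation_penalty = 10.0   # assume: running the old binary under emulation costs ~10x
    data_growth = 1000.0       # "a few orders of magnitude" more data through the same code

    old_deadline = 5 * 24 * 3600.0   # "within five business days", in seconds (roughly)
    new_deadline = 5 * 60.0          # "in five minutes", in seconds

    effective_speedup = hw_speedup / emulation_penalty              # ~100x usable gain
    required_speedup = data_growth * (old_deadline / new_deadline)  # ~1,440,000x needed

    print("effective speedup: %.0fx" % effective_speedup)
    print("required speedup:  %.0fx" % required_speedup)
    print("keeps up? %s" % (effective_speedup >= required_speedup))

Under those made-up (but not crazy) assumptions the hardware hands you two orders of magnitude while the workload demands six, which is exactly the situation where compromises that throw away part of the new hardware's speedup stop being harmless.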