By: Ireland (boh.delete@this.outlook.ie), April 7, 2017 5:34 pm
Room: Moderated Discussions
Linus Torvalds (torvalds.delete@this.linux-foundation.org) on April 7, 2017 11:35 am wrote:
>
> The people who don't have a FPU simply don't care about FP. They often want it to
> work, because you do end up having a lot of programs that end up doing the occasional
> little snippet of floating point, but it's just not all that noticeable.
>
> So even the horribly bad "just emulate FP with no HW support at all except for the trap" is
> actually acceptable for that case. The embedded people end up often doing things like fixed-point
> etc anyway, they very seldom actually want things like full IEEE floating point, and certainly
> not at high performance (or they absolutely require a FPU anyway. See above).
>
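To make that fixed-point remark concrete: here's a minimal sketch in C of Q16.16 fixed-point multiplication, the kind of integer-only arithmetic embedded code reaches for when there's no FPU. The format and helper names are illustrative, not from any particular codebase.

/* A minimal sketch of Q16.16 fixed-point arithmetic: the sort of
   integer-only math embedded code uses instead of IEEE floating point
   when there is no FPU. Format and names are illustrative only. */
#include <stdint.h>
#include <stdio.h>

typedef int32_t q16_16;              /* 16 integer bits, 16 fraction bits */
#define Q16_ONE (1 << 16)            /* 1.0 in Q16.16 */

/* Conversions go through double purely for demonstration; a real
   no-FPU target would construct these values with integer math. */
static q16_16 q16_from_double(double d) { return (q16_16)(d * Q16_ONE); }
static double q16_to_double(q16_16 q)   { return (double)q / Q16_ONE; }

/* Multiply: widen to 64 bits so the intermediate doesn't overflow,
   then shift away the extra 16 fraction bits. */
static q16_16 q16_mul(q16_16 a, q16_16 b)
{
    return (q16_16)(((int64_t)a * b) >> 16);
}

int main(void)
{
    q16_16 a = q16_from_double(3.25);
    q16_16 b = q16_from_double(0.5);
    printf("%f\n", q16_to_double(q16_mul(a, b)));  /* prints 1.625000 */
    return 0;
}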
What you can never tell, when we moved from expensive minicomputers to personal computers with microprocessors, is how much got 'thrown out' altogether as we changed from organizations built around one to organizations built around the other. It would make things appear as though 'people' didn't need an FPU. But that's totally the wrong way to look at computing, because individual 'people' don't solve problems or deliver projects by themselves. Everything happens in an organizational context, and often the work requires several organizations to connect.
I'm not telling you anything new there, because operating system development works exactly like that; I'm preaching to an expert in this domain, I know. But take that point and look at it in terms of the history of computing. The best way to think about it is not as a 'person' who needs 'x' or 'y', or doesn't need them. The best way to think about it is as a whole project: a pipeline of tasks that have to be carried out to establish the nature of the problem and to hypothesize a solution. Looked at that way, you can be assured that it will take all kinds of tools to attack the problem, from the point of view of the broad organizational goal.
And what you want to do, as much as possible from an organizational point of view, is not divide up your teams too much, and not have your 'data' divided up too much either. You're correct that Intel's x86 has been astonishing from the point of view of running software binaries for such a long time, and of keeping data accessible across such a range of different systems over such a long span. However, there was one major split that did occur in the transition from the minicomputer to the personal computer. I'll try to explain it a little.
The problem in the past, from what I can establish, is that there were indeed people who needed a whole lot of FPU capability, everything you could deliver, and there were people whose workload wasn't really affected by FPU demands at all. However, back in the old days of things like VAX/VMS, those older, more expensive systems, at least everyone who was working on 'a project' was working on a single system. The data and the projects never got fragmented across lots and lots of different systems and different computer architectures. It was luxurious, in the sense that one worker could sit down, 'remote' into the VAX machine, and work away happily through a task load that only involved the database side of the project, and maybe a heavy amount of integer work at times, while another worker collaborating on the same project could use the same VAX machine to run the analysis that they needed to do.
The trouble with the history of computing after the minicomputer, after the VAX machines, was that computing got divided up into two types of systems. There were the MIPS-type systems that delivered whopping floating point capability on the desktop, minicomputer performance at microprocessor prices, as the original MIPS business plan had put it. And then you had the other encampment, the IBM PCs and the 386s over there, doing their own thing with lousy FPU capabilities. But often both sets of people would find themselves working on the 'same project', trying to solve the same workload; it was just that one collaborator could tackle their side of it using integers, and the other could not. What happened in reality was that the camps got too divided altogether. Establishments that became stocked with the very best FPU-capable Unix boxes found their skillset becoming lopsided in that dimension, while the other side, the IBM PC establishments, became lopsided with skills mainly in working with generic data.
From the human resources perspective, the two kinds of skills, analyzing problems and delivering projects, got divided badly between the two sides. You could find fleets of those Unix boxes around the place, and whole armies of IBM PCs, but the capabilities of the two were separated from each other, and it has taken until the present day to start putting those two halves back together again. That's how it looks at present, from an applications point of view. The real problem we have now, in 2017, is re-training a workforce to go back to thinking about problems and projects, and how to approach them using computation, with the combined attack that people had all the way back in the days of VAX/VMS. That is what the price tag on a system like that bought one, back in the day, from the point of view of the 'organization': the ability to tackle different projects using all the different kinds of people and skills at one's disposal. It was more democratic in that sense.