By: wumpus (lost.delete@this.in-a.cave.net), April 8, 2017 7:19 pm
Room: Moderated Discussions
Ireland (boh.delete@this.outlook.ie) on April 8, 2017 1:39 pm wrote:
> wumpus (lost.delete@this.in-a.cave.net) on April 8, 2017 9:49 am wrote:
> >
> > But the whole "computer operating inside a computer" religion didn't die with microcode and
> > CISC. JAVA was probably the biggest commercial success, and Transmeta tried valiantly. Virtual
> > Machines are a bit different (assuming they keep the same architecture), but presumably are
> > related. Even on this board we hear the odd "do [architectural concept, last I heard was branch
> > prediction**] in software", these are echos of the "computer on a computer" dream.
> >
>
> That might go back to Moses, and the commandments as far as computer science goes.
>
> What I'm told is that it's possible to actually go back and read 'everything' that Alan Turing ever wrote
> about computing. The reason is that Alan Turing wasn't like other early scientists in the field. He
> wrote a small number of finished papers, but papers that were read and understood by a lot of very important
> people in the development of the science. I mention that because the theme of using a smaller
> computer to emulate a much larger and more complex machine is something that goes all the way back
> to Turing. So it might be possible to track down exactly which paper this idea originally comes from.
>
> Gary Kildall, who was the creator of the CP/M system for early personal computers, was influenced
> in large degree by that aspect of Turing's ideas (mentioned, I think, in 'Legacy of Gary Kildall:
> The CP/M IEEE Milestone Dedication'). The other thing to mention, I think, is that software architects
> such as Kildall, even though they're remembered more today for contributions to the 'small' computer,
> were very aware of how the much larger machines had worked. Maybe it is because they
> had such a good understanding of the larger machines that they were so good at working with the smaller
> machine. They probably understood best where the small machine was going to end up some day.
[Note: I'd recommend "The Soul of a New Machine" to anyone wondering about computers in the 1980s and microcode in general, although I'm not sure the book ever explained exactly *what* microcode was, or even whether the author understood it (he wasn't an engineer, just a good writer).]
You're missing a fundamental point. Microcode wasn't just about "building a computer in a computer"; it made building a computer *at* *all* much easier than doing it without. But unlike later advances (such as Verilog and VHDL), it was explicitly tied to the clock. So during CISC's heyday, if you were building a computer you were almost certainly using microcode [I've heard there was quite a bit even in the "hardwired" 6502].
Microcode basically let you take a pile of gates, ALU logic, and memory/I/O buses and simply write code into ROMs that controlled everything. Once you had that, it was easy to see that you could make a vastly more complex computer merely by writing more microcode rather than by adding a ton of actual gates. And not only was microcode inherently denser than logic (each bit was just the presence or absence of a diode in the regular rows of the ROM), some designs even used four-level cells (similar to MLC flash) to cram the stuff into half the size.
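To make the "write code into ROMs that controls everything" idea concrete, here's a minimal sketch in C of a control store driving a toy datapath. The microword format, control-signal names, and the two-word "ADD [mem]" microprogram are all made up for illustration and don't correspond to any real machine; each loop iteration stands in for one clock tick, which is also the "tied to the clock" point above.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical control lines, one bit each in the microword.
   In real hardware these would fan out straight to the datapath. */
enum {
    CTL_MEM_READ = 1 << 0,  /* drive memory data onto the bus     */
    CTL_ALU_ADD  = 1 << 1,  /* ALU computes acc + bus             */
    CTL_LOAD_ACC = 1 << 2,  /* latch ALU result into accumulator  */
    CTL_PC_INC   = 1 << 3,  /* advance the macro-level PC         */
    CTL_END      = 1 << 4   /* last microword of this macro-op    */
};

typedef struct {
    uint8_t ctl;   /* control bits asserted this clock   */
    uint8_t next;  /* next micro-address (toy sequencer) */
} microword;

/* Microprogram for one made-up macro-instruction, "ADD [mem]":
   cycle 0 reads memory; cycle 1 adds, latches, bumps the PC, ends.
   Adding another macro-instruction means adding rows, not gates. */
static const microword urom[] = {
    { CTL_MEM_READ,                                      1 },
    { CTL_ALU_ADD | CTL_LOAD_ACC | CTL_PC_INC | CTL_END, 0 }
};

int main(void) {
    uint8_t acc = 5, bus = 0, mem = 37, pc = 0;
    uint8_t upc = 0;                           /* micro-PC */

    for (;;) {                                 /* one iteration == one clock */
        microword w = urom[upc];
        if (w.ctl & CTL_MEM_READ)  bus = mem;
        if ((w.ctl & CTL_ALU_ADD) && (w.ctl & CTL_LOAD_ACC)) acc = acc + bus;
        if (w.ctl & CTL_PC_INC)    pc++;
        printf("clk: upc=%d ctl=0x%02x acc=%d pc=%d\n", upc, (unsigned)w.ctl, acc, pc);
        if (w.ctl & CTL_END) break;            /* macro-instruction done */
        upc = w.next;
    }
    return 0;
}

The point of the sketch is that a richer instruction set means more rows in urom, not more gates, and each microword bit is literally the diode-or-no-diode density argument above.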
But don't forget that "tied to the clock" requirement. To make RISC worth doing, you had to give up that way of building computers (although new methods replaced the old ones), because a microcoded RISC would be pretty silly (though not impossible; and I suspect microcode found its way back into machines by the second or third generation for deprecated legacy support, as it still does in Intel machines, and probably also in boot routines and similar corner cases).
The whole "Turing dream" is pretty much an undergraduates fantasy when he groks microcode at all*. But the ability to build whatever you like out of a meager bit of gates shouldn't be discounted (it must have done wonders for allowing a wide product line of identical architected devices).
* I doubt it has been taught for years. For me it was one of those things that was strongly emphasized in my education, and a professor insisted that at least one computer [micro?]architect swore he'd never design another hardwired computer. But by the time I graduated, microcode was obsolete (although microcoders were probably more in demand than circuit designers due to x86 nastiness). I hope they still teach it, if only to show that you can solve hardware problems with another layer of indirection much the way you can with software (just don't put it in the critical path the way the CISC guys did).