By: dmcq (dmcq.delete@this.fano.co.uk), March 21, 2021 3:50 pm
Room: Moderated Discussions
Moritz (better.delete@this.not.tell) on March 20, 2021 5:21 am wrote:
> What if you could completely rethink the general processor concept?
> There are concepts that were without alternative in the days of little memory and few transistors:
> Sequential instructions by storage address and jumps based on that address
> Implicit dependency based on above principle
> Explicit naming of storage place rather than data item
> Explicit caching into registers
> Implicit addressing of registers
> Mixing of memory, float, integer instructions in one instruction stream
> that must be analyzed to remove the assumed sequentiallity.
> The ISA used to represent the physical architecture, today that
> is no longer the case in high performance microprocessors.
> The data modifies the program flow at run-time, instead of explicitly generating the data stream
> that reaches the execution units. The CPU steps through the program issuing the data to EUs instead
> of the program explicitly generating multiple data streams with synchronization markers.
> ... and many other implications that are so "natural" to us that we can not see/name them. As usual
> we can not even question the ways, because we are so used to them. There are infinite bad ways of doing
> it, but some of those forced/obvious (legacy) design decisions of the past might no longer be that
> necessary/without alternative. Some ways that seem cumbersome and wasteful might on second thought
> turn out to be hard on the human, but open new ways to the compiler, RTE, OS, CPU removing as much
> complexity as they add, but increasing throughput or energy efficiency beyond the current limit.
For straightforward large tasks I think we can depend on accelerators of various types to do the job. What we should be looking at is what would really help general-purpose work. What is causing difficulties nowadays?
Computers from the past have a lot to teach us: the Burroughs B5000 from nearly 60 years ago had many innovative features more advanced than anything x86 offers today. Its operating system, MCP, was always renowned for its reliability; it has been continuously developed and survives to this day, albeit in emulation, from Unisys. The system avoided assembly language, so it could be upgraded more easily.
Of course any new architecture must be able to run practically anything that is around now with negligible performance loss at worst. I'd be happy if a few facilities in Linux or Windows that many people think are important died a death, but there's no way something like that can be sold easily.
The one really important thing I would definitely bring in is capabilities. Burroughs had the equivalent with tags, calling them descriptors, and there have been a number of implementations since. One recent effort I like is the CHERI project in Cambridge, England, and ARM is making a chip based on its N1 server design for it. This promises to simplify the system and make it more reliable, and it can provide direct user-level calls between different protection domains, obviating the need to pass data via the kernel.
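To make the idea concrete, here is a minimal sketch in plain C of what a CHERI-style capability conceptually carries. The struct layout and the cap_check helper are purely illustrative names of my own, not the real CHERI API; actual hardware keeps these fields in wide registers plus an out-of-band tag bit and performs the checks on every access.

/* Conceptual model of a capability as a "fat pointer" with bounds,
 * permissions and a validity tag. Illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint64_t base;    /* lowest address the capability may touch   */
    uint64_t length;  /* size of the region it grants access to    */
    uint64_t cursor;  /* current address, like an ordinary pointer */
    uint32_t perms;   /* load/store/execute permission bits        */
    bool     tag;     /* validity bit; cleared if forged/corrupted */
} capability;

enum { PERM_LOAD = 1, PERM_STORE = 2, PERM_EXECUTE = 4 };

/* An access is only allowed if the tag is set, the required permission
 * is present, and the access stays inside the bounds. */
static bool cap_check(const capability *c, uint64_t addr, uint64_t size,
                      uint32_t need)
{
    return c->tag &&
           (c->perms & need) == need &&
           addr >= c->base &&
           addr + size <= c->base + c->length;
}

int main(void)
{
    uint8_t buf[16] = {0};
    capability c = { (uint64_t)(uintptr_t)buf, sizeof buf,
                     (uint64_t)(uintptr_t)buf, PERM_LOAD | PERM_STORE, true };

    printf("in-bounds store allowed: %d\n",
           cap_check(&c, c.cursor, 1, PERM_STORE));
    printf("out-of-bounds store allowed: %d\n",
           cap_check(&c, c.cursor + 32, 1, PERM_STORE)); /* traps on real hardware */
    return 0;
}

The point is that the bounds and permissions travel with the pointer itself, so a protection-domain crossing can simply hand over a capability instead of copying data through the kernel.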
Making calls to accelerators faster and more reliable has been a continuous project for a number of years. The main work has been getting memory consistency right. It was argued against as expensive and unnecessary because people could do just what was needed by hand, but I believe it is now recognized that it is better to pay the cost and have the hardware do the job. The work on controlling the tasks is getting better but could still be improved.
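As a rough illustration of why hardware coherence pays off, here is a hedged C11 sketch in which the "accelerator" is just a second thread standing in for the device; with a coherent shared address space the CPU only has to publish a ready flag with release/acquire ordering rather than marshalling and copying the data by hand. The names here are mine, not any vendor's offload API.

/* Sketch: hand work to a coherent "accelerator" by publishing a flag. */
#include <stdatomic.h>
#include <stdio.h>
#include <threads.h>

static int payload[4];
static atomic_int ready = 0;

static int accelerator(void *arg)
{
    (void)arg;
    /* Acquire pairs with the release below, so the payload writes are visible. */
    while (atomic_load_explicit(&ready, memory_order_acquire) == 0)
        thrd_yield();
    int sum = 0;
    for (int i = 0; i < 4; i++)
        sum += payload[i];
    printf("accelerator saw sum = %d\n", sum);
    return 0;
}

int main(void)
{
    thrd_t t;
    thrd_create(&t, accelerator, NULL);
    for (int i = 0; i < 4; i++)
        payload[i] = i + 1;                                  /* fill work in place */
    atomic_store_explicit(&ready, 1, memory_order_release);  /* publish it */
    thrd_join(t, NULL);
    return 0;
}

Without coherent memory the same handoff needs explicit copies or cache flushes done by hand, which is exactly the error-prone work the hardware can take over.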
Security could be improved so that what is currently done by trusted compute modules is well protected and yet easily accessed. Even the operating system should be unable to read such code and data, but it would be nice if users could set up their own rather than it all having to sit completely outside the operating system.