By: rwessel (robertwessel.delete@this.yahoo.com), November 24, 2014 11:48 pm
Room: Moderated Discussions
Ronald Maas (rmaas.delete@this.wiwo.nl) on November 24, 2014 7:13 pm wrote:
> Michael S (already5chosen.delete@this.yahoo.com) on November 23, 2014 11:24 am wrote:
> > Disadvantages apart, both 68K and VAX shared one advantage over x86 - 2-byte granularity of
> > instructions. A P6-style brute-force approach to parsing and early decoding would take relatively
> > fewer hardware resources. I don't believe it could have helped VAX, but it could have made a 3-way
> > 68K feasible even in a transistor budget that didn't allow a decent decoded instruction cache.
> >
> >
>
> A huge benefit of the x86 instruction encoding scheme is that it allows the instruction length
> to be determined by inspecting only the first 1, 2 or 3 bytes of the instruction (not counting any prefixes). The only
> exception is when length-changing prefixes are used, such as the address-size and operand-size prefixes. When the
> processor encounters these prefixes in the instruction stream, it can no longer decode those instructions
> in a single cycle. Search for LCP in the Intel Optimization Reference Manual: http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf
>
> With 68K and VAX, the whole instruction must often be parsed in order to determine its length, which
> would significantly increase the complexity of building a superscalar instruction decoder.
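To make the LCP point concrete, here's a toy sketch in C - nothing like a real decoder, just one prefix and two opcodes, 32-bit mode assumed. MOV r32, imm32 (opcodes B8-BF) carries a 4-byte immediate, but stick an operand-size prefix in front and the immediate shrinks to 2 bytes, so the length changes:

    #include <stddef.h>
    #include <stdint.h>

    /* Toy length decoder: one prefix, two opcodes, 32-bit mode.
     * The 0x66 operand-size prefix is "length-changing" because it
     * shrinks the immediate of MOV r,imm from 4 bytes to 2. */
    size_t insn_length(const uint8_t *p)
    {
        size_t len = 0;
        int opsize16 = 0;

        if (p[len] == 0x66) {              /* operand-size override */
            opsize16 = 1;
            len++;
        }
        uint8_t op = p[len++];

        if (op >= 0xB8 && op <= 0xBF)      /* MOV r16/r32, imm */
            return len + (opsize16 ? 2 : 4);
        if (op == 0x90)                    /* NOP */
            return len;
        return 0;                          /* out of scope for this sketch */
    }

So B8 01 00 00 00 is five bytes while 66 B8 01 00 is four: same opcode byte, two different lengths, and the predecoder has to notice the prefix before it can mark the instruction boundary.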
It is astonishing, at least in hindsight, just how badly Motorola screwed up the 68K ISA with the '020. While the groundwork for that had been laid in the original 68K (the bits the '020 later used for the additional addressing modes were already sitting unused in the extension words), the 68000 and '010 had all instruction lengths determined by the first (16-bit) word of the instruction.
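Just how bad the '020's full-format extension word is becomes clear if you sketch the length calculation - rough C below, field layout from the '020 manual as I remember it, reserved encodings ignored, so treat it with due suspicion:

    #include <stdint.h>

    /* 68000/'010: a brief extension word (bit 8 clear) is always one
     * word, so total length follows from the opcode word alone.
     * 68020 full format (bit 8 set): the sizes of the base and outer
     * displacements are encoded *inside* the extension word, so the
     * decoder must fetch and parse it before it knows where the
     * instruction ends.  Returns the extension length in 16-bit words. */
    static int ext_words_68020(uint16_t ext)
    {
        if ((ext & 0x0100) == 0)           /* brief format: d8(An,Xn) */
            return 1;

        int words = 1;                     /* the full extension word itself */
        int bd = (ext >> 4) & 3;           /* base disp: 1=null, 2=word, 3=long */
        int od = ext & 3;                  /* outer disp, same encoding */

        if (bd >= 2)
            words += bd - 1;               /* +1 word or +2 words */
        if ((ext & 7) != 0 && od >= 2)     /* only with memory indirection */
            words += od - 1;
        return words;
    }

One effective address can thus run anywhere from one to five words, and you can't know which until the extension word itself has been fetched and decoded.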
VAX, of course, made the mistake from the very start, using basically the same approach of embedding decode bits in variably placed extensions.
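The VAX equivalent, roughly (from memory; index mode and the reserved corners omitted): each operand specifier carries its addressing mode in the high nibble of its first byte, and an immediate's size depends on the operand data type from the opcode table, so the specifiers have to be walked strictly in order:

    #include <stdint.h>

    /* Length in bytes of one VAX operand specifier, simplified.
     * 'datalen' is the operand's data size per the opcode's operand
     * table; specifier length depends on the opcode, not just on the
     * specifier bytes themselves. */
    static int vax_spec_len(uint8_t spec, int datalen)
    {
        int mode = spec >> 4;
        int reg  = spec & 0x0F;

        if (mode <= 3) return 1;                     /* 6-bit short literal */
        switch (mode) {
        case 5: case 6: case 7: return 1;            /* Rn, (Rn), -(Rn) */
        case 8:  return reg == 15 ? 1 + datalen : 1; /* (Rn)+; PC means immediate */
        case 9:  return reg == 15 ? 1 + 4 : 1;       /* @(Rn)+; PC means absolute */
        case 10: case 11: return 1 + 1;              /* byte disp (deferred) */
        case 12: case 13: return 1 + 2;              /* word disp (deferred) */
        case 14: case 15: return 1 + 4;              /* long disp (deferred) */
        default: return -1;                          /* mode 4: index, where a
                                                        second full specifier
                                                        follows */
        }
    }

Run that serially over every operand of every instruction before you know where the next instruction starts, several times per cycle, and superscalar decode stops looking like fun.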
Intel managed to avoid the mistake, of course.
OTOH, it's hard to credit those decisions to any brilliant insights (or lack thereof); the pipelining and decode issues just weren't relevant at the time.
Part of Intel's interest in IPF was likely due to the general understanding that CISC decode *was* a major issue, combined with a failure to appreciate that the 386 had avoided the worst of those issues.