By: S. Rao (sonny.rao.delete@this.gmail.com), February 3, 2013 10:32 pm
Room: Moderated Discussions
rwessel (robertwessel.delete@this.yahoo.com) on January 30, 2013 6:32 pm wrote:
> S. Rao (sonny.rao.delete@this.gmail.com) on January 30, 2013 5:35 pm wrote:
> > David Kanter (dkanter.delete@this.realworldtech.com) on January 28, 2013 1:47 pm wrote:
> > > The server market is at a potential inflection point, with a new breed of ARM-based microserver vendors
> > > (and Tilera) challenging the status quo, particularly for cloud computing. We survey 20 modern processors
> > > to understand the options for alternative architectures. To achieve disruptive performance, microserver
> > > vendors must deeply specialize in particular workloads.
> > > However, there is a trade-off between differentiation
> > > and market breadth. As the handful of microserver startups are culled to 1-2 viable vendors, only the
> > > companies which deliver compelling advantages to significant markets will survive.
> > >
> > > http://www.realworldtech.com/microservers
> > >
> > > Comments, questions and feedback welcome as always!
> > >
> > > David
> >
> > Hi David, thanks for the article. I'm curious about this statement:
> >
> > History suggests that anything less than a 4× advantage simply isn’t big enough
> > for customers to endure disruptive changes and deal with risky new vendors, although
> > some estimates indicate that at least a 10× advantage is necessary.
> >
> > Where do you get these numbers from? The linked article
> > makes the claims, but I don't see any hard data used
> > to derive them. I don't disagree that being 2x better
> > might not be enough, but I'm curious whether there is a
> > rigorous methodology behind these numbers or if they're just based on past shifts in the industry.
> >
> > If it's just based on previous experience, the examples given in the article don't really seem to
> > prove his point at all. For example, he uses 64-bit on x86 as an example by saying:
> >
> > The introduction of the AMD64 instruction set by Advanced Micro Devices (also known as EM64T or "Intel 64"
> > on Intel processors, or generically as x86-64) represents the ultimate success case for the factor factor.
> > This isn't immediately clear, I suppose. Adopting the AMD64 standard required a lot of work by operating
> > system vendors and software developers, and the performance benefit was relatively mild in most cases.
> > But still, AMD64 was an immediate success because the performance benefit in certain applications--those
> > that simply wouldn't fit into a 32-bit address space--was practically infinite.
> >
> > He admits that the performance benefits were mild in most cases, but then claims the benefit was
> > practically infinite for applications that wouldn't fit in a 32-bit address space. I don't understand
> > that, because if people really needed 64-bit applications,
> > there were other alternatives which likely cost less than 10x more, and I'd argue most applications (like
> > huge transaction processing databases) which *really* needed
> > 64-bit were just using non-x86 64-bit architectures
> > and then later switched to x86 for the cost savings. You might disagree about the specifics here, but I
> > still think it's a weak example overall, yet he classifies it as the ultimate example.
>
>
> One must be careful throwing infinities around... They're quite heavy, after all.
>
> The major problem with non-x86 64-bit platforms was that they didn't run the huge amount
> of 32-bit software, so people going that route often ended up with two machines on their
> desk. And it did happen. But the 64-bit transition was also somewhat tied to the price
> of RAM for most users: even if an application you'd have liked to use could have made use
> of multiple gigabytes of RAM, it was a non-starter if you couldn't afford that much memory.
>
Yeah, I agree it had more to do with the price of RAM (and therefore the quantity of RAM put into systems) than anything else, and I don't think it was really a "disruptive" change in the same sense, since the 64-bit x86 processors were fully compatible with their 32-bit predecessors. Again, IMO, it's a bad example and doesn't prove his point at all.
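As an aside, the address-space point everyone keeps circling is easy to demonstrate. Here's a minimal C sketch (assuming a gcc toolchain with 32-bit multilib support; build with "gcc -m32" vs. plain "gcc") showing that a 32-bit process can't even express a single 5 GiB allocation, which is the "practically infinite" gap the quoted article leans on:

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    unsigned long long want = 5ULL << 30;  /* 5 GiB */
    if (want > SIZE_MAX) {
        /* Always true on a 32-bit build: SIZE_MAX is 2^32 - 1,
           so malloc can't even be asked for 5 GiB. */
        printf("5 GiB exceeds SIZE_MAX (%llu): impossible here\n",
               (unsigned long long)SIZE_MAX);
        return 1;
    }
    /* On a 64-bit build this is just a question of available
       RAM and the OS overcommit policy. */
    void *p = malloc((size_t)want);
    printf("malloc(5 GiB) %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}

Built with -m32 it hits the "impossible" case regardless of how much RAM is installed; built 64-bit it generally succeeds, subject to memory and overcommit. That's the gap the article calls infinite, though as I said above, whether that made AMD64 "disruptive" is another question.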
> For most PC users 64-bit still doesn't matter too much, except at the OS level where having
> more than 4GB available is a good thing, even if you're just running multiple 32-bit processes,
> and for the device drivers that support that (the availability of LME drivers for server versions
> of Windows notwithstanding). A few semi-common PC apps can, of course, make use of large
> amounts of memory, but it's definitely a minority of the stuff out there.
>
This is all very true, but I'm not sure whether you agree with me about the quality of this example as an illustration of disruptive change.
I'd also still like to know where David got his 4x number from. It seems too specific not to have a source (unlike the guy in the linked article, who, I think, just made up his order-of-magnitude number on his own).