By: S. Rao (sonny.rao.delete@this.gmail.com), January 30, 2013 5:35 pm
Room: Moderated Discussions
David Kanter (dkanter.delete@this.realworldtech.com) on January 28, 2013 1:47 pm wrote:
> The server market is at a potential inflection point, with a new breed of ARM-based microserver vendors
> (and Tilera) challenging the status quo, particularly for cloud computing. We survey 20 modern processors
> to understand the options for alternative architectures. To achieve disruptive performance, microserver
> vendors must deeply specialize in particular workloads. However, there is a trade-off between differentiation
> and market breadth. As the handful of microserver startups are culled to 1-2 viable vendors, only the
> companies which deliver compelling advantages to significant markets will survive.
>
> http://www.realworldtech.com/microservers
>
> Comments, questions and feedback welcome as always!
>
> David
Hi David, thanks for the article. I'm curious about this statement:
History suggests that anything less than a 4× advantage simply isn’t big enough for customers to endure disruptive changes and deal with risky new vendors, although some estimates indicate that at least a 10× advantage is necessary.
Where do you get those numbers from? The linked article makes the claim, but I don't see any hard data behind it. I don't disagree that being 2x better might not be enough, but I'm curious whether there is a rigorous methodology behind these figures or whether they're just based on past shifts in the industry.
If it's just based on previous experience, the examples given in the article don't really seem to prove his point at all. For example, he uses 64-bit on x86 as an example, saying:
The introduction of the AMD64 instruction set by Advanced Micro Devices (also known as EM64T or "Intel 64" on Intel processors, or generically as x86-64) represents the ultimate success case for the factor factor.
This isn't immediately clear, I suppose. Adopting the AMD64 standard required a lot of work by operating system vendors and software developers, and the performance benefit was relatively mild in most cases. But still, AMD64 was an immediate success because the performance benefit in certain applications--those that simply wouldn't fit into a 32-bit address space--was practically infinite.
He admits that the performance benefits were mild in most cases, but then claims the benefit was infinite for applications that wouldn't fit in a 32-bit address space. I don't understand that, because people who really needed 64-bit had other alternatives that likely cost less than 10x more. I'd argue that most applications which *really* needed 64-bit (like huge transaction-processing databases) were already running on non-x86 64-bit architectures and only later switched to x86 for the cost savings. You might disagree about the specifics here, but I still think it's a weak example overall, yet he classifies it as the ultimate one.