By: jigal (jigal2.delete@this.gmail.com), September 25, 2007 3:16 pm
Room: Moderated Discussions
First off, thanks guys for the in-depth explanation.
I'm beginning to understand the major point here: CSI
is an extension of a CPU-to-CPU or CPU-to-memory bus,
whereas PCI-e is, and always will be, a peripheral bus.
CSI would carry x86 instruction streams and their data,
whereas PCI-e was never designed to do that, and would
probably not be extended to take it on.
I dare ask a few more questions, then:
1. Wouldn't Intel need to open CSI so that vendors can
produce new north bridges to communicate with it?
After all, a bridge between PCI-e and CSI is needed, no?
2. Could CSI (partially) replace PCI-e in the slightly
longer term? E.g., would it be used for off-chip GPUs
instead of PCI-e?
3. On servers, what is the leading peripheral
connectivity technology?
PCI-e? HT? FC? InfiniBand? Or even 10GbE?
(InfiniBand seemed to catch on only in HPC, no?)
Or is it completely vendor-determined?
(E.g., IBM's servers would go FC,
Intel x86-based servers would go PCI-e,
AMD x86-based servers would go HT, etc.)
Jonathan Kang (johnbk@gmail.com) on 9/25/07 wrote:
---------------------------
>jigal (jigal2@gmail.com) on 9/23/07 wrote:
>---------------------------
>>Thanx for the reply.
>>
>>Pretty much overwhelmed by the level of technical detail - since I am not a VLSI guy, but merely a s/w guy.
>>The reason I am asking is that, after all, Intel invested
>>considerable effort in adding the AS layer on top of
>>PCI-e, which seems to turn it into a full networking layer,
>>which CSI also seems to add.
>>(although you could argue that "Intel" in this sense is a
>>bunch of engineers toying with their own standard...)
>
>The key here is in the physical implementation and limitations rather than what's
>viewed from software. Keep in mind that software will not be the only thing using
>CSI. It's used by ASICs to communicate with each other without the control of software.
>This adds a requirement that PCI-e didn't necessarily have: low latency.
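
If I follow this, the latency gap is the crux. Here's a toy back-of-envelope in Python, with numbers I'm inventing purely for illustration (not measured values), of why a cache-line fetch couldn't reasonably be tunneled over a PCI-e-class link:

# Why a coherent CPU link cares about latency.
# All numbers below are illustrative guesses, not measured values.
local_miss_ns = 60.0        # DRAM access on the local node (assumed)
coherent_hop_ns = 40.0      # one hop over a CSI/HT-style link (assumed)
pcie_round_trip_ns = 400.0  # a PCI-e request/completion pair (assumed)

remote_miss_coherent = local_miss_ns + coherent_hop_ns
remote_miss_pcie = local_miss_ns + pcie_round_trip_ns
print("remote miss over coherent link: %.0f ns" % remote_miss_coherent)
print("remote miss tunneled over PCI-e: %.0f ns" % remote_miss_pcie)
# The CPU stalls on every such miss, so a ~4x worse penalty per
# remote access would cripple multi-socket scaling.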
>
>This, on top of the power requirements (not just max power but average power), means
>that a new link had to be made that wasn't as power-hungry and had significantly
>lower wake-up latencies (since they'd be shutting the link off during idle times) than PCI-e.
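
As a sanity check for myself, a crude duty-cycle model of why wake-up latency drives average power. Every figure here is invented for illustration only:

# Crude model of average link power with idle shutdown.
# All figures are invented for illustration only.
active_power_w = 2.0     # link fully on (assumed)
idle_power_w = 0.2       # link in a low-power sleep state (assumed)
wake_latency_us = 1.0    # time to bring the link back up (assumed)

def avg_power(idle_fraction, wakeups_per_sec):
    # Each wake-up burns active power for wake_latency_us doing no work.
    wake_overhead_w = wakeups_per_sec * (wake_latency_us * 1e-6) * active_power_w
    return ((1 - idle_fraction) * active_power_w
            + idle_fraction * idle_power_w
            + wake_overhead_w)

# Slow wake-up => the link can rarely afford to sleep:
print(avg_power(idle_fraction=0.1, wakeups_per_sec=10))      # ~1.82 W
# Fast wake-up => it can sleep aggressively despite frequent wake-ups:
print(avg_power(idle_fraction=0.8, wakeups_per_sec=10000))   # ~0.58 W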
>
>>This is in contrast to AMD, which has a consistent on-board
>>p2p link - the HT link, which seems to be PCI-friendly,
>>and was invented when PCI-e wasn't relevant, I guess.
>>I remember reading a PDF on their website trying to
>>differentiate HT from PCI as intra-board links vs.
>>inter-board links, or even inter-chassis, etc...
>
>Before PCI-e, AMD systems used PCI and its variant flavors along with HT links.
>The two are not meant for the same purpose and they have different physical limitations
>and features. Intel's original vision for PCI-e wasn't just to connect peripherals
>inside of a beige box but to use it as a chassis-to-chassis link as well. This
>means that it had to be a truly serial link where line-latency was a non-issue and
>signal strength/integrity had to be maintained over very long cables. This ruled
>out something like HT (which is still parallel to a degree). Think of PCI-e more
>as Fibre Channel or Ethernet, as it is *very* similar to both of those.
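
That framing clicked for me - it's the same SerDes trick Gigabit Ethernet and Fibre Channel use. A quick check of the raw PCI-e 1.x per-lane numbers:

# PCI-e 1.x lane throughput: a 2.5 GT/s serial signal with 8b/10b
# encoding (10 bits on the wire per 8 bits of payload), the same
# line code used by Gigabit Ethernet and Fibre Channel.
signal_rate_gbps = 2.5            # raw bits/sec per lane, per direction
encoding_efficiency = 8.0 / 10.0  # 8b/10b overhead

payload_gbps = signal_rate_gbps * encoding_efficiency  # 2.0 Gbit/s
per_lane_mbytes = payload_gbps * 1000 / 8              # 250 MB/s
print("x1:  %.0f MB/s per direction" % per_lane_mbytes)
print("x16: %.1f GB/s per direction" % (per_lane_mbytes * 16 / 1000))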
>
>>Is HT used for coherency protocols between AMD ASICs?
>>And regardless, now that we have a new guy in town,
>>what is the future of PCI-e, if they're not going to
>>add coherency protocols on top of it?
>
>PCI-e will serve the purpose it was intended for: peripheral connection and inter-chassis
>(or inter-board) connection. It was made to tolerate very long delays and to operate
>between multiple devices (no common voltage required).
>
>CSI will compete with HT for inter-chip communication where latency and power are
>limiting factors (such as multi-processor configurations).
>
>>I guess on the upside, it would provide plenty of work
>>for engineers developing and selling hubs for CSI/PCI-e/HT :-)
>
>Considering Intel's proprietary nature in the past, I'm not sure anybody outside
>of Intel will be using CSI. Intel had to open PCI-e in order to get peripheral
>vendors to use it, but the same is not necessarily true of CSI.