By: Michael S (already5chosen.delete@this.yahoo.com), September 26, 2007 4:14 am
Room: Moderated Discussions
jigal (jigal2@gmail.com) on 9/25/07 wrote:
---------------------------
>1st, thanx guys for the in depth explanation.
>
>I begin to understand the major point here, that CSI
>is an extension of a CPU to CPU or CPU to memory bus,
No, CSI is not a CPU-to-memory bus, and it never will be.
>whereas PCI-e is and always will be a peripheral bus.
There is one point where PCIe and CSI compete: the Northbridge-to-Southbridge link. I'd expect that link to stay on a PCIe variant, but who knows.
>CSI would carry x86 instruction streams and its data,
>whereas PCI-e was never designed to do that, and would
>probably not be extended to take it on.
What do you mean by "carry x86 instruction streams"? CSI carries data in coherent or non-coherent ways. If the data happens to be code, so be it. In that regard it is not different at all from PCIe or any other "direct/random access" bus.
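To make the point concrete in software terms, here is a minimal sketch - nothing CSI-specific, just ordinary POSIX C on x86-64 Linux, and it assumes the OS will give you a page that is both writable and executable. The CPU writes "code" into memory exactly like any other data and later fetches it as instructions; the interconnect underneath sees plain memory transactions either way.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* x86-64 machine code for:  mov eax, 42 ; ret */
    static const unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Ask for a page that is readable, writable and executable. */
    void *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    memcpy(buf, code, sizeof code);          /* ordinary data writes...        */
    int (*fn)(void) = (int (*)(void))buf;
    printf("%d\n", fn());                    /* ...later fetched as code: 42   */

    munmap(buf, 4096);
    return 0;
}

From the bus's point of view there is no "instruction stream" to carry, only reads and writes to memory.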
>
>I dare ask a few more questions then:
>1. Wouldn't intel need to open CSI such that vendors can
>produce new north bridges to communicate with it?
>After all - a bridge between PCI-e and CSI is needed, no?
>
Yes, Intel would have to open CSI to IBM (for XeonMP). Whether it will open it to NVidia (for mobile and desktop chipsets) depends on several currently unknown factors.
In the XeonDP market Intel doesn't need partners.
Intel could also provide grey-box (black for the end user but visible to the FPGA vendor) CSI IP modules for Xilinx and Altera.
>2. Could CSI (partially)
>replace PCI-e in the slightly longer term?
>E.g., would it be used for off chip GPU's instead of
>PCIe?
>
Good question.
>3. On servers - what is the leading peripheral
>connectivity technology?
>PCI-e? HT? FC? InfiniBand? Or even 10GbE?
>(InfiniBand seemed to enter only in HPC, no?)
>Or is it completely vendor determined?
>(e.g., IBM's servers would go FC,
>Intel x86 based servers would go PCI-e,
>AMD x86 based servers would go HT.., etc.)
>
The technologies you mentioned don't really compete, except to some extent IB vs 10GbE as cluster interconnects, and to an even smaller extent HT vs PCIe as a means of connecting a CPU/NB to an IB/10GbE adapter.
---------------------------