By: Jonathan Kang (johnbk.delete@this.gmail.com), September 26, 2007 6:48 am
Room: Moderated Discussions
jigal (jigal2@gmail.com) on 9/25/07 wrote:
---------------------------
>1st, thanx guys for the in depth explanation.
>
>I begin to understand the major point here, that CSI
>is an extension of a CPU to CPU or CPU to memory bus,
>whereas PCI-e is and always will be a peripheral bus.
>CSI would carry x86 instruction streams and its data,
>whereas PCI-e was never designed to do that, and would
>probably not be extended to take it on.
>
>I dare ask a few more questions then:
>1. Wouldn't intel need to open CSI such that vendors can
>produce new north bridges to communicate with it?
>After all - a bridge between PCI-e and CSI is needed, no?
Depends. Intel makes its own chipsets (Northbridge and Southbridge chips), so in theory it could take the Apple approach, say "we only sell entire boxes," and let the outside world interface through PCI-e. I imagine select vendors (someone else mentioned IBM) would get access to the spec as well.
>2. Could CSI (partially)
>replace PCI-e in the slightly longer term?
>E.g., would it be used for off chip GPU's instead of
>PCIe?
Yes and no. Keep in mind the issues of signal integrity and versatility across different transmission media. When communicating with a peripheral like a graphics card, it's easy to forget that the signal has to cross a connector (unless you're talking about a GPU integrated on the motherboard, in which case I fully expect CSI to compete with, soundly beat, and replace PCI-e). That connector usually makes it prohibitive to send "naked" high-speed signals through. That, to a degree, is why PCI-e encodes its data and sends it serially. A connector represents an impedance discontinuity on the transmission line no matter how well it's built.
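To put rough numbers on both points, here's a back-of-the-envelope sketch in Python. The connector impedance below is an assumed, illustrative value, not something from a spec; the two solid bits are the standard reflection-coefficient formula gamma = (Z_load - Z0)/(Z_load + Z0) and the 8b/10b overhead that turns a 2.5 GT/s PCI-e 1.x lane into 250 MB/s of payload per direction.

# Back-of-the-envelope sketch; the connector impedance is an assumed value.

# 1) Reflection at an impedance discontinuity: gamma = (Z_load - Z0) / (Z_load + Z0)
z0 = 100.0          # nominal differential impedance of a PCI-e 1.x link, ohms
z_connector = 85.0  # impedance presented by the connector, ohms (hypothetical)
gamma = (z_connector - z0) / (z_connector + z0)
print("Reflected amplitude at the connector: %.1f%%" % (abs(gamma) * 100))

# 2) 8b/10b encoding overhead on a PCI-e 1.x lane
raw_rate_gtps = 2.5                 # 2.5 GT/s per lane (PCI-e 1.x)
payload_fraction = 8.0 / 10         # 8b/10b: 10 line bits carry 8 data bits
usable_gbps = raw_rate_gtps * payload_fraction   # 2.0 Gbit/s
usable_mbytes = usable_gbps * 1000 / 8           # 250 MB/s per direction
print("Usable per-lane bandwidth: %.0f MB/s each way" % usable_mbytes)

The exact figures matter less than the shape: even a modest mismatch bounces back a visible chunk of the signal, and the encoding overhead is part of the price PCI-e pays to run through that kind of channel.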
An expansion slot also means that the connectivity can change under you. What happens if someone plugs in an extended daughter-card? What happens if someone uses a long cable instead of a PCI-e socket? CSI was not designed to cope with that; it's meant to run over PCB traces of somewhat limited length.
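To give "somewhat limited length" a rough feel, here's a crude loss-budget sketch. The per-inch and per-connector figures are assumptions (ballpark values for FR-4 at a few GHz), not numbers from the CSI or PCI-e specs:

# Crude channel loss-budget sketch; the loss figures below are assumed ballparks.
loss_db_per_inch = 0.3        # assumed trace attenuation at the signalling frequency
connector_penalty_db = 1.0    # assumed extra hit per connector crossing

def channel_loss(trace_inches, connectors=0):
    """Total attenuation of a copper channel, in dB (very rough model)."""
    return trace_inches * loss_db_per_inch + connectors * connector_penalty_db

print("Short motherboard trace, 6 in, no connector: %.1f dB" % channel_loss(6))
print("Slot card, 12 in + 1 connector:              %.1f dB" % channel_loss(12, 1))
print("Long cable path, 40 in + 2 connectors:       %.1f dB" % channel_loss(40, 2))

Again, the point isn't the exact numbers: every extra inch of trace and every connector takes a bite out of whatever loss budget the receiver was designed around, and a link designed only for short on-board runs has far less budget to spend.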
>3. On servers - what is the leading peripheral
>connectivity technology?
>PCI-e? HT? FC? InfiniBand? Or even 10GbE?
>(InfiniBand seemed to enter only in HPC, no?)
>Or is it completely vendor determined?
>(e.g., IBM's servers would go FC,
>Intel x86 based servers would go PCI-e,
>AMD x86 based servers would go HT.., etc.)
I'm not sure any single technology dominates here. Previously it had been PCI, PCI-X, PCI-64, and so on. Some vendors have their own peripheral connectivity, but I'd say PCI-e has pretty much come in and replaced all of that.