By: David Kanter (dkanter.delete@this.realworldtech.com), September 23, 2007 1:43 pm
Room: Moderated Discussions
Michael S (already5chosen@yahoo.com) on 9/23/07 wrote:
---------------------------
>David Kanter (dkanter@realworldtech.com) on 9/23/07 wrote:
>---------------------------
>>Michael S (already5chosen@yahoo.com) on 9/23/07 wrote:
>>
>>>Nah. Algorithmic delay of 8b/10b decoding is equal to 10T regardless of the
>>>size of packet. At CSI data rates 10T = 1.5ns = lost in noise.
>>
>>No, it's not. At 6.4GT/s that means you have the effective latency of a 0.64GT/s
>>interface, which is just lousy.
>
>Why do you say that it is lousy?
>Algorithmic part of the delay imposed by 8b/10b is indeed that short.
I wouldn't call 10 cycles short. Suppose you have a larger IPF system with 64 processors arrayed in 4-socket cells: you're going to have at least 3 hops to reach remote processors (although perhaps only 2 to hit the remote directory). That's a minimum of 20-30 cycles of delay you are adding.
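To put numbers on that, here is a back-of-envelope sketch; the 6.4 GT/s rate and hop counts are the figures from this thread, not a measured design:

```python
# Back-of-envelope: latency added by 8b/10b decode at a CSI-class rate.
GT_PER_S = 6.4e9              # transfer rate discussed above
UI_NS = 1e9 / GT_PER_S        # one unit interval (1T) in nanoseconds

decode_ns = 10 * UI_NS        # 10T algorithmic decode delay per link crossing
for hops in (2, 3):
    print(f"{hops} hops: {hops * decode_ns:.2f} ns of added decode delay")
```

At 6.4 GT/s one UI is ~0.156 ns, so each decode costs ~1.56 ns; whether that is "lost in noise" depends on how many link crossings sit on the critical coherence path.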
>Implementation part of delay shouldn't be very long either - close to zero on
>xmt side and assuming implementation in CPU-class silicon about 1-2ns on the
>rcv side. Comparatively to algorithmic+implementation delay of CRC-protected
>packet it is lost in noise even for the short (80bit?) NACK packets.
How do you know that CRC calculations are blocking? CRC errors are rare, so I would design the interconnect to speculate that the CRC is clean, with a way to replay if an error does occur.
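Something like this toy Python sketch; CRC32 and the function names here stand in for whatever link-layer CRC and squash/replay machinery a real interconnect would use:

```python
import zlib

def frame(payload: bytes) -> bytes:
    """Transmitter: append a CRC over the payload."""
    return payload + zlib.crc32(payload).to_bytes(4, "big")

def receive(data: bytes, deliver, request_replay):
    """Receiver: forward speculatively, check the CRC in parallel."""
    payload, crc = data[:-4], int.from_bytes(data[-4:], "big")
    deliver(payload)                      # common case: no added latency
    if zlib.crc32(payload) != crc:        # rare case: squash and replay
        request_replay()

good = frame(b"packet")
bad = bytes([good[0] ^ 0x01]) + good[1:]  # flip a bit "on the wire"

log = []
receive(good, log.append, lambda: log.append("REPLAY"))
receive(bad, log.append, lambda: log.append("REPLAY"))
```

The point is that the CRC check is off the critical path: the clean frame is delivered with no wait, and only the corrupted one triggers the replay request.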
>Besides, non-systematic 8b/10b encoding is needed only for AC coupling, not
>for clock-data recovery itself.
Hrmm...could you elaborate?
>When AC coupling is not desirable, CDR could happily live with systematic
>8b/10b or even 8b/9b encoding schemes that have zero delay.
What exactly do you mean?
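If I follow, the distinction is DC balance: an AC-coupled link needs the running disparity (ones minus zeros) of the line bits bounded, which is what the non-systematic 8b/10b code table buys, while CDR only needs enough transitions. A toy disparity check, purely illustrative:

```python
def running_disparity(bits):
    # Cumulative (ones - zeros) along the bit stream. AC coupling needs
    # this bounded; 8b/10b keeps it within a few bits by construction.
    d, trace = 0, []
    for b in bits:
        d += 1 if b else -1
        trace.append(d)
    return trace

# Unencoded data can drift without bound...
drift = max(running_disparity([1] * 40))

# ...while a stream of balanced 10-bit symbols stays bounded.
symbol = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]   # 5 ones, 5 zeros
bounded = max(running_disparity(symbol * 4))
```

A long run of ones drifts the baseline by 40, while the balanced symbol stream never exceeds a disparity of 2 and returns to zero at every symbol boundary.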
[snip]
>>Intel also had an interesting announcement about low power that came out of
>>Intel's circuits group:
>>
>>A Scalable 5-15Gbps, 14-75mW Low Power I/O Transceiver in 65nm CMOS
>>
>>It dissipates ~2-5 mW/Gb/s, but obviously goes much faster.
>>
>>DK
>
>Are you sure that Intel listed maximum power numbers? AFAIR, they were
>looking for ways of reducing average power consumption that had little
>effect on maximum power. Or maybe I am mixing up different Intel
>announcements.
Well, it's the average power consumption that matters most. Maximum only matters for power distribution and heatsinks.
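For what it's worth, the ~2-5 mW/Gb/s figure falls straight out of the paper-title numbers, assuming the 14 mW corresponds to the 5 Gb/s point and the 75 mW to the 15 Gb/s point:

```python
# Energy efficiency at the two endpoints of the quoted transceiver range.
for mw, gbps in ((14, 5), (75, 15)):
    print(f"{gbps:>2} Gb/s: {mw / gbps:.1f} mW/Gb/s")
```

Note the efficiency degrades toward the top of the speed range, which is why the average-vs-maximum distinction matters: a link that idles at the low end spends most of its time near 2.8 mW/Gb/s.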
DK