By: Mark Roulo (nothanks.delete@this.xxx.com), January 7, 2021 7:00 pm
Room: Moderated Discussions
Jörn Engel (joern.delete@this.purestorage.com) on January 7, 2021 4:29 pm wrote:
> rwessel (rwessel.delete@this.yahoo.com) on January 7, 2021 9:25 am wrote:
> >
> > So SECDED on a 32-bit word requires seven bits, eight bits on a 64-bit word, nine bits on a 128-bit word.
> > Doing that on 64-bit words has the advantage of allowing fairly simple 8x9 or 9x8 RAM configurations.
>
> If the memory interface is 64bit, but the cacheline size is 64 _Bytes_, you need to do 8 reads anyway.
> I wonder how hard it would be to do 9 reads to get the extra ECC information. That would simplify
> things quite a bit, you can use the same DIMMs and motherboards. You can also get away with relatively
> fewer bits for ECC, so it might be possible to reduce overhead from 12.5% to something closer to
> 2.15%. Memory bandwidth is reduced by 11%, which would be fine in my book.
>
> Any CPU manufacturer should be able to do something like that. And I believe I can find old
> implementations of that going back close to 20 years. So why isn't it done all the time?
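
Just to sanity-check the numbers in your quote, here is a minimal Python sketch (my own back-of-the-envelope helper, not taken from any real ECC implementation) that computes the Hamming SECDED check-bit count and the resulting storage overhead for the word sizes mentioned above:

# A distance-4 Hamming (SECDED) code over m data bits needs k check
# bits satisfying 2**(k-1) >= m + k: single-error correction plus
# double-error detection.
def secded_check_bits(data_bits):
    k = 1
    while 2 ** (k - 1) < data_bits + k:
        k += 1
    return k

for m in (32, 64, 128, 512):
    k = secded_check_bits(m)
    print("%4d data bits -> %2d check bits (%5.2f%% overhead)" % (m, k, 100.0 * k / m))

# Prints:
#   32 data bits ->  7 check bits (21.88% overhead)
#   64 data bits ->  8 check bits (12.50% overhead)
#  128 data bits ->  9 check bits ( 7.03% overhead)
#  512 data bits -> 11 check bits ( 2.15% overhead)

So protecting a whole 512-bit cacheline needs only 11 check bits (about 2.15%), and fetching them with a 9th burst on a 64-bit interface costs 1/9, i.e. roughly 11% of bandwidth, which matches the figures quoted above.
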
Random Googling suggests that many motherboards accept ECC DRAM even if the CPU in the motherboard does not support ECC. Everything works, you just don't get the ECC protection.
If the CPU supports ECC at all (current schemes or your 64-byte scheme), I don't think the extra cost of "real" ECC DRAM would be off-putting to the customers who want ECC.
So the answer to your question, "why isn't it done all the time," is that the CPU vendor has to build ECC into the CPU, and the vendors that don't implement your scheme don't want to support ANY ECC on those chips. Almost certainly for market segmentation. This is not a technical limitation; it is a business decision.