By: James (no.delete@this.thanks.invalid), May 6, 2021 1:36 am
Room: Moderated Discussions
Etienne Lorrain (etienne_lorrain.delete@this.yahoo.fr) on May 6, 2021 1:08 am wrote:
> I do not think there is any advantage in an "ECC line size" bigger than a very few hundred bytes (and
> there are disadvantages like latency)
That was one of the big advantages Western Digital claimed back when 4K disk blocks were first being introduced:
The principle (sic) problem here is that ECC correction takes place in 512B chunks, while ECC can be more efficient when used over larger chunks of data. If ECC data is calculated against a larger sector, then even though more ECC data is needed than for a single 512B sector, less ECC data is needed than the sum of the multiple smaller sectors to maintain the same level of operational reliability. One estimate for 4K sector technology puts this at 100 bytes of ECC data for a 4K sector, versus 320 (40x8) for eight 512B sectors. Furthermore, the larger sector means that larger erroneous chunks of data can be corrected (burst error correction), something that was becoming harder as greater areal densities made it easier to wipe out larger parts of a 512B sector.
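For what it's worth, the arithmetic in that estimate is easy to sanity-check. Here's a minimal sketch (plain Python; the 40-bytes-per-512B and 100-bytes-per-4K figures are just the estimates quoted above, not vendor-measured values) that works out the relative ECC overhead for protecting the same 4KB of user data both ways:

    # Sketch: ECC overhead for 8 x 512B codewords vs. one 4K codeword,
    # using the estimates quoted above (40 ECC bytes per 512B sector,
    # 100 ECC bytes per 4K sector). Illustrative numbers only.
    SECTOR_4K = 4096               # bytes of user data being protected
    ECC_PER_512 = 40               # estimated ECC bytes per 512B sector
    ECC_PER_4K = 100               # estimated ECC bytes per 4K sector

    legacy_ecc = 8 * ECC_PER_512   # 320 ECC bytes across 8 small codewords
    af_ecc = ECC_PER_4K            # 100 ECC bytes in one large codeword

    def overhead(ecc_bytes, data_bytes=SECTOR_4K):
        """ECC bytes as a fraction of the 4KB of user data they protect."""
        return ecc_bytes / data_bytes

    print(f"8 x 512B sectors: {legacy_ecc} ECC bytes ({overhead(legacy_ecc):.2%} overhead)")
    print(f"1 x 4K sector:    {af_ecc} ECC bytes ({overhead(af_ecc):.2%} overhead)")
    print(f"Saved per 4KB of user data: {legacy_ecc - af_ecc} bytes")

That works out to roughly 7.8% ECC overhead the old way versus about 2.4% with one 4K codeword. The burst-correction point in the quote follows from the same structure: a single codeword spanning the whole 4K sector can spend its entire ECC budget on one damaged region, whereas each independent 512B codeword only ever has its own 40 bytes to draw on.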