By: Björn Ragnar Björnsson (bjorn.ragnar.delete@this.gmail.com), June 14, 2022 4:20 pm
Room: Moderated Discussions
Bill K (bill.delete@this.gmail.com) on June 14, 2022 2:52 pm wrote:
> I agree. Roughly 1/3 to 1/2 the chip area in a Xeon processor is memory and redundancy can easily
> be used for that. Sapphire Rapids has a total L2+L3 cache size of 280 MBytes. Similar to what
> Mark Roulo described for the Nvidia A100, each of the four 400 mm^2 Sapphire Rapids tiles contains
> 15 CPU cores. When some CPU cores are defective, Intel can sell the device as a different part
> number. The maximum number of CPU cores Intel will enable per tile is 14.
>
> Changing the subject, the thermal design power of the 56 core Sapphire Rapids
> is 350W and the maximum turbo power is 420W. Intel 4 is needed to improve these
> numbers, protect the polar bears and compete with AMD on performance per Watt.
Thank you all for your informative answers. Moving to a chiplet/tile configuration of course automatically leads to smaller dies (ceteris paribus) and therefore higher yields, at some cost in performance and/or complexity and packaging. What I was particularly interested in is whether the defect rate per unit area on Intel 7 differs between, say, the core+cache area and the rest of the SoC.
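
For what it's worth, the yield side of that argument can be sanity-checked with a toy Poisson model, Y = exp(-D0 * A). The defect density and the monolithic-die comparison below are illustrative assumptions on my part, not published Intel 7 figures, and core/cache redundancy (harvesting) is ignored to keep the comparison simple:

import math

# Toy Poisson yield model: fraction of dies with zero defects.
D0 = 0.1          # assumed defects per cm^2 (hypothetical)
TILE_AREA = 4.0   # ~400 mm^2 Sapphire Rapids tile, in cm^2
MONO_AREA = 16.0  # hypothetical monolithic die with the same total area

def poisson_yield(defect_density, area_cm2):
    """Fraction of dies with zero defects under a Poisson model."""
    return math.exp(-defect_density * area_cm2)

tile_yield = poisson_yield(D0, TILE_AREA)   # ~67%
mono_yield = poisson_yield(D0, MONO_AREA)   # ~20%

# Assume tiles are tested before packaging (known good die), so a 4-tile
# package only consumes good tiles.  Wafer area burned per sellable part:
mono_cost = MONO_AREA / mono_yield          # cm^2 of wafer per good monolith
tile_cost = 4 * TILE_AREA / tile_yield      # cm^2 of wafer per 4-tile part

print("Per-die yield: tile %.1f%%, monolith %.1f%%" % (100*tile_yield, 100*mono_yield))
print("Silicon per good part: tiled %.1f cm^2 vs monolithic %.1f cm^2" % (tile_cost, mono_cost))

With those made-up numbers the tiled part burns roughly a third of the wafer area per good part, which is the whole economic case for tiles; the part I still don't have a feel for is whether D0 itself varies between logic-plus-SRAM and the I/O-heavy rest of the SoC on Intel 7.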