By: Megabytephreak (roukemap.delete@this.gmail.com), October 29, 2020 10:16 am
Room: Moderated Discussions
anonymou5 (no.delete@this.spam.com) on October 29, 2020 9:52 am wrote:
> > The die size for an FPGA that can make good use of the IF bandwidth and do something
> > worth replacing a CPU chiplet with is going to be a fair bit bigger than the CPU chiplet
> > I think. The Rome package is already pretty full, so I think you'd probably need to
> > sacrifice multiple chiplets, probably even half of them to make it work.
> >
> > It might be more practical, particularly in the short term to allow pairing a specially packaged
> > FPGA with a normal CPU in a 2-socket system, using the FPGA transceivers to implement the inter-chip
> > variant of Infinity Fabric and also supporting the existing DRAM slots. For the very short term
> > you could also do PCIe between sockets as well, although then you lose cache coherency.
>
> Past attempts by AMD and Intel weren't exactly stunning commercial successes.
>
> The term "solution looking for a problem" comes to mind, to be honest.
I don't disagree. My main point was that die sizes probably preclude simply dropping an FPGA onto a Rome-type package as a chiplet.
For an FPGA in the socket, I do think that the tradeoffs may be slightly better if they spun a version of the FPGA with hardened support for Infinity Fabric coherency. The other issue that arises when dropping an FPGA into a CPU socket is that you lose the associated IO and memory connectivity of that socket. This either puts a severe crimp on available IO or requires a custom motherboard. With the chiplet architecture of Rome (and presumably its follow-ons), you might be able to alleviate this somewhat by putting both an FPGA and the IO die in the package, thus retaining the memory and IO capabilities of the socket.
Overall, though, as someone who does FPGA design full-time, I'm fairly pessimistic about FPGAs in the datacenter. Unless there is a real breakthrough in tools, I think the development effort will stay prohibitive for many applications. There will always be temporary niches for new concepts, such as the early SmartNICs that, as I understand it, Microsoft did/is doing, but I think in the long run such concepts will migrate to custom ASICs for power, cost, and performance gains. My personal experience with FPGAs is much more in the embedded space, which has different dynamics.