By: noko (noko.delete@this.noko.com), August 29, 2022 11:54 pm
Room: Moderated Discussions
Freddie (freddie.delete@this.witherden.org) on August 29, 2022 5:32 pm wrote:
> anonymous2 (anonymous2.delete@this.example.com) on August 29, 2022 5:08 pm wrote:
> > AVX-512 (ISA details murky) on Zen 4 but 2 cycles vs 1 on Intel so only 256b internally.
> >
> > Small win for those who want the ISA, but from a performance perspective limited value?
> >
>
> Execution time is not particularly important for SIMD instructions, and it is almost
> never one cycle for floating point anyway. What matters is throughput, and here Zen 4
> is likely to be half rate on a per-cycle basis compared to high-end Intel cores.
>
> That said, a lot of the value of AVX-512 comes from the ISA: extra registers, embedded broadcasts
> in FMAs, and predication, to name a few. These are all useful. Moreover, most compilers will
> only emit 256-bit AVX-512 code by default unless explicitly told otherwise with -mprefer-vector-width=512,
> due to historical down-clocking issues on Intel CPUs. Thus, assuming AMD have not messed up the
> implementation, it is likely to have some utility even for non-ML code.
For data processing, I think 2x256b was pretty expected? Though I wouldn't rely on the current game of telephone to assume that the implementation is "2 cycles" rather than "combining" the two 256b ALUs for the cycle the AVX-512 µop issues. That is, like Ice Lake or Neoverse V1, but unlike Zen 1, which cracked 256b instructions into two 128b µops.
More interesting is whether gather and permute are still slow; Zen 1-3 gather is so slow that it's about as useless as it was on Haswell... And Intel having a full 512b permute unit even on client chips helped simdjson grab headlines with its AVX-512 implementation.