Probably IEEE fp16 (blah)

Article: IBM's Machine Learning Accelerator at VLSI 2018
By: dmcq, October 12, 2018 12:56 pm
Room: Moderated Discussions
wumpus on October 12, 2018 6:55 am wrote:
> David Kanter on October 11, 2018 10:53 pm wrote:
> > Apparently they are working on an 8b FP format as well!
> >
> > David
> >
> At this point just use a logarithmic representation (it might work for 16 bits, but that probably requires
> too many weird circuits. But once you get people to work with log8, they will probably want log16 as well).
> Yes, it might need a scaling factor if you don't fit the exact same "law" as the
> hardware. But float8 is only going to be less forgiving about scaling factors.

There have been a number of schemes described over the years, besides pure logarithmic, that can cut the complexity a bit. But with that few bits practically anything can be done pretty efficiently. Compared to IEEE, multiplication in a pure logarithmic representation is just addition; addition is more complicated and is best done by converting to something more like IEEE. In AI one would normally want the sum of a number of multiplies, and the total can be converted back at the end with rounding.
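To make the point concrete, here is a minimal sketch of that scheme in Python. All names and the bit split are my own assumptions for illustration, not anything from the article: values are encoded as a fixed-point base-2 logarithm, multiplication becomes integer addition of the log fields, and the sum of products is accumulated after converting each product back to linear, rounding only at the end.

```python
import math

FRAC_BITS = 3  # assumed number of fractional bits in the log field (log8-style split)

def to_lns(x):
    """Encode a positive real as a fixed-point base-2 logarithm."""
    assert x > 0
    return round(math.log2(x) * (1 << FRAC_BITS))

def from_lns(l):
    """Decode a fixed-point log back to a linear value."""
    return 2.0 ** (l / (1 << FRAC_BITS))

def lns_mul(a, b):
    """Multiplication in the log domain is just integer addition."""
    return a + b

def lns_dot(xs, ws):
    """Sum of products: cheap adds in the log domain for each multiply,
    then convert each product to linear and accumulate the total."""
    acc = 0.0
    for x, w in zip(xs, ws):
        acc += from_lns(lns_mul(to_lns(x), to_lns(w)))
    return acc
```

A hardware version would of course keep the accumulator in a wider fixed-point format rather than a Python float, but the structure is the same: the expensive operation (multiply) degenerates to addition, and the awkward one (addition) is pushed to the accumulate step.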

Topic | Posted By | Date
VLSI 2018: IBM's machine learning accelerator | David Kanter | 2018/10/09 02:28 PM
  New article! (NT) | David Kanter | 2018/10/09 09:12 PM
  VLSI 2018: IBM's machine learning accelerator | lockederboss | 2018/10/10 12:20 PM
     Probably IEEE fp16 | Mark Roulo | 2018/10/10 03:41 PM
       Probably IEEE fp16 | David Kanter | 2018/10/11 10:53 PM
         Probably IEEE fp16 (blah) | wumpus | 2018/10/12 06:55 AM
           Probably IEEE fp16 (blah) | dmcq | 2018/10/12 12:56 PM
             Probably IEEE fp16 (blah) | dmcq | 2018/10/12 01:07 PM