Probably IEEE fp16 (blah)

Article: IBM's Machine Learning Accelerator at VLSI 2018
By: wumpus, October 12, 2018 6:55 am
Room: Moderated Discussions
David Kanter on October 11, 2018 10:53 pm wrote:
> Apparently they are working on an 8b FP format as well!
> David

At this point, just use a logarithmic representation. It might even work at 16 bits, though that probably requires too many weird circuits; but once you get people working with log8, they will probably want log16 as well.

Yes, it might need a scaling factor if your data doesn't fit exactly the same "law" as the hardware, but float8 is only going to be less forgiving about scaling factors than that.
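To make the idea concrete, here is a minimal sketch of what a log8 format could look like. Every parameter here (the field widths, the bias, the rounding) is a hypothetical choice for illustration, not anything IBM has described:

```python
import math

# Hypothetical "log8" format: 1 sign bit + a 7-bit fixed-point base-2
# logarithm (3 fraction bits, biased so the field is unsigned).
# All of these parameters are illustrative assumptions.
LOG_FRAC_BITS = 3
LOG_BIAS = 64  # centers the 7-bit log field

def log8_encode(x):
    """Encode a nonzero float as (sign, 7-bit log field)."""
    sign = 0 if x > 0 else 1
    code = round(math.log2(abs(x)) * (1 << LOG_FRAC_BITS)) + LOG_BIAS
    return sign, max(0, min(127, code))  # clamp to the representable range

def log8_decode(sign, code):
    """Decode (sign, log field) back to a float."""
    mag = 2.0 ** ((code - LOG_BIAS) / (1 << LOG_FRAC_BITS))
    return -mag if sign else mag

def log8_mul(a, b):
    """Multiply in the log domain: XOR the signs, add the log fields.

    log2(x*y) = log2(x) + log2(y), so with biased codes the product's
    code is ca + cb - LOG_BIAS. No multiplier circuit is needed.
    """
    (sa, ca), (sb, cb) = a, b
    return sa ^ sb, max(0, min(127, ca + cb - LOG_BIAS))
```

Multiplication collapsing to an adder is the whole appeal for inference hardware; the catch is that *addition* of two log-coded values has no similarly cheap form and needs a lookup table or approximation, which is presumably where the "weird circuits" come in, and why it gets worse as the format widens toward log16.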