By: wumpus (lost.delete@this.in.a.cave), October 12, 2018 6:55 am
Room: Moderated Discussions
David Kanter (dkanter.delete@this.realworldtech.com) on October 11, 2018 10:53 pm wrote:
> Apparently they are working on an 8b FP format as well!
>
> David
>
At this point just use a logarithmic representation (it might work for 16 bits too, but that probably requires too many weird circuits; and once you get people working with log8, they will probably want log16 as well).
Yes, it might need a scaling factor if your data doesn't fit the exact same "law" as the hardware. But float8 is, if anything, going to be less forgiving about scaling factors.
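
For what it's worth, here's a toy sketch of the kind of log8 I have in mind (the bit split, the bias, and the names are all made up for illustration, not anything from the article): a sign bit plus a fixed-point log2 magnitude, so a multiply collapses to an integer add plus a sign XOR.

```python
import math

# A toy "log8": 1 sign bit plus a 7-bit biased fixed-point log2 magnitude
# (here 4 integer bits and 3 fraction bits). The bit split, the bias, and the
# function names are assumptions for illustration only.
FRAC_BITS = 3
BIAS = 8 << FRAC_BITS  # bias of 8.0 in fixed point, so log2(|x|) covers roughly [-8, +8)

def log8_encode(x: float) -> tuple[int, int]:
    """Encode a nonzero float as (sign, 7-bit magnitude code)."""
    sign = 1 if x < 0 else 0
    code = round(math.log2(abs(x)) * (1 << FRAC_BITS)) + BIAS
    return sign, max(0, min(127, code))  # saturate rather than handle overflow properly

def log8_decode(sign: int, code: int) -> float:
    """Back to a float: magnitude is 2 raised to the unbiased fixed-point log2."""
    return (-1.0 if sign else 1.0) * 2.0 ** ((code - BIAS) / (1 << FRAC_BITS))

def log8_mul(a: tuple[int, int], b: tuple[int, int]) -> tuple[int, int]:
    """Multiply in the log domain: XOR the signs, add the codes, re-subtract one bias."""
    sign = a[0] ^ b[0]
    code = a[1] + b[1] - BIAS
    return sign, max(0, min(127, code))

# 3.0 * 0.5 should come out near 1.5, within the quantization error of 3 fraction bits
a = log8_encode(3.0)
b = log8_encode(0.5)
print(log8_decode(*log8_mul(a, b)))  # ~1.54
```

The win is that multiplies turn into small integer adds; the "weird circuits" are all on the add/accumulate side, where adding two log-domain numbers needs something like a lookup or approximation, and that's the part I'd expect to get painful by log16.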