By: dmcq (dmcq.delete@this.fano.co.uk), October 12, 2018 12:56 pm
Room: Moderated Discussions
wumpus (lost.delete@this.in.a.cave) on October 12, 2018 6:55 am wrote:
> David Kanter (dkanter.delete@this.realworldtech.com) on October 11, 2018 10:53 pm wrote:
> > Apparently they are working on an 8b FP format as well!
> >
> > David
> >
>
> At this point just use a logarithmic representation (might work for 16 bits, but that probably requires too
> many weird circuits. But once you get people to work with log8, they will probably want log16 as well).
>
> Yes, it might need a scaling factor if you don't fit the exact same "law" as the
> hardware. But float8 is only going to be less forgiving about scaling factors.
A number of schemes besides pure logarithmic have been described over the years that can cut the complexity a bit, but with that few bits practically anything can be done pretty efficiently. Compared to IEEE, pure logarithmic multiplication is just addition; addition is more complicated and is best done by converting to something more like IEEE. In AI one would normally want the sum of a number of multiplies, and the total can be converted back at the end with rounding.
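To make the point concrete, here's a minimal sketch of a logarithmic number system (LNS) in plain floating point. All the names are illustrative, not any standard log8 format: multiply is just adding the logs, addition converts to linear and back, and a dot product accumulates in linear form with one conversion at the end.

```python
import math

def to_lns(x):
    """Encode a positive real as its base-2 logarithm (the LNS value)."""
    return math.log2(x)

def from_lns(e):
    """Decode an LNS value back to a linear real."""
    return 2.0 ** e

def lns_mul(a, b):
    """LNS multiply: log(x*y) = log x + log y, so just an addition."""
    return a + b

def lns_add(a, b):
    """LNS add: no cheap shortcut; convert to linear, add, convert back."""
    return to_lns(from_lns(a) + from_lns(b))

def lns_dot(xs, ys):
    """Sum of products: accumulate in linear form, convert once at the end."""
    acc = 0.0
    for a, b in zip(xs, ys):
        acc += from_lns(lns_mul(a, b))
    return to_lns(acc)
```

In real log8 hardware the conversions would be small lookup tables rather than `log2`/`2**` calls, and a sign bit would be carried alongside, but the asymmetry is the same: multiplies are trivial, additions are where the cost lives.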
Topic | Posted By | Date
---|---|---
VLSI 2018: IBM's machine learning accelerator | David Kanter | 2018/10/09 02:28 PM
New article! (NT) | David Kanter | 2018/10/09 09:12 PM
VLSI 2018: IBM's machine learning accelerator | lockederboss | 2018/10/10 12:20 PM
Probably IEEE fp16 | Mark Roulo | 2018/10/10 03:41 PM
Probably IEEE fp16 | David Kanter | 2018/10/11 10:53 PM
Probably IEEE fp16 (blah) | wumpus | 2018/10/12 06:55 AM
Probably IEEE fp16 (blah) | dmcq | 2018/10/12 12:56 PM
Probably IEEE fp16 (blah) | dmcq | 2018/10/12 01:07 PM