By: David Kanter (dkanter.delete@this.realworldtech.com), October 11, 2018 10:53 pm
Room: Moderated Discussions
Mark Roulo (nothanks.delete@this.xxx.com) on October 10, 2018 3:41 pm wrote:
> lockederboss (locke.delete@this.derboss.nl) on October 10, 2018 12:20 pm wrote:
> > David Kanter (dkanter.delete@this.realworldtech.com) on October 9, 2018 2:28 pm wrote:
> > > Here's my first quick article from VLSI 2018. Hopefully, this will be the first of several!
> > >
> > > IBM presented a neural network accelerator at VLSI 2018 showcasing a variety of architectural techniques
> > > for machine learning, including a regular 2D array of small processing elements optimized for dataflow
> > > computation, reduced precision arithmetic, and explicitly addressed memories.
> > >
> > > https://www.realworldtech.com/vlsi2018-ibm-machine-learning/
> > >
> > > It was quite interesting to compare and contrast to big GPUs, TPUs, and other hardware!
> > >
> > > David
> >
> > Did they discuss details of the 16-bit floating point format
> > (IEEE half precision, bfloat16 or something custom)?
>
> The paper here:
>
> https://www.ibm.com/blogs/research/2018/06/approximate-computing-ai-acceleration/
>
> strongly suggests IEEE fp16
It's something custom. They have a paper at NIPS where they will discuss more details. Apparently they are working on an 8b FP format as well!
David
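
For readers following the format question: below is a minimal illustrative sketch, not IBM's custom format (which, per the reply above, the NIPS paper will detail), showing the two standard 16-bit layouts lockederboss asked about. It uses plain Python `struct` bit manipulation; the value 3.14159 and the helper names are just for illustration.

```python
# Contrast the two standard 16-bit layouts mentioned in the question:
#   IEEE half precision (fp16): 1 sign, 5 exponent, 10 mantissa bits
#   bfloat16:                   1 sign, 8 exponent,  7 mantissa bits
# This is NOT IBM's custom format, just a sketch of the two baselines.
import struct

def float32_bits(x: float) -> int:
    """IEEE single-precision bit pattern of x (1 sign, 8 exponent, 23 mantissa)."""
    return struct.unpack("<I", struct.pack("<f", x))[0]

def bfloat16_bits(x: float) -> int:
    """bfloat16 is simply the top 16 bits of a float32, so it keeps float32's
    exponent range but only 7 mantissa bits. (Truncation shown here; real
    hardware typically rounds to nearest even.)"""
    return float32_bits(x) >> 16

def fp16_fields(x: float):
    """Decompose IEEE half precision via struct's native 'e' format (Python 3.6+).
    The 5-bit exponent limits the largest normal value to 65504."""
    bits = struct.unpack("<H", struct.pack("<e", x))[0]
    return (bits >> 15) & 0x1, (bits >> 10) & 0x1F, bits & 0x3FF

x = 3.14159
print(f"float32 : {float32_bits(x):032b}")
print(f"bfloat16: {bfloat16_bits(x):016b}")
sign, exp, mant = fp16_fields(x)
print(f"fp16    : sign={sign} exp={exp:05b} mantissa={mant:010b}")
```

The trade-off the question hinges on: bfloat16 spends bits on exponent (range) and fp16 on mantissa (precision), which is why training accelerators often prefer bfloat16 or custom formats over IEEE half precision.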
Topic | Posted By | Date
---|---|---
VLSI 2018: IBM's machine learning accelerator | David Kanter | 2018/10/09 02:28 PM
New article! (NT) | David Kanter | 2018/10/09 09:12 PM
VLSI 2018: IBM's machine learning accelerator | lockederboss | 2018/10/10 12:20 PM
Probably IEEE fp16 | Mark Roulo | 2018/10/10 03:41 PM
Probably IEEE fp16 | David Kanter | 2018/10/11 10:53 PM
Probably IEEE fp16 (blah) | wumpus | 2018/10/12 06:55 AM
Probably IEEE fp16 (blah) | dmcq | 2018/10/12 12:56 PM
Probably IEEE fp16 (blah) | dmcq | 2018/10/12 01:07 PM