VLSI 2018: IBM's machine learning accelerator

Article: IBM's Machine Learning Accelerator at VLSI 2018
By: lockederboss (locke.delete@this.derboss.nl), October 10, 2018 12:20 pm
Room: Moderated Discussions
David Kanter (dkanter.delete@this.realworldtech.com) on October 9, 2018 2:28 pm wrote:
> Here's my first quick article from VLSI 2018. Hopefully, this will be the first of several!
>
> IBM presented a neural network accelerator at VLSI 2018 showcasing a variety of architectural techniques
> for machine learning, including a regular 2D array of small processing elements optimized for dataflow
> computation, reduced precision arithmetic, and explicitly addressed memories.
>
> https://www.realworldtech.com/vlsi2018-ibm-machine-learning/
>
> It was quite interesting to compare and contrast it with big GPUs, TPUs, and other hardware!
>
> David

Did they discuss the details of the 16-bit floating-point format (IEEE half precision, bfloat16, or something custom)?
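
For reference (my own sketch, not from the article or the IBM paper), the two standard 16-bit candidates split the bits quite differently: IEEE 754 half precision uses 1 sign / 5 exponent / 10 fraction bits, while bfloat16 uses 1 sign / 8 exponent / 7 fraction bits, i.e. fp32's exponent range with a truncated mantissa. A minimal Python illustration of the two layouts (function names are mine):

# Two common 16-bit float layouts (illustrative only, not from the article):
#   IEEE 754 half precision (fp16): 1 sign bit, 5 exponent bits, 10 fraction bits
#   bfloat16 (bf16):                1 sign bit, 8 exponent bits,  7 fraction bits
import struct

def fp32_to_bf16_bits(x: float) -> int:
    """Convert an fp32 value to a bfloat16 bit pattern by truncating to the top 16 bits."""
    bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
    return bits32 >> 16  # bf16 keeps fp32's sign and full 8-bit exponent

def fp16_fields(bits16: int) -> tuple:
    """Split an IEEE half-precision bit pattern into (sign, exponent, fraction)."""
    return (bits16 >> 15) & 0x1, (bits16 >> 10) & 0x1F, bits16 & 0x3FF

def bf16_fields(bits16: int) -> tuple:
    """Split a bfloat16 bit pattern into (sign, exponent, fraction)."""
    return (bits16 >> 15) & 0x1, (bits16 >> 7) & 0xFF, bits16 & 0x7F

if __name__ == "__main__":
    bf = fp32_to_bf16_bits(3.14159)
    print("bfloat16 fields of pi:", bf16_fields(bf))  # (0, 128, 73)

The practical difference is that bfloat16 trades mantissa precision for fp32's full dynamic range, which is why the choice matters for a training-oriented accelerator.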