How I joined MLPerf

By: David Kanter, March 27, 2019 11:47 am
Room: Moderated Discussions
Hey Folks,

I realized that some of you may not know...but I'm currently one of the chairs of MLPerf inference. EETimes asked me to write a piece about how I got involved. An excerpt is below:

My involvement in MLPerf is the culmination of a long-standing interest in machine learning and my involvement in the world of computer architecture. My curiosity about machine learning dates back to my years studying mathematics and economics at the University of Chicago.

I was briefly involved in a project that attempted to analyze patents and discover techniques that would increase the likelihood of the USPTO accepting an application and granting a patent. Years later, I proposed that Google use its expertise in machine learning for analyzing patents. My suggestion was rewarded with an interview for a product management position and the creation of what would become Google's patent search engine. While I bombed one of my in-person interviews, I use that search engine to this day.

My interest in computing was kindled in high school and my expertise was built over years of running Real World Technologies, analyzing CPUs and machine learning accelerators at the Microprocessor Report, co-founding a microprocessor startup, and advising clients on technology, lawsuits, and investments.

In that time, I connected with leading technologists – several of whom were instrumental to my involvement in MLPerf. In particular, I was professionally acquainted and friendly with Greg Diamos, Kim Hazelwood, Dave Patterson, and Jonah Alben.

My two interests serendipitously came together in 2017. I had been retained as a technical expert to assist two colleagues with the patent portfolio of a great startup team that is designing and commercializing an impressive and innovative accelerator for machine learning. In essence, I was offered an opportunity to become an expert in hardware architecture for machine learning and it was an easy choice to dive in.

MLPerf started as a collaboration between companies such as Baidu and Google and academic researchers at Berkeley, Harvard and Stanford to measure performance for machine learning systems. The effort was inspired in many ways by the success of the SPEC benchmarks for CPUs, and adapted for the 21st century to be incredibly open and embrace the guiding principles of rapid development and iteration.

Initial discussions took place on a public mailing list in conjunction with community meetings and the usual private communication channels. It was through these discussions that I discovered and was drawn into the MLPerf community. In particular, a debate in early 2018 over whether (and how) to measure power consumption for training systems piqued my interest. A friend had led the creation of SPECpower_ssj2008, and we had frequently discussed the complexities and challenges of his work. I decided to contribute my knowledge and insights to a discussion with Dave, Jonah, and several others.

I may try to republish it on RWT, but that's TBD. In the meantime, I'll post a link here:

