Google TPU architecture evolution paper

By: Net Random (net.random.delete@this.random.net), January 7, 2022 10:26 am
Room: Moderated Discussions
Sorry if this has been discussed here before, but I liked reading this paper from Google on the evolution of the architecture of their TPUs.

Ten Lessons From Three Generations Shaped Google’s TPUv4i

There's text in the paper along the lines of "some of these lessons may be obvious to you, but they were surprises to us."

Abstract: Google deployed several TPU generations since 2015, teaching us lessons that changed our views:

1. semiconductor technology advances unequally

2. compiler compatibility trumps binary compatibility, especially for VLIW domain-specific architectures (DSAs)

3. target total cost of ownership vs initial cost

4. support multi-tenancy

5. deep neural networks (DNNs) grow 1.5X annually

6. DNN advances evolve workloads

7. some inference tasks require floating point

8. inference DSAs need air-cooling

9. apps limit latency, not batch size

10. backwards ML compatibility helps deploy DNNs quickly.

These lessons molded TPUv4i, an inference DSA deployed since 2020.
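Lesson 5's 1.5X annual growth rate compounds quickly over the lifetime of a hardware generation. A minimal sketch of that arithmetic (the 1.5X rate comes from the abstract; the five-year horizon is my illustrative choice, not a figure from the paper):

```python
def dnn_growth(annual_factor: float, years: int) -> float:
    """Compound annual growth: total size multiplier after `years` years."""
    return annual_factor ** years

# At 1.5X per year, a model is ~7.6X larger after five years --
# the kind of headroom a DSA designer has to provision for.
print(round(dnn_growth(1.5, 5), 2))  # 7.59
```

That compounding is presumably why the paper treats model growth as a first-class design constraint rather than a footnote.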