Balancing Graphics and Display
To understand why the iPad 3's graphics were unbalanced and inferior to previous products, it is important to step back and clarify our terminology. The crucial observation is that, from a quantitative perspective, graphics performance relative to display resolution took a big step backwards.
The first half of the equation is graphics performance. Modern applications use programmable graphics shaders to apply visual effects to geometric primitives, vertices, and pixels. The most common graphics APIs are Microsoft's DirectX and OpenGL. The shader software is compiled by the GPU driver and sent to the hardware. Certain portions of the application will use fixed function hardware, such as the triangle setup engine and rasterizer. Increasingly though, the programmable shaders are the bulk of the workload. They are executed on the shader array of the GPU, which resembles a multi-core processor and is a good proxy for overall graphics performance.
Since most computer graphics requires 32 bits of precision (or less), a simple measure for the compute performance of the shader array is the peak single precision GFLOP/s. As demonstrated in a previous article on GPU compute performance, this is actually a fairly good metric and can be used to make accurate projections for common benchmarks such as 3DMark.
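Peak single precision throughput is just arithmetic over the shader array's width and clock. The sketch below assumes a generic fused multiply-add pipeline (2 FLOPs per lane per cycle); the lane count and clock are hypothetical, not figures for any particular GPU.

```python
# Peak single-precision throughput of a shader array:
# ALU lanes x FLOPs per lane per cycle x clock.
def peak_gflops(alu_lanes, clock_ghz, flops_per_cycle=2):
    """flops_per_cycle defaults to 2, assuming fused multiply-add."""
    return alu_lanes * flops_per_cycle * clock_ghz

# A hypothetical 64-lane shader array running at 250 MHz:
print(peak_gflops(64, 0.25))  # 32.0 GFLOP/s
```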
The second half of the equation is the display, which determines how many pixels are drawn on the screen. While a higher pixel count improves the quality of the display to the human eye, it also adds a greater burden to the GPU. There are several types of programmable shaders (e.g., vertex, geometry, pixel) that must be executed for each frame drawn on the display. Doubling the number of display pixels will generally double the number of shader operations executed by the GPU. Modern PC and mobile displays vary, but typically support 1-4 MPixels for mainstream models. Exotic products offer higher resolutions, with dramatically higher prices, although 8 MPixel resolutions (i.e., 4K×2K) will probably phase in over the next few years.
The other aspect of the display is the frame rate. Anything below 30 frames per second (FPS) is likely to induce nausea and annoy the viewer. However, even at 30 FPS there is often significant motion blur, and a single high-latency frame can ruin the experience. A good target for the human eye is 60 FPS, which is more amenable to action scenes and robust against dips in performance. Certainly there are visual advantages to even higher frame rates, particularly for scenes with fast action, but few 2D displays actually draw at 120 FPS or 240 FPS. Assuming 60 FPS, this translates into 60-240 MPixels/sec for a mainstream display.
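The pixel throughput the GPU must sustain follows directly from resolution and frame rate. A minimal sketch, using two representative panels at the ends of the 1-4 MPixel mainstream range (the specific resolutions are illustrative):

```python
# Pixel throughput a GPU must sustain: width x height x frame rate.
def mpixels_per_sec(width, height, fps=60):
    return width * height * fps / 1e6

# Roughly the 1 MPixel and 4 MPixel ends of the mainstream range, at 60 FPS:
print(mpixels_per_sec(1280, 800))   # ~61 MPixels/sec
print(mpixels_per_sec(2560, 1600))  # ~246 MPixels/sec
```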
Combining these two measures yields FLOPs/pixel, a simple metric that expresses the quality of the graphics. Quantitatively, it describes how many calculations the GPU can afford to perform for each pixel (on average) while maintaining the desired frame rate. This in turn limits the techniques that graphics programmers can apply; more complex shaders will make the scene more attractive, but decrease the frame rate and may hurt the overall experience. One of the key reasons to buy a high-end discrete graphics card for a PC is to increase the quality of the visual effects. For example, generating exponential variance shadow maps (EVSMs) on a 3 MPixel display at 60 FPS can consume about 64 GFLOP/s and at least 20 GB/s of memory bandwidth.
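The EVSM example can be turned into a quick budget check: dividing the shader throughput by the pixel rate gives the per-pixel cost of the effect. This is just arithmetic on the figures quoted above, not a statement about any particular GPU.

```python
# FLOPs/pixel budget consumed by the EVSM example:
# 64 GFLOP/s of shader work over a 3 MPixel display at 60 FPS.
gflops = 64.0
pixels = 3e6
fps = 60

budget = gflops * 1e9 / (pixels * fps)
print(round(budget))  # ~356 FLOPs per pixel per frame for the shadow maps alone
```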
Figure 1 shows the evolution of the graphics capabilities of the iPad, with FLOPs/pixel on the vertical axis. Each data point shows the newest iPad model at its launch date. Generally speaking, the capabilities of a product should always improve over time, or at the very least stay constant. However, the release of the iPad 3 in March 2012 was a giant step backwards, falling to 0.52× of its predecessor, from 381 FLOPs/pixel down to 198 FLOPs/pixel. To put this in context, Intel's Ivy Bridge GPU was released around the same time, and variants intended for notebooks have 1.3 KFLOPs/pixel for a massive 2560×1600 display (most notebooks are actually 1920×1080, which implies 2.5 KFLOPs/pixel). Practically speaking, this means that any 3D applications that were moderately taxing on the iPad 2 could not take advantage of the new 2048×1536 "Retina" display.
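The 0.52× figure can be sanity-checked the same way. The GFLOP/s inputs below are back-solved from the article's FLOPs/pixel numbers (roughly 18 and 37.4 GFLOP/s), not official specifications; the point is that roughly 2× the shader throughput spread over 4× the pixels halves the per-pixel budget.

```python
# Reproducing the iPad 2 -> iPad 3 FLOPs/pixel drop.
# GFLOP/s values are implied by the article's numbers, not official specs.
def flops_per_pixel(gflops, width, height, fps=60):
    """Average shader FLOPs available per pixel per frame."""
    return gflops * 1e9 / (width * height * fps)

ipad2 = flops_per_pixel(18.0, 1024, 768)   # ~381 FLOPs/pixel
ipad3 = flops_per_pixel(37.4, 2048, 1536)  # ~198 FLOPs/pixel
print(round(ipad2), round(ipad3), round(ipad3 / ipad2, 2))  # 381 198 0.52
```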
Apple's approach to handling these 3D applications was a clever hack. The resolution of the iPad 3 was exactly 2× that of its predecessor in each dimension, and 4× larger in total. Applications could run at the older 1024×768 resolution and scale up without any nasty visual artifacts, although the graphical quality was identical to the older systems. For some consumers, this was undoubtedly quite disappointing; the new high resolution display was primarily useful for 2D applications. However, it was rather intelligent in that it offered consumers a better experience in some circumstances, without degrading other usage models (aside from the battery life issues). For keen students of history, Apple took the same approach when moving the iPhone from a 480×320 display to a 960×640 display. This also highlights a potential benefit of a tightly controlled platform; an open ecosystem with variable resolutions could not scale icons, fonts, etc. without graphical artifacts for some configurations.