Memory Bandwidth and GPU Performance

Memory bandwidth is critical to feeding the shader arrays in programmable GPUs. We show that memory is an integral part of a good performance model and that bandwidth can impact graphics performance by 40% or more. The implications are important for upcoming integrated graphics, such as AMD’s Llano and Intel’s Ivy Bridge, as bandwidth constraints will play a key role in determining overall performance.
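
As a rough illustration of why bandwidth belongs in the model, consider a simple roofline-style estimate. The card figures and per-frame costs below are assumptions made up for the sake of the example, not numbers from the article.

```python
# Minimal roofline-style estimate of how memory bandwidth caps frame rate.
# All card figures and per-frame costs are illustrative assumptions.

def frames_per_second(peak_gflops, bandwidth_gbs, flops_per_frame, bytes_per_frame):
    """Frame rate is limited by whichever resource runs out first."""
    compute_limit = peak_gflops * 1e9 / flops_per_frame
    bandwidth_limit = bandwidth_gbs * 1e9 / bytes_per_frame
    return min(compute_limit, bandwidth_limit)

flops_per_frame = 5e9      # assumed shader work per frame
bytes_per_frame = 2e9      # assumed framebuffer/texture traffic per frame

full_bw = frames_per_second(1000, 150, flops_per_frame, bytes_per_frame)
half_bw = frames_per_second(1000, 60, flops_per_frame, bytes_per_frame)
print(f"150 GB/s: {full_bw:.0f} fps, 60 GB/s: {half_bw:.0f} fps")
# With these assumed numbers, cutting bandwidth drops the frame rate well over
# 40%, which is why bandwidth belongs in any GPU performance model.
```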

Predicting AMD and Nvidia GPU Performance

Modern graphics processors are incredibly complex, but understanding their performance is essential, as they become an increasingly important component of computer systems. In this report, we use a set of benchmark results to build accurate performance models for AMD and Nvidia GPUs. We verify that our model can predict performance within roughly 6-8% for many desktop graphics cards and show how Nvidia’s microarchitecture and drivers achieve roughly 2X higher utilization than AMD’s VLIW5 design.
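
The article's actual model is not reproduced here, but the flavor of the approach can be sketched as a least-squares fit of frame rate against peak shader throughput and memory bandwidth. The card specs and benchmark scores below are invented for illustration only.

```python
# Sketch of fitting a simple GPU performance model from benchmark results.
# The linear model form and every number below are illustrative assumptions,
# not the article's data.
import numpy as np

# Columns: peak shader GFLOP/s, memory bandwidth in GB/s (hypothetical cards).
specs = np.array([
    [1345.0, 153.6],
    [1062.7, 115.2],
    [ 907.0, 134.4],
    [ 672.0,  86.4],
])
measured_fps = np.array([74.0, 58.0, 61.0, 40.0])   # assumed benchmark scores

# Least-squares fit: fps ~ a * GFLOP/s + b * GB/s.
coeffs, *_ = np.linalg.lstsq(specs, measured_fps, rcond=None)
predicted = specs @ coeffs

error = np.abs(predicted - measured_fps) / measured_fps
print("per-card error:", np.round(error * 100, 1), "%")
# The fitted coefficients act as a crude proxy for achieved utilization, which
# is the spirit of the AMD vs. Nvidia comparison in the article.
```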

AMD’s Cayman GPU Architecture

The major trend in graphics is towards programmability and highly parallel, general-purpose workloads. Historically, AMD has focused on gaming performance. However, DirectCompute and OpenCL are beginning to take hold and create the seeds of a software ecosystem. AMD’s new Cayman architecture is a gradual and evolutionary step towards more general purpose hardware and a cautious embrace of GPU computing. While primarily a graphics processor, Cayman makes some fundamental microarchitectural changes to improve programmability and performance. In this article, we explore the Cayman architecture, including the new VLIW4 SIMD, dynamic power management and other enhancements. Our report concludes with a preliminary assessment of the Radeon 6970 and 6950 graphics cards and projections for frequency, power and performance of future compute products.
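
To see why dropping the fifth slot can help, a quick back-of-the-envelope calculation is useful; the average slot occupancy assumed below is illustrative, not a figure from the article.

```python
# Back-of-the-envelope look at why narrowing the SIMD from VLIW5 to VLIW4 can
# raise utilization. The ~3.4 filled slots per issue is an assumed figure for
# a typical shader workload, not a number from the article.

avg_filled_slots = 3.4

for width, name in [(5, "VLIW5"), (4, "VLIW4")]:
    utilization = min(avg_filled_slots, width) / width
    print(f"{name}: {utilization:.0%} of ALU slots doing useful work")

# If the compiler rarely fills a fifth slot, removing it wastes less hardware,
# which is part of the motivation behind Cayman's VLIW4 SIMDs.
```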

Introduction to OpenCL

A critical question for GPU computing is how programmers will interface with the underlying hardware. Users have the choice between three APIs: Nvidia’s proprietary CUDA, Microsoft’s DirectCompute and OpenCL. Of the three, OpenCL has garnered the most enthusiasm across the PC ecosystem (e.g. AMD, IBM, Intel and Nvidia) and the mobile and embedded market (e.g. ARM and Imagination Technologies). While still a nascent technology, OpenCL is very popular because it is an open, industry standard that promises compatibility across a huge variety of hardware. This article explores several aspects of OpenCL, from the early development efforts at Apple to the standard itself, including the execution and memory models.
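
As a taste of the execution model discussed in the article, here is a minimal vector-add sketch using the pyopencl bindings. The article itself contains no code, so this is purely illustrative: each work-item in the NDRange handles one element, addressed by its global ID.

```python
# Minimal OpenCL vector-add via pyopencl: one work-item per element, each
# indexing global memory by its global ID. Illustrative sketch only.
import numpy as np
import pyopencl as cl

a = np.random.rand(1024).astype(np.float32)
b = np.random.rand(1024).astype(np.float32)

ctx = cl.create_some_context()        # pick an OpenCL platform/device
queue = cl.CommandQueue(ctx)          # work is enqueued, then executed

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void vadd(__global const float *a,
                   __global const float *b,
                   __global float *out) {
    int gid = get_global_id(0);   /* this work-item's place in the NDRange */
    out[gid] = a[gid] + b[gid];
}
""").build()

program.vadd(queue, a.shape, None, a_buf, b_buf, out_buf)  # 1024 work-items
result = np.empty_like(a)
cl.enqueue_copy(queue, result, out_buf)
assert np.allclose(result, a + b)
```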

PhysX87: Software Deficiency

PhysX is a key application that Nvidia uses to showcase the advantages of GPU computing (GPGPU) for consumers. PhysX executing on an Nvidia GPU can improve performance by 2-4X compared to running on a CPU from Intel or AMD. We investigated and discovered that CPU PhysX exclusively uses x87 rather than the faster SSE instructions. This hobbles the performance of CPUs, calling into question the real benefits of PhysX on a GPU.
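
The arithmetic behind the objection can be sketched as follows, using generic per-core throughput assumptions for scalar x87 versus 4-wide packed single-precision SSE rather than any measurements from the article.

```python
# Rough arithmetic behind the x87-vs-SSE complaint. The per-core throughput
# figures are generic era-appropriate assumptions, not the article's data.

x87_flops_per_cycle = 1.0      # scalar: one single-precision op per issue
sse_flops_per_cycle = 4.0      # packed: four single-precision lanes per issue

gpu_speedup_vs_x87 = 3.0       # assumed point within the quoted "2-4X" range

# If the CPU path were vectorized, the GPU lead shrinks proportionally
# (idealized, fully vectorizable case).
gpu_speedup_vs_sse = gpu_speedup_vs_x87 * (x87_flops_per_cycle / sse_flops_per_cycle)
print(f"GPU vs. x87 build: {gpu_speedup_vs_x87:.1f}X")
print(f"GPU vs. hypothetical SSE build: {gpu_speedup_vs_sse:.2f}X")
# In this idealized case the advantage largely evaporates, which is why the
# choice of instruction set matters for the comparison.
```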

Larrabee 1 Defers Graphics, Bins Rendering

Larrabee is Intel’s unique architecture for a family of throughput processors, developed for the graphics and HPC markets. We have recently learned that graphics products based on Larrabee 1, the first implementation, have been canceled and that it will instead be used as a software development vehicle. Larrabee’s troubles lie in software, and the question now is what lies ahead for Larrabee and Intel’s graphics products.

Inside Fermi: Nvidia’s HPC Push

In the last several years, the landscape for computing has become increasingly interesting and diverse. GPUs have gradually evolved to be less application specific and slightly more generalized than their fixed function ancestors. The changes started in the DirectX 9 time frame, with real floating point (FP) data types, but still fixed, separate vertex, geometry and pixel processing stages. DX10 hardware was the real turning point, with unified shaders, relatively complete data types (i.e. integers were added) and slightly more flexible control flow. Today the high-end is a four horse race between AMD (née ATI), Intel’s and AMD’s integrated graphics, Larrabee, and Nvidia. All four face different goals and constraints, and hence have taken slightly different paths. It is in this context that Nvidia has announced a next generation architecture, Fermi, which aims for even greater performance, reliability and programmability, unlocking even more software capabilities.

Computational Efficiency in Modern Processors

The computer industry is on the cusp of yet another turn of the Wheel of Reincarnation, with the graphics processing unit (GPU) cast as the heir apparent of the floating point co-processors of days long gone. Modern GPUs are ostensibly higher performance and more power efficient than CPUs for their target workloads, and many companies and media outlets claim they are leaving CPUs in the dust. Is this really the case, though? This article explores the quantitative basis for these claims, with some surprising results.
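
A toy version of the kind of normalization involved, with purely illustrative peak, utilization and power figures rather than the article's measurements, looks like this:

```python
# Compare compute efficiency per watt on achieved rather than peak throughput.
# All figures below are illustrative assumptions, not the article's data.

def achieved_gflops_per_watt(peak_gflops, utilization, tdp_watts):
    return peak_gflops * utilization / tdp_watts

gpu = achieved_gflops_per_watt(peak_gflops=1000, utilization=0.20, tdp_watts=200)
cpu = achieved_gflops_per_watt(peak_gflops=100,  utilization=0.80, tdp_watts=100)

print(f"GPU: {gpu:.2f} GFLOP/s per watt, CPU: {cpu:.2f} GFLOP/s per watt")
# On peak numbers the GPU looks ~5X more efficient; once realistic utilization
# is factored in, the gap narrows considerably -- the crux of the question.
```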

The Case for ECC Memory in Nvidia’s Next GPU

Nvidia’s corporate strategy firmly rests on expanding the market for GPUs beyond graphics to include certain types of computation. Specifically, Nvidia’s efforts with CUDA are aimed at moving GPUs into the high performance computing (HPC) market, where the substantial compute capabilities and memory bandwidth translate directly into performance. Nvidia’s Tesla products (GPUs designed for computation instead of graphics) have made a bit of a splash, but at the moment adoption is extremely limited. GPU clusters are basically non-existent, at least in part due to the lack of error detection and correction, which we believe will be addressed in Nvidia’s next product release.
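
For readers unfamiliar with ECC, the single-error-correction idea can be sketched with a toy Hamming(7,4) code; real memory ECC (e.g. SECDED over 64-bit words) operates on much wider words, and this example is purely illustrative.

```python
# Toy Hamming(7,4) code: locate and fix a single flipped bit. This is the
# single-error-correction idea behind ECC memory, shown at miniature scale.

def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]           # codeword positions 1..7

def hamming74_correct(c):
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]                # checks positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]                # checks positions 2,3,6,7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]                # checks positions 4,5,6,7
    syndrome = s1 + 2 * s2 + 4 * s3               # 0 = clean, else error position
    if syndrome:
        c[syndrome - 1] ^= 1                      # flip the bad bit back
    return [c[2], c[4], c[5], c[6]]               # recovered data bits

word = [1, 0, 1, 1]
stored = hamming74_encode(word)
stored[4] ^= 1                                    # simulate a soft-error bit flip
assert hamming74_correct(stored) == word          # error detected and corrected
```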

NVIDIA’s GT200: Inside a Parallel Processor

Our analysis of NVIDIA’s latest GPU, the GT200 (also known as the G100 or GTX280), looks inside this parallel processor.
