A critical question for GPU computing is how programmers will interface with the underlying hardware. Users can choose from three APIs: Nvidia’s proprietary CUDA, Microsoft’s DirectCompute and OpenCL. Of the three, OpenCL has garnered the most enthusiasm across the PC ecosystem (e.g. AMD, IBM, Intel and Nvidia) and the mobile and embedded market (e.g. ARM and Imagination Technologies). While still a nascent technology, OpenCL is popular because it is an open industry standard that promises compatibility across a huge variety of hardware. This article explores OpenCL, from the early development efforts at Apple to the standard itself, including its execution and memory models.
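To make the execution and memory model concrete, here is a minimal sketch of an OpenCL host program and kernel that add two vectors. The kernel source, buffer names and sizes are illustrative assumptions rather than code from the article, and error checking is trimmed for brevity.

/* Minimal OpenCL vector-add sketch: one kernel, one in-order command queue.
 * A real host program should inspect every cl_int return code. */
#include <CL/cl.h>
#include <stdio.h>

static const char *kernel_src =
    "__kernel void vadd(__global const float *a,            \n"
    "                   __global const float *b,            \n"
    "                   __global float *c)                  \n"
    "{                                                      \n"
    "    size_t i = get_global_id(0); /* one work-item per element */ \n"
    "    c[i] = a[i] + b[i];                                \n"
    "}                                                      \n";

int main(void)
{
    enum { N = 1024 };
    float a[N], b[N], c[N];
    for (int i = 0; i < N; i++) { a[i] = (float)i; b[i] = 2.0f * i; }

    cl_platform_id platform;
    cl_device_id device;
    cl_int err;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_DEFAULT, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
    cl_command_queue q = clCreateCommandQueue(ctx, device, 0, &err);

    /* Global memory buffers: the host copies a and b in, reads c back. */
    cl_mem da = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(a), a, &err);
    cl_mem db = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                               sizeof(b), b, &err);
    cl_mem dc = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, sizeof(c), NULL, &err);

    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, &err);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel k = clCreateKernel(prog, "vadd", &err);

    clSetKernelArg(k, 0, sizeof(cl_mem), &da);
    clSetKernelArg(k, 1, sizeof(cl_mem), &db);
    clSetKernelArg(k, 2, sizeof(cl_mem), &dc);

    /* N work-items form the NDRange; the runtime groups them into work-groups. */
    size_t global = N;
    clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
    clEnqueueReadBuffer(q, dc, CL_TRUE, 0, sizeof(c), c, 0, NULL, NULL);

    printf("c[10] = %f\n", c[10]);  /* expect 30.0 */

    clReleaseMemObject(da); clReleaseMemObject(db); clReleaseMemObject(dc);
    clReleaseKernel(k); clReleaseProgram(prog);
    clReleaseCommandQueue(q); clReleaseContext(ctx);
    return 0;
}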
PhysX87: Software Deficiency
PhysX is a key application that Nvidia uses to showcase the advantages of GPU computing (GPGPU) for consumers. PhysX executing on an Nvidia GPU can improve performance by 2-4X compared to running on a CPU from Intel or AMD. We investigated and discovered that CPU PhysX exclusively uses x87 rather than the faster SSE instructions. This hobbles CPU performance and calls into question the real benefits of PhysX on a GPU.
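For context on why the instruction set choice matters, the sketch below contrasts a scalar loop (the sort of code an x87-targeting build executes one operand at a time through the 80-bit register stack) with an SSE version that processes four single-precision values per instruction. The function names and the saxpy-style loop are hypothetical illustrations, not code from PhysX.

/* Illustrative only: scalar loop vs. 4-wide SSE loop. */
#include <xmmintrin.h>
#include <stddef.h>

/* Scalar version: with a pre-SSE target (or -mfpmath=387) each multiply
 * and add goes through the x87 register stack, one operand at a time. */
void saxpy_scalar(float *y, const float *x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* SSE version: four single-precision lanes per multiply and add.
 * Assumes n is a multiple of 4 and the pointers are 16-byte aligned. */
void saxpy_sse(float *y, const float *x, float a, size_t n)
{
    __m128 va = _mm_set1_ps(a);
    for (size_t i = 0; i < n; i += 4) {
        __m128 vx = _mm_load_ps(&x[i]);
        __m128 vy = _mm_load_ps(&y[i]);
        vy = _mm_add_ps(_mm_mul_ps(va, vx), vy);
        _mm_store_ps(&y[i], vy);
    }
}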
Larrabee 1 Defers Graphics, Bins Rendering
Larrabee is Intel’s unique architecture for a family of throughput processors, developed for the graphics and HPC markets. We have recently learned that graphics products based on Larrabee 1, the first implementation, have been canceled and that it will instead be used as a software development vehicle. Larrabee’s troubles lie in software, and the question now is what lies ahead for Larrabee and Intel’s graphics products.
Inside Fermi: Nvidia’s HPC Push
In the last several years, the landscape for computing has become increasingly interesting and diverse. GPUs have gradually evolved to be less application specific and somewhat more generalized than their fixed function ancestors. The changes started in the DirectX 9 time frame, with real floating point (FP) data types, but still fixed vertex, geometry and pixel processing. DX10 hardware was the real turning point, with unified shaders, relatively complete data types (i.e. integers were added) and slightly more flexible control flow. Today the high end is a four horse race between AMD (née ATI), Intel’s and AMD’s integrated graphics, Larrabee, and Nvidia. All four face different goals and constraints, and hence have taken slightly different paths. It is in this context that Nvidia has announced a next generation architecture, Fermi, which aims for even greater performance, reliability and programmability, unlocking even more software capabilities.
Computational Efficiency in Modern Processors
The computer industry is on the cusp of yet another turn of the Wheel of Reincarnation, with the graphics processing unit (GPU) cast as the heir apparent of the floating point co-processors of days long gone. Modern GPUs are ostensibly higher performance and more power efficient than CPUs for their target workloads, and many companies and media outlets claim they are leaving CPUs in the dust. Is this really the case, though? This article explores the quantitative basis for these claims, with some surprising results.
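As a rough illustration of the kind of arithmetic involved, the sketch below computes peak throughput and throughput per watt from execution unit counts, clock speed and power. The specific figures are placeholders chosen for illustration, not measurements or data from the article.

/* Back-of-the-envelope peak throughput and efficiency:
 *   peak GFLOP/s       = units * FLOPs per unit per clock * clock (GHz)
 *   GFLOP/s per watt   = peak GFLOP/s / power
 * All numbers below are hypothetical placeholders. */
#include <stdio.h>

static double peak_gflops(int units, int flops_per_clock, double ghz)
{
    return units * flops_per_clock * ghz;
}

int main(void)
{
    /* Hypothetical GPU: 240 ALUs, 2 FLOPs/clock (multiply-add), 1.3 GHz, 180 W */
    double gpu = peak_gflops(240, 2, 1.3);
    /* Hypothetical quad-core CPU: 8 SP FLOPs/clock per core via SSE, 3.0 GHz, 95 W */
    double cpu = peak_gflops(4, 8, 3.0);

    printf("GPU: %.0f GFLOP/s peak, %.1f GFLOP/s per watt\n", gpu, gpu / 180.0);
    printf("CPU: %.0f GFLOP/s peak, %.1f GFLOP/s per watt\n", cpu, cpu / 95.0);
    return 0;
}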
The Case for ECC Memory in Nvidia’s Next GPU
Nvidia’s corporate strategy firmly rests on expanding the market for GPUs beyond graphics to include certain types of computation. Specifically, Nvidia’s efforts with CUDA are aimed at moving GPUs into the high performance computing (HPC) market, where the substantial compute capabilities and memory bandwidth directly translate into performance. Nvidia’s Tesla products (GPUs designed for computation instead of graphics) have made a bit of a splash, but at the moment adoption is extremely limited. GPU clusters are basically non-existent, at least in part due to the lack of error detection and correction, a shortcoming we believe will be addressed in Nvidia’s next product release.
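For a sense of what error detection buys, the sketch below uses simple even parity over a 64-bit word to catch a single-bit flip. Real ECC memory goes further, typically using a SECDED code with 8 check bits per 64 data bits that can also correct the error; this simplified example only detects it.

/* Simple even parity over 64 bits: detects any single-bit upset,
 * but cannot locate or correct it the way a SECDED code can. */
#include <stdint.h>
#include <stdio.h>

static unsigned parity64(uint64_t w)
{
    /* Fold the word down to a single parity bit. */
    w ^= w >> 32; w ^= w >> 16; w ^= w >> 8;
    w ^= w >> 4;  w ^= w >> 2;  w ^= w >> 1;
    return (unsigned)(w & 1u);
}

int main(void)
{
    uint64_t stored = 0x0123456789ABCDEFULL;
    unsigned check  = parity64(stored);         /* stored alongside the data */

    uint64_t read_back = stored ^ (1ULL << 17); /* simulate a single-bit flip */

    if (parity64(read_back) != check)
        printf("single-bit error detected\n");
    else
        printf("data looks clean (or an even number of bits flipped)\n");
    return 0;
}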
NVIDIA’s GT200: Inside a Parallel Processor
Our analysis of NVIDIA’s latest GPU, the G100 (also known as the GT200 or GTX280).
Rambus Sets the Bandwidth Bar at a Terabyte/Second
In this article, David Kanter covers Rambus’ recent announcement of the Terabyte Bandwidth Initiative (TBI), which is likely to be the successor to the XDR and XDR2 memory interfaces. The TBI is a high speed interface with a significantly improved command/address architecture for better performance, targeted at next generation consoles and graphics applications.
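The arithmetic behind a terabyte-per-second target is straightforward: per-pin signaling rate times the number of data pins, divided by eight bits per byte. The rate and width in the sketch below are hypothetical placeholders to show the calculation, not figures from Rambus’ announcement.

/* Aggregate bandwidth = per-pin rate (Gb/s) * data pins / 8 bits per byte.
 * The rate and width here are placeholders, not TBI specifications. */
#include <stdio.h>

int main(void)
{
    double gbps_per_pin = 16.0;   /* hypothetical per-pin signaling rate */
    int    data_pins    = 512;    /* hypothetical interface width */

    double gb_per_s = gbps_per_pin * data_pins / 8.0;   /* gigabytes per second */
    printf("%.0f Gb/s/pin x %d pins = %.0f GB/s (%.2f TB/s)\n",
           gbps_per_pin, data_pins, gb_per_s, gb_per_s / 1024.0);
    return 0;
}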
An Overview of High Frequency Processor-System Interconnects
David reports on IBM’s system interconnect scheme, called Elastic I/O, which was presented at the Microprocessor Forum 2002.
Winstone Reporting Methodology Examined
Bill examines the recommended Winstone reporting methodology, and gives it a poor performance rating.