I’ve been following the GPGPU industry for a few years now. Initially Nvidia was the only player with a supported development platform. While they provided support for GPGPU on most of their cards, they only supported double precision on their Tesla cards. If you were interested in doing anything serious with GPGPU, you used Nvidia’s CUDA SDK and a Tesla C1060 card (or an S1070 1U server).
Fast forward a year or so and the GPGPU landscape has changed significantly. AMD / ATI have entered the GPGPU arena with their stream computing and Fusion initiatives. Intel claimed that they would take over both the graphics market and the HPC market with their Larrabee initiative, then recently started backpedaling. With all of the recent changes I thought it would be useful to take a look at the current (and soon-to-be-released) product offerings from AMD / ATI and Nvidia.
First let’s take a look at the desktop offerings. While AMD / ATI and Nvidia support GPGPU on a large number of cards, I will limit my comparison to the high-end devices. (If you're reading this through a feed aggregator you will need to hit the actual site http://gpgpu-computing3.blogspot.com/ in a browser to get the tabular data... sorry)
At first glance it might seem as though the AMD / ATI cards offer better performance at a lower cost, but you should be cautious in this assessment. Oftentimes what these vendors report is theoretical peak performance, not measured performance. Additionally, depending on what types of algorithms you are running, you might be limited more by memory than by the number of processors. Nvidia has also been specifically modifying the design of their GPUs to make them better at GPGPU tasks, while AMD / ATI has just gotten into the game. AMD / ATI is working with SciSoft to hammer out an OpenCL GPGPU benchmark suite. Once this is complete, we will be able to make intelligent comparisons.
Another important factor is product availability. Both ATI cards are currently available, but due to low fabrication yields, only in limited numbers. On the Nvidia side only the C1060 cards are currently available; the C2050 and C2070 are rumored to be available in Q2 2010. Nvidia does have what they are calling “The Mad Science Program.” This sales initiative allows you to purchase current-generation Tesla products that can be upgraded when the next-generation products are released: all you have to do is pay the difference and ship back the old product.
Well, that covers the desktop side of the GPGPU world, but what about server-side solutions? On the server side there is a slew of solutions based on Nvidia GPUs. Nvidia has an M1060 card that they sell to OEMs. The M1060 is designed specifically to be integrated by OEMs into server-based solutions. It is similar to the C1060 minus the “thermal solution”: the M1060 relies on the server’s fans to cool it. This makes the card much smaller, so you can pack more of them into a server while still providing adequate cooling. As far as I know AMD / ATI has no such animal. In theory you could drop an ATI card into any Dell or HP server that has a PCIe x16 slot, but I wouldn’t be surprised if it overheated from time to time.
Another server-based solution from Nvidia is their 1U GPU offload servers. The term “server” is perhaps not the best description for these devices: they do not run an OS or have any CPUs. They are basically a 1U box containing a power supply, 4 GPUs, some heat sinks, and fans. These GPU offload servers need to be connected via a PCIe x16 extension cable to a “pizza box” that actually runs an OS. Your GPGPU program runs on the “pizza box”, loads your GPU kernel across the extension cable, copies your data across the cable, crunches the numbers on the GPU offload server, and finally copies your results back. Nvidia 1U offload server info:
The S1070 is available today, but the S2050 / S2070 will not be out until mid-2010. Nvidia’s “Mad Science” program applies to these devices as well. As far as I know ATI currently has nothing comparable to these.
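Whether the GPUs sit inside the host box or in an offload server at the far end of a PCIe extension cable, the host-side workflow looks the same from the programmer's point of view: copy data over, launch the kernel, copy results back. Here is a minimal CUDA sketch of that pattern (the `scale` kernel, sizes, and launch parameters are my own illustration, not from any vendor sample):

```cuda
/* Minimal host/device round trip with the CUDA runtime API.
   Error checking is omitted for brevity. */
#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

/* A trivial example kernel: multiply every element by a factor. */
__global__ void scale(float *data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        data[i] *= factor;
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    float *h_data = (float *)malloc(bytes);
    for (int i = 0; i < n; i++)
        h_data[i] = 1.0f;

    float *d_data;
    cudaMalloc(&d_data, bytes);                                 /* allocate on the GPU        */
    cudaMemcpy(d_data, h_data, bytes, cudaMemcpyHostToDevice);  /* copy data across the cable */
    scale<<<(n + 255) / 256, 256>>>(d_data, 2.0f, n);           /* crunch the numbers         */
    cudaMemcpy(h_data, d_data, bytes, cudaMemcpyDeviceToHost);  /* copy the results back      */
    cudaFree(d_data);

    printf("h_data[0] = %f\n", h_data[0]);
    free(h_data);
    return 0;
}
```

The same source compiles and runs unchanged against a C1060 in your workstation or an S1070 hanging off an extension cable; the PCIe transfers in the `cudaMemcpy` calls are simply slower in the latter case.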
Something else that must be taken into consideration is OS support for the development toolkits (CUDA, OpenCL) provided by each vendor.
Overall I’m trying to be objective and not show any bias toward AMD / ATI or Nvidia, but it is difficult. I have been doing GPGPU for a few years now and it has all been on Nvidia-based products. Once AMD / ATI has a product that I can evaluate on my OS of choice (RHEL) and they start providing server-side solutions I may change my tune… but for now… I’m sticking with Nvidia.