A third of desktops to go multi-GPU in 2012? Not likely

A detailed and scholarly report on the history and future of multi-GPU technology runs crashingly aground with a prediction of 30 percent multi-GPU penetration in the desktop space by 2012. What's stopping a bright SLI/Crossfire/XGP future? Ars explains.

Last week, the respected Jon Peddie Research Group released a major report on the history, technology, and future of multi-GPU computing that predicts the now-esoteric technology will experience a major renaissance by 2012, with two-thirds of new desktop systems multi-GPU capable, and fully 30 percent packing two GPUs (not 50 percent, as has been reported elsewhere). This is a major surprise to Ars, and we're skeptical of JPR's reasoning.

The JPR story: 3D gaming and simulation drive demand for multi-GPU

The drive for multi-GPU, JPR reports, comes from performance. High performance is increasingly desirable and attainable in the related fields of gaming and simulation, and the workloads are both highly parallel and totally unsuited for CPU applications. GPUs are needed, and the parallel nature of the rendering workload makes it possible to use multiple GPUs, while escalating performance needs make it necessary.

Meanwhile, software advances will make the performance hits relative to linear scaling smaller and smaller. These forces, JPR asserts, will push multi-GPU computing into more and more systems over time. There are several problems with this line of reasoning.

Die-level integration and manufacturing realities

GPUs are composed of many small, parallel processing units, so any multi-GPU system simply gangs together still more of these simple cores. Because the cores are small and the workload is parallel, there is no limit on core count analogous to the limit on the number of cores that can profitably be used in a single x86 CPU. The limits on single-die GPU horsepower are manufacturing limits.

In general, the semiconductor industry trends encapsulated in Moore's law predict that it's cheaper to put multiple circuit blocks on the same die than to split them across multiple dies, and it's cheaper to put multiple dies on one PCB than splitting them across multiple PCBs. The number of transistors for which it's cheaper to do this grows exponentially, doubling roughly every two years with process transitions and yield maturity.

For this reason, single-GPU systems are always more economical except where manufacturing constraints—chiefly yield and wafer edge losses—make it infeasible to place enough silicon on one die to provide the performance needed. The transistor count it's economical to put on one GPU is probably, right now, somewhere between the 959 million transistors on the die of an AMD Radeon 4890 and the 1.4 billion transistors on each die of an NVIDIA GTX 295.

In round terms, then, the current figure is one billion transistors, and the 2012 figure will be in the vicinity of four billion transistors (perhaps more, since Intel's Larrabee will bring to the GPU market manufacturing prowess much in advance of TSMC's foundries).
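That projection is easy to sanity-check with back-of-the-envelope arithmetic (a sketch using the article's one-billion-transistor baseline and two-year doubling period; the function name is ours):

```python
# Project the economical single-die transistor budget forward from 2009,
# assuming a Moore's-law doubling roughly every two years (the article's figures).
def projected_transistors(base_year, base_count, target_year, doubling_period=2.0):
    doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** doublings

budget_2012 = projected_transistors(2009, 1e9, 2012)
print(f"{budget_2012 / 1e9:.1f} billion")  # prints "2.8 billion"
```

Strict doubling lands near 2.8 billion transistors by 2012; the article's "vicinity of four billion" rounds up to allow for process and yield headroom, plus the Larrabee wildcard.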

Demand realities: Most systems are satisfied by Moore's law

Right now, the Moore's law ceiling for single-GPU systems is more than adequate to meet the demands of all but a small percentage of gaming PCs. Last year, only two percent of all desktop PCs sold carried multiple GPUs, and it has always been this way: multi-GPU systems have never been the province of more than a tiny minority of users. In fact, most desktop PCs sold don't even have discrete GPUs. In the fourth quarter of last year, for instance, 38.5 million desktops shipped but only 15.2 million discrete GPUs were sold, meaning that less than 40 percent of desktops shipped with a discrete GPU—probably less than a third once multi-GPU systems and upgrades are accounted for. Most desktop users simply aren't playing 3D games or running 3D simulations, and those who do don't need multiple GPUs, and likely never will.
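The attach-rate claim follows directly from the shipment figures (a trivial check using the article's Q4 numbers; variable names are ours):

```python
# Reproduce the article's discrete-GPU attach-rate arithmetic for Q4 of last year.
desktops_shipped = 38.5e6     # desktops shipped in the quarter (article's figure)
discrete_gpus_sold = 15.2e6   # discrete desktop GPUs sold in the same quarter

attach_rate = discrete_gpus_sold / desktops_shipped
print(f"{attach_rate:.1%}")  # prints "39.5%" -- "less than 40 percent"
```

And since some of those 15.2 million GPUs went into multi-GPU systems or aftermarket upgrades rather than new desktops, the true attach rate is lower still.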

There's no good reason to believe that a huge percentage of desktop users will develop an intense need for mountains of GPU silicon over the next three years, and if a good reason were to be found, it would be considerably more structural and considerably more detailed than a hand-waving assertion that everybody likes realistic graphics.

Indeed, the trend of two decades in computing has been the exact opposite of JPR's prediction: more and more functions are subsumed into fewer and fewer silicon dies as time goes on. Sound cards, storage, I/O and networking controllers, and memory controllers—all are onboard. Northbridge functions like memory controllers are migrating to the CPU die. Most desktops shipped now have GPUs in their northbridge dies. In fact, the GPU survives as the only add-in silicon chip of any market-share significance, the lone survivor of a good half-dozen ISA cards from a typical 1980s workstation. Finally, both major CPU vendors plan to offer, well in advance of 2012, CPUs with onboard GPUs.

How could it happen? Desktop death, and an end to ConSKUsion

So, how could this prediction possibly come true? We're still pessimistic about it, but there are some forces and possible developments that JPR doesn't explicitly identify that could push the GPU market more in the direction of multiple GPUs per system.

JPR could be proven right if the desktop PC dies, but the gamers and workstation users die off much more slowly. In the first quarter of 2009, numbers from iSuppli indicate that desktop sales were down 23 percent year over year, while laptop sales grew ten percent. If this trend continues, with everyday home, education, and business users migrating to laptops while workstation users and a winnowing crop of ever-more-hardcore PC gamers soldier on, the percentage of multi-GPU systems could rise somewhat prodigiously.

The other alternative that could bring the JPR prediction true is a miraculous technological advance in the performance scaling of multi-GPU systems. Currently, 2-GPU SLI and Crossfire systems are lucky to see 80 percent scaling, while doubling the number of stream processors on a single die comes close to 100 percent. If advances in drivers and hardware push multi-GPU scaling near 100 percent for arbitrarily many dies, GPU vendors could design one GPU die per process generation, with a transistor count well below the Moore's law ceiling, and gang an appropriate number of them together on one PCB.
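The gap between those two scaling regimes is worth making concrete (an illustrative sketch reading "80 percent scaling" as 80 percent of ideal linear speedup; the figures are the article's rough numbers, the function is ours):

```python
# Compare the effective speedup of doubling GPU resources two ways,
# using the article's rough scaling efficiencies (illustrative only).
def effective_speedup(n_gpus, scaling_efficiency):
    """Speedup over a single GPU, as a fraction of ideal linear scaling."""
    return n_gpus * scaling_efficiency

sli_pair = effective_speedup(2, 0.80)  # two dies over SLI/Crossfire: 1.6x
on_die = effective_speedup(2, 1.00)    # doubled stream processors on one die: 2.0x
print(sli_pair, on_die)  # prints "1.6 2.0"
```

Until that 1.6x catches up to 2.0x, splitting a fixed transistor budget across two dies leaves real performance on the table, which is exactly why the single big die keeps winning.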

Something like this approach has already been taken with the Radeon 4870 X2 and a number of other, less successful projects. The GPU vendors would be able to further shrink their GPU SKU counts while tailoring performance more precisely. Such a business model would be more sensible at the smaller volumes some predict, as design costs begin to overshadow manufacturing costs on the balance sheets of NVIDIA and AMD.

In the end, though, these possibilities don't seem likely. It's more probable that the future won't hold a renaissance of multi-GPU desktops running free across users' desks, monitors, and wallets. The decades-long trend will continue, and tomorrow's desktop will be made up of fewer dies, not more.

Source: Ars Technica
