NVDA: Intensifying competition and execution issues

July 17, 2008

Nvidia (NVDA) is the company that pioneered the development of the GPU, a class of chips dedicated to graphics processing and found at the heart of all display cards. Nvidia’s stock took a dramatic dive right after its recent earnings call, which revealed three major problems. Firstly, some of its laptop display cards are failing due to a weak die, and the company took a $150-200 million charge to earnings to replace the cards. Secondly, it has been forced by competition to cut prices on some of its display cards. Lastly, it missed a development deadline and will not be able to deliver its next-generation GPUs on time. While the manufacturing defects are probably a one-time issue (one hopes), the margin pressure and the missed deadline suggest that near-term revenue will be flat or declining. The important question is whether these setbacks will permanently impair profitability, or whether they are merely transient problems. In this article, I will examine the competitive landscape of the GPU industry and try to outline the challenges NVDA faces in the mid- to long term.

Nvidia is generally considered the technological leader in the GPU industry, and it is known for going after the high end of the market, especially gamers. Nvidia works closely with game developers to ensure that its chips will be the most advanced and the most capable of supporting the latest games. Nvidia’s chips are found in GeForce display cards for PC desktops and laptops, in the display cards for the Mac Pro desktops and MacBook Pro laptops, and in the PlayStation 3. Competition comes mainly from ATI (now acquired by AMD) and Intel. If you consider all display chips (including integrated chipsets), Intel has about a 40% market share (mostly in low- to mid-range integrated chipsets), ATI/AMD has about 20% (across all price points), and Nvidia has about 30% (across all price points, but skewed toward the high end), with the rest of the market filled by minor low-end chipset manufacturers.

ATI, with its Radeon display cards, is the most direct competitor to Nvidia. In the midst of a difficult merger with AMD, ATI nonetheless managed to surprise Nvidia with a novel technological strategy. While Nvidia has followed the classic Intel strategy of producing one powerful chip for its top-line display cards and then gradually moving that chip down the value chain, ATI followed a new strategy of producing mid-range chips and then bundling several of them together for its top-line cards. This strategy requires some nifty engineering with regard to inter-processor communication, but if it works it could cut power consumption dramatically. It remains to be seen whether this strategy will scale well in the future, but at least in the here and now, ATI’s Radeon cards using this technology have forced Nvidia to cut prices on some of its mid-to-high-range GeForce cards.

And then there is the ever-present competition from AMD and Intel at the lower end of the market. Intel is increasingly pushing platform solutions as a way to include more of its chips in a product, a strategy it used to great effect with its Centrino mobile platform. While Intel is already a large player in low-end integrated graphics chipsets, it also has the “Larrabee” project, which nominally aims to produce an advanced GPU to compete head-to-head with Nvidia. However, the design specs for Larrabee call for an x86 instruction set rather than the graphics-specialized instruction sets that both Nvidia and ATI chips use. This suggests that Larrabee will be more suitable for general graphics tasks than for speed-sensitive, high-end tasks like DVD playback or gaming. For its part, AMD has its “Fusion” project, which aims to put the CPU, GPU and memory controller on the same die. This will increase communication speed between the CPU and the GPU and reduce power consumption, but squeezing so many components onto one piece of silicon limits the number of transistors available to each component. Accordingly, AMD’s first Fusion product, code-named Shrike, will be aimed at smaller mobile computers and sub-notebooks. These products are unlikely to directly challenge Nvidia’s high-end products, but they may squeeze its profits in the lower-end markets.

Then there is the universally hated extortionist of the industry, Rambus, which has just sued Nvidia for patent infringement. Rambus purportedly designs memory products, but it never implements those designs in practice and makes a living by suing other chip companies for patent infringement. Nvidia is just the latest in a long line of chip companies Rambus has sued, arguing that it deserves licensing fees whenever a company includes SDRAM in its products (as Nvidia does in its GeForce cards). The legal case is complicated and turns on obscure points of technology, and the licensing fees potentially due are difficult to estimate, since Rambus’s patents cover only a small portion of the SDRAM specifications. Nonetheless, it is not in Rambus’s own interest to demand fees that would make a product uneconomical to build or severely compromise a licensee’s competitive position, so the impact on Nvidia will probably be minor.

In the next article, I will look at Nvidia’s long-term prospects, and attempt a valuation.
