Nvidia Stock at a Glance
• Fair Value Estimate: $480.00
• Morningstar Rating: 2 stars
• Morningstar Uncertainty Rating: Very High
• Morningstar Economic Moat Rating: Wide
Nvidia Stock Update
Wide-moat Nvidia’s (NVDA) results and outlook were well ahead of our expectations and FactSet consensus estimates, as the company is a dominant supplier of artificial intelligence (AI) accelerators to cloud computing providers.
We raise our fair value estimate to $480 from $300 and lift our Uncertainty Rating to Very High. We are now much more optimistic about the rise of AI workloads and how Nvidia’s wide moat should cement the company as an AI chip leader.
Based on the results, guidance, and supply expansion at key partners like TSMC, we forecast that Nvidia’s data center, or DC, business, which includes AI graphics processors, will generate $41 billion in revenue in fiscal 2024 (ending January). This compares with $15 billion a year ago and only $3 billion just four years ago.
We could be wrong, but we see little evidence that these GPU orders are up-front spending or a one-time build. Based on our estimates of capital expenditures at leading cloud providers, manufacturing expansion at TSMC, and management forecasts, we anticipate DC revenue growing to $60 billion in fiscal 2025 and rising to $100 billion in fiscal 2028.
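The forecast figures above imply steep but decelerating growth. A back-of-envelope check of the compound annual growth rates, using only the data center revenue levels cited in this note (in $ billions; fiscal years):

```python
# DC revenue levels cited in the note, in $ billions, keyed by fiscal year.
# Fiscal 2020 is "four years ago" relative to the fiscal 2024 forecast.
dc_revenue = {2020: 3, 2023: 15, 2024: 41, 2025: 60, 2028: 100}

def cagr(start_year, end_year):
    """Implied compound annual growth rate between two fiscal years."""
    years = end_year - start_year
    return (dc_revenue[end_year] / dc_revenue[start_year]) ** (1 / years) - 1

print(f"FY2024 -> FY2028 CAGR: {cagr(2024, 2028):.0%}")  # ~25% per year
print(f"FY2020 -> FY2024 CAGR: {cagr(2020, 2024):.0%}")  # ~92% per year
```

Even after the fiscal 2024 surge, the path to $100 billion implies roughly 25% annualized growth for four more years, versus roughly 92% annualized over the prior four.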
Such growth might be unprecedented in large-cap tech, but we foresee all types of enterprises investing in AI. Similarly, all cloud providers will need to offer Nvidia’s GPUs to let their customers train AI models, while Nvidia is making the right moves to capture AI inference workloads and branch out into networking and software.
Nvidia’s near-term results support our long-term optimism. In the July quarter, total revenue was $13.5 billion, up 88% sequentially, up 101% year over year, and well past guidance of $11 billion that was eye-popping when provided to investors in May. DC revenue was $10.3 billion, up 141% sequentially and 171% year over year. We have little doubt that Nvidia will sell every GPU it can secure from TSMC in the quarters ahead.
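The growth rates quoted above can be cross-checked against the reported figures. The quick sanity check below uses the reported July-quarter revenue (in $ billions) to back out the implied prior-period revenue; the prior-period figures are implied by the growth rates, not taken from company filings:

```python
# Reported July-quarter figures from the note, in $ billions.
total_q2 = 13.5  # total revenue, up 88% sequentially
dc_q2 = 10.3     # data center revenue, up 171% year over year

def implied_prior(current, growth_pct):
    """Revenue in the comparison period implied by a reported growth rate."""
    return current / (1 + growth_pct / 100)

print(f"Implied prior-quarter total revenue: ${implied_prior(total_q2, 88):.1f}B")  # ~$7.2B
print(f"Implied year-ago DC revenue: ${implied_prior(dc_q2, 171):.2f}B")            # ~$3.80B
```

The implied figures line up with Nvidia’s reported April-quarter revenue of roughly $7.2 billion, which is a useful consistency check on the percentages.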
Why Nvidia Will Continue to Dominate AI
Nvidia (NVDA) has a wide economic moat, thanks to its clear leadership in graphics processing units, or GPUs, and the hardware and software tools needed to enable the exponentially growing market around artificial intelligence. In the long run, we expect tech titans to strive to find second sources or in-house solutions to diversify away from Nvidia in AI, but most likely, these efforts will chip away at, rather than supplant, Nvidia’s AI dominance.
Nvidia’s GPUs specifically handle parallel processing workloads, using many cores to efficiently process data at the same time. In contrast, central processing units, or CPUs, such as Intel’s processors for PCs and servers or Apple’s processors for its Macs and iPhones, process data in a largely serial fashion, one operation at a time. The wheelhouse of GPUs has been the gaming market, and Nvidia’s GPU graphics cards have long been considered best of breed. More recently, cryptocurrency miners turned to parallel-processing GPUs, leading to a boom-and-bust cycle for Nvidia.
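The distinction above can be sketched in a few lines. The Python below is a loose analogy, not actual GPU code: because each output element depends only on its own input, thousands of GPU cores can compute elements simultaneously, whereas a serial CPU loop walks through them one at a time:

```python
from concurrent.futures import ThreadPoolExecutor

def scale_pixel(p):
    # Independent per-element work, e.g. brightening one pixel of an image.
    return p * 2

pixels = list(range(8))

# Serial, CPU-style: one element after another.
serial = [scale_pixel(p) for p in pixels]

# Parallel, GPU-style in spirit: the pool may process elements concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(scale_pixel, pixels))

assert serial == parallel  # same result; only the scheduling differs
```

The same independence between elements is what makes graphics rendering, crypto mining, and AI training all map well onto GPU hardware.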
More important, parallel processing has emerged as a near-requirement to accelerate AI workloads. Nvidia took an early lead in AI GPU hardware and, more important, developed Cuda, a proprietary software platform whose tools allow AI developers to build their models on Nvidia hardware. We believe Nvidia not only has a hardware lead, but benefits from high customer switching costs around Cuda, making it unlikely for another GPU vendor to emerge as a leader in AI training.
We think Nvidia’s prospects will be tied to the AI market, for better or worse, for quite some time. We expect leading cloud vendors to continue to invest in in-house semis (with Google and Amazon leading the way), while CPU titans AMD and Intel are working on GPUs and AI accelerators for the data center. However, we view Nvidia’s GPUs and Cuda as the industry leaders, and the firm’s massive valuation will hinge on whether, and for how long, the company can stay ahead of the rest of the pack.
Nvidia Has a Wide Economic Moat
We assign Nvidia a wide economic moat rating, thanks to intangible assets around its graphics processing units and, increasingly, switching costs around its proprietary software, such as its Cuda platform for AI tools, which enables developers to use Nvidia’s GPUs to build AI models.
Nvidia was an early leader and designer of GPUs, which were originally developed to offload graphics processing tasks on PCs and gaming consoles. Nvidia has emerged as the clear market share leader in discrete GPUs (over 80% share, per Mercury Research).
We attribute Nvidia’s leadership to intangible assets associated with GPU design, as well as the associated software, frameworks, and tools required by developers to work with these GPUs. Recent introductions, such as ray-tracing technology and the use of AI tensor cores in gaming applications, are signs, in our view, that Nvidia has not lost its GPU leadership in any way. A quick scan of GPU pricing in both gaming and data center shows that Nvidia’s average selling prices can often be twice as high as those from its closest competitor, AMD.
Meanwhile, we don’t foresee any emerging company becoming a third relevant player in the GPU market beyond Nvidia and AMD. Even Intel, the chip industry behemoth, has struggled for many years to build a high-end GPU that would be adopted by gaming enthusiasts, and its next effort at a discrete GPU is slated to launch in 2025.
We do see integrated GPU functionality within many of Intel’s PC processors, as well as in portions of Apple’s and Qualcomm’s system-on-chip solutions in smartphones, but we perceive these integrated solutions as "good enough" for non-gamers rather than on par with high-end discrete GPUs.
Beyond Nvidia’s AI prowess today, which we believe is exceptionally strong, we think the company is making the proper moves to widen its moat even further. Nvidia’s software efforts with Cuda remain impressive, while the company is also expanding into networking solutions, most notably with its acquisition of Mellanox. We don’t want to discount Nvidia’s know-how here either: many AI models don’t run on a single GPU, but rather on a connected system of many GPUs running in tandem, and Nvidia’s proprietary NVLink products do a good job of linking Nvidia GPUs together to run these larger models.