NVIDIA’s Winning Platform Strategy with CUDA

NVIDIA's platform approach with CUDA and GPUs gives it an enviable moat in the industry. Find out why.

Introduction

NVIDIA is a graphics processing chip company that currently generates most of its revenue from sales of graphics processing units (GPUs), which are used for competitive gaming, professional visualization, cryptocurrency mining, and many other applications [1]. NVIDIA has a platform strategy, bringing together hardware, system software, programmable algorithms, libraries, systems, and services to create unique value for the markets it serves. Although the requirements of these end markets are diverse, NVIDIA's unified underlying architecture, which pairs its GPUs with software stacks built on CUDA, addresses them efficiently [3].

What is CUDA?

CUDA is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
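
To make this concrete, below is a minimal sketch of a CUDA C++ program that adds two vectors on the GPU. The kernel name, array sizes, and launch configuration are illustrative choices, not anything prescribed by CUDA; the point is that everything outside the __global__ kernel and the <<<...>>> launch syntax is ordinary C/C++.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host (CPU) arrays.
    float* ha = (float*)malloc(bytes);
    float* hb = (float*)malloc(bytes);
    float* hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device (GPU) memory and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one value.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);          // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Assuming a machine with an NVIDIA GPU and driver installed, this compiles with NVIDIA's nvcc compiler (for example, nvcc vec_add.cu -o vec_add); a C programmer can read almost all of it without learning a new language.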

Figure: The CUDA Platform [8]

Platform Components

The CUDA architecture, coupled with the GPUs (hardware), creates a winning platform for NVIDIA. The platform has three key players:

  1. Software Developers: GPUs are specialized hardware that would otherwise require very highly skilled low-level programmers. However, there are at least 20x as many software developers as skilled hardware engineers [4][5]. CUDA brings the convenience of mainstream software development to specialized hardware products.
  2. Hardware Manufacturers: CUDA is all software, but it needs hardware to run on. NVIDIA decided to keep this component closed: only NVIDIA GPUs can run CUDA. This turned out to be a fantastic move in providing superior performance to users (discussed below).
  3. Industry Users / Consumers: This includes all the key industry players that need fast computing. Think automotive, gaming, AI computing, healthcare, retail, and more.

NVIDIA’s Strategy with CUDA

After launching CUDA in 2006, NVIDIA's first strategy was to target the "Software Developers" side of the platform. It went all out chasing them and marketing the value proposition of using CUDA to achieve high computational power without big learning barriers (developers already knew common programming languages such as C, and CUDA is essentially C/C++ with extensions). NVIDIA invested heavily to help developers get accustomed to the platform. It was free to use, and NVIDIA emphasized educating up-and-coming developers. University courses and training webinars were conducted to get them on board.

CUDA is just software; it doesn't work without accompanying hardware. NVIDIA quickly realized that tighter software-hardware integration would be the key to creating value and keeping competitors at bay. NVIDIA partnered with TSMC (a Taiwan-based chip manufacturer) and outsourced the very capital-intensive manufacturing process. TSMC grew as many fabless chip-design companies sprang up. Another big competitor, AMD, manufactured specialized chips but did not have a CUDA-like platform.

NVIDIA kept the CUDA-GPU integration closed. This meant that CUDA could only run on NVIDIA's GPUs, and even though TSMC was manufacturing for NVIDIA, competitors could not leverage the CUDA platform that NVIDIA had invested heavily in. This had another advantage: it allowed NVIDIA to rapidly iterate on better designs and bring best-in-class hardware-software integration to developers. There were open alternatives such as OpenCL that could work with any hardware, but their performance was limited by weaker integration than NVIDIA could achieve with its own stack. This created tremendous value for developers using CUDA on NVIDIA's GPUs and led to industry-leading computations and numerous research papers leveraging CUDA.

All these investments had started with the gaming industry as the key consumer. However, the deep learning and cryptocurrency revolutions [6] changed NVIDIA's fortunes and enabled it to become a leader in computing in general, not just in the gaming market.

NVIDIA rapidly established verticals by industry and launched industry-specific sub-platforms based on the original CUDA architecture for automotive, robotics, data centers, deep learning, genomics, and other industries. NVIDIA invested heavily in software engineers who enhanced the capabilities of the CUDA platform. Leveraging a single platform across various domains helps it reduce its hardware costs. The company made CUDA compatible with a range of applications, including Adobe (ADBE), Autodesk (ADSK), and other design, media, and entertainment applications. Moreover, each version of CUDA remains compatible with code written for previous versions, and NVIDIA GPUs can run a wide range of CUDA versions, giving the company the flexibility to use various permutations of hardware and software and creating a whole CUDA-based ecosystem [2] (see the sketch below).
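
As a small illustration of what that compatibility looks like from the developer's side, the CUDA runtime API provides calls such as cudaDriverGetVersion, cudaRuntimeGetVersion, and cudaGetDeviceProperties that let a single binary discover which driver version, runtime version, and GPU generation it is running against. The sketch below assumes a machine with at least one NVIDIA GPU and uses CUDA's convention of encoding version X.Y as 1000*X + 10*Y.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    // CUDA version supported by the installed driver vs. the runtime this
    // application was built against.
    int driverVer = 0, runtimeVer = 0;
    cudaDriverGetVersion(&driverVer);
    cudaRuntimeGetVersion(&runtimeVer);

    // Compute capability (hardware generation) of the first GPU in the system.
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // Versions are encoded as 1000 * major + 10 * minor.
    printf("Driver supports CUDA %d.%d, runtime built for CUDA %d.%d\n",
           driverVer / 1000, (driverVer % 1000) / 10,
           runtimeVer / 1000, (runtimeVer % 1000) / 10);
    printf("GPU 0: %s (compute capability %d.%d)\n",
           prop.name, prop.major, prop.minor);
    return 0;
}
```

Checks like this are what let one CUDA codebase ride across many hardware and software permutations in the ecosystem described above.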

The creation of this ecosystem, with many developers on one side and a large number of industries and applications on the other, enabled two-sided network effects to kick in. More industries hired for CUDA talent and utilized GPUs, and that drew even more developers to pick up CUDA.

Value Capture

NVIDIA gives CUDA away for free and charges a heavy premium on its GPUs. The closed CUDA-GPU integration makes value capture very easy for the firm. Demand for faster computing is increasing and, in the digital era, is expected to increase further. NVIDIA's (NASDAQ: NVDA) total revenue grew from $9.71 billion in 2018 to $10.92 billion in 2020 and is expected to grow further to $13.10 billion in 2021. The GPU segment is expected to make up 87% of that $13.10 billion in expected 2021 revenue, while also being key to NVIDIA's revenue growth [7]. All this comes at a massive 62% gross margin [3], closer to software margins than typical hardware-company margins.

Looking at current trends, this business seems very sustainable and poised for growth, as no other platform comes close. The fast rollout cycle and the tight hardware-software integration give NVIDIA an enviable moat.


References:

[1] https://www.investopedia.com/articles/insights/121216/how-nvidia-makes-money-nvda.asp

[2] https://ww.marketrealist.com/2017/06/a-look-inside-nvidias-platform-strategy/

[3] NVIDIA 10-K report, filed February 2020

[4] https://www.bls.gov/ooh/computer-and-information-technology/software-developers.htm

[5] https://www.bls.gov/ooh/architecture-and-engineering/computer-hardware-engineers.htm#tab-1

[6] https://www.quora.com/Why-did-NVIDIA-win-the-GPU-market

[7] https://www.nasdaq.com/articles/nvidia-has-a-gpu-business-and-its-big-2020-02-25

[8] https://www.nextplatform.com/2019/06/17/nvidia-makes-arm-a-peer-to-x86-and-power-for-gpu-acceleration/


Student comments on NVIDIA’s Winning Platform Strategy with CUDA

  1. Great post, I have never come across such a succinct explanation of NVIDIA's moat.
    The platform seems well protected for the moment, but longer term I wonder what happens if performance eventually plateaus (i.e., becomes "good enough"), NVIDIA's hardware architecture standardizes to bring down costs through larger scale, and faster rollout cycles lose their significance. If that happens, competitors should be able to quickly reverse-engineer the most advanced and popular CUDA-GPU integrated solution, copy the CUDA syntax, make it all open source, and let competitive market forces lower costs even further to win against NVIDIA. Although this IBM-to-PC scenario has happened to other industries multiple times, I am curious whether you have a sense of the runway for NVIDIA's current advantage and whether it could establish new moats using its ample cash flows.

    1. Thank you Alex! Great points. Google is already trying to do this with the TensorFlow + TPU architecture. Microsoft is trying to do this with FPGAs (hardware) and is still figuring out the software for it, but is way behind. However, although TensorFlow is open source, Google has the most contributors and owns the edits on it. This also gives Google a significant advantage and prepares it to make TensorFlow work better on Google Cloud TPUs than on AWS or Azure, for example. The integration problem is very complex and a simple copy-paste probably won't work, but you're right that this is stiff competition and NVIDIA will have to stay on top of its game to ensure its dominance, especially given that it doesn't have a big cloud presence. The army of software engineers that NVIDIA has employed to bring the latest models into CUDA can't be matched by anyone else (NVIDIA can do it because of its current scale).

  2. Interesting. Thanks for the reply!
