NVIDIA Inside: Take a Lesson from Intel

NVIDIA is well-positioned to benefit from the early modularization we see in the AR and VR industry. Think “NVIDIA inside.”

The reading talks about hardware integrators, software and content companies, and platforms such as Facebook and Google. But one critical segment remains unmentioned: the hardware architectures, such as GPUs and CPUs, needed to enable VR and AR experiences. There is an opportunity for companies like NVIDIA to win big in this space.

VR and AR are very broad terms. One can imagine application-specific network effects, where certain platforms dominate certain use cases. Facebook or Google may win in social and consumer applications, but they are not necessarily the best platforms for many industrial applications such as construction, geology, and engineering design. The potential winners in these categories are yet to be identified.

The industry is at an interesting juncture. Products are already modularizing before mass-market adoption. We can expect multiple platforms to thrive, with people multi-homing between them on their devices. Hardware manufacturers will soon end up in a price battle unless they can offer a superior architecture and experience. Differentiating would mean not only bringing on the best content and platform support, but also grounding that innovation in AR- and VR-specific chips and data-processing architectures.

Companies like NVIDIA have a strong head start, because their existing GPUs are a good stopgap measure. NVIDIA currently creates and captures value by designing products for high-end gaming systems. But a complete VR system also requires capabilities to analyze motion and the physical space around the user. If NVIDIA can take current GPU solutions as the starting point and integrate the additional functionality needed for AR and VR, they have the potential to become the go-to processor solution in this nascent industry.
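To make the integration argument concrete, here is a minimal sketch of the per-frame loop a VR system runs. Every function, type, and number in it is a hypothetical stand-in, not a real NVIDIA or headset-vendor API; the point is simply that spatial tracking and rendering are tightly coupled, which is why combining them on one chip matters.

```python
# Illustrative sketch only: all functions and numbers are hypothetical
# stand-ins, not a real NVIDIA or headset-vendor API.
import time

def read_imu():
    """Stand-in for a high-rate (~1000 Hz) inertial sample."""
    return {"accel": (0.0, -9.8, 0.0), "gyro": (0.0, 0.0, 0.0)}

def read_camera():
    """Stand-in for a lower-rate camera frame used to map physical space."""
    return {"features": []}

def fuse_pose(imu, camera):
    """Combine fast inertial data with slower visual data into a head pose."""
    return {"position": (0.0, 1.7, 0.0), "orientation": (0.0, 0.0, 0.0, 1.0)}

def predict_pose(pose, latency_s):
    """Extrapolate the pose to when the frame actually reaches the display.
    Hiding this motion-to-photon latency is why tracking logic benefits
    from sitting on the same silicon as the GPU."""
    return pose

def render_eye(pose, eye):
    """Stand-in for the GPU rendering one eye's view from the given pose."""
    pass

def frame_loop(frames=3, display_latency_s=0.011):
    for _ in range(frames):
        pose = fuse_pose(read_imu(), read_camera())   # sensor fusion
        pose = predict_pose(pose, display_latency_s)  # latency compensation
        render_eye(pose, "left")
        render_eye(pose, "right")
        time.sleep(1 / 90)                            # ~90 Hz refresh target

frame_loop()
```

Shaving even a few milliseconds off this loop is what keeps users comfortable, which is the technical case for a single integrated part over discrete sensor hubs and GPUs.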

NVIDIA has a massive learning-curve advantage. Developing chip-design capability, especially in highly targeted spaces such as graphics processing, takes years and millions of dollars. In addition, NVIDIA is already deeply plugged into the sales and supply-chain network of the consumer electronics industry. As soon as they have initial versions of chips that can integrate physical inputs with graphics rendering, they can move quickly to partner with hardware companies. Hardware integrators would welcome such solutions, because current AR and VR headsets are bulky and expensive. Faster, more integrated processing would free up precious device real estate, enabling more portability and additional features. Integrated processors would also lower product costs significantly by eliminating the need to procure a whole suite of discrete sensor-processing components. NVIDIA’s goal should be to become the processor architecture of choice for the AR and VR industry, and then capture significant value.

The lesson from Intel is instructive. During the PC revolution, Microsoft won the software platform battle, and Intel became its partner in crime through the “Wintel” architecture. Once content and applications are developed with a particular processing architecture in mind, barriers to entry become extremely high, and the entire value chain buys into the indirect network effects. Developing AR- and VR-specific processors would enable NVIDIA to extract value regardless of who integrates the hardware, so long as there is “NVIDIA inside.”


Student comments on NVIDIA Inside: Take a Lesson from Intel

  1. Hi Kunal… You have raised a very important point in your post. The internal hardware must be able to deliver the speed and efficiency required to support any AR/VR app. Qualcomm has already started working on this by offering smartphone processor chips that can be inserted into VR headsets. So this is clearly a growth area for NVIDIA.

  2. I agree with your observation on Nvidia’s capability in this space. I am not an expert in this field, but I have studied a bit about CPUs. GPUs with parallel-processing capability can gradually take over some workloads that CPUs have traditionally handled. I just wonder what the blue-sky TAM scenario is for GPUs, or for the integrated chips you mentioned, and what the timeline is for reaching that point on the adoption curve.
