Nvidia – The Way It’s Meant to Be Played
Bringing VR gaming hardware to the masses
While VR has enormous potential to change the way we produce and consume content, many pundits believe it will still be some time before VR transitions from an enthusiast technology to the mainstream. Gaming is likely to be one of the leading categories of content that drives this change. However, a key challenge for the segment at the moment is that VR gaming hardware is expensive, clunky, and seen as something for only the most hardcore of gamers. There is also a relative lack of compelling content: without a broad installed base of hardware at the consumer level, there is little commercial reason for gaming companies to prioritize the production of VR content over traditional games.
Nvidia, an American technology company founded in 1993, is trying to change this, first and foremost by making VR-grade graphics processing capability increasingly available in personal computing. Nvidia has long been seen as the gold standard in graphics processing units (GPUs), and its GeForce graphics cards are beloved among gamers. AMD's (formerly ATI) Radeon graphics cards are a distant second, with its latest flagship models (RX 570 and 580) performing only on par with Nvidia's current entry-level card (the GTX 1060). Similar to how Intel launches a new generation of CPUs every year that are more powerful, more energy efficient, and built on a smaller process architecture, each year Nvidia announces a new generation of graphics cards. However, unlike Intel, Nvidia doesn't actually manufacture any graphics cards. Rather, Nvidia focuses on the technological design of the core architecture and outsources production to board partners such as Asus, Gigabyte, and Sapphire, enabling it to focus on innovation instead of manufacturing.
But why is this important? This past year, Nvidia's latest generation of graphics cards, codenamed Pascal (the GTX 1060, GTX 1070, and GTX 1080), represented a huge step forward in making VR available at the mass consumer level. The GTX 1060 in particular was the first entry-level GPU seen as capable of rendering VR games; Nvidia designates such cards as "VR Ready". What's more impressive, though, is that while dedicated notebook GPUs are often seen as watered-down variants of their desktop counterparts, the notebook variants of the Pascal GPUs are both energy efficient enough to fit inside notebooks less than 20mm thick and powerful enough to be badged "VR Ready". This was a significant first step toward solving VR's mobility and portability issues. Previously, only flagship hardcore-gaming GPUs would have been able to render VR-level graphics, and any notebook with this capability would have been a 10-pound brick.
Projecting forward with Moore's Law, and as Nvidia continues to innovate on GPU technology, it is possible to foresee a near-term future where GPUs are so small, efficient, and powerful that they could be embedded directly into VR headsets (which should also become more streamlined over time). This could enable headsets to operate untethered with a battery life of at least an entire day. Couple this with other innovations such as GeForce Now, Nvidia's recently announced GPU streaming service that does away with the need to own a GPU altogether, and mainstream consumers will have ready access to functional and affordable VR-capable hardware.
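To make the Moore's Law projection concrete, here is a minimal back-of-the-envelope sketch. The specific numbers (a ~120 W desktop-class GPU budget, a ~15 W headset power budget, and a two-year performance-per-watt doubling cadence) are illustrative assumptions of mine, not figures from Nvidia:

```python
import math

def years_until_power_budget(current_watts, target_watts, doubling_years=2.0):
    """Estimate years until a GPU of equal performance fits in target_watts,
    assuming performance-per-watt doubles every doubling_years (a
    Moore's-Law-style assumption, not a guarantee)."""
    if current_watts <= target_watts:
        return 0.0
    # Each halving of required power takes one doubling period.
    halvings = math.log2(current_watts / target_watts)
    return halvings * doubling_years

# Hypothetical example: shrinking a ~120 W desktop-class GPU's workload
# into a ~15 W headset power budget requires three halvings.
print(years_until_power_budget(120, 15))  # 3 halvings x 2 years = 6.0 years
```

The point of the sketch is only that the gap is a small number of doubling periods, which is why an untethered, all-day headset looks like a near-term rather than distant prospect.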
In addition to hardware, on the software side Nvidia has VRWorks, which it calls "a comprehensive suite of APIs, libraries, and engines that enable application and headset developers to create amazing virtual reality experiences." Nvidia's goal is for VRWorks to become a common foundation for all VR games, and for those games to be optimized for Nvidia GPUs and their associated technologies such as SLI (Nvidia's solution for running multiple graphics cards in parallel) and PhysX (Nvidia's in-game physics engine). To this end, Nvidia is also partnering with industry-leading game engines such as Unreal Engine 4 and Unity 5, as well as ensuring content developed using VRWorks is compatible with PC standards such as Microsoft's DirectX 12.
Recognizing that VR could become the next smartphone, and that it is a logical extension of its core business, Nvidia's end goal is for its technologies to be the foundational platform for the development, optimization, and consumption of VR content. It is a challenging multi-sided platform play involving consumers, content developers, and hardware OEMs. However, given its track record for innovation and its market-leading position, there is no reason why its dreams should not become a reality.
Student comments on Nvidia – The Way It’s Meant to Be Played
Great post Yun. Who are their design house/ODM competitors, if any, and how close are they to NVIDIA’s hardware, software and multi-sided platform strategy?
Thanks Boris. The leading competitor is AMD's LiquidVR, which is a similar platform play to Nvidia's VRWorks. At the moment it seems like VR headset companies like Oculus and Valve are partnering with both, while VRWorks has more game and content developers working with it (at least in terms of what is publicly reported).
Great post! I strongly agree with you that for mass adoption, VR needs hardware manufacturers, such as NVIDIA, to build light and convenient devices at low cost. Has anyone projected when affordable VR devices will be available?
Thanks Jing. I haven't been able to find any firm estimates. However, I expect it would follow a similar trend to other consumer electronics.
Very interesting post. One asset Nvidia has as it pursues this multi-sided platform strategy is its brand strength with gamers, which may contribute to them having more developers than AMD. The Nvidia team members are treated like gods at PAX East.
Hi, Yun. Great article. It's hard for me to imagine a future GPU market without NVIDIA, but do you think it's possible for VR to take off without high-end, discrete GPUs? I'm wondering if there will be some way to circumvent having a GPU installed on-board the HMD, perhaps by pre-rendering scenes and beaming them to the HMD via low-latency tech. What do you think?