Nvidia Says Real-Time Path Tracing Is On the Horizon, But What Is It?

Real-time path tracing, the next stage of game rendering, is on the horizon.

Whether they’re in two or three dimensions, games are an illusion. When you’re solving puzzles in Outer Wilds or dodging Margit’s giant hammer in Elden Ring, there’s a complicated tangle of mathematical magic tricks running in the background to make you think you’re looking at a natural and organic thing. Behind the scenes, your GPU is doing billions of calculations per second to make it all move and function.

While games are more graphically impressive than ever, it would be easy to look at the last decade or so of games and say that graphical advancements have slowed down–we’re not seeing the easily visible jumps in fidelity that came with the transition from 2D to 3D, or from basic 3D to more advanced rendering techniques. But the truth is that big stuff is happening behind the scenes that will change the way games are made and maybe even how we play them.

First, let’s talk about the primary rendering methods used to put games on our screens.

Rasterization, Tracing, and Light

Rasterization is the way games are rendered right now, and the way they’ve been rendered for decades–it’ll most likely never go away completely. In a rasterized scene, the GPU projects a game’s 3D triangles onto your 2D screen and shades the resulting pixels one by one; effects like shadows and reflections aren’t computed from the physics of light, but are approximated after the fact with separate techniques such as shadow maps and cube maps.
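
To make that concrete, here’s a toy rasterizer in Python. This isn’t how a GPU actually works–real rasterization is a massively parallel hardware pipeline–it’s just the core idea: project a triangle’s 3D vertices onto a 2D screen, then test each pixel for coverage. Every name and number here is invented for illustration.

```python
# A toy rasterizer: project one 3D triangle onto a 2D character "screen,"
# then fill every pixel it covers. All values are invented for
# illustration; a real GPU does this in massively parallel hardware.

WIDTH, HEIGHT = 40, 20

def project(v):
    """Perspective-project a 3D point (x, y, z) into screen coordinates."""
    x, y, z = v
    return (x / z * (WIDTH / 2) + WIDTH / 2,
            y / z * (HEIGHT / 2) + HEIGHT / 2)

def edge(a, b, p):
    """Signed area: which side of the edge a->b does point p fall on?"""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

# One triangle defined in 3D space, three units in front of the camera.
a, b, c = (project(v) for v in [(-1, -1, 3), (1, -1, 3), (0, 1, 3)])

for py in range(HEIGHT):
    row = ""
    for px in range(WIDTH):
        p = (px + 0.5, py + 0.5)
        w0, w1, w2 = edge(a, b, p), edge(b, c, p), edge(c, a, p)
        # Inside the triangle = same side of all three edges. Notice that
        # no light is simulated anywhere: shading, shadows, and
        # reflections would all be layered on by separate passes.
        inside = (w0 >= 0 and w1 >= 0 and w2 >= 0) or \
                 (w0 <= 0 and w1 <= 0 and w2 <= 0)
        row += "#" if inside else "."
    print(row)
```

The thing to notice is that each triangle is drawn entirely on its own; nothing in this loop knows what any other surface in the scene looks like.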

In a rasterized scene, the different triangles don’t “talk” to each other. Instead, their shadows and reflections are added on after the fact, as outlined above. In a ray-traced scene, though, those triangles can talk to each other through the medium of light, meaning that instead of just rendering each triangle individually, the computer takes into account how the color and material of one triangle will light up another. Ray tracing “captures those effects by working back from our eye (or view camera),” Nvidia explains, tracing the path of a light ray through each pixel on a two-dimensional viewing surface, out into a 3D-modeled scene. Each light bounce adds information to the ray (as well as complexity and calculation time): color (the red of an apple), reflectivity (a matte table versus a mirror-polished one), refraction (where else does the light go?), and so on.
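
Here’s that camera-first loop in miniature, as a runnable Python toy: one ray per pixel, cast from the eye out into the scene. The scene (a single hardcoded sphere and one light) and all the numbers are placeholders of our own; a real tracer would keep bouncing each ray to gather reflections and refractions too, which is where the cost piles up.

```python
import math

# A toy ray tracer: work back from the eye, one ray per pixel on a 2D
# viewing surface, out into a 3D scene. The scene here is a single
# invented sphere lit by one directional light.

WIDTH, HEIGHT = 40, 20
SPHERE_CENTER, SPHERE_RADIUS = (0.0, 0.0, 3.0), 1.0
LIGHT_DIR = (0.577, 0.577, -0.577)  # unit vector pointing toward the light

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def hit_sphere(direction):
    """Distance along a ray from the eye at the origin to the sphere, or None."""
    oc = tuple(-coord for coord in SPHERE_CENTER)  # origin minus center
    b = 2.0 * dot(oc, direction)
    c = dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                                # the ray misses
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

for py in range(HEIGHT):
    row = ""
    for px in range(WIDTH):
        # The ray through this pixel of the 2D viewing surface.
        d = ((px - WIDTH / 2) / WIDTH, (HEIGHT / 2 - py) / HEIGHT, 1.0)
        length = math.sqrt(dot(d, d))
        d = tuple(coord / length for coord in d)
        t = hit_sphere(d)
        if t is None:
            row += " "                             # background
        else:
            # One "bounce" of information: how directly the surface
            # faces the light. More bounces would add reflection,
            # refraction, and color bleeding, at more calculation time.
            p = tuple(coord * t for coord in d)
            n = tuple((pc - cc) / SPHERE_RADIUS for pc, cc in zip(p, SPHERE_CENTER))
            row += " .:-=+*#%@"[int(max(0.0, dot(n, LIGHT_DIR)) * 9)]
    print(row)
```

Run it in a terminal and you get a shaded ASCII sphere; scale the idea up to millions of pixels, multiple bounces, and dozens of lights, and you can see where the computational cost comes from.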

To light a room in a rasterized scene, a developer might have to place a bunch of different light sources to simulate the ways in which ray-traced light might bounce. Imagine a simple cube-shaped room with a ceiling light and sun filtering in through a window. There’s a table in the middle with a clear glass pitcher. The ceiling light and sun outside the window both emit light, and then different objects in the room will bounce that light around. A curvy piece of glassware will refract the light down onto the table and onto the walls. The computer can’t calculate how light rays move through the room in a traditional rasterized scene, so the developer has to simulate it manually. They might put invisible light sources up in the corners of the room to account for sunlight refracting off the pitcher; then they could apply a cube map to show how the table, window, and ceiling light reflect off of the glass.

In a ray-traced scene, much of that manual configuration goes away because the computer figures out how light should interact with the scene. Each object and light source has clearly established properties–emissiveness, reflectivity, refraction, diffusion, and so on–and those properties are simply allowed to work together to build the scene before you. Instead of a cube map on that pitcher, its reflectivity allows it to accurately simulate the reflection of the table and window; add that red apple from before to the scene, and it’ll reflect naturally without the artist having to recreate a cube map. The way the pitcher bends light and splashes it onto the table is calculated rather than drawn by an artist. You drop in your emissive light sources, and the computer figures out how they light the room. The artist is now free to work on creative stuff–character designs, art direction, and the like–rather than spending a bunch of time doing magic tricks to make us feel like the room is real. That stuff all just happens.
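
Here’s a sketch of what those “clearly established properties” might look like in code. The class and its fields are our own invention, not any real engine’s API, but the shape of the idea is the same: the artist describes what a surface is, and the renderer works out what it looks like.

```python
from dataclasses import dataclass

# A sketch of per-object material properties. The fields and values
# below are invented for illustration, not taken from any real engine.

@dataclass
class Material:
    base_color: tuple        # the red of the apple
    emissive: float          # how much light the surface gives off itself
    reflectivity: float      # 0.0 = matte table, 1.0 = polished mirror
    refraction_index: float  # how strongly the material bends light
    diffusion: float         # how much it scatters light in all directions

glass_pitcher = Material((0.95, 0.97, 1.0), emissive=0.0,
                         reflectivity=0.1, refraction_index=1.5, diffusion=0.05)
red_apple     = Material((0.8, 0.1, 0.1), emissive=0.0,
                         reflectivity=0.05, refraction_index=1.0, diffusion=0.9)
ceiling_light = Material((1.0, 1.0, 0.9), emissive=5.0,
                         reflectivity=0.0, refraction_index=1.0, diffusion=1.0)

# The renderer reads these properties and does the rest: drop the apple
# next to the pitcher and its reflection appears with no cube map needed.
```

Notice there’s nothing in there about where shadows fall or what reflects in what; those answers come out of the light simulation, not out of hand-authored data.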

However, while ray tracing is much closer to the natural behavior of light than rasterization, where artists are in charge of imagining and then simulating every visible effect themselves, it’s still a shortcut: a subset of path tracing. Think of it as “reverse path tracing.”

Ray and Path Tracing

Let’s talk about the difference between ray tracing and path tracing. As we discussed previously, ray tracing starts from your camera, the player’s perspective. From there, it looks at what objects are around the camera, drawing rays from the camera to those objects, and then it looks at what light sources it has to sample from. Then, depending on how much processing power is available, the developer can add additional light bounces for more information and definition. Path tracing is a more holistic version of ray tracing that begins from light sources and builds a more accurate image by bouncing random rays across many bounces, instead of the few fixed bounces we see with ray tracing right now.

In path tracing, rays are cast by their original light sources, and the rays then work their way to our eyes/the game’s camera. One of the biggest benefits of this is that optical effects like depth of field and indirect lighting don’t require extra algorithms.
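
One note: in practice, most real-time path tracers still start their random paths at the camera and let them find the lights; what matters is the random, many-bounce sampling. Here’s a sketch of why depth of field comes along for free: instead of one ideal pinhole ray per pixel, every sample starts from a random point on a simulated lens. The thin-lens camera model and all the numbers here are our own illustrative stand-ins, not any engine’s actual camera code.

```python
import random

# Why depth of field needs no extra algorithm in a path tracer: each
# sample starts at a random point on a simulated lens instead of a
# single pinhole, all aimed at the focal plane.

APERTURE = 0.1     # lens radius: a bigger lens means stronger blur
FOCAL_DIST = 5.0   # objects at this distance stay perfectly sharp

def lens_ray(pixel_dir):
    """One randomized camera ray for a sample headed along pixel_dir."""
    # The point on the focal plane this pixel is looking at.
    focus = tuple(c * FOCAL_DIST for c in pixel_dir)
    # Start the ray at a random spot on the lens (a square here, for simplicity).
    ox = (random.random() * 2 - 1) * APERTURE
    oy = (random.random() * 2 - 1) * APERTURE
    origin = (ox, oy, 0.0)
    direction = tuple(f - o for f, o in zip(focus, origin))
    return origin, direction

# Average many such samples per pixel: anything at FOCAL_DIST lands on
# the same pixel every time (sharp), while nearer and farther objects
# smear across their neighbors (blur). No depth-of-field pass required.
origin, direction = lens_ray((0.0, 0.0, 1.0))  # one sample, straight ahead
```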

What this means for video games is a little more complicated: because this technology requires dedicated hardware, people have to buy new graphics cards and consoles to make use of it, and that takes time. Until the hardware becomes more widely adopted, there’s less incentive for developers to utilize the tech in their games. With that said, ray tracing isn’t the first time GPU makers have introduced new hardware-bound technology; programmable shaders are an integral part of game development today, but when the technology first appeared, it was available on only a single graphics card, the Nvidia GeForce 3. As the tech becomes cheaper and easier to implement, expect it to become more common.

Though path tracing might be on the horizon, we’re still in the growing adoption phase for ray tracing. Nvidia’s RTX cards were a huge jump forward for ray tracing, but they’re still very limited. When games do use ray tracing, it’s often just for reflections or just for shadows, while other aspects of the game are rendered through traditional methods. Modern GPUs were designed to render rasterized scenes as efficiently as possible, and game developers are used to working in that space. Even four years after the release of the first ray-tracing GPUs, developers are still mostly using rasterization, with ray tracing helping things along.

That’s why, when you drop into a PC game’s graphics settings, there can be dozens of toggles for individual effects like shadows, reflections, and depth of field. With a fully path-traced game–something that will become more and more possible with each new generation of GPU–none of those settings would be necessary. The shadows cast by a light-blocking object, the colors spread by bouncing light, and the depth of field differential between close-up and far-off objects are all a natural part of tracing that path, and turning them on or off wouldn’t affect the efficiency of a fully traced path.

Full path tracing is still a ways off–the processing power and the algorithms that help it along both have a long way to go–and we’ll probably never get to the point of truly organic path-traced graphics; even a single lightbulb casts billions of rays that bounce, reflect, and diffuse, and that’s still too much for the hardware to handle. The path forward here is two-fold. Dedicated hardware is only part of the equation. Just as crucial are the algorithms that bridge the gap between the inherent complexity of simulating light and the limited bounds of computational power. For example, Monte Carlo ray tracing, which estimates light using a random-sampling technique called the Monte Carlo method, already plays a big role and will continue to. That random sampling then has to be de-noised to give the viewer a clean image. If you’ve ever used ray tracing, or watched videos about its use in games, and noticed static around the corners of the image, that’s the leftover noise from random sampling, which the GPU has to clean up in real time. At the moment, that cleanup is just as important as the light simulation itself.
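
To get a feel for the noise problem, here’s a tiny Monte Carlo experiment in Python. We pretend a pixel’s true brightness is exactly 0.5 and estimate it by averaging random samples, which is the Monte Carlo method in miniature; the numbers are invented, but the slow convergence is exactly why real-time path tracers lean so hard on denoisers.

```python
import random

# A feel for why Monte Carlo rendering is noisy. We estimate a pixel's
# "true" brightness (pretend it's exactly 0.5) by averaging random
# samples: with few samples the answer is visibly wrong (noise), and
# adding more samples only converges slowly.

TRUE_BRIGHTNESS = 0.5

def noisy_sample():
    """One random light sample: correct only on average."""
    return random.random()  # uniform on [0, 1], mean 0.5

for n in (1, 4, 16, 256, 65536):
    estimate = sum(noisy_sample() for _ in range(n)) / n
    error = abs(estimate - TRUE_BRIGHTNESS)
    print(f"{n:>6} samples: estimate {estimate:.3f} (error {error:.3f})")

# Real-time path tracers can only afford a handful of samples per pixel
# per frame, so they lean on a denoiser: a filter (increasingly, an AI
# model) that trades a little blur for a clean image.
```

Run it a few times and the small sample counts bounce around wildly while the large ones settle near 0.5; a game that can afford only a sample or two per pixel per frame is living at the top of that table, which is why the denoiser matters so much.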

Real-time ray and path tracing have both come a long way, and people who work with computer graphics can already use them to preview the basic look of a scene, but a full render can still take minutes, hours, or even days. Even so, path tracing in real-time games is closer than ever.

There will be some bumps in the road. We’ve already witnessed one of the big ones: at least right now, ray-traced scenes don’t look terribly different from rasterized scenes, but they perform worse. That makes implementing ray tracing look like a sacrifice with no benefits, and at this early stage of the technology, it kind of is. Again, modern graphics cards are optimized to handle rasterized rendering with incredible efficiency, while ray and path tracing are, despite being pretty old ideas, very new in this space. But as GPUs become more powerful and path-tracing algorithms become more efficient, the sacrifice will begin to give way to more benefits. Increased computing power and optimized algorithms, along with increased buy-in from developers and adoption by consumers, will turn this from bleeding-edge to everyday technology. While ray tracing-enabled cards are still a smaller part of the market, they’re growing quickly. From November 2021 to March 2022, the percentage of cards with ray tracing capabilities jumped by nearly 4%, according to the Steam Hardware Survey.

For the developer, this means simpler scenes with less programming. Instead of having to place invisible lights and create cube maps and the like, they’ll simply drop lights into a scene, set up object materials–maybe even pulled from a shared database of material information–and let the path tracing do the rest of the work. That should let developers spend more time creating interesting visuals instead of getting the math of the visuals to work, and that translates directly into the potential for more interesting games for us.

For us gamers, it also means less time spent tweaking game settings to get acceptable game performance–eventually. It also offers the possibility of new mechanics. Imagine Alien: Isolation, but you can see the shadow of the Xenomorph looming behind you, or see its blurry reflection in the brushed metal wall of the space station. Or a puzzle that involves accurately bounced and blended lights in a game like Resident Evil or Tomb Raider. These kinds of things will take a while to come to fruition, but with the percentage of ray tracing cards growing quarterly, alongside the same functionality in the Xbox Series S and X and PlayStation 5, that day is quickly getting closer.

Path tracing is the final stage of simulating light, and from here it’s a matter of making it work as well as it can, with those improved algorithms and some dedicated silicon on our GPUs. Even a few years into the ray-tracing era, developers are still learning how to use it to make games better, and advancement feels slow in the moment, but the truth is that this stuff is moving quickly, and we’re only beginning to see how path tracing will change games.

Image Credit: Wikimedia Commons, Qutorial