Real-time path tracing, the next stage of game rendering, is on the horizon, Nvidia says. But wait–didn’t RTX just get here? So what is path tracing, how is it different from ray tracing, and why does it matter? It’s all about the way light works–both the way developers use it and the way gamers interact with it–and about turning a complex magic trick into simple physics with the quickly growing graphical horsepower we have in our PCs and consoles.
Whether they’re in two or three dimensions, games are an illusion. When you’re solving puzzles in Outer Wilds or dodging Margit’s giant hammer in Elden Ring, there’s a complicated tangle of mathematical magic tricks running in the background to make you think you’re looking at a natural and organic thing. Behind the scenes, your GPU is doing billions of calculations per second to make it all move and function.
While games are more graphically impressive than ever, it would be easy to look at the last decade or so of games and say that graphical advancements have slowed down–we’re not seeing the easily visible jumps in fidelity that came with the transition from 2D to 3D, or from basic 3D to more advanced rendering techniques. But the truth is that big stuff is happening behind the scenes that will change the way games are made and maybe even how we play them.
First, let’s talk about the main rendering methods used to put games on our screens.
Rasterization, Tracing, and Light
Rasterization is the way games are rendered right now, and the way they’ve been rendered for decades–it’ll most likely never go away completely. Rasterization is the act of rendering 3D models as 2D images. As explained by Nvidia, “objects on the screen are created from a mesh of virtual triangles… computers then convert the triangles of the 3D models into pixels on a 2D screen.” Other processing, like anti-aliasing, is then applied to those pixels to show you the final product. On a 4K display, your GPU is calculating and displaying the color information for more than 8 million pixels, and then refreshing that data 30, 60, or even 144 times per second.
This is computationally intensive, and so developers use shortcuts to help speed things up so that our graphics cards don’t choke on the pixels and just give up. For example, many games are rendered at a lower resolution and then upscaled, or rendered in a checkerboard pattern on your screen to cut down on the number of pixels the GPU has to worry about.
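To put some rough numbers on that, here’s a back-of-the-envelope sketch in Python. The resolutions and frame rate are just examples, but they show how much work rendering at a lower internal resolution saves:

```python
# Back-of-the-envelope math: how many pixels a GPU has to shade per
# second at native 4K versus a lower internal resolution that gets
# upscaled. Numbers here are illustrative.

def pixels_per_second(width, height, fps):
    return width * height * fps

native_4k = pixels_per_second(3840, 2160, 60)       # ~498 million/sec
internal_1440p = pixels_per_second(2560, 1440, 60)  # ~221 million/sec

print(f"Native 4K @ 60 fps:      {native_4k:,} pixels/sec")
print(f"1440p upscaled @ 60 fps: {internal_1440p:,} pixels/sec")
print(f"Pixels saved by upscaling: {1 - internal_1440p / native_4k:.0%}")
```

Rendering at 1440p and upscaling cuts the shaded pixel count by more than half, which is exactly why these shortcuts are so popular.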
As far as light goes, ray tracing and path tracing are about tracing the way light bounces in different ways, while rasterization shows you what the game world looks like if, instead of bouncing, the light just stopped at the first thing it hit. The object is illuminated, but that illumination doesn’t affect any of the other objects on the screen in front of you.
Stupid Computer Tricks
When I talk about games being an illusion, lighting is one of the biggest examples. In the real world, light is complicated. If you put a bright red apple on a white table, light bouncing off the apple will cast a red hue on the table below, while light bouncing off of the table will cast a white hue up onto the apple. If there are two lights in the room, the apple might cast two slightly different shadows on the table. Every light source and object can emit, reflect, scatter, or absorb light.
Simulating all of that has historically been out of the range of real-time graphics processing. So instead, much of this information is pre-baked into a scene. Game developers have gotten incredibly good at faking natural-looking lighting effects. If you’ve ever seen those videos from Japanese television of people pretending to play ping pong while other people in black suits move the players around, game lighting is kind of like that–there are a lot of manually crafted tricks going on in the background. For example, to make sure your GPU has enough time to render everything on your display, stuff that you can’t see is dropped or “culled.” In real life, reflections reflect whether you’re looking at them or not (though we can definitely get into some philosophical discussions about that if you want to). When rendering a game, though, those elements are ignored until they’re on-screen again. That means that a dynamic light source or a reflective surface that’s just off-screen might be skipped over–not rendered–until it’s on-screen. You’ll see that manifest as reflections suddenly appearing when you turn your game camera just slightly. If you look at a reflective water surface in just about any modern game, that’s often the easiest place to spot this effect.
Additional effects are then added on after the fact. The shadows that your character casts are calculated separately from your character, and in many PC games you can adjust this setting individually, from a blocky mess that’s barely discernible as a shadow to a high-fidelity one that looks believable. This shadow isn’t being calculated based on the exact placement of the light source and your character or object, though; it’s more like an estimation of what the shadow would look like, rendered as a two-dimensional image on the ground below that object.
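As a crude illustration of that kind of estimated shadow, here’s a Python sketch of a “blob shadow”: a flat dark disc projected straight down onto the ground, no light simulation involved. Every value here is invented for the example.

```python
# A crude "blob shadow" in the spirit of the estimation described
# above: rather than tracing any light, project a flat dark disc onto
# the ground beneath the object. All values here are invented.

def blob_shadow(obj_x, obj_height, obj_z, obj_radius, ground_y=0.0):
    # Fade the shadow with height, so floating objects cast fainter ones.
    height = max(obj_height - ground_y, 0.0)
    opacity = max(0.0, 1.0 - height / 10.0)  # hypothetical falloff rate
    return {"x": obj_x, "z": obj_z, "radius": obj_radius, "opacity": opacity}

# A character hovering 1.5 units above the ground:
print(blob_shadow(2.0, 1.5, -4.0, 0.5))
# {'x': 2.0, 'z': -4.0, 'radius': 0.5, 'opacity': 0.85}
```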
Reflecting on Reflections
Reflections, meanwhile, are a separate thing altogether and have caused gamers plenty of confusion and consternation, sometimes even inspiring conspiracy theories. One example is Marvel’s Spider-Man for PlayStation 4 from developer Insomniac, in which gamers mistook a basic reflection technique for a tribute to 9/11. Spider-Man’s New York City is chock full of tall buildings covered in glass panels, and gamers expect to see reflections when they get close to a sheet of glass. With current rendering techniques, though, these reflections aren’t being calculated. Instead, they’re something called cube maps–literally, a cube-shaped image that simulates a reflection–and they’re created by an artist. For a given area of the game, the artist might create cube maps for street-level windows and high-up windows, for day and night, and so on.
Because most buildings are just tall boxes, this works most of the time, but it has its limits. In Spider-Man, if you crawl along certain buildings near “Ground Zero,” the site of the 9/11 World Trade Center attacks, you can see the hazy image of two buildings. Some gamers initially believed this to be a quiet tribute to the Twin Towers. The truth, though, is that it was a cube map simulating reflections–a static, generic image embedded in the reflective object, rather than a genuine reflection of the surroundings.
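Here’s a rough sketch of how a cube map lookup works, with illustrative names rather than any engine’s real API: take the reflection direction, figure out which of the cube’s six faces it points at, and sample the artist’s pre-made image for that face. The actual scene is never consulted–which is exactly how “ghost” reflections like the ones near Ground Zero happen.

```python
# Given a reflection direction, pick which of the six pre-made cube
# faces to sample. The function name is illustrative, not a real API.

def cube_map_face(direction):
    x, y, z = direction
    ax, ay, az = abs(x), abs(y), abs(z)
    # The dominant axis of the direction decides the face
    if ax >= ay and ax >= az:
        return "+x" if x > 0 else "-x"
    if ay >= ax and ay >= az:
        return "+y" if y > 0 else "-y"
    return "+z" if z > 0 else "-z"

print(cube_map_face((0.2, 0.9, -0.1)))  # '+y': sample the artist's "sky" image
print(cube_map_face((-0.8, 0.1, 0.3)))  # '-x': sample that face's image instead
```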
These are the places where the seams in the illusion are visible: reflections, lighting, the edges of your visible play space. The more you play games, the more you can see how these illusions are good enough to make games look realistic, but hardly something calculated in real-time.
Light, Simulated
That’s where ray tracing comes in.
Ray tracing isn’t new–not even close. It’s actually one of the oldest ideas for rendering three-dimensional graphics. Ray tracing is a method of lighting a computer-generated scene by simulating the movement of light. Instead of a designer manually estimating how a scene might look under different lighting–something game developers have become very good at–lights are placed in the scene, and the bounces of light rays are traced from the camera (the player’s eye) back to the light source. This is mathematically elegant and straightforward, but it takes a lot of graphical horsepower. In older computer-animated movies that used ray tracing, single frames could take hours to render. Until the last few years, real-time ray tracing was an impossibility.
Pixar has been using some ray tracing since the early 2000s, and has been fully ray tracing scenes since 2013’s Monsters University. The basic idea of tracing how light travels goes back at least to the 1500s, and the first ray tracing algorithm for computer graphics arrived in 1968, when Arthur Appel detailed it in a paper called Some Techniques for Shading Machine Renderings of Solids.
In a rasterized scene, the different triangles don’t “talk” to each other. Instead, their shadows and reflections are added on after the fact, as outlined above. In a ray-traced scene, though, those triangles can talk to each other through the medium of light: instead of just rendering each triangle individually, the computer takes into account how the color and material of one triangle will light up another. Ray tracing “captures those effects by working back from our eye (or view camera),” Nvidia explains, tracing the path of a light ray through each pixel on a two-dimensional viewing surface, out into a 3D-modeled scene. Each bounce adds information to the ray–as well as complexity and calculation time–such as color (the red of the apple), reflectivity (a matte table versus a mirror-polished one), and refraction (where else does the light go?).
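To make that concrete, here’s a heavily simplified Python sketch of the “work back from the eye” loop: one ray per pixel, a single sphere, a single light, and no bounces beyond the first hit. A real renderer juggles millions of triangles and many bounces, but this is the skeleton.

```python
import math

# One ray per pixel, one sphere, one directional light -- a bare-bones
# version of tracing from the eye out into the scene. Everything here
# is illustrative, not any engine's real code.

def hit_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2 for t.
    # direction is assumed to be unit length, so the quadratic's a = 1.
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def trace_pixel(px, py, width, height):
    # Fire a ray from the eye (at the origin) through this pixel on a
    # virtual screen one unit in front of the camera.
    direction = [px / width - 0.5, py / height - 0.5, 1.0]
    length = math.sqrt(sum(d * d for d in direction))
    direction = [d / length for d in direction]

    center, radius = [0.0, 0.0, 5.0], 1.0
    t = hit_sphere([0.0, 0.0, 0.0], direction, center, radius)
    if t is None:
        return (20, 20, 40)  # miss: flat background color

    # Shade the hit point by how directly it faces the light.
    point = [d * t for d in direction]
    normal = [(p - c) / radius for p, c in zip(point, center)]
    light_dir = [0.577, 0.577, -0.577]  # one directional light
    brightness = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return (int(255 * brightness), 0, 0)  # a red sphere

print(trace_pixel(400, 300, 800, 600))  # center pixel: (147, 0, 0), a lit red
```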
To light a room in a rasterized scene, a developer might have to place a bunch of different light sources to simulate the ways in which ray-traced light might bounce. Imagine a simple cube-shaped room with a ceiling light and sun filtering in through a window. There’s a table in the middle with a clear glass pitcher. The ceiling light and sun outside the window both emit light, and then different objects in the room will bounce that light around. A curvy piece of glassware will refract the light down onto the table and onto the walls. The computer can’t calculate how light rays move through the room in a traditional rasterized scene, so the developer has to simulate it manually. They might put invisible light sources up in the corners of the room to account for sunlight refracting off the pitcher; then they could apply a cube map to show how the table, window, and ceiling light reflect off of the glass.
In a ray-traced scene, much of that manual configuration goes away because the computer figures out how light should interact with the scene. Each object and light source has clearly established properties–emissiveness, reflectivity, refraction, diffusion, and so on–and those properties are simply allowed to work together to build the scene before you. Instead of a cube map on that pitcher, its reflectivity allows it to accurately simulate the reflection of the table and window; add that red apple from before to the scene, and it’ll reflect naturally without the artist having to recreate a cube map. The way the pitcher bends light and splashes it onto the table is calculated rather than drawn by an artist. You drop in your emissive light sources, and the computer figures out how they light the room. The artist is now free to work on creative stuff–character designs, art direction, and the like–rather than spending a bunch of time doing magic tricks to make us feel like the room is real. That stuff all just happens.
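As a sketch of what those “clearly established properties” might look like in practice, here’s a hypothetical material record in Python. The fields and values are invented for illustration, not pulled from any real engine:

```python
from dataclasses import dataclass

# In a path-traced pipeline, an artist mostly describes what a surface
# IS, and the renderer works out how it looks. A hypothetical material
# record might look like this:

@dataclass
class Material:
    base_color: tuple        # the red of the apple
    emissive: float          # 0 for the pitcher, > 0 for the ceiling light
    reflectivity: float      # 0 = matte table, 1 = polished mirror
    refraction_index: float = 1.0  # ~1.5 for the glass pitcher

glass_pitcher = Material(base_color=(1, 1, 1), emissive=0.0,
                         reflectivity=0.1, refraction_index=1.5)
ceiling_light = Material(base_color=(1, 1, 0.9), emissive=10.0,
                         reflectivity=0.0)
```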
However, while ray tracing is much closer to the natural calculation of light than rasterization, where the artists are in charge of imagining and then simulating all the visible effects, it’s still a shortcut: a subset of path tracing that only follows a handful of bounces.
Ray and Path Tracing
Let’s talk about the difference between ray tracing and path tracing. As we discussed previously, ray tracing starts from your camera, the player’s perspective. From there, it draws rays out to the objects around the camera and looks for the light sources it can sample from. Depending on how much processing power is available, the developer can add additional light bounces for more information and definition. Path tracing is a more holistic version of ray tracing: instead of sampling just a few bounces, it follows many randomized rays through many bounces, reconstructing the full paths that light would take through the scene.
Because path tracing accounts for the entire journey light takes between its source and our eyes/the game’s camera, one of its biggest benefits is that optical effects like depth of field and indirect lighting don’t require extra algorithms.
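Here’s a deliberately toy-sized Python sketch of the Monte Carlo idea at the heart of path tracing: for a single pixel, average together many random multi-bounce paths. The “scene” is imaginary (each bounce has a 10% chance of finding a light, and walls absorb half the energy), but it shows the two defining traits: randomness, and convergence as samples pile up.

```python
import random

# For one pixel, average many random multi-bounce paths. The scene is
# made up: a 10% chance per bounce of reaching a light, 50%-gray walls.

def trace_path(depth, max_depth=5):
    if depth >= max_depth:
        return 0.0  # give up after too many bounces
    if random.random() < 0.1:
        return 1.0  # the path reached an emissive surface
    # Otherwise bounce off a gray wall in a random direction, keep going
    return 0.5 * trace_path(depth + 1, max_depth)

def render_pixel(samples):
    return sum(trace_path(0) for _ in range(samples)) / samples

random.seed(1)
for n in (8, 64, 4096):
    print(f"{n:>4} samples per pixel -> brightness {render_pixel(n):.3f}")
# Few samples give a noisy estimate; more samples converge on the true
# value -- which is why undersampled path tracing looks like static.
```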
What this means for video games is a little more complicated: because this technology requires dedicated hardware, people have to buy new graphics cards and consoles to make use of it, and that takes time. Until the hardware becomes more widely adopted, there’s less incentive for developers to utilize the tech in their games. With that said, ray tracing isn’t the first time GPU makers have introduced new hardware-bound technology; programmable shaders are an integral part of game development today, but when that technology was first developed, they were available only on a single graphics card, the Nvidia GeForce 3. As the tech becomes cheaper and easier to implement, expect it to become more common.
Though path tracing might be on the horizon, we’re still in the growing adoption phase for ray tracing. Nvidia’s RTX cards were a huge jump forward for ray tracing, but they’re still very limited. When games do use ray tracing, it’s often just for reflections or just for shadows, while other aspects of the game are rendered through traditional methods. Modern GPUs were designed to render rasterized scenes as efficiently as possible, and game developers are used to working in that space. Even four years after the release of the first ray-tracing GPUs, developers are still mostly using rasterization with ray tracing helping things along.
That’s why, when you drop into a PC game’s graphics settings, there can be dozens of toggles to turn things off. With a fully path-traced game–something that will become more and more possible with each new generation of GPU–none of those settings are necessary. The shadows cast by a light-blocking object, the colors spread by bouncing light, and the depth of field differential between close-up and far-off objects are all a natural part of tracing that path, and turning them on or off wouldn’t affect the efficiency of a fully traced path.
Full path tracing is still a ways off–the processing power and algorithms that help it along still have a ways to go–and we’ll probably never get to the point of truly organic path-traced graphics; even a single lightbulb casts billions of rays that bounce, reflect, and diffuse, and that’s still too much for the hardware to handle. The path forward here is two-fold: dedicated hardware is only part of the equation. Just as crucial are the algorithms that bridge the gap between the inherent complexity of simulating light and the limited bounds of computational power. For example, Monte Carlo ray tracing, which estimates lighting through random sampling (the Monte Carlo method), already plays a big role and will continue to. That random sampling then has to be de-noised to give the viewer a clean image. If you’ve ever enabled ray tracing, or watched videos of it in games, and noticed grainy static around the edges of the image, that’s the raw random sampling showing through before the GPU’s real-time de-noising has fully cleaned it up. At the moment, that de-noising is just as important as the light simulation itself.
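And here’s an equally toy-sized sketch of the denoising step: average each noisy pixel with its neighbors. Real denoisers, including the AI-driven ones on modern GPUs, are far more sophisticated, but the goal is the same: turn a speckled, undersampled image into a clean one.

```python
# A toy denoiser: a simple box blur over a noisy image, standing in
# for the much smarter denoisers real GPUs use.

def box_denoise(image):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            samples = []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        samples.append(image[ny][nx])
            out[y][x] = sum(samples) / len(samples)  # average the neighborhood
    return out

noisy = [[0.0, 1.0, 0.0],
         [1.0, 0.0, 1.0],
         [0.0, 1.0, 0.0]]  # speckle pattern, like an undersampled render
print(box_denoise(noisy)[1][1])  # ~0.444: the speckle is smoothed out
```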
Real-time ray and path tracing have both come a long way, and people who work with computer graphics can already use them to preview the basic look of a scene, though the full render can still take minutes, hours, or even days. Using path tracing in real-time games, however, is getting closer than ever.
There will be some road bumps along the way. We’ve witnessed one of the big ones already: at least right now, ray-traced scenes don’t look terribly different from rasterized scenes, but they perform worse. That makes implementing ray tracing look like a sacrifice with no benefits and, right now, that’s because it kind of is, in this early stage of the technology. Again, modern graphics cards are optimized to handle rasterized rendering with incredible efficiency, while ray and path tracing are, despite being pretty old ideas, very new in this space. But as GPUs become more powerful and path tracing algorithms become more efficient, the sacrifice will begin to give way to more benefits. Increased computing power and optimized algorithms, as well as increased buy-in from developers and adoption by consumers, will turn this from bleeding edge to everyday technology. While ray tracing-enabled cards are still a smaller part of the market, they’re growing quickly. From November 2021 to March 2022, the percentage of cards with ray tracing abilities jumped by nearly 4%, according to the Steam Hardware Survey.
For developers, this means simpler scenes with less programming. Instead of having to place invisible lights and create cube maps and the like, they’ll simply drop lights into a scene, set up object materials–maybe even pulled from a database of materials information or something like that–and let the path tracing do the rest of the work. That should let developers spend more time creating interesting visuals instead of getting the math of the visuals to work, and that translates directly into the potential for more interesting games for us.
For us gamers, it also means less time spent tweaking game settings to get acceptable game performance–eventually. It also offers the possibility of new mechanics. Imagine Alien: Isolation, but you can see the shadow of the Xenomorph looming behind you, or see its blurry reflection in the brushed metal wall of the space station. Or a puzzle that involves accurately bounced and blended lights in a game like Resident Evil or Tomb Raider. These kinds of things will take a while to come to fruition, but with the percentage of ray tracing cards growing quarterly, alongside the same functionality in the Xbox Series S and X and PlayStation 5, that day is quickly getting closer.
Path tracing is the final stage of simulating light, and from here it’s a matter of making it work as well as it can with those improved algorithms and some dedicated silicon on our GPUs. Even a few years into the ray-tracing era, developers are still learning how to use it to make games better, and advancement feels slow in the moment, but the truth is that this stuff is moving quickly, and we’re only beginning to see how path tracing will change games.
Image Credit: Wikimedia Commons, Qutorial