Dev Blog #35 | Foundry Friday: Drilling Down into First-Person Rendering
Author: Rambus
Published 1 year ago
First-person games like Foundry all face a common problem: how do you correctly render the player's gun, or in our case, drill, in front of the camera? The drill should follow you wherever you go and receive correct lighting.
[h2]The Naïve Solution[/h2]
The naïve implementation is to simply "parent" the drill to the camera so that it follows you around, which at first glance looks correct. But what happens when you get too close to a wall? Since the drill exists in world space, it pokes right through the wall, an issue known as clipping.
[img]https://i.imgur.com/os7n4ZX.gif[/img][i]Oh no! That drill doesn't look right[/i]
[h2]Solved Problem[/h2]
Don't worry—this is a solved problem. We've all played shooters and seen the gun displayed correctly when up against a wall.
[img]https://i.imgur.com/KpHGRDj.jpg[/img]
[i]Random Wikipedia CSGO image[/i]
Some modern AAA games tackle this by rendering the first-person view in world space, using IK or other animation techniques to pull the hands away when you're against a wall. That works for first-person shooters, where firing point-blank into a wall doesn't make much mechanical sense anyway. But in our game, we spend a lot of time up against a wall, drilling!
The more common solution for ensuring that the first-person elements are always on top is to use two cameras: one to draw the world and another to draw the first-person elements. This guarantees that the drill is always rendered above the world. An added benefit is that each camera can have different field-of-view settings, ensuring that the drill doesn't look distorted on ultra-wide monitors.
[img]https://i.imgur.com/TzkRIKU.png[/img][i]Rendering and compositing[/i]
[h2]The Catch[/h2]
So, let's try that, shall we? Perfect, it looks great.
Well, if only things were that simple. Foundry uses something called "deferred rendering" ([url=https://store.steampowered.com/news/app/983870/view/3731840159852576819]which we discussed in more detail in this previous post[/url]) through Unity's "default" rendering pipeline. Drawing with two cameras means running the entire pipeline twice, which carries unacceptably high performance overhead. Even running the second camera in forward mode didn't bring the overhead down to acceptable levels: it was adding as much as 25% to our frame time, resulting in lower frame rates across the board.
So, we need to creatively solve first-person rendering within a single camera, using a deferred rendering pipeline.
[h2]Time to Get Creative[/h2]
To do this, we simulate drawing with multiple cameras within a single camera setup, even when dealing with complex multi-object first-person geometry.
First, we render all first-person objects into the "stencil buffer," a special buffer that lets us efficiently mask out areas of the image for future draws. We also write depth at this point, with "ZTest Always" set, ensuring that these meshes are drawn regardless of their position in world space.
[img]https://i.imgur.com/yhX2CBn.png[/img][i]Stencil buffer after drawing our drill[/i]
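In Unity ShaderLab terms, a stencil-and-depth pre-pass like this can be sketched roughly as follows. This is illustrative, not our exact shader; the stencil Ref value of 1 is an assumption.
[code]
// Hypothetical first-person pre-pass: fills stencil and depth only,
// ignoring whatever depth the world has already written.
Pass
{
    ZTest Always      // draw even where the drill is "inside" a wall
    ZWrite On         // record the drill's depth for the later color pass
    ColorMask 0       // no color yet; this pass only marks pixels

    Stencil
    {
        Ref 1         // tag this pixel as first-person geometry
        Comp Always
        Pass Replace
    }
}
[/code]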
After filling the stencil buffer with first-person geometry, we draw the color data. For each pixel, we first verify that the first-person stencil bit is set, then test the depth buffer with "ZTest Equal." Only where both checks pass do we draw.
[img]https://i.imgur.com/q1xUrfs.png[/img][i]Depth for the drill is also drawn, regardless of whether the drill *should* be inside a wall[/i]
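The color passes for first-person materials then carry the matching stencil and depth tests. Again a sketch, assuming the same stencil Ref value of 1 as in the pre-pass:
[code]
// Hypothetical first-person color pass: shades only pixels that the
// pre-pass flagged AND whose depth matches the drill's own depth.
Pass
{
    ZTest Equal       // only shade the surface the pre-pass recorded

    Stencil
    {
        Ref 1
        Comp Equal    // skip any pixel not flagged as first-person
    }

    // ... normal lighting / shading code here ...
}
[/code]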
One final issue remains: World Space UI actually draws after all our geometry, so we need to handle that as well. Unity UI includes a special component called a "Mask," which prevents child elements from drawing outside the graphic. Here, we apply another custom shader to check the stencil mask from before and determine whether we should draw the UI.
[img]https://i.imgur.com/xoHqHj5.png[/img][i]Reusing Unity's UI masking system[/i]
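Conceptually, the custom UI shader just adds a stencil comparison on top of Unity's default UI shading, so world-space UI skips pixels already claimed by first-person geometry. A sketch, again assuming a first-person stencil Ref of 1:
[code]
// Hypothetical stencil check for a world-space UI shader, so the UI
// never draws over pixels masked as first-person in the pre-pass.
Stencil
{
    Ref 1
    Comp NotEqual     // draw UI only where the drill is NOT
    Pass Keep         // leave the mask untouched
}
[/code]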
After all this, we still had more masking and positioning work to do, for forward-rendered effects like smoke and for ultra-wide monitors. But those solutions largely follow the approach described above, so I won't delve into them here.
[img]https://i.imgur.com/CqWZnCI.gif.gif[/img][i]Tada! It works, and it's fast[/i]
[h2]In Summary[/h2]
Developing games is a mix of best practices and creative problem-solving. Every project has its own technology limitations. Instead of lamenting the high cost of compositing two images in your engine, your job becomes finding a solution with the tools you have.
-MarkL