In this part we address occlusion issues and potential conflicts with other assets by decoupling the effect from the camera depth/stencil buffer. We'll also make it easier to highlight multiple objects at the same time.
Quick recap of what we’ve been doing so far:
Acquire an 8-bit texture (without depth) to store the group ID.
Render all relevant objects again to write their group ID to the GroupID texture, testing against camera depth and setting the stencil to 1. The group ID is provided as a shader variable.
Bind Camera Color + Camera Depth as the render target.
Render all relevant objects once more, and use the stored group ID to determine outlines, while testing against camera depth and stencil. Fill color and outline color are provided as shader variables.
Issues with this method:
Another asset might rely on the camera stencil buffer, resulting in a potential conflict.
When rendering the overlay, we test against the camera depth buffer. This can fail if there are overlay objects occluding one another, because the camera depth buffer only has depth information on scene objects. In such a situation, far objects might render on top of near objects.
Left: incorrect ordering. Right: expected result.
Using a separate depth/stencil buffer:
Acquire an 8-bit texture (with depth/stencil) to store the group ID.
Render all relevant objects again, testing against the GroupID depth and stencil, and use their group ID to determine outlines. Optionally test against camera depth when a see-through effect is not desired. Fill color and outline color are provided as shader variables. Since the camera depth buffer is no longer bound, we need to do our Z tests against the _CameraDepthTexture texture.
Depth testing against “_CameraDepthTexture”
In order to test the depth of a fragment against a depth value from _CameraDepthTexture we need to bring both into the same space. This example shows how to do so in linear [0, 1] space.
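A minimal sketch of such a comparison, assuming the built-in render pipeline's UnityCG.cginc helpers (the function and variable names here are illustrative, not the exact ones from the project):

sampler2D_float _CameraDepthTexture;

// screenPos = ComputeScreenPos(clipPos), computed in the vertex shader
// fragmentViewZ = -UnityObjectToViewPos(v.vertex).z (i.e. COMPUTE_EYEDEPTH)
float DepthDifference01(float4 screenPos, float fragmentViewZ)
{
    // Scene depth: raw device depth, converted to linear [0, 1]
    float sceneDepth01 = Linear01Depth(SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture, UNITY_PROJ_COORD(screenPos)));
    // Fragment depth: view-space Z divided by the far plane (_ProjectionParams.z)
    float fragmentDepth01 = fragmentViewZ / _ProjectionParams.z;
    // Positive while the fragment is in front of the scene geometry
    return sceneDepth01 - fragmentDepth01;
}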
Keeping a list of all instances is a matter of personal preference. I like my component data and logic separate, but this is not required. You could also add the fill and outline colors directly to the OverlaySelectable component. I opted to keep a list of all group colors in OverlayRenderer instead.
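For illustration, a sketch of that layout (the field names are assumptions, not the exact ones from the project):

using UnityEngine;

// Each selectable object only stores which group it belongs to;
// OverlayRenderer keeps a matching list of per-group fill/outline colors.
public class OverlaySelectable : MonoBehaviour
{
    public int GroupID;
}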
I recently wrote an overlay effect to highlight units through walls, using the following criteria:
Ability to see through walls
Support alpha blending
A thin, crisp outline
Grouping (multiple objects share the same outline if they touch)
I thought that this was a good opportunity to share not only how it works, but also my thought process behind developing the effect. Now, if all you want to do is overlay a color over an object in Unity, you could just render the object with an unlit shader. However, in order to produce more complex effects, such as outlines, or interaction with other highlighted objects, we need to get a little more creative.
Command Buffers
First we need to determine when to render our effect. Ideally it should be rendered either right before or right after other post processing effects, depending on whether we want other effects, such as “Bloom”, to be applied over our effect. We can do so using Command Buffers, which give us full control over when to render the effect. A Command Buffer holds a list of rendering commands to be executed by Unity at a specific point in the render pipeline. For more information see Extending the Built-in Render Pipeline with Command Buffers.
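A minimal sketch of hooking a command buffer into the built-in pipeline (the component layout is illustrative):

using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class OverlayRenderer : MonoBehaviour
{
    CommandBuffer _commandBuffer;
    Camera _camera;

    void OnEnable()
    {
        _camera = GetComponent<Camera>();
        _commandBuffer = new CommandBuffer { name = "Overlay Effect" };
        // Execute after transparents, right before image effects are rendered
        _camera.AddCommandBuffer(CameraEvent.BeforeImageEffects, _commandBuffer);
    }

    void OnDisable()
    {
        _camera.RemoveCommandBuffer(CameraEvent.BeforeImageEffects, _commandBuffer);
        _commandBuffer.Release();
    }
}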
Alternatively, the Post Processing Stack v2 (PPv2) lets you implement the effect as a custom post-processing effect. This has the advantage of managing the command buffer for you, plus it provides many useful utility methods and a nice inspector for the shader variables. It does come with some caveats (primarily with TAA) which I'll address later.
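In either variant, the core draw command boils down to something like this (a sketch; the mesh and material names are illustrative):

_commandBuffer.Clear();
_commandBuffer.DrawMesh(mesh, transform.localToWorldMatrix, _overlayMaterial);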
This instructs Unity to render a single mesh using the provided material (our custom shader). However, we most likely want Unity to render all meshes within a GameObject and its children, not just a single mesh.
Let’s write a small extension method to render all the meshes of a GameObject (and its children) in the correct position:
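A sketch of such an extension method (the non-allocating GetComponentsInChildren overload fills the static list discussed in the note below):

using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public static class CommandBufferExtensions
{
    // Reused between calls to avoid per-frame allocations
    static readonly List<MeshFilter> _meshFilters = new List<MeshFilter>();

    public static void DrawAllMeshes(this CommandBuffer commandBuffer, GameObject gameObject, Material material)
    {
        gameObject.GetComponentsInChildren(_meshFilters);
        foreach (MeshFilter meshFilter in _meshFilters)
        {
            // sharedMesh plus the object's localToWorldMatrix places the mesh correctly
            commandBuffer.DrawMesh(meshFilter.sharedMesh, meshFilter.transform.localToWorldMatrix, material);
        }
    }
}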
Note that this only works for non-static GameObjects! Static batching might otherwise merge its “meshFilter.sharedMesh” with other objects.
ℹ
I’m using the static list _meshFilters to store the result of GetComponentsInChildren, instead of using the version of GetComponentsInChildren that returns an array of components. This is to prevent garbage generation, as this method will be executed every game update.
We’re now at a point where we can run our first test. To do so I added a GameObject field to the OverlayRenderer component, and linked it with the Unity safety hat™ (this is just for testing).
Neat. But if you happen to use the PPv2 method and have Temporal Anti-Aliasing enabled, you will notice that the overlay image is flickering/shaking:
This is caused by TAA, which jitters the camera’s projection matrix during normal rendering. It then uses a complex algorithm to compare the current samples with samples from previous frames (hence the name “temporal”) in order to produce an anti-aliased image. The issue in our case is that PostProcessEvent.BeforeStack and PostProcessEvent.AfterStack are rendered after TAA. Our hat is rendered with the jittered projection matrix but is never processed by the TAA algorithm, so it remains jittered.
If you do want to use TAA in your project (if you use PPv2 you probably do), there are ways to work around this.
You could use PostProcessEvent.BeforeTransparent to render it before TAA is applied, however this can cause weird interactions with transparent objects.
Pass a non-jittered projection matrix to the shader (see the sketch after this list). This will stop the jittering, but will also prevent the overlay effect from being anti-aliased by TAA.
Use a custom command buffer to render during CameraEvent.BeforeImageEffects (see the first example), which comes after transparent object rendering but before TAA is applied. This method is compatible with TAA and works regardless of whether PPv2 is being used. You’ll lose the nice utility functions from PPv2, but you can always look up how they’re implemented. I prefer this method: I can do what I want, when I want, and it doesn’t interfere with post processing.
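For the second option, a minimal sketch (the shader property name _NonJitteredVP is an assumption; Camera.nonJitteredProjectionMatrix is a built-in property):

// The projection matrix without the TAA jitter applied
Matrix4x4 nonJitteredVP = _camera.nonJitteredProjectionMatrix * _camera.worldToCameraMatrix;
_overlayMaterial.SetMatrix("_NonJitteredVP", nonJitteredVP);
// The vertex shader then transforms positions with _NonJitteredVP instead of UNITY_MATRIX_VP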
Let’s use mouse picking to highlight the object at the mouse cursor, instead of selecting it in the inspector. First, I will mark objects that can be highlighted using a component. You can also use tags or layers. The advantage of using a component is that we can attach extra information. (for now I’ll keep it empty)
public class OverlaySelectable : MonoBehaviour { }
These objects will also need a collider for Physics.Raycast to work. Let’s add both to scene objects that we want to display a mouse over effect for, such as the Unity safety hat™.
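A sketch of the picking logic inside OverlayRenderer (field names are illustrative; TryGetComponent requires Unity 2019.2 or newer):

void LateUpdate()
{
    _commandBuffer.Clear();

    Ray ray = _camera.ScreenPointToRay(Input.mousePosition);
    if (Physics.Raycast(ray, out RaycastHit hit)
        && hit.collider.TryGetComponent(out OverlaySelectable selectable))
    {
        // Queue all meshes of the hovered object for the overlay pass
        _commandBuffer.DrawAllMeshes(selectable.gameObject, _overlayMaterial);
    }
}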
Note the use of LateUpdate instead of Update to ensure that the position of the object is captured after any movement (otherwise the effect might lag one frame behind the actual object).
Outlines
There are many different methods for creating outlines. I’ll showcase my favorite method, which produces thin, crisp, pixel perfect, camera independent outlines. This method does not rely on vertex displacement (Inverted Hull). Instead, it uses a fragment shader for per-pixel edge detection.
In order to determine whether a pixel is on the edge of an object, we need to access its neighbors. Let’s zoom in really close (5×5 pixels) on the edge of a highlighted object (top left image). The pixel to be tested is marked in white. We’ll use a 3×3 box pattern to check if any of the neighbor pixels (a total of 8 texture samples) do not belong to the current object.
ℹ
In this screenshot I’ve changed the overlay color to blue (it was red in earlier examples).
Top left: original image, with the pixel to be tested marked as white. Bottom left: Neighbor pixels belonging to the same object marked as green, and other neighbor pixels marked as red.
If any of the tested pixels does not belong to the current object, the pixel is an edge/outline pixel. We can only test for this if all pixels are marked ahead of time. This means that we need to render the effect in 2 passes.
Pass 1: Marking every pixel belonging to the highlighted object
We will do so by using a new RenderTexture to store the needed information. Since we only store an id (initially just a boolean) we don’t need a full RGBA texture. An 8 bit single channel texture is perfectly sufficient as it provides 256 distinct values. We can request a Temporary Render Texture from Unity. This is a neat feature designed to help with situations exactly like this, and means we don’t have to worry about things like resizing the render texture when the resolution changes.
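From the command buffer, that looks roughly like this (the _OverlayIDTexture name is illustrative):

int overlayIDTexture = Shader.PropertyToID("_OverlayIDTexture");
// -1/-1 = camera pixel width/height; 0 = no depth buffer; R8 = 8-bit single channel
_commandBuffer.GetTemporaryRT(overlayIDTexture, -1, -1, 0, FilterMode.Point, RenderTextureFormat.R8);
_commandBuffer.SetRenderTarget(overlayIDTexture);
_commandBuffer.ClearRenderTarget(false, true, Color.clear);
// [... draw the highlighted objects with pass 0 here ...]
_commandBuffer.ReleaseTemporaryRT(overlayIDTexture);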
You can, of course, use other sampling patterns. My favorite pattern also uses 8 taps, but they’re arranged in a diamond shape rather than the box shape in the previous example. The outline will be slightly thicker, but in my opinion, provides a more pleasant result.
// - - x - -
// - x - x -
// x - c - x
// - x - x -
// - - x - -
static const int samples_Diamond8_Count = 8;
static const float2 samples_Diamond8[8] =
{
    float2( 2,  0),
    float2(-2,  0),
    float2( 1,  1),
    float2(-1, -1),
    float2( 0,  2),
    float2( 0, -2),
    float2(-1,  1),
    float2( 1, -1)
};
Left: 8 Tap box pattern. Right: 8 Tap diamond pattern.
⚠
Larger patterns will result in a thicker outline. While you might be inclined to widen the outline like this, please keep in mind how many texture samples you’re taking. A 3×3 pattern requires 8 taps per pixel, which is perfectly fine. A 5×5 pattern requires 24 taps per pixel. A 7×7 pattern needs 48 taps. This quickly grows out of control. If you need a properly thick outline, I recommend looking at the Inverted Hull method instead.
Skinned Meshes
Currently, our overlay renderer is not compatible with skinned meshes.
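The naive extension would mirror the MeshFilter version above (a sketch, using a second static list analogous to _meshFilters):

static readonly List<SkinnedMeshRenderer> _skinnedMeshRenderers = new List<SkinnedMeshRenderer>();

public static void DrawAllSkinnedMeshes(this CommandBuffer commandBuffer, GameObject gameObject, Material material)
{
    gameObject.GetComponentsInChildren(_skinnedMeshRenderers);
    foreach (SkinnedMeshRenderer renderer in _skinnedMeshRenderers)
    {
        commandBuffer.DrawMesh(renderer.sharedMesh, renderer.transform.localToWorldMatrix, material);
    }
}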
This won’t work because SkinnedMeshRenderer.sharedMesh will always return the bind pose of the mesh, resulting in the following behaviour:
❌
Do not use SkinnedMeshRenderer.BakeMesh to circumvent this issue. Yes, you can use it to acquire a new mesh every update, but constantly removing and introducing new mesh colliders is a huge strain on the physics engine. Don’t do this.
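For the rendering side, what does work is drawing the renderer itself: CommandBuffer.DrawRenderer uses the renderer’s current state, so the skinned pose is preserved (a sketch; this may differ from the exact code in the project):

public static void DrawAllSkinnedMeshes(this CommandBuffer commandBuffer, GameObject gameObject, Material material)
{
    gameObject.GetComponentsInChildren(_skinnedMeshRenderers);
    foreach (SkinnedMeshRenderer renderer in _skinnedMeshRenderers)
    {
        // DrawRenderer draws the mesh as it is currently posed
        commandBuffer.DrawRenderer(renderer, material);
    }
}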
As for the collider, it is best to rely on multiple primitive colliders (box, sphere, capsule) instead. For characters, you can often get away with using a single capsule collider. If you need more precision, you can attach additional colliders to individual bones. For example, you could approximate an arm using 2 capsule colliders.
Depth Testing
By modifying the depth test in the shader we can achieve multiple useful effects, such as looking through walls.
Left: Normal depth testing.
Pass // Pass 0: Write overlay ID
{
    ZTest LEqual
    // [...]
}
Pass // Pass 1: Render overlay
{
    ZTest LEqual
    // [...]
}
Center: Ignore the depth buffer.
Pass // Pass 0: Write overlay ID
{
    ZTest Always
    // [...]
}
Pass // Pass 1: Render overlay
{
    ZTest Always
    // [...]
}
This allows us to “look through” objects in front of the character. It does, however, also look through the character model itself, resulting in a layered look. This occurs because some pixels are handled multiple times (for example, the torso behind the right arm; it’s especially obvious with the hair). A “clean”, non-layered look can be achieved by making sure every pixel is only handled once.
Right: Same as center, but mark pixels using the stencil buffer, so they are only rendered once.
Pass // Pass 0: Write overlay ID
{
    Stencil
    {
        Ref 1
        Comp Always
        Pass Replace
    }
    ZTest Always
    // [...]
}
Pass // Pass 1: Render overlay
{
    Stencil
    {
        Ref 1
        Comp Equal
        Pass IncrWrap
    }
    ZTest Always
    // [...]
}
In the first pass (Pass 0) we simply set the stencil value of every covered pixel to 1. In the second pass, we only render where the stencil value equals 1, and increment it whenever we do, so any further fragments at that pixel fail the stencil test. This way each pixel can only get rendered once.
ℹ
Using the stencil buffer to achieve this effect is a neat little trick, but unfortunately it does have some limitations. If more control is needed, or another asset already uses the stencil buffer at this stage of rendering, it might be best to utilize a separate depth buffer. I will elaborate further on this in my next post.
Grouping
Left: Matching group IDs. Right: Different group IDs
Currently, in the second pass, we compare the group ID of a pixel to that of its neighbors. Up until now that ID has always been 0 (no overlay) or 1 (overlay). If multiple objects are rendered with the same group ID, they will share a single outline. If we want them separated, all we need to do is give them different group IDs. Let’s modify our first pass shader to write a group ID to the output texture, rather than just 1 or 0.
// Range [0, 255]
float _GroupID;

float FragWriteOverlayID(v2f i) : SV_Target
{
    // Map range [0, 255] to [0.0f, 1.0f]
    return _GroupID / 255.0f;
}
Notes
While there is much more to be said, this post is already a lot longer than I initially intended. It is also a decent point in time to provide a very basic implementation of the shader (before I add more complex features). I’ll cut it off here and post a “part 2” in the near future, where I’ll go into more advanced topics and features.
Downloads
Here’s a minimal implementation of the current state of this overlay effect. It highlights objects when hovering the mouse over them.
Determine which units are within the selection box
Highlight selected units
Drawing rectangles
There are many ways to draw rectangles in Unity. In this example, I will use GUI.DrawTexture, as it is an easy and straightforward way to achieve our goal. We can draw a simple colored Rect using GUI.DrawTexture by setting GUI.color to the desired color, and then passing a white 1×1 texture to GUI.DrawTexture. I wrote a short utility class to make this easier:
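A sketch of such a utility (the class name is illustrative; DrawScreenRect is the method referenced below):

using UnityEngine;

public static class DrawingUtils
{
    static Texture2D _whiteTexture;

    // Created once and cached, as recommended below
    public static Texture2D WhiteTexture
    {
        get
        {
            if (_whiteTexture == null)
            {
                _whiteTexture = new Texture2D(1, 1);
                _whiteTexture.SetPixel(0, 0, Color.white);
                _whiteTexture.Apply();
            }
            return _whiteTexture;
        }
    }

    public static void DrawScreenRect(Rect rect, Color color)
    {
        GUI.color = color;
        GUI.DrawTexture(rect, WhiteTexture);
        GUI.color = Color.white;
    }
}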
Keep in mind that GUI methods (and thus also our DrawScreenRect utility method) can only be called during OnGUI(), and make sure you create the white texture only once for performance reasons.
We can now draw screen rectangles from any component in OnGUI():
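For example (the values are arbitrary):

void OnGUI()
{
    // A translucent rectangle in the top-left area of the screen
    DrawingUtils.DrawScreenRect(new Rect(100, 100, 200, 120), new Color(0.8f, 0.8f, 0.95f, 0.25f));
}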
Screen space in Unity has its origin (0,0) at the bottom left of the screen. This is inconsistent with the Rect struct, which has its origin at the top left. Input.mousePosition gives us the position of the mouse in screen space, so we have to be careful when using mouse positions to create a Rect. Knowing this, it is fairly easy to create a Rect from 2 screen space (mouse) positions:
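A sketch of that conversion (the method name is illustrative):

public static Rect GetScreenRect(Vector3 screenPosition1, Vector3 screenPosition2)
{
    // Flip the Y axis: screen space is bottom-left origin, Rect is top-left origin
    screenPosition1.y = Screen.height - screenPosition1.y;
    screenPosition2.y = Screen.height - screenPosition2.y;
    // Order the corners so the Rect is valid regardless of drag direction
    Vector3 topLeft = Vector3.Min(screenPosition1, screenPosition2);
    Vector3 bottomRight = Vector3.Max(screenPosition1, screenPosition2);
    return Rect.MinMaxRect(topLeft.x, topLeft.y, bottomRight.x, bottomRight.y);
}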
In order to determine which objects are within the bounds of the selection box, we need to bring both the selectable objects and the selection box into the same space.
Your first idea might be to run these tests in world space, but I personally prefer doing it in post-projection (viewport) space, because it can be tricky to convert a selection box into world space if your camera uses a perspective projection. With a perspective projection, the world-space shape of the selection box is a frustum: you’d have to calculate the correct view-projection matrix, extract the frustum planes, and then test against all 6 planes.
Getting the viewport bounds for a selection box is much easier:
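A sketch (the method name is illustrative); each unit can then be tested with bounds.Contains(camera.WorldToViewportPoint(unitPosition)):

public static Bounds GetViewportBounds(Camera camera, Vector3 screenPosition1, Vector3 screenPosition2)
{
    Vector3 v1 = camera.ScreenToViewportPoint(screenPosition1);
    Vector3 v2 = camera.ScreenToViewportPoint(screenPosition2);
    Vector3 min = Vector3.Min(v1, v2);
    Vector3 max = Vector3.Max(v1, v2);
    // Cover the whole depth range so distance from the camera doesn't matter
    min.z = camera.nearClipPlane;
    max.z = camera.farClipPlane;

    var bounds = new Bounds();
    bounds.SetMinMax(min, max);
    return bounds;
}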
There are many different ways to highlight units. In RTS games it seems to be pretty popular to place a small circle below selected units, so that’s what we are going to do.
In order for the circles to work well with sloped terrain we are going to use projectors (rather than just drawing a circle sprite below selected units). Projectors can project materials onto other geometry, but they need a special type of shader in order to do so. The Unity 5 standard assets contain both a Projector/Light shader and a Projector/Multiply shader, but unfortunately neither of those is appropriate for what we want to do: add circles to the geometry below selected units. We’ll have to write our own projector shader.
It will take 2 parameters: a cookie texture (alpha mask) that defines which pixels are going to be affected, and a tint color, which will define the color of the affected pixels:
Properties
{
    _Color ("Tint Color", Color) = (1,1,1,1)
    _ShadowTex ("Cookie", 2D) = "gray" {}
}
Set up shader states for projection, but use additive blending:
ZWrite Off
ColorMask RGB
Blend SrcAlpha One // Additive blending
Offset -1, -1
Combine the alpha mask and the tint color to determine the final color:
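A sketch of the corresponding fragment program, following the structure of Unity’s built-in projector shaders (i.uvShadow is the projector-space UV produced in the vertex shader; treat the exact names as assumptions):

fixed4 frag(v2f i) : SV_Target
{
    // The cookie's alpha defines which pixels are affected
    fixed mask = tex2Dproj(_ShadowTex, UNITY_PROJ_COORD(i.uvShadow)).a;
    // The tint provides the color; the mask drives the additive blend factor
    return fixed4(_Color.rgb, _Color.a * mask);
}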
Using this shader, we can set up a projector like we initially intended. In the following example I use a simple circular alpha mask to project a circle below a unit. Note the import settings I used for the alpha mask. It needs to be a cookie texture with light type spotlight (this sets the wrap mode to clamp) and alpha from grayscale needs to be checked. While we are now capable of rendering our selection circles, there are still a few issues that we need to address.
Ignoring layers
By default projectors project onto everything. In our situation, we want projectors to ignore other units in order to avoid the behaviour seen in this image. This can be achieved using the Ignore Layers property of the projector. Personally, I like to have a ground layer which contains all of the terrain, and I simply ignore all other layers in the projector.
Attenuation
Projections can sometimes appear on objects outside of the projector’s frustum. In this example, a projection appears on the terrain above the unit. This happens because the terrain’s bounding box intersects the projector’s frustum (even though its geometry does not). Granted, this scenario is fairly unlikely in an RTS game, but it is easily solved by introducing an attenuation factor. This will also fade out the projection when a unit stands close to a cliff.
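One way to implement this mirrors the falloff texture used by Unity’s built-in projector shaders (the _FalloffTex property and i.uvFalloff coordinate are assumptions, not necessarily what the project uses):

// A 1D falloff texture sampled along the projector's depth fades the projection out
fixed falloff = tex2Dproj(_FalloffTex, UNITY_PROJ_COORD(i.uvFalloff)).a;
return fixed4(_Color.rgb, _Color.a * mask * falloff);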
I have created a Unity 5 project that implements everything discussed in this post. I also extended the unit selection component to preview unit selection and output all selected units. You can find the download link below.