Developing an Overlay Shader for Unity (Part 2)


In this part we address occlusion issues, and potential conflicts with other assets by decoupling the effect from the camera depth/stencil buffer. We’ll also make improvements to make it easier to highlight multiple objects at the same time.

Quick recap of what we’ve been doing so far:

  1. Acquire an 8 bit texture (without depth) to store the group ID.
  2. Bind GroupID Color + Camera Depth as render target.
  3. Render all relevant objects again, to write their group ID to the GroupID texture, while also testing against camera depth and setting stencil to 1. The group ID is provided as a shader variable.
  4. Bind Camera Color + Camera Depth as render target.
  5. Render all relevant objects, and use the stored group ID to determine outlines, while testing against camera depth and stencil. Fill color and Outline color are provided as shader variables.

Issues with this method:

  1. Another asset might rely on the camera stencil buffer, resulting in a potential conflict.
  2. During Step 3, we test against the camera depth buffer. This might fail if there are overlay objects occluding one another, because the camera depth buffer only has depth information on scene objects. In such a situation, far objects might render on top of near objects.
    Left: Incorrect ordering, Right: expected result.

Using a separate depth/stencil buffer:

  1. Acquire an 8 bit texture (with depth/stencil) to store the group ID.
  2. Bind GroupID Color + Depth as render target.
  3. Render all relevant objects to write their group ID to the GroupID texture with ZTest and ZWrite enabled, and set Stencil to 1.
  4. Bind Camera Color + GroupID Depth Stencil as render target.
  5. Render all relevant objects again, while testing against GroupID depth and stencil, and use their group ID to determine outlines. Optionally test against camera depth when a see-through effect is not desired. Fill color and Outline color are provided as shader variables. Since the camera depth buffer is no longer bound, we need to do our Z tests against the _CameraDepthTexture texture.
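The five steps above could be recorded into a command buffer roughly like this. This is a sketch: the material names, the texture name, and the DrawGameObject extension (introduced in part 1) are placeholders for the project's actual identifiers.

```csharp
int overlayIdTexId = Shader.PropertyToID("_OverlayIdTex");

// 1. Acquire an 8-bit texture WITH its own 24-bit depth / 8-bit stencil buffer.
//    -1/-1 = camera pixel width/height.
cb.GetTemporaryRT(overlayIdTexId, -1, -1, 24, FilterMode.Point, RenderTextureFormat.R8);

// 2. Bind GroupID color + its own depth/stencil as render target, then clear it.
cb.SetRenderTarget(overlayIdTexId);
cb.ClearRenderTarget(true, true, Color.clear);

// 3. Write group IDs; ZTest/ZWrite and "set stencil to 1" happen in the shader.
foreach (var selectable in OverlaySelectable.Instances)
    cb.DrawGameObject(selectable.gameObject, _groupIdMaterial);

// 4. Bind camera color + the GroupID depth/stencil buffer.
cb.SetRenderTarget(BuiltinRenderTextureType.CameraTarget, // color
                   overlayIdTexId);                       // depth/stencil

// 5. Resolve fill + outlines; scene depth is read from _CameraDepthTexture in the shader.
foreach (var selectable in OverlaySelectable.Instances)
    cb.DrawGameObject(selectable.gameObject, _overlayMaterial);

cb.ReleaseTemporaryRT(overlayIdTexId);
```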

Depth testing against “_CameraDepthTexture”

In order to test the depth of a fragment against a depth value from _CameraDepthTexture we need to bring both into the same space. This example shows how to do so in linear [0, 1] space.

Calculating linear depth for a fragment

Vertex to Fragment struct

Vertex shader

Fragment shader
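A minimal sketch of what these three pieces could look like in the built-in pipeline (struct layout and names are illustrative, and UnityCG.cginc is assumed to be included):

```hlsl
struct v2f
{
    float4 pos       : SV_POSITION;
    float4 screenPos : TEXCOORD0; // for sampling _CameraDepthTexture later
    float  eyeDepth  : TEXCOORD1; // view-space depth of the vertex
};

v2f vert(appdata_base v)
{
    v2f o;
    o.pos = UnityObjectToClipPos(v.vertex);
    o.screenPos = ComputeScreenPos(o.pos);
    // View-space z is negative in front of the camera; this helper handles the sign.
    COMPUTE_EYEDEPTH(o.eyeDepth);
    return o;
}

fixed4 frag(v2f i) : SV_Target
{
    // Normalize eye depth into linear [0, 1]; _ProjectionParams.z is the far plane.
    float vDepth = i.eyeDepth / _ProjectionParams.z;
    // ... vDepth is compared against the scene depth in the next step ...
    return 0;
}
```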

Calculating linear depth for a value from _CameraDepthTexture

Fragment shader
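The matching fragment-side sketch, using Unity's Linear01Depth helper to bring the sampled hardware depth into the same linear [0, 1] space:

```hlsl
sampler2D_float _CameraDepthTexture;

fixed4 frag(v2f i) : SV_Target
{
    float rawDepth = SAMPLE_DEPTH_TEXTURE_PROJ(_CameraDepthTexture,
                                               UNITY_PROJ_COORD(i.screenPos));
    // Convert the non-linear hardware depth value to linear [0, 1].
    float gDepth = Linear01Depth(rawDepth);
    // ...
    return 0;
}
```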

Performing the depth test

Fragment shader

Since the calculations for vDepth and gDepth differ slightly, we need to use a small Z bias to avoid Z-fighting induced by floating point imprecision.
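The test itself then boils down to a single clip. The bias value here is only a starting point; tune it for your scene:

```hlsl
#define Z_BIAS 0.001

// clip() discards the fragment when its argument is negative, i.e. when
// the fragment lies behind the scene geometry (plus a small tolerance).
clip((gDepth + Z_BIAS) - vDepth);
```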

Better multi-object support

We’ll extend OverlaySelectable to offer better access to all instances. We’ll also add the option to always highlight an object.

Keeping a list of all instances is a matter of personal preference. I like my component data and logic separate, but this is not required. You could also add the fill and outline colors directly to the OverlaySelectable component. I opted to keep a list of all group colors in OverlayRenderer instead.
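A sketch of what the extended component could look like. The static Instances list, the alwaysHighlight flag, and the groupId field are assumptions about the component's shape, not the exact implementation:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class OverlaySelectable : MonoBehaviour
{
    // All enabled instances, so the renderer can iterate them each frame.
    public static readonly List<OverlaySelectable> Instances = new List<OverlaySelectable>();

    // When set, the object is highlighted even without mouse-over.
    public bool alwaysHighlight;

    // Index into the list of group colors kept by OverlayRenderer.
    public int groupId;

    void OnEnable()  { Instances.Add(this); }
    void OnDisable() { Instances.Remove(this); }
}
```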


Content used

Developing an Overlay Shader for Unity (Part 1)



I recently wrote an overlay effect to highlight units through walls, using the following criteria:

  • Ability to see through walls
  • Support alpha blending
  • A thin, crisp outline
  • Grouping (multiple objects share the same outline if they touch)

I thought that this was a good opportunity to share not only how it works, but also my thought process behind developing the effect. Now, if all you want to do is overlay a color over an object in Unity, you could just render the object with an unlit shader. However, in order to produce more complex effects, such as outlines, or interaction with other highlighted objects, we need to get a little more creative.

Command Buffers

First we need to determine when to render our effect. Ideally it should be rendered either right before or right after other post processing effects, depending on whether we want other effects, such as “Bloom” to be applied over our effect.
We can do so using Command Buffers, which give us full control over when to render the effect.
A Command Buffer holds a list of rendering commands to be executed by Unity at a specific point in the render pipeline. For more information see Extending the Built-in Render Pipeline with Command Buffers.
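A minimal sketch of hooking a command buffer into the camera. CameraEvent.BeforeImageEffects runs after opaque and transparent rendering, but before post processing:

```csharp
using UnityEngine;
using UnityEngine.Rendering;

[RequireComponent(typeof(Camera))]
public class OverlayRenderer : MonoBehaviour
{
    CommandBuffer _commandBuffer;

    void OnEnable()
    {
        _commandBuffer = new CommandBuffer { name = "Overlay" };
        GetComponent<Camera>().AddCommandBuffer(CameraEvent.BeforeImageEffects, _commandBuffer);
    }

    void OnDisable()
    {
        GetComponent<Camera>().RemoveCommandBuffer(CameraEvent.BeforeImageEffects, _commandBuffer);
        _commandBuffer.Release();
    }

    void LateUpdate()
    {
        // Rendering commands are re-recorded here every frame.
        _commandBuffer.Clear();
    }
}
```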


Alternatively, if you are using the Post Processing v2 Package (aka PPv2), you can write a custom post processor:
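A rough sketch of such a custom PPv2 effect. The class and shader names are placeholders; only the PPv2 base classes and the BlitFullscreenTriangle helper are real package API:

```csharp
using System;
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

[Serializable]
[PostProcess(typeof(OverlayEffectRenderer), PostProcessEvent.AfterStack, "Custom/Overlay")]
public sealed class OverlayEffect : PostProcessEffectSettings
{
    public ColorParameter fillColor = new ColorParameter { value = Color.red };
}

public sealed class OverlayEffectRenderer : PostProcessEffectRenderer<OverlayEffect>
{
    public override void Render(PostProcessRenderContext context)
    {
        // PPv2 manages the command buffer for us via context.command.
        // (In practice, cache the Shader.Find result instead of calling it every frame.)
        var sheet = context.propertySheets.Get(Shader.Find("Hidden/Overlay"));
        sheet.properties.SetColor("_FillColor", settings.fillColor);
        context.command.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
    }
}
```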

This has the advantage of managing the command buffer for you, plus it provides many useful utility methods and a nice inspector for the shader variables. It does come with some caveats (primarily with TAA) which I’ll address later.

If you’re using the High Definition Render Pipeline (HDRP) rather than the Built-in Render Pipeline, you’ll need to use a custom pass instead of command buffers.

Shader Setup

I’ll keep the shader very bare bones for now. All it does is output the color red.
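The bare-bones shader could look like this (the shader name is a placeholder):

```shaderlab
Shader "Hidden/Overlay"
{
    SubShader
    {
        Pass
        {
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            float4 vert(float4 vertex : POSITION) : SV_POSITION
            {
                return UnityObjectToClipPos(vertex);
            }

            fixed4 frag() : SV_Target
            {
                return fixed4(1, 0, 0, 1); // solid red
            }
            ENDCG
        }
    }
}
```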

Drawing with Command Buffers

We need to instruct Unity to render a mesh that we want to be highlighted. We do so by adding a CommandBuffer.DrawMesh command to the command buffer.
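A single such command might look like this, assuming a meshFilter from the target object and an _overlayMaterial that uses our custom shader:

```csharp
_commandBuffer.DrawMesh(
    meshFilter.sharedMesh,                    // the mesh to draw
    meshFilter.transform.localToWorldMatrix,  // its current world transform
    _overlayMaterial);                        // our custom shader
```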

This instructs Unity to render a single mesh using the provided material (our custom shader).
However we most likely want Unity to render all meshes within a GameObject and its children, not just a single mesh.

Let’s write a small extension method to render all the meshes of a GameObject (and its children) in the correct position:
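A sketch of that extension method (class and method names are my own):

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public static class CommandBufferExtensions
{
    // Reused across calls to avoid per-frame garbage.
    static readonly List<MeshFilter> _meshFilters = new List<MeshFilter>();

    public static void DrawGameObject(this CommandBuffer commandBuffer,
                                      GameObject gameObject, Material material)
    {
        // Non-allocating overload: results are written into the provided list.
        gameObject.GetComponentsInChildren(_meshFilters);
        foreach (var meshFilter in _meshFilters)
        {
            commandBuffer.DrawMesh(meshFilter.sharedMesh,
                                   meshFilter.transform.localToWorldMatrix,
                                   material);
        }
        _meshFilters.Clear();
    }
}
```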

Note that this only works for non-static GameObjects!
Static batching might otherwise merge its “meshFilter.sharedMesh” with other objects.
I’m using the static list _meshFilters to store the result of GetComponentsInChildren, instead of using the version of GetComponentsInChildren that returns an array of components. This is to prevent garbage generation, as this method will be executed every game update.

We’re now at a point where we can run our first test.
To do so I added a GameObject field to the OverlayRenderer component, and linked it with the Unity safety hat™. (this is just for testing)


Neat. But in case you happen to use the PPv2 method and have Temporal Anti-Aliasing enabled,
you will notice that the overlay image is flickering/shaking:

This is caused by TAA, which jitters the camera’s projection matrix during normal rendering. It then uses a complex algorithm to compare the current samples with samples from previous frames (hence the name “temporal”) in order to produce an anti-aliased image.
The issue in our case is that PostProcessEvent.BeforeStack and PostProcessEvent.AfterStack are rendered after TAA. Thus our hat is rendered with the jittered projection matrix, but it is never handled by the TAA algorithm, and thus remains jittered.


If you do want to use TAA in your project (if you use PPv2 you probably do), there are ways to work around this.

  • You could use PostProcessEvent.BeforeTransparent to render it before TAA is applied, however this can cause weird interactions with transparent objects.
  • Pass a non-jittered projection matrix to the shader. This will stop the jittering, but will also prevent the overlay effect from being anti-aliased by TAA.
  • Use a custom command buffer to render during CameraEvent.BeforeImageEffects (see first example), which comes after transparent object rendering, but before TAA is applied.
    This method is compatible with TAA, and works regardless of whether PPv2 is being used or not. You’ll lose the nice utility functions from PPv2. But you can always look up how they’re implemented.
    I prefer this method: I can do what I want, when I want, and it doesn’t interfere with post processing.


Mouse Picking

Let’s use mouse picking to highlight the object at the mouse cursor, instead of selecting it in the inspector.
First, I will mark objects that can be highlighted using a component. You can also use tags or layers. The advantage of using a component is that we can attach extra information. (for now I’ll keep it empty)

These objects will also need a collider for Physics.Raycast to work.
Let’s add both to scene objects that we want to display a mouse over effect for, such as the Unity safety hat™.


Next, we add basic mouse picking.
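A sketch of the picking step inside the OverlayRenderer, assuming _camera and _overlayMaterial fields and the DrawGameObject extension from earlier:

```csharp
void LateUpdate()
{
    _commandBuffer.Clear();

    // Cast a ray from the camera through the mouse cursor.
    var ray = _camera.ScreenPointToRay(Input.mousePosition);
    if (Physics.Raycast(ray, out RaycastHit hit))
    {
        // Only highlight objects explicitly marked as selectable.
        var selectable = hit.collider.GetComponentInParent<OverlaySelectable>();
        if (selectable != null)
            _commandBuffer.DrawGameObject(selectable.gameObject, _overlayMaterial);
    }
}
```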

Note the use of LateUpdate instead of Update to ensure that the position of the object is captured after any movement; otherwise the effect might lag one frame behind the actual object.


There are many different methods for creating outlines. I’ll showcase my favorite method, which produces thin, crisp, pixel perfect, camera independent outlines. This method does not rely on vertex displacement (Inverted Hull).
Instead, it uses a fragment shader for per-pixel edge detection.


In order to determine whether a pixel is on the edge of an object, we need to access its neighbors.
Let’s zoom in really close (5×5 pixels) on the edge of a highlighted object. (top left image)
The pixel to be tested is marked in white. We’ll use a 3×3 box pattern to check if any of the neighbor pixels (a total of 8 texture samples) do not belong to the current object.

In this screenshot I’ve changed the overlay color to blue (it was red in earlier examples).

Top left: original image, with the pixel to be tested marked as white.
Bottom left: Neighbor pixels belonging to the same object marked as green, and other neighbor pixels marked as red.


If any of the tested pixels does not belong to the current object, the pixel is an edge/outline pixel.
We can only test for this if all pixels are marked ahead of time.
This means that we need to render the effect in 2 passes.


Pass 1: Marking every pixel belonging to the highlighted object

We will do so by using a new RenderTexture to store the needed information.
Since we only store an id (initially just a boolean) we don’t need a full RGBA texture. An 8 bit single channel texture is perfectly sufficient as it provides 256 distinct values.
We can request a Temporary Render Texture from Unity. This is a neat feature designed to help with situations exactly like this, and means we don’t have to worry about things like resizing the render texture when the resolution changes.
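Requesting and binding the temporary texture could look like this (the texture name is a placeholder):

```csharp
int overlayIdTexId = Shader.PropertyToID("_OverlayIdTex");

// -1/-1 = camera pixel width/height; 0 = no depth buffer;
// R8 = a single 8-bit channel, enough for 256 distinct IDs.
_commandBuffer.GetTemporaryRT(overlayIdTexId, -1, -1, 0,
                              FilterMode.Point, RenderTextureFormat.R8);

// Bind the ID texture as color target, reusing the camera's depth buffer,
// and clear the color channel before pass 1.
_commandBuffer.SetRenderTarget(overlayIdTexId, BuiltinRenderTextureType.CameraTarget);
_commandBuffer.ClearRenderTarget(false, true, Color.clear);
```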

The shader code for this pass will be almost identical to what we’ve done before. Only the fragment shader has changed.

overlayIDTexture pixels are now 1.0 if they belong to the object, and 0.0 if they do not.


Pass 2: Using the collected information to highlight edge pixels

For the second pass we use the information collected during the first pass to distinguish between outline and fill.
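A sketch of the pass 2 fragment shader with the 8-tap box pattern. The texture and property names are illustrative:

```hlsl
sampler2D _OverlayIdTex;
float4 _OverlayIdTex_TexelSize; // xy = size of one pixel in UV space
fixed4 _OutlineColor;
fixed4 _FillColor;

fixed4 frag(v2f i) : SV_Target
{
    float2 uv = i.screenPos.xy / i.screenPos.w;
    float self = tex2D(_OverlayIdTex, uv).r;

    // Visit the 8 neighbors in a 3x3 box around this pixel.
    bool isEdge = false;
    for (int y = -1; y <= 1; y++)
    for (int x = -1; x <= 1; x++)
    {
        if (x == 0 && y == 0) continue;
        float neighbor = tex2D(_OverlayIdTex,
                               uv + float2(x, y) * _OverlayIdTex_TexelSize.xy).r;
        // Any neighbor with a different ID makes this an outline pixel.
        isEdge = isEdge || (neighbor != self);
    }
    return isEdge ? _OutlineColor : _FillColor;
}
```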

_OutlineColor=(255,255,255,255), _FillColor=(0,157,255,101)


You can, of course, use other sampling patterns. My favorite pattern also uses 8 taps, but they’re arranged in a diamond shape rather than the box shape in the previous example. The outline will be slightly thicker, but in my opinion, provides a more pleasant result.

Left: 8 Tap box pattern. Right: 8 Tap diamond pattern.

Larger patterns will result in a thicker outline. While you might be inclined to widen the outline like this, please keep in mind how many texture samples you’re taking. A 3×3 pattern requires 8 taps per pixel, which is perfectly fine. A 5×5 pattern requires 24 taps per pixel. A 7×7 pattern needs 48 taps. This quickly grows out of control. If you need a properly thick outline, I recommend looking at the Inverted Hull method instead.

Skinned Meshes

Currently, our overlay renderer is not compatible with skinned meshes.

This won’t work because SkinnedMeshRenderer.sharedMesh will always return the bind pose of the mesh, resulting in the following behaviour:

Do not use SkinnedMeshRenderer.BakeMesh to circumvent this issue.
Yes, you can use it to acquire a new mesh every update, but constantly removing and introducing new mesh colliders is a huge strain on the physics engine. Don’t do this.

For rendering, use CommandBuffer.DrawRenderer instead:
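A minimal sketch; DrawRenderer lets Unity handle the skinning and works for any Renderer type. (As before, a cached list would avoid the allocation; it is kept simple here.)

```csharp
foreach (var renderer in gameObject.GetComponentsInChildren<Renderer>())
{
    _commandBuffer.DrawRenderer(renderer, _overlayMaterial);
}
```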


As for the collider, it is best to rely on multiple primitive colliders (box, sphere, capsule) instead.
For characters, you can often get away with using a single capsule collider. If you need more precision, you can attach additional colliders to individual bones. For example, you could approximate an arm using 2 capsule colliders.


Depth Testing

By modifying the depth test in the shader we can achieve multiple useful effects, such as looking through walls:
Left: Normal Depth Testing

Center: Ignore the depth buffer.

This allows us to “look through” objects in front of the character. It does however also look through the character model itself, resulting in a layered look. This occurs because some pixels are handled multiple times (for example, the torso behind the right arm. It’s especially obvious with the hair). A “clean” non-layered look can be achieved by making sure every pixel is only handled once.


Right: Same as center, but mark pixels using the stencil buffer, so they are only rendered once.

In the first pass (Pass 0) we simply set the stencil mask for every pixel to 1.
In the second pass, we only render when the stencil mask equals 1. We also increment it by 1 if we do so. This way each pixel can only get rendered once.
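The two stencil blocks described above could look like this in ShaderLab:

```shaderlab
// Pass 0: mark every covered pixel with stencil value 1.
Stencil
{
    Ref 1
    Comp Always
    Pass Replace
}

// Pass 1: only render where the stencil equals 1, then increment it,
// so each pixel can only be rendered once.
Stencil
{
    Ref 1
    Comp Equal
    Pass IncrSat
}
```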

Using the stencil buffer to achieve this effect is a neat little trick, but unfortunately it does have some limitations. If more control is needed, or another asset already uses the stencil buffer at this stage of rendering, it might be best to utilize a separate depth buffer. I will elaborate further on this in my next post.


Left: Matching group IDs. Right: Different group IDs


Currently, in the second pass, we compare the group ID of a pixel to that of its neighbors. Up until now that ID has always been 0 (no overlay) or 1 (overlay). If multiple objects are rendered with the same group ID, they will share a single outline. If we want them separated, all we need to do is give them different group IDs. Let’s modify our first pass shader to write a group ID to the output texture, rather than just 1 or 0.
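The modified first pass fragment shader might look like this (the _GroupID property name is illustrative; it is set from C# per draw call):

```hlsl
float _GroupID;

fixed4 frag() : SV_Target
{
    // The 8-bit render target stores values in [0, 1],
    // so 255 distinct non-zero IDs fit.
    return _GroupID / 255.0;
}
```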


While there is much more to be said, this post is already a lot longer than I initially intended. It is also a decent point in time to provide a very basic implementation of the shader (before I add more complex features). I’ll cut it off here and post a “part 2” in the near future, where I’ll go into more advanced topics and features.


Here’s a minimal implementation of the current state of this overlay effect.
It highlights objects when hovering the mouse over them.

Content used

RTS Style Unit Selection in Unity 5

  • Draw a selection box using the mouse
  • Determine which units are within the selection box
  • Highlight selected units

Drawing rectangles

There are many ways to draw rectangles in Unity. In this example, I will use GUI.DrawTexture, as it is an easy and straightforward way to achieve our goal. We can draw a simple colored Rect using GUI.DrawTexture by setting GUI.color to the desired color, and then passing a white 1×1 texture to GUI.DrawTexture. I wrote a short utility class to make this easier:
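A sketch of such a utility class (the class name is my own); the white texture is created once and cached, and GUI.color is restored after drawing:

```csharp
using UnityEngine;

public static class RectDrawing
{
    static Texture2D _whiteTexture;

    // Lazily created, cached 1x1 white texture.
    public static Texture2D WhiteTexture
    {
        get
        {
            if (_whiteTexture == null)
            {
                _whiteTexture = new Texture2D(1, 1);
                _whiteTexture.SetPixel(0, 0, Color.white);
                _whiteTexture.Apply();
            }
            return _whiteTexture;
        }
    }

    public static void DrawScreenRect(Rect rect, Color color)
    {
        GUI.color = color;
        GUI.DrawTexture(rect, WhiteTexture);
        GUI.color = Color.white; // restore
    }
}
```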

Keep in mind that GUI methods (and thus also our DrawScreenRect utility method) can only be called during OnGUI(), and make sure you create the white texture only once for performance reasons.

We can now draw screen rectangles from any component in OnGUI():

With the following utility method we can also draw borders for a rect:
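The border can be drawn as four thin rects (top, left, right, bottom), built on the DrawScreenRect utility method:

```csharp
public static void DrawScreenRectBorder(Rect rect, float thickness, Color color)
{
    DrawScreenRect(new Rect(rect.xMin, rect.yMin, rect.width, thickness), color);              // top
    DrawScreenRect(new Rect(rect.xMin, rect.yMin, thickness, rect.height), color);             // left
    DrawScreenRect(new Rect(rect.xMax - thickness, rect.yMin, thickness, rect.height), color); // right
    DrawScreenRect(new Rect(rect.xMin, rect.yMax - thickness, rect.width, thickness), color);  // bottom
}
```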


Using the mouse to draw a selection box

Screen space in Unity has its origin (0,0) at the bottom left of the screen. This is inconsistent with the Rect struct, which has its origin at the top left. Input.mousePosition gives us the position of the mouse in screen space, so we have to be careful when using mouse positions to create a Rect. Knowing this, it is fairly easy to create a Rect from 2 screen space (mouse) positions:
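For example, the conversion can be done by flipping the y axis and taking the min/max of both corners (so the rect is valid regardless of drag direction):

```csharp
public static Rect GetScreenRect(Vector3 screenPos1, Vector3 screenPos2)
{
    // Flip y so (0, 0) is at the top left, matching Rect's convention.
    screenPos1.y = Screen.height - screenPos1.y;
    screenPos2.y = Screen.height - screenPos2.y;

    // The corners may be given in any order, so take the min and max.
    var topLeft = Vector3.Min(screenPos1, screenPos2);
    var bottomRight = Vector3.Max(screenPos1, screenPos2);
    return Rect.MinMaxRect(topLeft.x, topLeft.y, bottomRight.x, bottomRight.y);
}
```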

We can then write a simple script that allows us to draw a selection box with the mouse:

Selecting Units

In order to determine which objects are within the bounds of the selection box, we need to bring both the selectable objects and the selection box into the same space.

Your first idea might be to run these tests in world space, but I personally prefer doing it in post-projection (viewport) space, because it can be a bit tricky to convert a selection box into world space if your camera uses a perspective projection. With a perspective projection, the world-space shape of the selection box is a frustum. You’d have to calculate the correct view-projection matrix, extract the frustum planes, and then test against all 6 planes.

Getting the viewport bounds for a selection box is much easier:

And testing if a game object is within the bounds is trivial:
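Both pieces could be sketched like this; the z range of the bounds is stretched to cover the camera's full depth range so objects at any distance are caught:

```csharp
public static Bounds GetViewportBounds(Camera camera, Vector3 screenPos1, Vector3 screenPos2)
{
    var v1 = camera.ScreenToViewportPoint(screenPos1);
    var v2 = camera.ScreenToViewportPoint(screenPos2);
    var min = Vector3.Min(v1, v2);
    var max = Vector3.Max(v1, v2);

    // Cover the full depth range between the clip planes.
    min.z = camera.nearClipPlane;
    max.z = camera.farClipPlane;

    var bounds = new Bounds();
    bounds.SetMinMax(min, max);
    return bounds;
}

public static bool IsWithinBounds(Camera camera, Bounds viewportBounds, Vector3 worldPosition)
{
    // A single containment check in viewport space.
    return viewportBounds.Contains(camera.WorldToViewportPoint(worldPosition));
}
```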

Highlighting selected units using projectors

There are many different ways to highlight units. In RTS games it seems to be pretty popular to place a small circle below selected units, so that’s what we are going to do.

In order for the circles to work well with sloped terrain we are going to use projectors (rather than just drawing a circle sprite below selected units). Projectors can project materials onto other geometry, but they need a special type of shader in order to do so. The Unity 5 standard assets contain both a Projector/Light shader and a Projector/Multiply shader, but unfortunately neither of those shaders is appropriate for what we want to do: add circles to geometry below selected units. We’ll have to write our own projector shader.

It will take 2 parameters: a cookie texture (alpha mask) that defines which pixels are going to be affected, and a tint color, which will define the color of the affected pixels:

Set up shader states for projection, but use additive blending:

Combine the alpha mask and the tint color to determine the final color:
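Putting the three pieces together, the projector shader could look roughly like this (a sketch; the shader name is my own, and the falloff handling found in the standard projector shaders is omitted for brevity):

```shaderlab
Shader "Custom/ProjectorAdditiveTint"
{
    Properties
    {
        _ShadowTex ("Cookie", 2D) = "" {}
        _Color ("Tint", Color) = (1, 1, 1, 1)
    }
    SubShader
    {
        Pass
        {
            // Typical projector states, but with additive blending.
            ZWrite Off
            ColorMask RGB
            Blend One One
            Offset -1, -1

            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            sampler2D _ShadowTex;
            fixed4 _Color;
            float4x4 unity_Projector; // set by Unity for projector materials

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 uv  : TEXCOORD0;
            };

            v2f vert(float4 vertex : POSITION)
            {
                v2f o;
                o.pos = UnityObjectToClipPos(vertex);
                // Map the receiver's vertices into the cookie's texture space.
                o.uv = mul(unity_Projector, vertex);
                return o;
            }

            fixed4 frag(v2f i) : SV_Target
            {
                // Alpha mask times tint color = final additive contribution.
                fixed alpha = tex2Dproj(_ShadowTex, UNITY_PROJ_COORD(i.uv)).a;
                return _Color * alpha;
            }
            ENDCG
        }
    }
}
```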

Using this shader, we can set up a projector like we initially intended. In the following example I use a simple circular alpha mask to project a circle below a unit. Note the import settings I used for the alpha mask. It needs to be a cookie texture with light type spotlight (this sets the wrap mode to clamp) and alpha from grayscale needs to be checked.
Projector Settings
While we are now capable of rendering our selection circles, there are still a few issues that we need to address.



Ignoring layers

By default projectors project onto everything. In our situation, we want projectors to ignore other units in order to avoid the behaviour seen in this image. This can be achieved using the Ignore Layers property of the projector. Personally, I like to have a ground layer which contains all of the terrain, and I simply ignore all other layers in the projector.




Projections can sometimes appear on objects outside of the projector’s frustum. In this example, a projection appears on the terrain above the unit. This happens because the terrain’s bounding box intersects the projector’s frustum (even though its geometry does not). Granted, this scenario is fairly unlikely in an RTS game, but it is easily solved by introducing an attenuation factor. This will also fade out the projection when a unit stands close to a cliff.

Example Project

I have created a Unity 5 project that implements everything discussed in this post. I also extended the unit selection component to preview unit selection and output all selected units. You can find the download link below.

Download Links

Precomputed Realtime Global Illumination in Unity 5

With its most recent update, Unity has received a massive overhaul to its lighting and rendering system. Unity 5 features both baked and real-time GI, using Geomerics Enlighten.

Baked GI

Baked GI uses a path tracer to simulate indirect lighting ahead of time. The information is then stored in lightmaps, which are used at run-time to light up the scene. The result is realistic and high quality lighting. Performance-wise, it is fairly inexpensive at run-time (but takes a long time to bake).

Since lighting is baked ahead of time, this requires both the meshes and the light sources to be static. Therefore baked GI will not work for scenes relying heavily on dynamic lighting, or scenes featuring procedural geometry and/or dynamic level design.

Precomputed Realtime GI

Precomputed Realtime GI tackles one of these issues. It is able to simulate indirect lighting for dynamic light sources, by precomputing how light can bounce throughout the scene.

To test it out I loaded up the Crytek Sponza, and added a dynamic directional light.

Here are the settings that I used:

Realtime GI Settings

Note that I have disabled ambient lighting completely. There is only 1 directional light source, and there are no reflection probes. All visible indirect lighting is a result of the precomputed realtime GI. There are some minor lighting artifacts, but overall I am rather pleased with both the quality and the performance. I’m probably going to make extensive use of this feature in future projects that use dynamic lighting.

The quality of the indirect light is directly related to the “Realtime Resolution” parameter, and so is the time required for baking. In a production environment you probably want to set the realtime resolution higher than I have. During development you can lower the resolution to improve bake times.

There are a few caveats with precomputed realtime GI though:

  • Realtime indirect bounce light shadowing is only supported for directional lights. You will get a warning in Unity if you try to do this with another type of light source. In the following example you can see that indirect bounce light from the point light spheres bleeds through the curtains.
  • If you make changes to the lighting within your scene at runtime, you may need to update some of your reflection probes to reflect these changes (which can be very expensive). This problem is not directly related to realtime GI, but rather to dynamic lighting in general. Unity offers some ways to deal with this through scripting, and I’ll probably make another blog post about using reflection probes in dynamic environments.

Combined GI

You can use both Baked GI and Precomputed Realtime GI simultaneously. Static geometry will use baked GI for static lights and realtime GI for dynamic lights. In most situations you’ll probably achieve the best result by using both.