
Introduction
I recently wrote an overlay effect to highlight units through walls, using the following criteria:
- Ability to see through walls
- Support alpha blending
- A thin, crisp outline
- Grouping (multiple objects share the same outline if they touch)
I thought that this was a good opportunity to share not only how it works, but also my thought process behind developing the effect. Now, if all you want to do is overlay a color over an object in Unity, you could just render the object with an unlit shader. However, in order to produce more complex effects, such as outlines, or interaction with other highlighted objects, we need to get a little more creative.
Command Buffers
First we need to determine when to render our effect. Ideally it should be rendered either right before or right after other post processing effects, depending on whether we want other effects, such as “Bloom”, to be applied over our effect.
We can do so using Command Buffers, which give us full control over when to render the effect.
A Command Buffer holds a list of rendering commands to be executed by Unity at a specific point in the render pipeline. For more information, see Extending the Built-in Render Pipeline with Command Buffers.
```csharp
using UnityEngine;
using UnityEngine.Rendering;

// Add to any ONE active GameObject
public class OverlayRenderer : MonoBehaviour
{
    // Settings
    private Shader _shader;
    private Material _material;
    private Camera _camera;
    private CommandBuffer _commandBuffer;

    public CameraEvent _cameraEvent = CameraEvent.BeforeImageEffects;
    public GameObject _objectToHighlight;

    public void OnEnable()
    {
        if (_camera == null)
            _camera = Camera.main;

        _shader = Shader.Find("Hidden/OverlayShader");
        _material = new Material(_shader);

        _commandBuffer = new CommandBuffer();
        _camera.AddCommandBuffer(_cameraEvent, _commandBuffer);
    }

    public void OnDisable()
    {
        if (_commandBuffer != null && _camera != null)
        {
            _camera.RemoveCommandBuffer(_cameraEvent, _commandBuffer);
            _commandBuffer.Release();
            _commandBuffer = null;
        }
    }

    void LateUpdate()
    {
        if (_commandBuffer != null)
        {
            // Command buffers are not cleared automatically
            _commandBuffer.Clear();

            // Overlay effect code goes here
        }
    }
}
```
Alternatively, if you are using the Post Processing v2 Package (aka PPv2), you can write a custom post processor:
```csharp
using System;
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

[Serializable]
[PostProcess(typeof(OverlayEffectRenderer), PostProcessEvent.BeforeStack, "Overlay Effect", false)]
public sealed class OverlayEffect : PostProcessEffectSettings
{
    // Effect settings go here
    public ColorParameter _color = new ColorParameter() { value = new Color(0, 1, 0, 1) };
}

public sealed class OverlayEffectRenderer : PostProcessEffectRenderer<OverlayEffect>
{
    private Shader _shader;
    private Material _material;

    public override void Init()
    {
        base.Init();
        _shader = Shader.Find("Hidden/OverlayShader");
        _material = new Material(_shader);
    }

    public override void Render(PostProcessRenderContext context)
    {
        var cmd = context.command;
        var sheet = context.propertySheets.Get(_shader);

        // Copy the existing image. We will be drawing on top of it.
        cmd.BlitFullscreenTriangle(context.source, context.destination);

        // Overlay effect code goes here
    }
}
```
This has the advantage of managing the command buffer for you, plus it provides many useful utility methods and a nice inspector for the shader variables. It does come with some caveats (primarily with TAA) which I’ll address later.
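To illustrate those utilities, here is a minimal sketch (not part of the original listing) of how a setting can reach the shader through a PPv2 property sheet inside `OverlayEffectRenderer.Render`; the `_Color` shader property name is an assumption, not something the overlay shader below declares.

```csharp
// Sketch only: feed the _color setting to the shader via the property sheet.
// "_Color" is a hypothetical shader property name used for illustration.
sheet.properties.SetColor("_Color", settings._color);

// Blit with the sheet so the shader pass receives the property block.
cmd.BlitFullscreenTriangle(context.source, context.destination, sheet, 0);
```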
Shader Setup
I’ll keep the shader very bare bones for now. All it does is output the color red.
Shader "OverlayShader" { HLSLINCLUDE #include "UnityCG.cginc" // This provides access to the vertices of the mesh being rendered struct appdata { float4 vertex : POSITION; float2 uv : TEXCOORD0; }; struct v2f { float4 vertex : SV_POSITION; }; // Vertex shader v2f Vert(appdata v) { v2f o; o.vertex = UnityObjectToClipPos(v.vertex); return o; } // Fragment shader (also called Pixel shader) float4 Frag(v2f i) : SV_Target { // For now, just output the color red float4 col = float4(1, 0, 0, 1); return col; } ENDHLSL SubShader { Pass { ZTest LEqual // Do not render "behind" existing pixels ZWrite Off // Do not write to the depth buffer Cull Back // Do not render triangles pointing away from the camera Blend SrcAlpha OneMinusSrcAlpha // Enable alpha blending // Run a shader program with the specified vertex and fragment shaders HLSLPROGRAM #pragma vertex Vert #pragma fragment Frag ENDHLSL } } } |
Drawing with Command Buffers
We need to instruct Unity to render a mesh that we want highlighted. We do so by adding a CommandBuffer.DrawMesh command to the command buffer.
```csharp
CommandBuffer.DrawMesh(Mesh mesh, Matrix4x4 matrix, Material material, int submeshIndex, int shaderPass)
```
This instructs Unity to render a single mesh using the provided material (our custom shader).
However we most likely want Unity to render all meshes within a GameObject and its children, not just a single mesh.
Let’s write a small extension method to render all the meshes of a GameObject (and its children) in the correct position:
```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Rendering;

public static class CommandBufferExtensions
{
    private static List<MeshFilter> _meshFilters = new List<MeshFilter>();

    public static void DrawAllMeshes(this CommandBuffer cmd, GameObject gameObject, Material material, int pass)
    {
        _meshFilters.Clear();
        gameObject.GetComponentsInChildren(_meshFilters);

        foreach (var meshFilter in _meshFilters)
        {
            // Static objects may use static batching, preventing us from accessing their default mesh
            if (!meshFilter.gameObject.isStatic)
            {
                var mesh = meshFilter.sharedMesh;

                // Render all submeshes
                for (int i = 0; i < mesh.subMeshCount; i++)
                    cmd.DrawMesh(mesh, meshFilter.transform.localToWorldMatrix, material, i, pass);
            }
        }
    }
}
```
Static batching might otherwise have merged the object's “meshFilter.sharedMesh” with the meshes of other static objects.
We’re now at a point where we can run our first test.
To do so, I added a GameObject field to the OverlayRenderer component and linked it to the Unity safety hat™. (This is just for testing.)
```csharp
public class OverlayRenderer : MonoBehaviour
{
    //[...]
    public GameObject _objectToHighlight;

    void LateUpdate()
    {
        //[...]
        _commandBuffer.Clear();
        _commandBuffer.DrawAllMeshes(_objectToHighlight, _material, 0);
    }
    //[...]
}
```

Neat. But if you happen to use the PPv2 method and have Temporal Anti-Aliasing (TAA) enabled, you will notice that the overlay image is flickering/shaking:
This is caused by TAA, which jitters the camera’s projection matrix during normal rendering. It then uses a complex algorithm to compare the current samples with samples from previous frames (hence the name “temporal”) in order to produce an anti-aliased image.
The issue in our case is that PostProcessEvent.BeforeStack and PostProcessEvent.AfterStack are rendered after TAA. Thus our hat is rendered with the jittered projection matrix, but it is never handled by the TAA algorithm, and thus remains jittered.
If you do want to use TAA in your project (if you use PPv2 you probably do), there are ways to work around this.
- You could use PostProcessEvent.BeforeTransparent to render it before TAA is applied; however, this can cause weird interactions with transparent objects.
```csharp
[PostProcess(typeof(OverlayEffectRenderer), PostProcessEvent.BeforeTransparent, "Overlay Effect", false)]
```
- Pass a non-jittered projection matrix to the shader. This will stop the jittering, but it will also prevent the overlay effect from being anti-aliased by TAA.
```csharp
cmd.SetGlobalMatrix(ShaderIDs._NonJitteredProjection, GL.GetGPUProjectionMatrix(context.camera.nonJitteredProjectionMatrix, true));
```

```hlsl
float4x4 _NonJitteredProjection;

float4 ComputeClipPos(float4 pos)
{
    float4x4 vp = mul(_NonJitteredProjection, UNITY_MATRIX_V);
    return mul(vp, mul(UNITY_MATRIX_M, pos));
}
```
- Use a custom command buffer to render during CameraEvent.BeforeImageEffects (see first example), which comes after transparent object rendering, but before TAA is applied.
This method is compatible with TAA and works regardless of whether PPv2 is being used or not. You'll lose the nice utility functions from PPv2, but you can always look up how they're implemented (a rough stand-in is sketched after this list).
I prefer this method: I can do what I want, when I want, and it doesn't interfere with post processing.

```csharp
public CameraEvent _cameraEvent = CameraEvent.BeforeImageEffects;
```
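If you go the plain command buffer route, the PPv2 blit helpers can be approximated with the built-in CommandBuffer API. Below is a rough sketch, not from the original post, assuming `_material` holds a shader that can be used for a full-screen blit and `_TempOverlaySource` is just an arbitrary temporary texture name.

```csharp
// Rough stand-in for PPv2's BlitFullscreenTriangle, using only the built-in CommandBuffer API.
// "_TempOverlaySource" is a hypothetical temporary texture name.
int tempID = Shader.PropertyToID("_TempOverlaySource");
_commandBuffer.GetTemporaryRT(tempID, -1, -1); // -1 = camera pixel width/height
_commandBuffer.Blit(BuiltinRenderTextureType.CameraTarget, tempID);
_commandBuffer.Blit(tempID, BuiltinRenderTextureType.CameraTarget, _material, 0);
_commandBuffer.ReleaseTemporaryRT(tempID);
```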
Mouse Picking
Let’s use mouse picking to highlight the object at the mouse cursor, instead of selecting it in the inspector.
First, I will mark objects that can be highlighted using a component. You can also use tags or layers. The advantage of using a component is that we can attach extra information. (for now I’ll keep it empty)
```csharp
public class OverlaySelectable : MonoBehaviour {}
```
These objects will also need a collider for Physics.Raycast to work.
Let's add both to the scene objects that we want to display a mouse-over effect for, such as the Unity safety hat™.
Next, we add basic mouse picking.
```csharp
RaycastHit[] raycastHits = new RaycastHit[16];

bool GetGameObjectAtMousePointer(out GameObject gameObject)
{
    gameObject = null;
    float distanceToObject = float.MaxValue;

    // Get all colliders at the mouse pointer
    // Use RaycastNonAlloc instead of RaycastAll to avoid garbage, as this test runs every update
    var hitCount = Physics.RaycastNonAlloc(_camera.ScreenPointToRay(Input.mousePosition), raycastHits);
    for (int i = 0; i < hitCount; i++)
    {
        var collider = raycastHits[i].collider;
        var distance = raycastHits[i].distance;

        // Check if this object is marked as OverlaySelectable, and if it is closer than previous hits
        // Use GetComponentInParent to allow the colliders to be added to child objects
        var selectable = collider.GetComponentInParent<OverlaySelectable>();
        if (selectable != null && distance < distanceToObject)
        {
            gameObject = selectable.gameObject;
            distanceToObject = distance;
        }
    }

    return gameObject != null;
}

void LateUpdate()
{
    if (_commandBuffer == null)
        return;

    _commandBuffer.Clear();

    if (GetGameObjectAtMousePointer(out GameObject gameObject))
    {
        _commandBuffer.DrawAllMeshes(gameObject, _material, 0);
    }
}
```
Outlines
There are many different methods for creating outlines. I'll showcase my favorite method, which produces thin, crisp, pixel-perfect, camera-independent outlines. This method does not rely on vertex displacement (Inverted Hull).
Instead, it uses a fragment shader for per-pixel edge detection.
In order to determine whether a pixel is on the edge of an object, we need to access its neighbors.
Let’s zoom in really close (5×5 pixels) on the edge of a highlighted object. (top left image)
The pixel to be tested is marked in white. We’ll use a 3×3 box pattern to check if any of the neighbor pixels (a total of 8 texture samples) do not belong to the current object.
Top left: original image, with the pixel to be tested marked as white.
Bottom left: Neighbor pixels belonging to the same object marked as green, and other neighbor pixels marked as red.
If any of the tested neighbors does not belong to the current object, the pixel is an edge/outline pixel.
We can only perform this test if all pixels have been marked ahead of time.
This means that we need to render the effect in 2 passes.
Pass 1: Marking every pixel belonging to the highlighted object
We will do so by using a new RenderTexture to store the needed information.
Since we only store an id (initially just a boolean) we don’t need a full RGBA texture. An 8 bit single channel texture is perfectly sufficient as it provides 256 distinct values.
We can request a Temporary Render Texture from Unity. This is a neat feature designed to help with situations exactly like this, and means we don’t have to worry about things like resizing the render texture when the resolution changes.
```csharp
void LateUpdate()
{
    if (_commandBuffer == null)
        return;

    _commandBuffer.Clear();

    if (GetGameObjectAtMousePointer(out GameObject gameObject))
    {
        // Request an 8 bit single channel texture without depth buffer.
        RenderTexture overlayIDTexture = RenderTexture.GetTemporary(_camera.pixelWidth, _camera.pixelHeight, 0, RenderTextureFormat.R8);

        // Bind the temporary render texture, but keep using the camera's depth buffer
        _commandBuffer.SetRenderTarget(overlayIDTexture, BuiltinRenderTextureType.Depth);

        // Always clear temporary RenderTextures before use, their content is random.
        _commandBuffer.ClearRenderTarget(false, true, Color.clear, 1.0f);

        // [Pass 1: mark overlay pixels (shader pass 0)]
        _commandBuffer.DrawAllMeshes(gameObject, _material, 0);

        // [Pass 2 code will go here]
        // [...]

        // Don't forget to release the temporary render texture
        RenderTexture.ReleaseTemporary(overlayIDTexture);
    }
}
```
The shader code for this pass will be almost identical to what we’ve done before. Only the fragment shader has changed.
```hlsl
Pass // Pass 0, writes to overlayIDTexture
{
    ZTest LEqual
    ZWrite Off
    Cull Back
    Blend Off // <-- Turn off alpha blending !!

    HLSLPROGRAM
    #pragma vertex Vert
    #pragma fragment FragWriteOverlayID

    // Fragment shader. Writes to overlayIDTexture. It only has one channel, so we return a float instead of a float4
    float FragWriteOverlayID(v2f i) : SV_Target
    {
        return 1.0;
    }
    ENDHLSL
}
```
overlayIDTexture pixels are now 1.0 if they belong to the object, and 0.0 if they do not.
Pass 2: Using the collected information to highlight edge pixels
For the second pass we use the information collected during the first pass to distinguish between outline and fill.
```csharp
void LateUpdate()
{
    if (_commandBuffer == null)
        return;

    _commandBuffer.Clear();

    if (GetGameObjectAtMousePointer(out GameObject gameObject))
    {
        // Request an 8 bit single channel texture without depth buffer.
        RenderTexture overlayIDTexture = RenderTexture.GetTemporary(_camera.pixelWidth, _camera.pixelHeight, 0, RenderTextureFormat.R8);

        // Bind the temporary render texture, but keep using the camera's depth buffer
        _commandBuffer.SetRenderTarget(overlayIDTexture, BuiltinRenderTextureType.Depth);

        // Always clear temporary RenderTextures before use, their content is random.
        _commandBuffer.ClearRenderTarget(false, true, Color.clear, 1.0f);

        // [Pass 1: mark overlay pixels (shader pass 0)]
        _commandBuffer.DrawAllMeshes(gameObject, _material, 0);

        // [Pass 2: render outline and fill (shader pass 1)]
        // Bind the camera render target
        _commandBuffer.SetRenderTarget(BuiltinRenderTextureType.CameraTarget, BuiltinRenderTextureType.Depth);
        _commandBuffer.SetGlobalTexture("_OverlayIDTexture", overlayIDTexture);
        _commandBuffer.SetGlobalVector("_OutlineColor", _OutlineColor);
        _commandBuffer.SetGlobalVector("_FillColor", _FillColor);
        _commandBuffer.DrawAllMeshes(gameObject, _material, 1);

        // Don't forget to release the temporary render texture
        RenderTexture.ReleaseTemporary(overlayIDTexture);
    }
}
```
```hlsl
// - - - - -
// - x x x -
// - x c x -
// - x x x -
// - - - - -
static const int samples_Rect8_Count = 8;
static const float2 samples_Rect8[8] =
{
    float2( 1, 0 ), float2(-1, 0 ),
    float2( 1, 1 ), float2(-1,-1 ),
    float2( 0, 1 ), float2( 0,-1 ),
    float2(-1, 1 ), float2( 1,-1 )
};

UNITY_DECLARE_TEX2D(_OverlayIDTexture); // The texture generated in Pass 1
float4 _OutlineColor;
float4 _FillColor;

bool IsEdgePixel(int2 screenPos, const int sampleCount, const float2 offsets[8])
{
    // Get overlay id of the pixel currently being rendered
    float center = _OverlayIDTexture.Load(int3(screenPos.xy, 0)).r;

    // Compare it to all neighbors using the sample offsets
    for (int i = 0; i < sampleCount; i++)
    {
        float neighbor = _OverlayIDTexture.Load(int3(screenPos.xy + offsets[i], 0)).r;
        if (neighbor != center)
        {
            // This is an edge pixel! Use the outline color
            return true;
        }
    }

    return false;
}

float4 FragOverlay(UNITY_VPOS_TYPE screenPos : VPOS) : SV_Target
{
    bool isEdgePixel = IsEdgePixel(screenPos.xy, samples_Rect8_Count, samples_Rect8);

    if (isEdgePixel)
        return _OutlineColor;
    else
        return _FillColor;
}
```
_OutlineColor=(255,255,255,255), _FillColor=(0,157,255,101)
You can, of course, use other sampling patterns. My favorite pattern also uses 8 taps, but they’re arranged in a diamond shape rather than the box shape in the previous example. The outline will be slightly thicker, but in my opinion, provides a more pleasant result.
```hlsl
// - - x - -
// - x - x -
// x - c - x
// - x - x -
// - - x - -
static const int samples_Diamond8_Count = 8;
static const float2 samples_Diamond8[8] =
{
    float2( 2, 0 ), float2(-2, 0 ),
    float2( 1, 1 ), float2(-1,-1 ),
    float2( 0, 2 ), float2( 0,-2 ),
    float2(-1, 1 ), float2( 1,-1 )
};
```
Left: 8 Tap box pattern. Right: 8 Tap diamond pattern.
Skinned Meshes
Currently, our overlay renderer is not compatible with skinned meshes.
```csharp
cmd.DrawMesh(skinnedMesh.sharedMesh, skinnedMesh.transform.localToWorldMatrix, material, i, pass);
```
This won’t work because SkinnedMeshRenderer.sharedMesh will always return the bind pose of the mesh, resulting in the following behaviour:
Yes, you could bake a new mesh every update (e.g. with SkinnedMeshRenderer.BakeMesh), but constantly removing and introducing new mesh colliders is a huge strain on the physics engine. Don't do this.
For rendering, use CommandBuffer.DrawRenderer instead:
```csharp
foreach (var skinnedMesh in _skinnedMeshes)
{
    var mesh = skinnedMesh.sharedMesh;

    // Render all submeshes
    for (int i = 0; i < mesh.subMeshCount; i++)
        cmd.DrawRenderer(skinnedMesh, material, i, pass);
}
```
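As a minimal sketch (not part of the original post), the CommandBufferExtensions class from earlier could gain a skinned-mesh counterpart to DrawAllMeshes; the cached `_skinnedMeshes` list here mirrors the existing `_meshFilters` approach and is an assumption.

```csharp
// Sketch: companion to DrawAllMeshes, added inside the CommandBufferExtensions class from earlier.
private static List<SkinnedMeshRenderer> _skinnedMeshes = new List<SkinnedMeshRenderer>();

public static void DrawAllSkinnedMeshes(this CommandBuffer cmd, GameObject gameObject, Material material, int pass)
{
    _skinnedMeshes.Clear();
    gameObject.GetComponentsInChildren(_skinnedMeshes);

    foreach (var skinnedMesh in _skinnedMeshes)
    {
        var mesh = skinnedMesh.sharedMesh;

        // DrawRenderer uses the renderer's current (skinned) state, unlike DrawMesh + sharedMesh
        for (int i = 0; i < mesh.subMeshCount; i++)
            cmd.DrawRenderer(skinnedMesh, material, i, pass);
    }
}
```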
As for the collider, it is best to rely on multiple primitive colliders (box, sphere, capsule) instead.
For characters, you can often get away with using a single capsule collider. If you need more precision, you can attach additional colliders to individual bones; for example, you could approximate an arm using two capsule colliders.
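For illustration only (not from the original post), here is a rough sketch of attaching a capsule collider to a bone at runtime; the helper name and the default radius/height values are placeholders you would tune per character.

```csharp
using UnityEngine;

public static class BoneColliderUtility // hypothetical helper, not part of the original project
{
    // Approximate a limb segment with a capsule collider attached to a bone transform.
    // The radius/height defaults are placeholders; size them per character in practice.
    public static CapsuleCollider AddBoneCapsule(Transform bone, float radius = 0.08f, float height = 0.35f)
    {
        var capsule = bone.gameObject.AddComponent<CapsuleCollider>();
        capsule.radius = radius;
        capsule.height = height;
        capsule.direction = 1; // 1 = align the capsule with the bone's local Y axis
        capsule.center = new Vector3(0, height * 0.5f, 0); // offset so the capsule spans away from the joint
        return capsule;
    }
}
```

Since the mouse-picking code uses GetComponentInParent, the OverlaySelectable component only needs to sit on the character's root object.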
Depth Testing
By modifying the depth test in the shader we can achieve multiple useful effects, such as looking through walls:
Left: Normal Depth Testing
```hlsl
Pass // Pass 0: Write overlay ID
{
    ZTest LEqual
    // [...]
}

Pass // Pass 1: Render overlay
{
    ZTest LEqual
    // [...]
}
```
Center: Ignore the depth buffer.
```hlsl
Pass // Pass 0: Write overlay ID
{
    ZTest Always
    // [...]
}

Pass // Pass 1: Render overlay
{
    ZTest Always
    // [...]
}
```
This allows us to “look through” objects in front of the character. However, it also looks through the character model itself, resulting in a layered look. This occurs because some pixels are handled multiple times (for example, the torso behind the right arm; it's especially obvious with the hair). A “clean”, non-layered look can be achieved by making sure every pixel is only handled once.
Right: Same as center, but mark pixels using the stencil buffer, so they are only rendered once.
```hlsl
Pass // Pass 0: Write overlay ID
{
    Stencil
    {
        Ref 1
        Comp Always
        Pass Replace
    }
    ZTest Always
    // [...]
}

Pass // Pass 1: Render overlay
{
    Stencil
    {
        Ref 1
        Comp Equal
        Pass IncrWrap
    }
    ZTest Always
    // [...]
}
```
In the first pass (Pass 0) we simply set the stencil value of every covered pixel to 1.
In the second pass, we only render where the stencil value equals 1, and we increment it by 1 when we do. This way each pixel can only be rendered once.
Grouping
Left: Matching group IDs. Right: Different group IDs
Currently, in the second pass, we compare the group ID of a pixel to that of its neighbors. Up until now, that ID has always been 0 (no overlay) or 1 (overlay). If multiple objects are rendered with the same group ID, they will share a single outline. If we want them separated, all we need to do is give them different group IDs. Let's modify our first pass shader to write a group ID to the output texture, rather than just 1 or 0.
```hlsl
// Range [0, 255]
float _GroupID;

float FragWriteOverlayID(v2f i) : SV_Target
{
    // Map range [0, 255] to [0.0f, 1.0f]
    return _GroupID / 255.0f;
}
```
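The snippet above only covers the shader side. As a sketch of one possible C# counterpart (the `_GroupID` name matches the shader; the `_highlightedObjects` list and the ID assignment scheme are assumptions), the group ID can be set as a global float before each group's draw calls:

```csharp
// Sketch: assign a group ID per highlighted object (or per group of touching objects)
// and set it before issuing that group's draw calls. Command buffer commands execute in order,
// so each DrawAllMeshes call sees the _GroupID value recorded just before it.
int groupID = 1; // hypothetical counter; 0 is reserved for "no overlay"
foreach (var highlighted in _highlightedObjects) // _highlightedObjects is an assumed list of GameObjects
{
    _commandBuffer.SetGlobalFloat("_GroupID", groupID);
    _commandBuffer.DrawAllMeshes(highlighted, _material, 0);
    groupID = (groupID % 255) + 1; // stay within the 8 bit texture's [1, 255] range
}
```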
Notes
While there is much more to be said, this post is already a lot longer than I initially intended. It is also a decent point to provide a very basic implementation of the shader, before I add more complex features. I'll cut it off here and post a “part 2” in the near future, where I'll go into more advanced topics and features.
Downloads
Here’s a minimal implementation of the current state of this overlay effect.
It highlights objects when hovering the mouse over them.
- OverlayRenderer.cs (Add to a single empty GameObject)
- OverlaySelectable.cs (Add to objects you want a mouse-over effect for. The object needs a collider.)
- OverlayShader.shader