Designing a Post Processing Effects System with Multiple Screen Shaders in Unity 2020

If you’re like me, you might have the desire to design screen shaders/post processing effects by directly writing the shader code, without using any plugins that rely on render pipelines you don’t have in your project. It isn’t too complicated, but I didn’t find much information on how to design a system that allows you to manage, and more importantly, stack screen effects. So here’s how I implemented it for Runeblight. I designed this in Unity 2020.1, but I’d imagine that similar solutions will work for most other versions.

Writing a Screen Shader

Screen shaders are shaders that don’t affect geometry or rendering in 3D space, but instead manipulate a texture; in this case, the texture is a copy of what the active camera is displaying. So for screen shaders, we’re essentially writing fragment shaders specifically for the screen’s pixels.

Sinusoidal Effect Shader

Here’s an example of a shader I like that creates a warping effect on the screen, which makes the player feel like they’re dizzy. This can be our first screen effect.

Shader "Custom/Sinusoidal"
{
	Properties
	{
		_MainTex("Base (RGB)", 2D) = "white" {}
		_Speed("Speed", Range(0,1)) = 0
		_Intensity("Intensity", Range(0,1)) = 0
	}
	SubShader
	{
		Pass 
		{
			CGPROGRAM
			#pragma vertex vert_img
			#pragma fragment frag

			#include "UnityCG.cginc"

			uniform sampler2D _MainTex;
			half _Speed;
			half _Intensity;
			
			float4 frag(v2f_img i) : COLOR
			{
				// Offset U by a sine wave based on V, and V by a sine wave based on U,
				// translating the waves over time to create the warping motion
				i.uv.x += sin(i.uv.y + _Time.y * _Speed) * _Intensity;
				i.uv.y += sin(i.uv.x + _Time.y * _Speed) * _Intensity;

				// Simply moving the U and V of the screen texture will push the texture
				// off of the screen, creating a tearing effect. This scales the texture
				// from the center based on the intensity, preventing it from going fully
				// off the screen
				i.uv = (i.uv - 0.5) * (1 - _Intensity * 1.55) + 0.5;

				float4 c = tex2D(_MainTex, i.uv);

				return c;
			}

			ENDCG
		}
	}
}


Once you have the shader written, we need to actually apply it to the player’s screen. While the following code is nowhere near final, and won’t allow for multiple shaders, it will let us render just this one.

using UnityEngine;

public class SinusoidalEffect : MonoBehaviour
{
    public float Speed;
    public float Intensity;
    private Material _material;

    private void Awake()
    {
        // Creates a new empty material that uses our shader
        _material = new Material(Shader.Find("Custom/Sinusoidal"));
    }
    
    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // If the speed or intensity is zero, no need to apply shader effects
        if (Speed == 0f || Intensity == 0f)
        {
            Graphics.Blit(source, destination);

            return;
        }

        // Otherwise, set the variables
        _material.SetFloat("_speed", Speed);
        _material.SetFloat("_intensity", Speed);

        // Blit is a function that copies the source texture to the destination texture,
        // and the third argument is a material whose shader is applied during the copy.
        // If destination is null, the destination is simply the screen. Here, Unity
        // passes destination in for us; it's usually null since we're rendering to the screen
        Graphics.Blit(source, destination, _material);
    }
}

OnRenderImage only gets called when the component is attached to a game object with a camera, so you’ll likely want to attach this to your main camera.

If you run your game, and increase the speed and intensity in the editor, you’ll get something like this:

Colorize Shader

Shader "Custom/Colorize"
{
	Properties
	{
		_MainTex("Base (RGB)", 2D) = "white" {}
		_r("Red", Range(0, 1)) = 1
		_g("Green", Range(0, 1)) = 1
		_b("Blue", Range(0, 1)) = 1
	}
	SubShader
	{
		Pass
		{
			CGPROGRAM
			#pragma vertex vert_img
			#pragma fragment frag

			#include "UnityCG.cginc"

			uniform sampler2D _MainTex;
			uniform float _r;
			uniform float _g;
			uniform float _b;

			float4 frag(v2f_img i) : COLOR
			{
				float4 c = tex2D(_MainTex, i.uv);

				float4 result = c;
				result.r *= _r;
				result.g *= _g;
				result.b *= _b;

				return result;
			}

			ENDCG
		}
	}
}

This shader multiplies each pixel’s color value by an input color, which can be used to tint the screen a certain color, or make it lighter or darker. You’ll notice that it isn’t too different from the first shader, only containing the frag function. If you want, you can write some code similar to the C# above to test this shader out (a quick sketch of that is below), but now that we have both, let’s focus on trying to render them at the same time.
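
Here’s what that throwaway test might look like. It’s a minimal sketch; the ColorizeTest class name is just for illustration, and the property names match the Colorize shader above:

using UnityEngine;

public class ColorizeTest : MonoBehaviour
{
    // (1, 1, 1) leaves the image untouched; lower a channel to filter it out
    public Color Tint = Color.white;
    private Material _material;

    private void Awake()
    {
        // Creates a new empty material that uses the Colorize shader
        _material = new Material(Shader.Find("Custom/Colorize"));
    }

    private void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        // Pass each channel of the tint to the shader's properties
        _material.SetFloat("_r", Tint.r);
        _material.SetFloat("_g", Tint.g);
        _material.SetFloat("_b", Tint.b);

        Graphics.Blit(source, destination, _material);
    }
}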

Applying Multiple Effects

With the code as it is, it’s not really possible to apply both of these shaders at the same time. We could duplicate the Graphics.Blit line and have two material fields, but if we Blit the input texture to the screen twice, we’re essentially copying the same input texture to the screen once over the other, so whichever one comes last is what appears on the screen. There’s also another consideration: what if we want to apply more than two effects, or even many at once? So here’s the plan:

  • First, create a data structure that could be used to hold multiple “screen shaders.” This could consist of just strings representing the shader’s name, or a class we’ve designed, which is what I’ll be doing. I’d recommend a List<T>, as it will be good for adding or removing shaders during runtime, as well as in the editor.
  • Next, we need to apply multiple shaders to the same texture. We can do this by taking the input texture, applying one shader, copying the result back into the input, and running it through the next shader. Once all the shaders have been applied, Blit the final texture to the screen.

using System.Collections.Generic;
using UnityEngine;

public class CameraPostProcesser : MonoBehaviour
{
    public static CameraPostProcesser Main { get; private set; }
    public readonly List<PostProcessingEffect> RenderingEffects = new List<PostProcessingEffect>();
    private RenderTexture _tempDestination;

    protected virtual void Awake()
    {
        if (Main == null)
        {
            Main = this;
        }
    }

    protected virtual void OnRenderImage(RenderTexture source, RenderTexture destination)
    {
        if (RenderingEffects.Count == 0)
        {
            Graphics.Blit(source, destination);

            return;
        }

        if (_tempDestination == null)
        {
            _tempDestination = new RenderTexture(source);
        }
            
        foreach (var effect in RenderingEffects)
        {
            if (effect.Material == null)
            {
                effect.Material = new Material(Shader.Find(effect.ShaderName));
            }

            effect.UpdateMaterial(effect.Material);

            Graphics.Blit(source, _tempDestination, effect.Material);
            Graphics.Blit(_tempDestination, source);
        }

        Graphics.Blit(_tempDestination, destination);
    }

    public void AddEffect(PostProcessingEffect effect)
    {
        RenderingEffects.Add(effect);
    }

    public void RemoveEffect(PostProcessingEffect effect)
    {
        RenderingEffects.Remove(effect);
    }

    public void ClearEffects()
    {
        RenderingEffects.Clear();
    }
}

A few notes on the code above:

  • I’m using a singleton pattern so shaders can easily be added and removed from anywhere.
  • I designed an abstract class for effects, so each shader can have its own C# class, which is below.

The most important part of the code is what happens in OnRenderImage. To apply all of the effects, we copy the input through the shader into a temporary location, copy that result back into the input, and do that for as many effects as we have.

We use a field as this destination for a reason. Graphics.Blit expects the destination to not be null; otherwise, we’d just be copying to the screen. So we need something in the field, which I chose to be a render texture created from the first input, so it’s the same size as the screen and also shares its other properties. There’s also a reason we use a field and only set it once. We could just use a local variable here, like var tempDestination = new RenderTexture(source). The issue with this is that we’d be creating a new render texture every single frame, piling up allocations that have to be cleaned up again almost immediately, when all we essentially need is one reusable render texture instance.

So in the loop, it first copies the source into the temp location with the shader applied, then copies the result back into the source. It repeats that for every effect, and then copies the final texture in the temp location to the screen.

Finally, every time it loops, it creates the effect’s material if this is the first time it’s being rendered, then calls the effect’s UpdateMaterial method, which sets the shader variables on the material. This allows us to change values while an effect is being rendered, and have it update immediately.
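
One small addition that isn’t in the listing above, so treat it as optional: the temporary render texture holds a native GPU resource, so it may be worth releasing it when the component is destroyed. A minimal sketch, using the _tempDestination field from the code above:

private void OnDestroy()
{
    // Free the GPU memory held by the reusable temporary texture
    if (_tempDestination != null)
    {
        _tempDestination.Release();
        _tempDestination = null;
    }
}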

Here’s the rest of the code:

using UnityEngine;

public abstract class PostProcessingEffect
{
    public Material Material { get; set; }
    public abstract string ShaderName { get; }

    public abstract Material UpdateMaterial(Material material);
}

using UnityEngine;

public class Colorize : PostProcessingEffect
{
    public Color Color { get; set; }
    public override string ShaderName => "Custom/Colorize";

    public Colorize(Color color)
    {
        Color = color;
    }

    public override Material UpdateMaterial(Material material)
    {
        material.SetFloat("_r", Color.r);
        material.SetFloat("_g", Color.g);
        material.SetFloat("_b", Color.b);

        return material;
    }
}

using UnityEngine;

public class SineWaveDistort : PostProcessingEffect
{
    public float Speed { get; set; }
    public float Intensity { get; set; }
    public override string ShaderName => "Custom/Sinusoidal";

    public SineWaveDistort(float speed, float intensity)
    {
        Speed = speed;
        Intensity = intensity;
    }

    public override Material UpdateMaterial(Material material)
    {
        material.SetFloat("_Speed", Speed);
        material.SetFloat("_Intensity", Intensity);

        return material;
    }
}

Now we have a system where we can create multiple instances of post processing screen effects in code, allowing us to add, remove, and update them as needed. Simply call AddEffect on the singleton with an instance of one or more of the effects we’ve created to see the results:

// You can put this in any Update method for testing

if (Input.GetKeyDown(KeyCode.L))
{
    CameraPostProcesser.Main.AddEffect(new Colorize(new Color(1.5f, 0.15f, 0.15f)));
}

if (Input.GetKeyDown(KeyCode.K))
{
    CameraPostProcesser.Main.AddEffect(new SineWaveDistort(0.5f, 0.1f));
}

if (Input.GetKeyDown(KeyCode.H))
{
    CameraPostProcesser.Main.ClearEffects();
}

And here are the results: multiple screen shaders, with little overhead, that can be quickly added and removed.

Here’s an example of adding and removing both Colorize and SineWaveDistort on the fly.
I found a practical application to be an underwater effect: tinting the screen darker with a shade of blueish-green and adding a small amount of sinusoidal warping.
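
As a rough sketch of that idea (the exact values here are illustrative guesses, not the ones used in Runeblight):

// Hypothetical underwater preset: a darker blue-green tint plus gentle warping
CameraPostProcesser.Main.AddEffect(new Colorize(new Color(0.35f, 0.65f, 0.6f)));
CameraPostProcesser.Main.AddEffect(new SineWaveDistort(0.3f, 0.02f));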