
Lighting the Maze


I started adding lighting support to the little maze project that I’m using to learn Unity’s Scriptable Render Pipeline (SRP).

I’ve been following catlikecoding’s SRP tutorials as a starting point, to help navigate some of the quirks and see how lighting can be handled.


If you’re a Unity developer, check out catlikecoding. Their tutorials are great.


It took a few days to get through the ones I wanted to learn about. But once I got them working, I started to experiment.

I’ve always found a good way to learn anything is to follow the steps laid out and then divert off and experiment, giving yourself a chance to learn by making your own mistakes.

For the little maze, I wanted to see if I could maintain the pixel art aesthetic in the lighting, trying to make the shadows match the pixel art.

Making shadows align to the texture pixels apparently can’t be done just by turning off filtering.

It still needs some work but I’m happy with how it turned out.

So how’s it done?

I take the world-space position of the pixel on the screen and snap it to a world-space pixel position, which the lighting functions then use to calculate the shadow.

The position of each pixel as used by the lighting calculation, exaggerated for the sake of looking kind of neat.

Below is a little code snippet of how it’s calculated in the shader.

// Size of one texture pixel in world units (GetShadowPixel is presumably the shader's own helper).
half pixelSize = GetShadowPixel();
// Snap the fragment's world position to the nearest pixel-sized grid cell.
float3 pixel_pos = floor(input.worldPosition / pixelSize) * pixelSize;

More can be done to calculate a better position, which I’ll return to once I’ve made more progress. But being able to manipulate values, and knowing how the code that uses them will be executed, always feels good. It feels less like a black box.

Though it wasn’t without problems.

Exact alignment with world-space pixel position causes noisy pixels.

There were issues with flickering, aligning shadows to the pixels, and light passing through seemingly solid meshes, all of which took a bit of experimenting to solve.
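One common first step for light leaking like that is raising the bias values the shadow pass reads off each light. A minimal sketch, assuming the biases are tuned from a small helper component (the component itself is hypothetical; shadowBias and shadowNormalBias are standard UnityEngine.Light properties):

using UnityEngine;

// Hypothetical helper: nudges a light's shadow bias values to reduce
// light leaking through thin meshes. How much bias is needed depends on
// the shadow map resolution and how the pipeline samples it.
[RequireComponent(typeof(Light))]
public class ShadowBiasTuner : MonoBehaviour
{
    [Range(0f, 2f)] public float depthBias = 0.05f;  // pushes samples away from the light
    [Range(0f, 3f)] public float normalBias = 0.4f;  // inflates casters along their normals

    void OnValidate()
    {
        var light = GetComponent<Light>();
        light.shadowBias = depthBias;
        light.shadowNormalBias = normalBias;
    }
}

Too much bias causes its own artifact (shadows peeling away from their objects), so it ends up being a balancing act with the pixel snapping above.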


There is room for improvement, but so far I’m happy with how it’s looking.

Using the new lighting in a day-night cycle.

The lighting, shadows, and ambient color are driven by a day-night cycle component, which I rewrote from the old maze project.
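The component itself isn’t shown here, but a minimal sketch of that kind of driver, assuming a single directional sun light and gradient-driven colors (all names are hypothetical):

using UnityEngine;

// Hypothetical day-night cycle driver: rotates a directional "sun" light
// and drives its color and the scene's ambient color from gradients.
public class DayNightCycle : MonoBehaviour
{
    public Light sun;                     // the directional light to rotate
    public Gradient sunColor;             // sun color over the day (0..1)
    public Gradient ambientColor;         // ambient color over the day (0..1)
    public float dayLengthSeconds = 120f; // one full day-night cycle

    [Range(0f, 1f)] public float timeOfDay; // 0 = midnight, 0.5 = noon

    void Update()
    {
        timeOfDay = (timeOfDay + Time.deltaTime / dayLengthSeconds) % 1f;

        // Pitch the sun around the x-axis: -90 at t=0 puts it below the
        // horizon (midnight), 90 at t=0.5 points it straight down (noon).
        sun.transform.rotation = Quaternion.Euler(timeOfDay * 360f - 90f, 30f, 0f);
        sun.color = sunColor.Evaluate(timeOfDay);

        // Flat ambient color (with the ambient source set to a flat color);
        // this feeds the ambient term in the shaders.
        RenderSettings.ambientLight = ambientColor.Evaluate(timeOfDay);
    }
}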

Unity’s rendering pipeline feels a little less mysterious to me now. And I’m looking forward to exploring the limits of what I can do as I continue to work with it.

But next up, I’m going to dive a little deeper into supporting more lighting features. I’d like to add more light sources like a lantern or sconce to light up the maze at night. It should be fun.

Revisiting Maze Generation


I started a small side project recently, revisiting an old project and its systems for generating procedural mazes. It was one of my first projects in Unity from many years ago, and my first foray into procedurally generated content for games.

That was some time ago, back when Unity didn’t have many of the quality-of-life features it has now, like the Package Manager, Nested Prefabs, Assembly Definitions, or even proper support for C# namespaces. So a lot of the “good ideas” I had around that time didn’t age well, and I had to spend a few days redoing them.

With this little project, I’ve decided to dive into Unity’s Scriptable Render Pipeline and apply what I’m learning to an actual game production.

I’m also secretly trying to gain a little insight into how it works to solve a few problems on The Very Organized Thief, without disrupting its development too much.

After a bit of digging through some ancient code, I’ve managed to get the procedural maze generation running again.

A generated maze. The art used was taken from the original project and updated to work with the new maze generation system.

So how does it work?

First, I generate a map using a depth-first algorithm, mapping out the corridors and walls and storing them as high-level data.

The maze data generated before adding the visuals. The coloring is from an existing system created to detect “islands”, which was not made to work for mazes.
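A minimal sketch of that depth-first carve, assuming a simple cell grid where each cell tracks its four walls (these are illustrative types, not the project’s actual ones):

using System.Collections.Generic;
using UnityEngine;

// Hypothetical high-level maze data: a grid of cells, each knowing which
// of its four walls are still solid.
public class MazeMap
{
    public readonly int width, height;
    // walls[x, y, d] == false means the wall in direction d has been carved open.
    // d: 0 = north, 1 = east, 2 = south, 3 = west.
    public readonly bool[,,] walls;

    static readonly Vector2Int[] dirs =
        { Vector2Int.up, Vector2Int.right, Vector2Int.down, Vector2Int.left };

    public MazeMap(int width, int height)
    {
        this.width = width;
        this.height = height;
        walls = new bool[width, height, 4];
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
                for (int d = 0; d < 4; d++)
                    walls[x, y, d] = true; // start fully walled
    }

    // Depth-first carve: walk to random unvisited neighbours, knocking
    // down the wall between cells; backtrack when stuck.
    public void Carve(Vector2Int start)
    {
        var visited = new bool[width, height];
        var stack = new Stack<Vector2Int>();
        visited[start.x, start.y] = true;
        stack.Push(start);

        while (stack.Count > 0)
        {
            Vector2Int cell = stack.Peek();
            var options = new List<int>();
            for (int d = 0; d < 4; d++)
            {
                Vector2Int next = cell + dirs[d];
                if (next.x >= 0 && next.x < width && next.y >= 0 && next.y < height
                    && !visited[next.x, next.y])
                    options.Add(d);
            }

            if (options.Count == 0) { stack.Pop(); continue; } // dead end: backtrack

            int dir = options[Random.Range(0, options.Count)];
            Vector2Int chosen = cell + dirs[dir];
            walls[cell.x, cell.y, dir] = false;               // open wall on this side
            walls[chosen.x, chosen.y, (dir + 2) % 4] = false; // and the matching side
            visited[chosen.x, chosen.y] = true;
            stack.Push(chosen);
        }
    }
}

Using an explicit stack rather than literal recursion avoids blowing the call stack on large grids, while producing the same long, winding corridors.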

I then do a second pass using that data to determine more specific features, such as how it should look, where the start and end should be, and what brush it should use to generate the final look.
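As an illustration of that kind of feature pass, one common way to place a maze’s start and end is to pick the pair of cells farthest apart along the corridors, found with two breadth-first passes over the MazeMap sketch above (again, not necessarily how this project does it):

using System.Collections.Generic;
using UnityEngine;

public static class MazeFeatures
{
    // Illustrative only: pick start/end as a farthest-apart pair of cells
    // by running BFS twice over the carved corridors (double-BFS heuristic).
    public static (Vector2Int start, Vector2Int end) PickStartAndEnd(MazeMap map)
    {
        Vector2Int a = Farthest(map, Vector2Int.zero); // farthest from an arbitrary seed
        Vector2Int b = Farthest(map, a);               // farthest from that
        return (a, b);
    }

    static Vector2Int Farthest(MazeMap map, Vector2Int from)
    {
        var dirs = new[] { Vector2Int.up, Vector2Int.right, Vector2Int.down, Vector2Int.left };
        var dist = new int[map.width, map.height];
        var seen = new bool[map.width, map.height];
        var queue = new Queue<Vector2Int>();
        seen[from.x, from.y] = true;
        queue.Enqueue(from);
        Vector2Int far = from;

        while (queue.Count > 0)
        {
            Vector2Int c = queue.Dequeue();
            if (dist[c.x, c.y] > dist[far.x, far.y]) far = c;
            for (int d = 0; d < 4; d++)
            {
                Vector2Int n = c + dirs[d];
                // Only walk through openings carved by the first pass.
                if (map.walls[c.x, c.y, d] || n.x < 0 || n.x >= map.width
                    || n.y < 0 || n.y >= map.height || seen[n.x, n.y]) continue;
                seen[n.x, n.y] = true;
                dist[n.x, n.y] = dist[c.x, c.y] + 1;
                queue.Enqueue(n);
            }
        }
        return far;
    }
}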

Then I generate the visuals, determining what brush to use and its orientation so I can place the correct 3D model. I can also add variation at this point based on the desired frequency, which is how I’m adding the pits and the turns with bushes.

The tileset brush, which lets you control what prefab to generate and possible variations.
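A brush like that might look something like the sketch below, assuming weighted prefab variants and 90-degree tile orientations (hypothetical names; the actual brush is surely more involved):

using UnityEngine;

// Hypothetical tileset brush: a wall/corridor piece with optional
// weighted variations (e.g. a pit, or a turn with bushes).
[CreateAssetMenu(menuName = "Maze/Tile Brush")]
public class TileBrush : ScriptableObject
{
    [System.Serializable]
    public struct Variation
    {
        public GameObject prefab;
        public float frequency; // relative weight, not a percentage
    }

    public GameObject defaultPrefab;
    public Variation[] variations;

    // Places one tile: rolls for a variation, then orients it in 90-degree steps.
    public GameObject Place(Vector3 position, int rotationSteps, Transform parent)
    {
        GameObject prefab = defaultPrefab;

        float total = 0f;
        foreach (var v in variations) total += v.frequency;
        float roll = Random.Range(0f, total + 1f); // the extra 1 is the default's weight
        foreach (var v in variations)
        {
            if (roll < v.frequency) { prefab = v.prefab; break; }
            roll -= v.frequency;
        }

        var rotation = Quaternion.Euler(0f, rotationSteps * 90f, 0f);
        return Object.Instantiate(prefab, position, rotation, parent);
    }
}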

Once it’s done all that, I generate pathfinding data that I can then use for AI.

The blue paths are the generated Navigation Meshes.
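Assuming those navigation meshes are built with the NavMeshSurface component from Unity’s AI Navigation package (an assumption; the exact API isn’t named here), rebuilding after generation is nearly a one-liner:

using Unity.AI.Navigation; // from the com.unity.ai.navigation package
using UnityEngine;

public class MazeNavigation : MonoBehaviour
{
    public NavMeshSurface surface; // covers the generated maze geometry

    // Call after the maze visuals have been placed so agents can
    // path through the new corridors.
    public void Rebuild()
    {
        surface.BuildNavMesh();
    }
}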

One of the greatest additions to Unity in recent years, and one that makes authoring procedural brushes easier, has been Nested Prefabs.

In older versions of Unity, if you constructed a new prefab out of other prefabs, the other prefabs would become “baked” into the new prefab. So if you needed a specific object, like a bush or something, that was shared across multiple prefabs, you had to go into each prefab that used it and manually modify that bush. It was tedious and not very fun.

But with Nested Prefabs, you can create a prefab of a bush and nest it inside another prefab. So if I make changes or add features to the bush, all other prefabs that have a nested version are updated too. Very useful, and without the tedium.


So far it’s good enough, and it should give me plenty to play with while I come up with a better shader and start learning more about Unity’s Scriptable Render Pipeline.

It’s very easy to learn something, but without some kind of project or game to use it on, it can be difficult to apply what you learn to larger systems, especially when you don’t entirely know how it should work.

You have to start somewhere.

Everything is very flat, but that should change once I start adding lighting support again and working out how to control the rendering pipeline in Unity.