
Texture Generation


This past week I’ve been experimenting with Texture Generation to learn more about procedural generation and apply what I learn to The Maze Where The Minotaur Lives.

To start out, I created a simple editor window in Unity that lets me create and test new Texture Generators easily, adjust their values, and see the results immediately.

The editor window to help create and test new texture generators.
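As a rough sketch, that kind of window boils down to very little code. The class name, menu path, and stand-in generator below are made up for illustration; they aren’t the project’s actual interfaces.

using UnityEditor;
using UnityEngine;

public class TextureGeneratorWindow : EditorWindow
{
    float scale = 0.05f;   // an example generator parameter
    Texture2D preview;

    [MenuItem("Tools/Texture Generator")]
    static void Open() => GetWindow<TextureGeneratorWindow>("Texture Generator");

    void OnGUI()
    {
        EditorGUI.BeginChangeCheck();
        scale = EditorGUILayout.Slider("Scale", scale, 0.001f, 0.2f);
        if (EditorGUI.EndChangeCheck() || preview == null)
            preview = Generate(scale);   // regenerate whenever a value changes

        GUILayout.Label(preview);
    }

    // Stand-in generator: fills a small texture with Perlin noise.
    Texture2D Generate(float s)
    {
        var tex = new Texture2D(128, 128);
        for (int y = 0; y < 128; y++)
            for (int x = 0; x < 128; x++)
            {
                float v = Mathf.PerlinNoise(x * s, y * s);
                tex.SetPixel(x, y, new Color(v, v, v));
            }
        tex.Apply();
        return tex;
    }
}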

The first generator was a basic checker pattern. I wanted to figure out the basic interfaces needed to create the editor window and how to generate textures before moving on to anything more complicated.

A simple black and white checker pattern.
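Boiled down, a checker generator is only a few lines. This is a sketch with illustrative names, not the project’s actual code:

// Alternate two colors based on which cell a pixel falls into.
// Assumes size is divisible by cells.
Texture2D GenerateChecker(int size, int cells, Color a, Color b)
{
    var tex = new Texture2D(size, size);
    int cellSize = size / cells;
    for (int y = 0; y < size; y++)
        for (int x = 0; x < size; x++)
        {
            bool even = ((x / cellSize) + (y / cellSize)) % 2 == 0;
            tex.SetPixel(x, y, even ? a : b);
        }
    tex.Apply();
    return tex;
}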

I then experimented with generating gradients, creating a UV gradient and a diagonal color gradient.

UV Gradient
Black and white diagonal gradient.

Next, I tried a variety of noise algorithms, starting with Unity’s Random class to generate a purely random texture.

Random noise generated using Unity’s Random.value.

I also added support for Value and Perlin noise.

Value Noise
Perlin2D Noise
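For the curious, value noise is simple to sketch: hash random values onto an integer lattice, then smoothly interpolate between the lattice points. The hash below is an arbitrary illustrative one, not what the generators actually use.

using UnityEngine;

static class ValueNoiseSketch
{
    // Scramble integer coordinates into a pseudo-random value in [0, 1).
    static float Hash(int x, int y)
    {
        unchecked
        {
            int h = x * 374761393 + y * 668265263;
            h = (h ^ (h >> 13)) * 1274126177;
            return ((h ^ (h >> 16)) & 0x7fffffff) / (float)0x7fffffff;
        }
    }

    public static float ValueNoise(float x, float y)
    {
        int x0 = Mathf.FloorToInt(x), y0 = Mathf.FloorToInt(y);
        float tx = Mathf.SmoothStep(0f, 1f, x - x0);   // smooth the interpolation
        float ty = Mathf.SmoothStep(0f, 1f, y - y0);
        float a = Mathf.Lerp(Hash(x0, y0),     Hash(x0 + 1, y0),     tx);
        float b = Mathf.Lerp(Hash(x0, y0 + 1), Hash(x0 + 1, y0 + 1), tx);
        return Mathf.Lerp(a, b, ty);
    }
}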

I’ll be using these textures to help generate and visualize variations that will be used in The Maze Where the Minotaur Lives for walls and other effects.

Improving Performance

When adjusting the settings for the Perlin noise, there is a delay between the slider being changed and the texture being generated. This is even more noticeable when raising the noise’s octaves.

This isn’t too big of a problem while loading and generating a map in-game, especially when the maps are small. But I couldn’t help wondering if I could tweak the code to improve performance by using Unity’s Job System in the editor.

I have dabbled with it in the past, but I’ve never found much use for it, at least until now.

My initial attempt was to take the code and put it inside a “Job” to see what would happen. Surprisingly, it ran slower. My best guess is that moving all the work to a single job on another thread, then blocking until it completes, doesn’t buy any parallelism; it’s just that, moving the work somewhere else and waiting for it, with scheduling overhead added on top.

I then converted the code to use an “IJobParallelFor” instead. A parallel job splits its iterations into batches and runs them across multiple worker threads. And since the noise is calculated per pixel, it’s safe to generate each pixel independently in those batches.
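Here’s roughly what that looks like as a sketch. The struct and field names are mine, and Unity.Mathematics’ noise.cnoise stands in for whatever Perlin implementation the generator actually uses.

using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

struct PerlinTextureJob : IJobParallelFor
{
    public int width;
    public float scale;
    [WriteOnly] public NativeArray<float> pixels;

    // Execute runs once per pixel index; batches of indices are handed
    // out to Unity's worker threads.
    public void Execute(int index)
    {
        int x = index % width;
        int y = index / width;
        pixels[index] = noise.cnoise(new float2(x, y) * scale) * 0.5f + 0.5f;
    }
}

// Scheduling: each batch of 64 pixels becomes one unit of work.
// var pixels = new NativeArray<float>(512 * 512, Allocator.TempJob);
// var job = new PerlinTextureJob { width = 512, scale = 0.05f, pixels = pixels };
// job.Schedule(pixels.Length, 64).Complete();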

There was a decent improvement in the GUI, with less delay when generating a 512×512 texture.

Even with some tweaks to the code and the batch count, the performance gains were small, and there were still noticeable delays in the GUI.

So I decided to take it one step further, and try out Unity’s Burst compiler.

I’ve heard and seen a few examples of it in action, and it looks a lot like black magic. So this felt like a good opportunity to try it out myself.

The top line is what black magic looks like.

After adding one line at the top of the job struct to compile it with Burst, the performance gain was incredible. The GUI now has no delays at all when adjusting the parameters for a 512×512 noise texture.
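For reference, here’s that one line applied to the job sketch from earlier (plus its using directive):

using Unity.Burst;

[BurstCompile] // the "black magic": compiles this job with Burst
struct PerlinTextureJob : IJobParallelFor
{
    // ...body unchanged from the earlier sketch
}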

With a 1024×1024 texture, the Burst version feels about as responsive as the non-Burst version did at 512×512, while the non-Burst version at 1024×1024 gives you enough time to make a coffee and come back.


Next, I’ll be updating the procedural framework to make use of these textures. I’m not entirely sure how long that will take with my current workload, but I’m looking forward to putting these to use and seeing what kinds of results I can get.

More Lights and Shadows


This week I worked on adding point lights, spotlights, and post-processing effects in the Unity SRP maze project.

A simple scene using a point light and a spot light to create its shadows, and a little bloom to create some glowing highlights.

The first thing I did was create light sources in the maze by adding lamps, as a starting point for implementing point and spot light shadows.

The initial implementation didn’t work very well, highlighting issues with the pixelated shadow effect: it created shadows on the walls and in corners where there shouldn’t be any.

Point light shadows creating extra shadows on the walls behind the lamps and in the corners. The bushes don’t look that great either.

I ended up changing how the world pixel position was being calculated, taking the light direction into account: if the position is facing the light, it gets moved towards the light; if it isn’t, it gets moved away. This keeps the snapped position from ending up on the wrong side of a surface and shadowing itself.

Doing this per light corrected the issues with the shadows and also helped create more accurate-looking lighting.

Modifying the world pixel position per light creates more accurate shadows. The bushes look nicer too.

I also worked on post-processing effects to understand how to integrate them into SRP.

Since the lighting from the lamps looked a little plain, I wanted to make them stand out by having them glow from a simple bloom effect.

The effect is done by shrinking the screen image and blurring the pixels in several passes. The bright and dark areas of the blurred image are then exaggerated, and the result is combined with the final screen to create the glow.
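In a custom SRP this can be sketched with a CommandBuffer and a couple of temporary render targets. The material, shader pass indices, and names below are assumptions for illustration, not the post’s actual code.

using UnityEngine;
using UnityEngine.Rendering;

static class BloomSketch
{
    static readonly int tempA = Shader.PropertyToID("_BloomTempA");
    static readonly int tempB = Shader.PropertyToID("_BloomTempB");

    public static void Render(CommandBuffer buffer, RenderTargetIdentifier source,
        RenderTargetIdentifier destination, Material bloomMat,
        int width, int height, int blurPasses)
    {
        // Work at half resolution; downsizing is itself a cheap blur.
        buffer.GetTemporaryRT(tempA, width / 2, height / 2, 0, FilterMode.Bilinear);
        buffer.GetTemporaryRT(tempB, width / 2, height / 2, 0, FilterMode.Bilinear);

        // Pass 0: keep and exaggerate the brightest areas.
        buffer.Blit(source, tempA, bloomMat, 0);

        // Pass 1: blur by ping-ponging between the two temporaries.
        for (int i = 0; i < blurPasses; i++)
        {
            buffer.Blit(tempA, tempB, bloomMat, 1);
            buffer.Blit(tempB, tempA, bloomMat, 1);
        }

        // Pass 2: combine the blurred highlights with the final screen.
        buffer.SetGlobalTexture("_BloomTex", tempA);
        buffer.Blit(source, destination, bloomMat, 2);

        buffer.ReleaseTemporaryRT(tempA);
        buffer.ReleaseTemporaryRT(tempB);
    }
}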

A little bit of bloom adds a lot to the lighting.

You can think of it as adding filters and effects in an image editing tool like Photoshop or GIMP.

Just for fun, I experimented with a pixelated post-processing effect.

Pixelating the final screen which includes the bloom effect. Looks pretty neat.
An even more pixelated version.

The effect defeats the purpose of maintaining the pixel art shadows since it hides them. But it was a fun experiment to help me understand how to combine post-processing effects in SRP.
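The pass itself is only a few lines, as a sketch (using the same illustrative CommandBuffer setup as the bloom sketch above):

// Render into a small point-filtered target, then stretch it back up.
int lowRes = Shader.PropertyToID("_PixelateTemp");
buffer.GetTemporaryRT(lowRes, width / 8, height / 8, 0, FilterMode.Point);
buffer.Blit(source, lowRes);       // downscale the final screen
buffer.Blit(lowRes, destination);  // upscale with point filtering: chunky pixels
buffer.ReleaseTemporaryRT(lowRes);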


I’ve always felt that I’ve had to leave it up to Unity to decide how some things work, making guesses, or coming up with solutions that feel more like hacks than actual solutions.

But working at this level of code in SRP (not quite low-level, not quite high-level) to solve my own rendering requirements has been a welcome change.

Next, I’m turning my attention to gameplay and improving the maze generation, to make this into a playable game and to motivate more specific effects for me to create in SRP.

Lighting the Maze


I started adding lighting support to the little maze project that I’m using to learn Unity’s Scriptable Rendering Pipeline (SRP).

I’ve been following catlikecoding’s tutorials on SRP as a starting point, to help navigate some of the quirks and see how lighting can be handled.


If you’re a Unity developer, check out catlikecoding. Their tutorials are great.


It took a few days to get through the ones I wanted to learn about. But once I got them working, I started to experiment.

I’ve always found that a good way to learn anything is to follow the steps laid out and then divert off and experiment, giving yourself a chance to learn by making your own mistakes.

For the little maze, I wanted to see if I could maintain the pixel art aesthetic in the lighting by trying to make the shadows match the pixel art.

Making shadows align to the texture pixels can’t be done by just turning off filtering, apparently.

It still needs some work but I’m happy with how it turned out.

So how’s it done?

I take the world-space position of the pixel on the screen and convert it into a world-space pixel position, which is then used by the lighting functions to calculate the shadow.

The position of each pixel to be used by the lighting calculation, exaggerated for the sake of looking kind of neat.

Below is a little code snippet of how it’s calculated in the shader.

// Snap the fragment's world position to a world-space grid of "pixels",
// so the lighting functions sample shadows in pixel-sized steps.
half pixelSize = GetShadowPixel();
float3 pixel_pos = floor(input.worldPosition / pixelSize) * pixelSize;

More can be done to calculate a better position, which I’ll return to once I’ve made more progress. But being able to manipulate values, and knowing how the code using them will be executed, always feels good. It feels less like a black box.

Though it wasn’t without problems.

Exact alignment with world-space pixel position causes noisy pixels.

There were issues with flickering, with aligning shadows to the pixels, and with light passing through seemingly solid meshes, all of which took a bit of experimenting to solve.


There is room for improvement, but so far I’m happy with how it’s looking.

Using the new lighting in a day-night cycle.

The lighting, shadows, and ambient color are driven by a day-night cycle component, which I rewrote from the old maze project.
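As a sketch, that kind of component is conceptually simple; the fields and behavior below are illustrative, not the rewritten component’s actual API.

using UnityEngine;

[ExecuteAlways]
public class DayNightCycle : MonoBehaviour
{
    public Light sun;
    public Gradient ambientColor;            // ambient tint over the day
    [Range(0f, 1f)] public float timeOfDay;  // 0 = midnight, 0.5 = noon
    public float dayLengthSeconds = 120f;

    void Update()
    {
        if (sun == null) return;

        if (Application.isPlaying)
            timeOfDay = (timeOfDay + Time.deltaTime / dayLengthSeconds) % 1f;

        // Swing the sun around the x-axis and drive the ambient color.
        sun.transform.rotation = Quaternion.Euler(timeOfDay * 360f - 90f, 0f, 0f);
        RenderSettings.ambientLight = ambientColor.Evaluate(timeOfDay);
    }
}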

Unity’s rendering pipeline feels a little less mysterious to me now. And I’m looking forward to exploring the limits of what I can do as I continue to work with it.

But next up, I’m going to dive a little deeper into supporting more lighting features. I’d like to add more light sources like a lantern or sconce to light up the maze at night. It should be fun.

Revisiting Maze Generation


I started a small side project recently, revisiting an old project and its systems for generating procedural mazes. It was one of my first projects in Unity from many years ago, and my first foray into procedurally generated content for games.

That was some time ago, back when Unity didn’t have many of the Quality of Life features that it has now, like the Package Manager, Nested Prefabs, Assembly Definitions, or even C# namespaces. So a lot of “good ideas” I had around that time didn’t age well, and I had to spend a few days redoing them.

With this little project, I’ve decided to dive into Unity’s Scriptable Rendering Pipeline and apply what I’m learning to an actual game production.

I’m also secretly trying to gain a little insight into how it works to solve a few problems on The Very Organized Thief, without disrupting its development too much.

After a bit of digging through some ancient code, I’ve managed to get the procedural maze generation running again.

A generated maze. The art used was taken from the original project and updated to work with the new maze generation system.

So how does it work?

First, I generate a map using a depth-first algorithm (sketched below), mapping out the corridors and walls and storing them as high-level data.

The maze data generated before adding the visuals. The coloring is from an existing system created to detect “islands”, which was not made to work with mazes.
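A sketch of the depth-first carve (often called a recursive backtracker); the grid representation here is illustrative, not the project’s actual data model:

using System.Collections.Generic;
using UnityEngine;

static class MazeCarver
{
    static readonly Vector2Int[] directions =
        { Vector2Int.up, Vector2Int.down, Vector2Int.left, Vector2Int.right };

    // Returns a grid where true = corridor and false = wall.
    public static bool[,] Carve(int width, int height, int seed)
    {
        var open = new bool[width, height];
        var stack = new Stack<Vector2Int>();
        var rng = new System.Random(seed);

        var start = new Vector2Int(0, 0);
        open[start.x, start.y] = true;
        stack.Push(start);

        while (stack.Count > 0)
        {
            var cell = stack.Peek();

            // Unvisited cells two steps away, leaving a wall cell in between.
            var candidates = new List<Vector2Int>();
            foreach (var dir in directions)
            {
                var n = cell + dir * 2;
                if (n.x >= 0 && n.y >= 0 && n.x < width && n.y < height && !open[n.x, n.y])
                    candidates.Add(n);
            }

            if (candidates.Count == 0) { stack.Pop(); continue; } // dead end: backtrack

            var next = candidates[rng.Next(candidates.Count)];
            var wall = cell + (next - cell) / 2;
            open[wall.x, wall.y] = true;   // knock down the wall in between
            open[next.x, next.y] = true;
            stack.Push(next);
        }
        return open;
    }
}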

I then do a second pass using that data to determine more specific features, such as how it should look, where the start and end should be, and what brush it should use to generate the final look.

Then I generate the visuals, determining which brush to use and its orientation so I can place the correct 3D model. I can also add variation at this point based on the desired frequency, which is how I’m adding the pits and the turns with bushes.

The tileset brush, which lets you control what prefab to generate and possible variations.
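A sketch of the idea behind the brush; the class and its fields are hypothetical stand-ins for the actual asset:

using UnityEngine;

[System.Serializable]
public class TilesetBrush
{
    public GameObject defaultPrefab;
    public GameObject[] variations;
    [Range(0f, 1f)] public float variationFrequency = 0.1f;

    // Usually place the default tile; occasionally swap in a variation.
    public GameObject Pick(System.Random rng)
    {
        if (variations.Length == 0 || rng.NextDouble() >= variationFrequency)
            return defaultPrefab;
        return variations[rng.Next(variations.Length)];
    }
}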

Once it’s done all that, I generate pathfinding data which I can then use for AI.

The blue paths are the generated Navigation Meshes.

One of the greatest additions to Unity in recent years, and one that makes authoring procedural brushes much easier, has been Nested Prefabs.

In older versions of Unity, if you constructed a new prefab out of other prefabs, the other prefabs would become “baked” into the new prefab. So if a specific object, like a bush, was shared across multiple prefabs, you had to go into each prefab that used it and manually modify that bush. It was tedious and not very fun.

But with Nested Prefabs, you can create a prefab of a bush and nest it inside another prefab. So if I make changes or add features to the bush, all the other prefabs that contain a nested version are updated too. Very useful, and without the tedium.


So far it’s good enough, and it should give me plenty to play with while I come up with a better shader and start learning more about Unity’s Scriptable Rendering Pipeline.

It’s very easy to learn something, but without some kind of project or game to use it on, it can be difficult to apply what you learn to larger systems, especially when you don’t entirely know how it should work.

You have to start somewhere.

Everything is very flat, but that should change once I start adding lighting support again and working out how to control the rendering pipeline in Unity.