Monthly Archives: July 2016


Substance Designer – First Impressions

This post describes my first impressions of the procedural texturing software Substance Designer. To make the impressions more personal, I am diving in without watching any tutorials or reading documentation. The only exposure I have so far is attending a GDC 2016 talk by Naughty Dog and a few conversations with friends/colleagues.

If you have never heard of Substance Designer –
What is Substance Designer?
How to get it?
How is it used in the games industry?

I’ll go about creating a simple, programmer-art-like substance and record my thoughts. During the process I may take a few detours to explain UX, rendering techniques, procedural terms and performance-related concerns. Hopefully, all of those will be relevant to the main topic here – procedural texturing.

Alright, I launched the app and created a new substance from the welcome screen.

The substance template I picked is Empty, but you can pick from the other, physically based ones, depending on the textures you need to export. In the end, you are picking which maps you will export, and then authoring the information that feeds into those maps.

[Screenshot: Substance Designer’s UI]

The image above is Designer’s UI. It seems pretty straightforward if you have used any node-based editor (including Unreal’s Material and/or Blueprint editors).

It seems logical to start by defining what I need to export. A right-click brings up options to add nodes, and I am adding Output nodes (like basecolor below).


For this test I decided to export a custom set of maps for a custom material – why not go off the rails from the get-go :). Below are the maps I am exporting.


It’s time to import some maps as input.
When I imported a PSD file, three interesting things happened.

  • Designer prompted me to choose which layers I would like to include in the import, which is nice.
  • As I had one RGB layer and an adjustment layer on top, Designer created two Bitmap nodes. This is also nice, but a bit unexpected with adjustment layers.
    (Edit – During a second import I realized that had I chosen just the layer I wanted, Designer wouldn’t have created two Bitmaps.)
  • And finally, as the width of my PSD was not a power of 2, Designer squashed the Bitmap to fit the nearest power of 2. This is smart behavior, but I would like an option to either crop or squash/stretch when importing a resource – or to just leave my texture dimensions alone.

Why do we create game textures in power-of-2 dimensions?

This comes down to how GPUs (and CPUs) manage texture memory. The hardware fetches texture data in fixed-size blocks and caches those blocks for reuse. Power-of-2 dimensions align neatly with those blocks and with mipmap chains (each mip level is exactly half the previous one), while arbitrary dimensions waste part of every block – and texture memory is a very precious resource in GPU rendering. On top of that, previous-gen consoles outright prohibited textures with arbitrary dimensions. Nowadays it is technically possible in many game engines to have non-power-of-2 textures, but for the reasons above it’s best to stay with powers of 2.
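To make the rounding concrete, here is a small Python sketch – my own illustration, not Designer’s actual logic – of how an importer might test a dimension and round it up to a power of two:

```python
def is_pow2(n: int) -> bool:
    """A positive integer is a power of two iff it has exactly one set bit."""
    return n > 0 and (n & (n - 1)) == 0

def next_pow2(n: int) -> int:
    """Smallest power of two >= n (what an importer might resize to)."""
    p = 1
    while p < n:
        p <<= 1
    return p
```

So a 900-pixel-wide PSD would land on a 1024-wide texture, which matches the squashing behavior I saw.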

The next step is to remove seams and make the concrete tile-able, and Designer has a node for it – Make it Tile 🙂



After the input is tile-able, I begin preparing each output texture, starting with base color (or Albedo). To prepare the Albedo, I need to remove all “lighting information” from my source/input. Designer provides nodes to remove low-frequency and high-frequency lighting from the source. So I get this –


The base color looks very flat and uninteresting, right? That’s intentional, and due to PBR requirements. More on PBR shortly.

The normal map took a bit more work than the Albedo, but most of the operations were logical and straightforward. As shown below, the nodes used are Levels, Contrast, Blur, Blend etc… – mostly what you would use in Photoshop to extract information into a height map. This height map is then fed to a Normal node, which outputs a normal map from the height map (i.e. the main reason CrazyBump exists).
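To illustrate what a height-to-normal conversion does under the hood, here is a rough numpy sketch – my own illustration, not Designer’s implementation – that derives a tangent-space normal map from height slopes:

```python
import numpy as np

def height_to_normal(height: np.ndarray, strength: float = 1.0) -> np.ndarray:
    """Convert a grayscale height map (H x W, values 0..1) into a
    tangent-space normal map (H x W x 3, 0..1) via finite differences."""
    dy, dx = np.gradient(height)          # slope of the height along each axis
    nx = -dx * strength
    ny = -dy * strength
    nz = np.ones_like(height)
    length = np.sqrt(nx**2 + ny**2 + nz**2)
    n = np.stack([nx / length, ny / length, nz / length], axis=-1)
    return n * 0.5 + 0.5                  # remap [-1, 1] to [0, 1] for storage
```

A perfectly flat height map yields the familiar uniform “normal map blue” (0.5, 0.5, 1.0).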


While trying to prepare the normal map, I took a detour into the Pixel Processor node. I must say this is the only part of Designer (so far) that didn’t act as I predicted, and in the end I got nothing out of it.

What is Pixel Processor?

It’s a sub-node in your main graph, in which you can build your own math for processing pixels. You can write your own generators, or take some node as input and work on it per-pixel. Basically, it allows the lowest-level operations you can possibly do to pixels, e.g. add/subtract, multiply/divide and many others.
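As an illustration of that “math on pixels” idea, here is a tiny Python sketch of a per-pixel linear blend – the kind of operation you would wire up inside a Pixel Processor (this is an analogy, not Substance’s own function format):

```python
def pixel_lerp(a: float, b: float, t: float) -> float:
    """Per-pixel linear interpolation between two values."""
    return a + (b - a) * t

def process(img_a, img_b, t=0.5):
    """Apply the per-pixel math to two same-sized grayscale 'images'
    (lists of rows of floats), the way a Pixel Processor runs its
    little function graph once per pixel."""
    return [[pixel_lerp(pa, pb, t) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]
```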

And for the final “Others Map” (temporary name), I am merging various grayscale maps into one, to be split later and used in the shader.
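The channel-packing idea can be sketched in a few lines of numpy. The map names here (roughness, AO, height) are just example stand-ins, not my actual set:

```python
import numpy as np

def pack_channels(roughness: np.ndarray, ao: np.ndarray,
                  height: np.ndarray) -> np.ndarray:
    """Pack three grayscale maps (H x W) into one RGB image (H x W x 3).
    The shader later reads .r, .g, .b to recover each map."""
    return np.stack([roughness, ao, height], axis=-1)

def unpack_channels(packed: np.ndarray):
    """The shader-side split, expressed in numpy."""
    return packed[..., 0], packed[..., 1], packed[..., 2]
```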


Once I export all three maps and build a shader/material, my meshes are ready to be shaded per the defined PBR rules, which brings me to –

What is PBR and why do it?

I am surely late to the PBR advocacy party, but the shortest and most direct reason is: “Because now you finally can do PBR, so why not? Unless your shading style is non-photorealistic.”
PBR stands for Physically Based Rendering, which gives you more accurate and predictable results as opposed to the non-PBR shading we were doing in the previous generation.

Two things we were doing wrong before, because there was no choice.

  1. Lighting was “baked” into the diffuse texture. Think about it – you are supposed to calculate diffuse lighting at runtime based on the color map, normal map etc… but we were making textures with lighting already baked in, so the lighting was wrong all the time. It looked right, but fell apart with changing time of day and other lighting conditions – indoor/outdoor.
  2. We had fancy sliders to “fake” specularity and other phenomena. To make a material look right, we would crank these sliders up or down – so much so that at times more light bounced off the object than the total amount coming from the light source, breaking the law of energy conservation. So what’s the big deal?
    Well, place another instance of the same object in a different lighting condition and you need another instance of the material, to adjust the sliders all over again.
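For intuition, here is one common trick for keeping reflected light within the incoming budget – a sketch, not any specific engine’s code:

```python
def energy_conserving_diffuse(albedo, specular: float):
    """Scale the diffuse term down by the specular amount, so that
    diffuse + specular never exceeds the incoming light of 1.0 –
    the constraint the old hand-tuned sliders routinely violated."""
    return [channel * (1.0 - specular) for channel in albedo]
```

With a pure-white albedo and 30% specular, the diffuse contribution drops to 0.7 per channel, so the total never exceeds the light arriving at the surface.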

With new-gen consoles, GPUs got faster, so we can do more calculations for lighting/shading accuracy. GPU memory got bigger, so we can pack more textures per material. And thus grew the need to author textures/materials that follow the physics of light and assure artists that a material (wood, leather, fabric, metal…) will behave as it should in any condition (day, night, indoor, outdoor, rain, snow, sun…).

A final point to add – PBR is all the more reason to use Substance Designer.
With PBR requirements, the number of textures to be produced has increased considerably, and Substance Designer is the right kind of tool in an artist’s toolbox to smartly handle such complex tasks.

In conclusion to my first impressions –

  1. Substance Designer is a fast, stable and versatile tool for the task of procedural texturing.
  2. There is much more to it than what I tried, e.g. Generators, Mesh Adaptors, pre-authored and shareable PBR materials, Pixel Processors etc…
  3. Substance files (.sbs) are inherently XML documents. So all you TDs/tech artists, go break them apart for whatever pipeline needs you might have.
  4. There are Substance plugins for all popular 3D packages and for Unreal/Unity3D, so you can take full advantage of procedural maps with your shaders and lighting, in-engine.
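As a starting point for such pipeline tooling, here is a minimal Python sketch that treats an .sbs file as plain XML. It assumes nothing about Substance’s actual tag names, only that the file is well-formed XML:

```python
import xml.etree.ElementTree as ET

def list_node_tags(sbs_path: str) -> dict:
    """Walk an .sbs document and count element tags -- a first step
    before writing any real pipeline tooling against the format."""
    tree = ET.parse(sbs_path)
    counts = {}
    for elem in tree.iter():
        counts[elem.tag] = counts.get(elem.tag, 0) + 1
    return counts
```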

What it may do to improve –

  1. The graph could use some color coding and grouping enhancements (think Unreal’s material editor).
  2. The Pixel Processor is really simple but looks cryptic at first. Also, I couldn’t figure out a way to share my processor nodes, though I think there must be one.
    And finally, for all the coders it would be great to have a pixel processor node you can write math directly into (think VEX Wrangle nodes in Houdini), with some basic interpreter-style code compiling.
  3. I couldn’t figure out a way to have multiple 2D views, so that I can compare outputs at different stages side by side.

All that being said, here are some tutorials to get going.


Essential Houdini – Part 1

This is a two-part series introducing the core concepts of procedural content creation in Houdini.

This is not meant to be a “how-to” or “101-tutorial” style series.
It presents the most essential understanding of Procedural approach.

Before we start, feel free to go through –
What is Houdini?
How to get it?
How is it used in various industries?

And be sure to watch some basic tutorials to be familiar with how to interact in Houdini.

If you have prior programming (or scripting) experience in any language, some concepts like functions, classes and data can be easily grasped. However, I will try as much as I can to simplify them for Artists.

So, first of all, Contexts.

Houdini – like other packages such as Maya/Max – is packed with many computer graphics toolsets/techniques. In all packages, these CG techniques are divided into categories like –
  1. Geometry
  2. Animation
  3. Particles
  4. Rigid/Soft body Simulation
  5. Compositing
  6. Rendering and more…

Houdini groups them under various Contexts like SOPs, POPs, ROPs etc…
They are short for Surface OPerators, Particle OPerators, Render OPerators…
For now, just remember that such categorization exists. The significance of these separations will be clear shortly.

Now, to understand Houdini (or almost any procedural system for that matter), you need to internalize two ideas –
Attributes (Data) and Operators (Functions).

Attributes = What data the system (or any Context) has.
Consider the cube below, which has some attributes associated with it.

[Screenshot: untitled.hip – Houdini FX 15.5]

Its attributes are positions & normals per point (or vertex, if you prefer Maya/Max terminology – though vertex has a different meaning in Houdini).

Operators = What you do with the system’s data.
With that cube, I am going to add 2 units to the Y-position of the points numbered 4 to 7 (point numbers are shown in the image above).
The language used here may sound very descriptive for a trivial operation but, as you can see below, this is exactly how you execute such an operation in Houdini.

At the moment, don’t worry about the expression language; focus on the concept that I gave Houdini a very low-level instruction to operate on the cube’s geometry attributes.

Fundamentally and technically, this is the core workflow.

There are a bunch of attributes with some values, and using node graphs you instruct Houdini to operate on those attribute values.
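The cube example can be sketched in plain Python – attributes as data, an operator as a function acting on a selected group of points. This is an analogy for the concept, not Houdini’s actual API:

```python
# Attributes: each point carries a point number and a position P.
points = [{"num": i, "P": [float(i % 2), 0.0, float(i // 2)]} for i in range(8)]

def translate_points(points, group, offset):
    """An 'operator': it acts on the attribute values of a group of
    points, like adding 2 units to Y for points 4-7 in the post."""
    for pt in points:
        if pt["num"] in group:
            pt["P"] = [p + o for p, o in zip(pt["P"], offset)]
    return points

# Operate on points 4..7 only, moving them up 2 units in Y.
translate_points(points, group=range(4, 8), offset=[0.0, 2.0, 0.0])
```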

If you have used Maya or 3ds Max before, a helpful analogy is the construction history or modifier stack – except that Houdini’s networks are insanely more flexible and powerful compared to those two packages.

In short –

[Screenshot: untitled.hip – Houdini FX 15.5]


Now going back to Contexts, the example above showcases Surface OPerators, as we are modifying attributes of a surface (geometry). That geometry can be Polygon, NURBS, Curves… And typical attributes are point positions, normals, point/triangle colors, UVs etc…

Another type of context is Compositing OPerators, with which you can work on images (as shown below).

[Screenshot: untitled.hip – Houdini FX 15.5]

COPs operate on images (or sequence of images), hence the attributes you normally deal with are pixels and per-channel values (think Photoshop).

Now, for the cool stuff.
Almost all parameters of all operators in your network are modifiable as needed.
What does that mean? Example below –

As you can see, everything you create in Houdini is non-destructive, meaning that almost all parameters of the operators are available to modify.

Part 2 goes more in depth on why this is very important and real magic of Houdini.

But for now, moving on to another cool aspect 🙂

Contexts in Houdini can interchange certain data in certain ways.
Meaning that you can prepare a model using SOPs and then send it to Particle OPerators, for its surface to be used as an emitter; or send it to Dynamics OPerators to take part in a rigid body simulation – all while keeping the modifiable aspect (shown above) of each context.

Below is an example of SOPs to POPs.

In conclusion to the first part –

  1. There is data – Attributes with values.
  2. Operators act on those attributes. Each operation adds, modifies, deletes, transfers… attribute values. Even geometry – quads, triangles, points – is essentially data to Houdini.
  3. There are various contexts with their own kind of Attributes and Operators.
  4. Everything is non-destructive and almost everything is modifiable.
  5. Contexts can transfer data between one another.

If you have any questions on this part, or find something confusing or misleading, feel free to mention it in the comments.



Essential Houdini – Part 2

This is a two-part series on the core fundamentals of procedural content creation in Houdini.
Please be sure to read Part 1 first.

When learning Houdini for asset creation, or any other procedural system for artistic purposes, a frequent piece of advice is to “rewire” your brain. This is mentioned especially when you come from other 3D packages like Maya or 3ds Max.
This part explains what “rewiring” means.

Generally speaking, traditional or digital art is created from the outside in. Artists start with big shapes/block-outs/silhouettes and then work their way in to define details, patterns, features etc…
This is partly due to how the human visual system works – we see silhouettes and shapes before inner details – and also because that is how an artist can define a cohesive vision before diving into the nitty-gritty parts.

On the other hand, software systems are generally created from the inside out. Meaning, components – Lego blocks – are identified and coded, each with plugs to fit the other components.
The reason for this approach is to promote reuse of components, distribution of tasks and realistic tracking of the work being done. Note that even with this approach, there is a high-level vision initially laid out by a system architect.

Houdini provides systematic ways to approach artistic creation.

For example, let’s say that you want to model a chair.

Instead of building it straightforwardly with polygon tools, you break it down into parts – legs, arms, seat and back support – each part having its own specification and controls. When combined, they give you a chair.
Similarly, a building could be broken down into entrance, windows, outer walls, main facade and more…
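The part-based breakdown can be sketched as a parametric builder in Python. The part names and parameters here are hypothetical stand-ins for real geometry-generating networks:

```python
def build_chair(leg_height=0.45, seat_width=0.5, back_height=0.4, arms=True):
    """A parametric 'chair system' sketch: every part is derived from
    parameters, so changing one knob re-derives the whole asset.
    Parts are stand-in records (name + dimensions), not real geometry."""
    parts = [
        *({"part": "leg", "height": leg_height} for _ in range(4)),
        {"part": "seat", "width": seat_width},
        {"part": "back", "height": back_height},
    ]
    if arms:
        parts += [{"part": "arm"} for _ in range(2)]
    return parts

# Variations come for free: a stool is just the same system, re-parameterized.
stool = build_chair(back_height=0.0, arms=False)
```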


Now, this looks like an overly complicated process just to build a model.
That’s because it is. So why do it?

Approaching asset creation this way offers two major benefits (among a few others), as opposed to diving in and creating one unique piece.

Once you have set up the system correctly, you can create many different variations of the same asset type.

As we build the asset out of Lego blocks, adding more details/features to the existing system is possible while keeping all the previous systems functional.
Just a note here: fast iterations are possible as long as you don’t try to make fundamental changes to the asset. Extending the chair system to output a sofa, or even a seesaw, works – but not morphing it into a bike.

I have tried to present and advocate the procedural way of defining and creating assets in these posts, and I hope it is somewhat clear.
Now for actually learning Houdini, Go Procedural has many good tutorials to get started.