This post describes my first impressions of procedural texturing software – Substance Designer. To make the impressions more personal, I am diving in without watching any tutorials or reading documentation. The only exposure I have so far is attending a GDC 2016 talk by Naughty Dog and a few conversations with friends/colleagues.
I’ll go about creating a simple, programmer-art-like substance and record my thoughts. During the process I may take a few detours to explain UX, rendering techniques, procedural terms and performance-related concerns. Hopefully, all of those will be relevant to the main topic here – procedural texturing.
Alright, I launched the app and created a new substance from the welcome screen.
The substance template I picked is Empty, but you can pick from other physically based ones, depending on the textures you need to export. In the end, you are picking which maps you will export and then authoring the information that feeds those maps.
The image above is Designer’s UI. It seems pretty straightforward if you have used any node-based editor (including Unreal’s Material and/or Blueprint editors).
It seems logical to start by defining what I need to export. So, a right-click brings up options to add nodes, and I am adding Output nodes (like basecolor below).
For this test I decided to export a custom set of maps for a custom material – why not go off the rails from the get-go 🙂. Below are the maps I am exporting.
It’s time to import some maps as input.
And when I imported a PSD file, three interesting things happened.
- Designer prompted me to choose which layers I would like to include in the import, which is nice.
- As I had one RGB layer and an adjustment layer on top, Designer created two Bitmap nodes. This is also nice, but a bit unexpected with adjustment layers.
(Edit – During a second import I realized that had I chosen just the layer I wanted, Designer wouldn’t have created two Bitmaps.)
- And finally, as the width of my PSD was not a power of 2, Designer squashed the bitmap to fit the nearest power of 2. This is smart behavior, but I would like an option to either crop or squash/stretch when importing a resource – or maybe just not change my texture dimensions at all.
Why do we create game textures in power-of-2 dimensions?
This comes down to how GPUs handle texture memory. The GPU stores and fetches texture data in fixed-size blocks, and power-of-2 dimensions align cleanly with those blocks, so no memory or bandwidth is wasted on padding. Power-of-2 textures also produce a clean mipmap chain – each mip level is exactly half the size of the previous one – and they tile correctly with wrap-around addressing. Texture memory is a very important resource in GPU rendering. Add to that the fact that previous-gen consoles outright prohibited textures with arbitrary dimensions. Nowadays, in many game engines it’s technically possible to use non-power-of-2 textures, but for the above reasons it’s best to stay with power of 2.
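To see what Designer likely did to my PSD, here is a small Python sketch of snapping an arbitrary dimension to the nearest power of two (the function names are mine, purely for illustration):

```python
def is_power_of_two(n: int) -> bool:
    """A positive integer is a power of two iff it has exactly one bit set."""
    return n > 0 and (n & (n - 1)) == 0

def nearest_power_of_two(n: int) -> int:
    """Round a texture dimension to the closest power of two (ties round up)."""
    if n <= 1:
        return 1
    lower = 1 << (n.bit_length() - 1)   # largest power of two <= n
    upper = lower << 1                  # smallest power of two > n
    return lower if (n - lower) < (upper - n) else upper
```

So a 1000-pixel-wide PSD would land on 1024 – exactly the kind of squash I saw on import.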
The next step is to remove seams and make the concrete tileable, and Designer has a node for exactly that – Make It Tile 🙂
After the input is tileable, I start preparing each output texture, beginning with base color (or Albedo). To prepare the Albedo, I need to remove all “lighting information” from my source/input. Designer provides nodes to remove low-frequency and high-frequency lighting from the source. So I get this –
The base color looks very flat and uninteresting, right? That’s intentional, and due to PBR requirements. More on PBR shortly.
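I don’t know exactly what Designer’s lighting-removal nodes do internally, but a common approximation for stripping low-frequency lighting is a high-pass filter: subtract a blurred copy of the image and re-center around mid-grey. A rough Python sketch under that assumption (all names are mine):

```python
def box_blur(img, radius):
    """Simple edge-clamped 2D box blur -- this is the low-frequency part."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            total = count = 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    sy = min(max(y + dy, 0), h - 1)
                    sx = min(max(x + dx, 0), w - 1)
                    total += img[sy][sx]
                    count += 1
            row.append(total / count)
        out.append(row)
    return out

def remove_low_frequency(img, radius=2):
    """High-pass: subtract the blurred signal, re-center around mid-grey,
    and clamp to [0, 1] -- roughly the idea behind a 'remove lighting' step."""
    blurred = box_blur(img, radius)
    return [[min(max(p - b + 0.5, 0.0), 1.0) for p, b in zip(pr, br)]
            for pr, br in zip(img, blurred)]
```

A uniform input comes out as flat mid-grey, which matches how flat and uninteresting a properly de-lit Albedo looks.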
The normal map took a bit more work than the Albedo, but most of the operations were logical and straightforward. As shown below, the nodes used are Levels, Contrast, Blur, Blend etc. – mostly what you’d use in Photoshop to extract information into a height map. This height map is then fed to a Normal node, which outputs a normal map from the height map (i.e. the main reason CrazyBump exists).
While trying to prepare the normal map, I took a detour into the Pixel Processor node. I must say this is the only part of Designer (so far) that didn’t act as I predicted, and in the end I got nothing out of it.
What is Pixel Processor?
It’s a sub-node in your main graph, in which you can build your own “math” for processing pixels. You can write your own generators, or take some node as input and work on it per-pixel. Basically, it allows the lowest level of operations you can possibly do on pixels, e.g. add/subtract, multiply/divide and many others.
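To make that concrete, here is a tiny Python stand-in for what a Pixel Processor does conceptually – evaluate a small function at every pixel, either as a generator or as per-pixel math on an existing input (the names and structure are mine, not Designer’s):

```python
def pixel_process(width, height, fn):
    """Minimal stand-in for a Pixel Processor: evaluate a per-pixel
    function over normalized (u, v) coordinates."""
    return [[fn(x / (width - 1), y / (height - 1))
             for x in range(width)]
            for y in range(height)]

# A tiny "generator": a horizontal linear gradient.
gradient = pixel_process(4, 4, lambda u, v: u)

# Per-pixel math on an existing input: a simple invert.
inverted = [[1.0 - p for p in row] for row in gradient]
```

In Designer you build the same logic by wiring math nodes instead of typing expressions, which is part of why it felt cryptic to me at first.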
And for the final “Others Map” (temporary name), I am merging various grayscale maps into one, to be later split and used in the shader.
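This merge-and-split trick is plain channel packing: one grayscale map per RGB channel, recovered later in the shader. A quick Python sketch of the round trip (function names are mine):

```python
def pack_channels(r_map, g_map, b_map):
    """Pack three grayscale maps into one RGB image, one map per channel --
    the same trick as the "Others Map" above."""
    return [[(r, g, b) for r, g, b in zip(rr, gr, br)]
            for rr, gr, br in zip(r_map, g_map, b_map)]

def unpack_channel(packed, channel):
    """Recover one grayscale map from the packed image (0=R, 1=G, 2=B)."""
    return [[px[channel] for px in row] for row in packed]
```

Three maps now ride in a single texture fetch, which is exactly why this pattern is popular for roughness/metallic/AO-style data.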
Once I export all three maps and build a shader/material, my meshes are ready to be shaded as per the defined PBR rules. Which brings me to –
What is PBR and why do it?
I am surely late to the PBR advocacy party, but the shortest and most direct reason is this – “Because now you can finally do PBR, so why not do it? Unless your shading style is non-photorealistic.”
PBR stands for Physically Based Rendering, which gives you more accurate and predictable results as opposed to the non-PBR shading we were doing in the previous generation.
Two things we were doing wrong before, because there was no choice:
- Lighting was “baked” into the diffuse texture. Think about this – you are supposed to calculate diffuse lighting at runtime based on the color map, normal map etc., but we were already making textures with lighting in them, so the lighting was wrong all the time. It looked right, but it fell apart with a changing time of day and other lighting conditions – indoor/outdoor.
- We had fancy sliders to “fake” specularity and other phenomena. To make a material look right, we would sometimes crank these sliders up or down – so much so that at times more light was bouncing off the object than the total amount coming from the light source, breaking the law of energy conservation. So what is the big deal?
Well, place another instance of the same object in a different lighting condition and you need another instance of the material to adjust the sliders again.
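Energy conservation itself is a simple constraint: a surface cannot reflect more light than it receives, so diffuse plus specular reflectance must not exceed one. A minimal Python sketch of enforcing it (a simplification – real PBR shaders do this per color channel inside the BRDF):

```python
def enforce_energy_conservation(diffuse, specular):
    """If diffuse + specular reflectance exceeds 1, the material would emit
    more light than it receives. Rescale both so the sum stays <= 1."""
    total = diffuse + specular
    if total > 1.0:
        diffuse, specular = diffuse / total, specular / total
    return diffuse, specular
```

With the old fake-specular sliders there was no such clamp, which is exactly how materials ended up “brighter than the light.”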
With new-gen consoles, GPUs got faster, so we can do more calculations for lighting/shading accuracy. GPU memory got bigger, so we can pack more textures per material. And thus grew the need to author textures/materials that follow the physics of light and assure artists that a material (wood, leather, fabric, metal…) will behave as it should in any condition (day, night, indoor, outdoor, rain, snow, sun…).
A final point to add – PBR is all the more reason to use Substance Designer.
With PBR requirements, the number of textures to be produced has increased considerably, and Substance Designer is the right kind of tool in an artist’s toolbox to smartly handle such complex tasks.
In conclusion to my first impressions –
- Substance Designer is a fast, stable and versatile tool for the task of procedural texturing.
- There is much more to it than what I tried, e.g. generators, mesh adaptors, pre-authored and shareable PBR materials, pixel processors etc.
- Substance files (.sbs) are inherently XML documents. So all you TDs/Tech Artists, go break them apart for whatever pipeline needs you might have.
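Since .sbs is XML, pipeline scripts can read it with any XML library. A Python sketch with `xml.etree.ElementTree` – note the fragment below is entirely hypothetical; the real .sbs element and attribute names will differ, so treat this as the shape of the approach, not the actual schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical fragment for illustration only -- not the real .sbs schema.
sbs_fragment = """
<package>
  <graph identifier="concrete_tiles">
    <output identifier="basecolor"/>
    <output identifier="normal"/>
  </graph>
</package>
"""

root = ET.fromstring(sbs_fragment)
# Walk the tree and collect output names, as a pipeline audit script might.
outputs = [o.get("identifier") for o in root.iter("output")]
```

The same pattern (parse, walk, query attributes) works for batch-renaming outputs, validating naming conventions, or diffing substances in version control.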
- There are Substance plugins for all popular 3D packages and Unreal/Unity3D, so you can take full advantage of procedural maps with your shaders and lighting, in-engine.
What it may do to improve –
- The graph could use some color coding and grouping enhancements (think Unreal’s material editor).
- The Pixel Processor is really simple but looks cryptic in the beginning. Also, I couldn’t figure out a way to share my processor nodes, but I think there must be a way.
And for all the coders, it would be great to have a pixel processor node you can directly write math inside (think VEX Wrangle nodes in Houdini), with some basic interpreter-style code compiling.
- I couldn’t figure out a way to have multiple 2D views, so I could compare outputs at different stages side by side.
All that being said, here are some tutorials to get going.