
Substance Designer – First Impressions

This post describes my first impressions of the procedural texturing software Substance Designer. To make the impressions more personal, I am diving in without watching any tutorials or reading documentation. The only exposure I have so far is attending a GDC 2016 talk by Naughty Dog and a few conversations with friends/colleagues.

If you have never heard of Substance Designer –
What is Substance Designer?
How to get it?
How is it used in the games industry?

I’ll go about creating a simple, programmer-art-like substance and record my thoughts. During the process I may take a few detours to explain UX, rendering techniques, procedural terms and performance-related concerns. Hopefully, all of those will be relevant to the main topic here – procedural texturing.


Alright, I launched the app and created a new substance from the welcome screen.

The substance template I picked is Empty, but you can pick from other physically based ones, depending on the textures you need to export. In the end, you are picking which maps you will export and then authoring the information that feeds those maps.

[Image: Substance Designer’s UI]

The image above is Designer’s UI. It seems pretty straightforward if you have used any node-based editor (including Unreal’s Material and/or Blueprint editors).

It seems logical to start by defining what I need to export. So, a right-click brings up options to add nodes, and I am adding Output nodes (like basecolor below).

[Image: adding an Output node (basecolor)]

For this test I decided to export a custom set of maps for a custom material – why not go off the rails from the get-go 🙂. Below are the maps I am exporting.

[Image: the custom set of output maps]

It’s time to import some maps as input.
When I imported a PSD file, three interesting things happened.

  • Designer prompted me to choose which layers I would like to include in the import, which is nice.
  • As I had one RGB layer and an adjustment layer on top, Designer created two Bitmap nodes. This is also nice, but a bit unexpected with adjustment layers.
    [Image: two Bitmap nodes created from the PSD import]
    (Edit – During a 2nd import I realized that had I chosen just the layer I wanted, Designer wouldn’t have created two Bitmaps.)
  • And finally, as the width of my PSD was not a power of 2, Designer squashed the Bitmap to fit the nearest power of 2. This is smart behavior, but I would like an option to either crop or squash/stretch when importing a resource. Or maybe just not change my texture dimensions at all.

Why do we create game textures in power-of-2 dimensions?

This comes down to how GPUs store and address texture memory. Texture data is kept in fixed-size blocks, and power-of-2 dimensions line up cleanly with those blocks and with mipmap chains (each mip level is exactly half the size of the previous one), so nothing is wasted on padding and cache usage stays efficient while sampling. Texture memory is a very important resource in GPU rendering. Add to that the fact that on previous-gen consoles, textures with arbitrary dimensions were simply prohibited. Nowadays many game engines can technically handle non-power-of-2 textures, but for the above reasons it’s best to stay with powers of 2.
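To make the power-of-2 idea concrete, here is a small Python sketch of the check and the rounding an importer has to perform (the function names are mine, purely for illustration):

```python
def is_power_of_two(n):
    # A power of two has exactly one bit set, so n & (n - 1) clears it to zero.
    return n > 0 and (n & (n - 1)) == 0

def next_power_of_two(n):
    # Round a dimension up to the nearest power of two, as an importer might.
    p = 1
    while p < n:
        p *= 2
    return p

print(is_power_of_two(1000))    # a 1000px-wide PSD fails the check
print(next_power_of_two(1000))  # and would get resized to 1024
```

Whether Designer rounds up or down to the nearest power of 2 I haven’t verified, so treat the rounding direction here as an assumption.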

The next step is to remove seams and make the concrete tileable, and Designer has a node for it – Make It Tile 🙂

[Image: the Make It Tile node removing seams]

 

After the input is tileable, I start preparing each output texture, beginning with the base color (or albedo). To prepare albedo, I need to remove all “lighting information” from my source/input. Designer provides nodes to remove low-frequency and high-frequency lighting from the source. So I get this –

[Image: albedo with lighting information removed]

The base color looks very flat and uninteresting, right? That’s intentional and a PBR requirement. More on PBR shortly.

The normal map took a bit more work than preparing the albedo, but most of the operations were logical and straightforward. As shown below, the nodes used are Levels, Contrast, Blur, Blend etc. – mostly what you would use in Photoshop to extract information into a height map. This height map is then fed to a Normal node, which outputs a normal map from the height map (i.e. the main reason CrazyBump exists).

[Image: node graph producing the normal map]
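For the curious, the height-to-normal step that the Normal node (or CrazyBump) performs boils down to taking slopes of the height field. Here is a minimal Python sketch using central differences; real tools also handle tiling/wrapping, filtering and packing into a normal-map texture:

```python
import math

def height_to_normals(height, strength=1.0):
    # Slope of the height field via central differences, clamped at the borders.
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            dx = (height[y][min(x + 1, w - 1)] - height[y][max(x - 1, 0)]) * strength
            dy = (height[min(y + 1, h - 1)][x] - height[max(y - 1, 0)][x]) * strength
            # The surface normal leans against the slope; normalize it.
            nx, ny, nz = -dx, -dy, 1.0
            length = math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx / length, ny / length, nz / length))
        normals.append(row)
    return normals

flat = [[0.0] * 3 for _ in range(3)]
print(height_to_normals(flat)[1][1])  # a flat height field gives straight-up normals
```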

While trying to prepare the normal map, I took a detour to the Pixel Processor node. I must say that this is the only part of Designer (so far) which didn’t act as I predicted, and in the end I got nothing out of it.

What is Pixel Processor?

It’s a sub-node in your main graph, in which you can build your own math for processing pixels. You can write your own generators or take some node as input and work on it per-pixel. Basically, it allows the lowest-level operations you can possibly do to pixels, e.g. add/subtract, multiply/divide and many others.
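As a rough illustration of the kind of per-pixel math the Pixel Processor builds from nodes, here is the same idea expressed directly in Python (grayscale images as nested lists; the helper function is mine, not part of Designer):

```python
def pixel_process(a, b, op):
    # Apply a binary operation to every pair of pixels from two equally
    # sized grayscale images - the essence of what the node does.
    return [[op(pa, pb) for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

img_a = [[0.2, 0.4], [0.6, 0.8]]
img_b = [[0.5, 0.5], [0.5, 0.5]]

multiplied = pixel_process(img_a, img_b, lambda x, y: x * y)
added = pixel_process(img_a, img_b, lambda x, y: min(x + y, 1.0))  # clamped add
```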

And for the final “Others Map” (temporary name), I am merging various grayscale maps into one, to be later split and used in the shader.

[Image: grayscale maps merged into the “Others Map”]

Once I export all three maps and build a shader/material, my meshes are ready to be shaded as per the defined PBR rules. Which brings me to –

What is PBR and why do it?

I am surely late to the PBR advocacy party, but the shortest and most direct reason is – “Because now you finally can do PBR, so why not do it? Unless your shading style is non-photorealistic.”
PBR stands for Physically Based Rendering, which gives you more accurate and predictable results as opposed to the non-PBR shading we were doing in the previous generation.

Two things we were doing wrong before, because there was no choice.

  1. Lighting was “baked” into the diffuse texture. Think about this – you are supposed to calculate diffuse lighting at runtime based on the color map, normal map etc., but we were making textures with lighting already in them, so the lighting was wrong all the time. It looked right, but it fell apart with changing time of day and other lighting conditions – indoor/outdoor.
  2. We had fancy sliders to “fake” specularity and other phenomena. To make a material look right, we would sometimes crank these sliders up or down – so much so that at times more light bounced off the object than the total amount coming from the light source, breaking the law of energy conservation. So what is the big deal?
    Well, place another instance of the same object in a different lighting condition and you need another instance of the material to adjust the slider again.
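In code terms, the rule those old sliders kept breaking is simply that a surface may not reflect more light than it receives. A toy Python check (names are mine; real PBR shaders enforce this inside the BRDF, per angle and per wavelength):

```python
def is_energy_conserving(diffuse, specular):
    # The fractions of incoming light reflected diffusely and specularly
    # must not sum past 1.0 - the remainder is absorbed.
    return 0.0 <= diffuse and 0.0 <= specular and diffuse + specular <= 1.0

print(is_energy_conserving(0.7, 0.25))  # a plausible material
print(is_energy_conserving(0.9, 0.6))   # "cranked slider": reflects 150% of the light
```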

With the new-gen consoles, GPUs got faster, so we can do more calculations for lighting/shading accuracy. GPU memory got bigger, so we can pack more textures per material. And thus grew the need to author textures/materials which follow the physics of light and assure artists that a material (wood, leather, fabric, metal…) will behave as it should in any condition (day, night, indoor, outdoor, rain, snow, sun…).

A final point to add – PBR is all the more reason to use Substance Designer.
With PBR requirements, the number of textures to be produced has increased considerably, and Substance Designer is the right kind of tool in an artist’s toolbox to smartly handle such complex tasks.

In conclusion to my first impressions –

  1. Substance Designer is a fast, stable and versatile tool for the task of procedural texturing.
  2. There is much more to it than what I tried, e.g. generators, mesh adaptors, pre-authored and shareable PBR materials, pixel processors etc.
  3. Substance files (.sbs) are inherently XML documents. So all you TDs/tech artists, go break them apart for whatever pipeline needs you might have.
  4. There are Substance plugins for all popular 3D packages and Unreal/Unity3D, so you can take full advantage of procedural maps with your shaders and lighting, in-engine.
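Point 3 is worth demonstrating: since .sbs is XML, the standard library is all a TD needs to start poking at it. The snippet below parses a simplified stand-in document – the element names here are illustrative, not the actual .sbs schema:

```python
import xml.etree.ElementTree as ET

# A simplified stand-in for an .sbs file; the element names are
# illustrative only - the real schema differs, but it is plain XML all the same.
sbs_text = """<package>
  <content>
    <graph identifier="concrete_tiles">
      <compNode uid="1" name="Make It Tile"/>
      <compNode uid="2" name="Normal"/>
    </graph>
  </content>
</package>"""

root = ET.fromstring(sbs_text)
graphs = [g.get("identifier") for g in root.iter("graph")]
nodes = [n.get("name") for n in root.iter("compNode")]
print(graphs, nodes)
```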

What it may do to improve –

  1. The graph could use some color coding and grouping enhancements (think Unreal’s material editor).
  2. The Pixel Processor is really simple but looks cryptic in the beginning. Also, I couldn’t figure out a way to share my processor nodes, but I think there must be a way.
    And finally, for all the coders, it would be great to have a pixel processor node you can directly write math inside (think VEX Wrangle nodes in Houdini), with some basic interpreter-style code compilation.
  3. I couldn’t figure out a way to have multiple 2D views so I can compare outputs at different stages, side by side.

All that being said, here are tutorials to get going.

[Image: Houdini 15 logo]

Essential Houdini – Part 1

This is a two part series to introduce core concepts of procedural content creation in Houdini.

This is not meant to be a “how-to” or “101-tutorial” style series.
It presents the most essential understanding of Procedural approach.

Before we start, feel free to go through –
What is Houdini?
How to get it?
How is it used in various industries?

And be sure to watch some basic tutorials to be familiar with how to interact in Houdini.

If you have prior programming (or scripting) experience in any language, some concepts like functions, classes and data can be easily grasped. However, I will try as much as I can to simplify them for artists.


So, first of all, Contexts.

Houdini (like other packages such as Maya/Max) is packed with many computer graphics toolsets/techniques. In all packages, these CG techniques are divided into categories like –

  1. Geometry
  2. Animation
  3. Particles
  4. Rigid/Soft body Simulation
  5. Compositing
  6. Rendering and more…

Houdini groups them under various Contexts like SOPs, POPs, ROPs etc…
They are short for Surface OPerators, Particle OPerators, Rendering OPerators…
For now, just remember that such categorization exists. The significance of these separations will be clear shortly.


Now, to understand Houdini (or almost any procedural system for that matter), you need to internalize two ideas –
Attributes (Data) and Operators (Functions).

Attributes = What data the system (or any Context) has.
Consider the cube below, which has some attributes associated with it.

[Image: cube with numbered points in Houdini]

Its attributes are positions & normals per point (or per vertex, if you prefer Maya/Max terminology; note that “vertex” has a different meaning in Houdini).

Operation = what you do with the system’s data.
With that cube, I am going to add 2 units to the Y-position of the points whose numbers range from 4 to 7 (point numbers are shown in the image above).
The language used here may sound very descriptive for a trivial operation but, as you can see below, this is exactly how you execute such an operation in Houdini.

At the moment, don’t worry about the expression language; focus on the concept that I gave a very low-level instruction to Houdini to operate on the cube’s geometry attributes.
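Stripped of Houdini specifics, the concept looks like this in plain Python – attributes are just data, and an operator is a function that edits attribute values for the points you select (a toy sketch, not Houdini’s API):

```python
# Eight points, each carrying a position attribute - a stand-in for the cube.
points = {i: {"P": [0.0, 0.0, 0.0]} for i in range(8)}

def translate_y(points, point_numbers, amount):
    # The "operator": it does nothing but edit attribute values on the
    # selected points, which is all most geometry operators ultimately do.
    for num in point_numbers:
        points[num]["P"][1] += amount

translate_y(points, range(4, 8), 2.0)  # add 2 units to Y of points 4-7
```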

Fundamentally and technically, this is the core workflow.

There are a bunch of attributes with some values, and using node graphs you instruct Houdini to operate on those attribute values.

If you have used Maya or 3ds Max before, a helpful analogy is the construction graph or modifier stack – except that Houdini’s networks are vastly more flexible and powerful compared to the other two packages.

In short –

[Image: attributes-and-operators summary]

 

Now going back to contexts: the example above showcases Surface OPerators, as we are modifying attributes of a surface (geometry). That geometry can be polygons, NURBS, curves… and typical attributes are point positions, normals, point/primitive colors, UVs etc.

Another type of context is Compositing OPerators, with which you can work on images (as shown below).

[Image: COP network operating on an image]

COPs operate on images (or sequence of images), hence the attributes you normally deal with are pixels and per-channel values (think Photoshop).


Now, for the cool stuff.
Almost all parameters of all operators in your network are modifiable as needed.
What does that mean? Example below –

[Image: PolyExtrude parameters being changed after creation]
As you can see, everything you create in Houdini is non-destructive, meaning that almost all parameters of the operators remain available to modify.
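A toy Python sketch of why this matters: when every node re-cooks from its current parameters, changing an upstream parameter after the fact automatically updates everything downstream (the Node class here is illustrative, not Houdini’s):

```python
class Node:
    # Toy operator: cook() always re-evaluates from current parameters and
    # inputs, so edits anywhere upstream flow downstream automatically.
    def __init__(self, func, **parms):
        self.func = func
        self.parms = parms
        self.inputs = []

    def cook(self):
        upstream = [node.cook() for node in self.inputs]
        return self.func(upstream, self.parms)

# A "box" generator feeding an "extrude"-like modifier.
box = Node(lambda ins, p: [p["size"]] * 4, size=1.0)
extrude = Node(lambda ins, p: [v + p["distance"] for v in ins[0]], distance=0.5)
extrude.inputs.append(box)

first = extrude.cook()
box.parms["size"] = 2.0   # change the upstream parameter after the fact
second = extrude.cook()   # the downstream result updates - nothing was baked
```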

Part 2 goes more in depth on why this is very important and the real magic of Houdini.


But for now, moving on to another cool aspect :)

Contexts in Houdini can interchange certain data in certain ways.
Meaning that you can prepare a model using SOPs and then send it to Particle OPerators for its surface to be used as an emitter, or send it to Dynamics OPerators to take part in a rigid body simulation – while still keeping the modifiable aspect (shown above) of each context.

Below is an example of SOPs to POPs.
[Image: SOP geometry used as a POP emitter]


In conclusion of first part,

  1. There is data – Attributes with values.
  2. Operators act on those attributes. Each operation adds, modifies, deletes or transfers attribute values. Even geometry – quads, triangles, points – is essentially data to Houdini.
  3. There are various contexts with their own kind of Attributes and Operators.
  4. Everything is non-destructive and almost everything is modifiable.
  5. Contexts can transfer data between one another.

If you have any questions on this part, or find something confusing or misleading, feel free to mention it in the comments.

 

[Image: Houdini logo]

Essential Houdini – Part 2

This is a two part series on core fundamentals of procedural content creation in Houdini.
Please be sure to read Part 1 first.


When learning Houdini for asset creation, or learning any other procedural system for artistic purposes, a frequent piece of advice is to “rewire” your brain. This is especially mentioned when you are coming from other 3D packages like Maya or 3ds Max.
This part explains what “rewiring” means.


Generally speaking, traditional or digital art is created from the outside in. Artists start with big shapes/blockouts/silhouettes and then work their way in to define details, patterns, features etc.
This is partly due to how the human visual system works: we see silhouettes and shapes before inner details. It is also how an artist can define a cohesive vision before diving into the nitty-gritty parts.

On the other hand, software systems are generally created from the inside out, meaning that components – or Lego blocks – are identified and coded, each with plugs to fit other components.
The reason for this approach is to promote reuse of components, distribution of tasks and realistic tracking of the work being done. Note that even with this approach there is a high-level vision initially laid out by a system architect.

Houdini provides systematic ways to approach artistic creation.

For example, let’s say that you want to model a chair.

[Image: chair broken down into component parts]
Instead of building it straightforwardly using polygon tools, you break it down into parts like legs, arms, seat and back support – each part having its own specification and controls. When combined, they give you a chair.
Similarly, a building can be broken down into entrance, windows, outer walls, main facade and more…

[Image: building kit-of-parts by Thomas Deckert]

Now, this looks like an overly complicated process to build just a model.
That’s because it is. Then why do it?

Approaching asset creation this way has two major benefits (among a few others), as opposed to diving in and creating one unique piece.

Variations
Once you have set up the system correctly, it becomes possible to create many different variations of the same asset type.
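A quick Python sketch of the idea – once the chair is a parametric system, variations are just different parameter samples (the part breakdown and parameter names here are illustrative):

```python
import random

def make_chair(leg_height=0.45, seat_width=0.5, back_height=0.4, arms=True):
    # Each part carries its own specification; combined, they describe one chair.
    parts = {
        "legs": {"count": 4, "height": leg_height},
        "seat": {"width": seat_width, "depth": seat_width},
        "back": {"height": back_height},
    }
    if arms:
        parts["arms"] = {"count": 2}
    return parts

# Variations fall out of the same system just by sampling the parameters.
variations = [
    make_chair(leg_height=random.uniform(0.40, 0.50),
               seat_width=random.uniform(0.45, 0.60),
               arms=random.random() < 0.5)
    for _ in range(10)
]
```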

Iterations
As we build the asset in terms of Lego blocks, adding more details/features to the existing system is possible while keeping all the previous systems functional.
Just a note here: fast iterations are possible as long as you don’t try to make fundamental changes to the asset. I mean, extending the chair system to output a sofa or even a seesaw works, but not morphing it into a bike.


I have tried to present and advocate the procedural way of defining and creating assets in these posts, and I hope it’s somewhat clear.
Now, for actually learning Houdini, Go Procedural has many good tutorials to get started.

Point Cloud in Houdini

Point clouds are an amazing feature in Houdini.
Below is a very basic example showing the simplest use of a point cloud, using a VEXpression.

Setup

[Image: point cloud example network setup]

VEX Code in PointWrangle SOP


float maxradius = 5;
vector nb_color;
float dist;

int handle = pcopen(@OpInput2, "P", @P, maxradius, 10);

while(pciterate(handle))
{
    pcimport(handle, "Cd", nb_color);
    pcimport(handle, "point.distance", dist);
    @Cd = lerp(@Cd, nb_color, dist/maxradius);
}

 

Alright, so let’s walk through the code.
First we declare all required variables.

float maxradius = 5;
vector nb_color;
float dist;

Then we use pcopen to “open” a point cloud.
Basically we are asking to retrieve up to 10 points from the geometry connected to the 2nd input (@OpInput2), within the given maximum radius around the position (@P) of the point currently being processed (from the geometry in the 1st input).

int handle = pcopen(@OpInput2, "P", @P, maxradius, 10);

Now, we “iterate” through all the retrieved points.
This is similar to pseudo code : for(int i = 0; i < numpointsincloud; i++).
But Houdini provides pciterate, which will iterate through all the points and return false once every point in the cloud has been processed.

while(pciterate(handle))

Remember that VEX runs per point (or per primitive…), so we are not iterating through the points of our main geometry but through the points of the cloud around the current point – let’s call it the main point.
Now we import whichever attribute we want from this cloud into local variables using pcimport.
In our case, we want the color (Cd) of the points and their distance from the main point.

pcimport(handle, "Cd", nb_color);
pcimport(handle, "point.distance", dist);

And finally, we use imported data to alter attribute(s) of main point.

@Cd = lerp(@Cd, nb_color, dist/maxradius);
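For readers more comfortable in Python than VEX, here is a pure-Python analogue of the whole wrangle – gather nearby cloud points, then lerp the main point’s color toward each neighbour. This is a sketch of the behavior only, not how Houdini’s pcopen is actually implemented:

```python
import math

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def blend_from_cloud(main_pos, main_color, cloud, maxradius=5.0, maxpoints=10):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # pcopen: gather the nearest points within the search radius.
    nearby = sorted(
        (pair for pair in ((dist(main_pos, pt["P"]), pt) for pt in cloud)
         if pair[0] <= maxradius),
        key=lambda pair: pair[0],
    )[:maxpoints]

    # pciterate + pcimport + lerp, as in the wrangle above.
    color = main_color
    for d, pt in nearby:
        color = lerp(color, pt["Cd"], d / maxradius)
    return color

cloud = [{"P": (2.5, 0.0, 0.0), "Cd": (1.0, 0.0, 0.0)}]
print(blend_from_cloud((0.0, 0.0, 0.0), (0.0, 0.0, 0.0), cloud))
```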

 

Result

[Image: resulting point colors blended from the cloud]

HOUtilities

Introducing HOUtilities, a collection of small to large utilities and tools written in Python/PySide to make Houdini artists’ and TDs’ lives easier.
Source code coming soon, after adding all the features for version 1.0 and proper testing.

Current utilities include –

Frequent OPs
Quick access to the most frequently used OPs.
This will allow users to design their own preferences, with user-defined frequent OPs, colors per OP category, context-sensitive OPs, as well as auto-naming etc.

[Image: Frequent OPs panel]

HDA Manager
HDA Manager is designed to make dealing with lots of HDAs easier.
Features will include the ability to favourite HDAs to keep them at the top, saving individual as well as all unlocked HDAs, and quickly fetching various information about any HDA.

[Image: HDA Manager panel]

Miscellaneous Utilities
As the name suggests, these are small but sometimes handy utilities.
The first one adds a Switch SOP after the selected SOP and exposes the switch’s input parameter on the parent Geo or HDA as a toggle – all in one click.

[Image: switch utility before and after]

Multiblend shader with Snow

Below are a few screenshots of a shader I am working on.

It blends two sets of texture maps (diffuse, normal, specular) based on vertex alpha and the diffuse alpha channel, and applies snow color, normal map & specular to pixels facing upwards.


[Image: multiblend shader with snow – screenshot 1]

[Image: multiblend shader with snow – screenshot 2]

Add a switch to selected Houdini node

Here is a quick way to add a switch to the currently selected Houdini node.
For further improvement, it would be great to have the switch input exposed on the parent HDA’s interface.
Also an option to add an automatic Null input to the Switch SOP –
i.e. a “switch to nothing” mode.


import hou

def addSwitchForDebug():
	'''
	Insert a Switch SOP after the selected node so the node can be bypassed.
	(Exposing the switch input on the parent is left as a further improvement.)
	'''
	currentSel = hou.selectedNodes()
	if len(currentSel) == 1:
		current = currentSel[0]
		opName = "switch"

		# Get inputs and outputs of the selected node
		outNodes = current.outputs()
		inNodes = current.inputs()

		# Create the switch as an output of the selected node
		newNode = current.createOutputNode(opName)
		newNode.setName(opName, unique_name=True)

		# Input 0 = the bypass path (the selected node's own input),
		# input 1 = the selected node itself
		if len(inNodes) > 0:
			newNode.setFirstInput(inNodes[0])
			newNode.setNextInput(current)

		# Insert the switch between the selected node and its first output
		if len(outNodes) > 0:
			currentOut = outNodes[0]
			currentOut.setFirstInput(newNode)

		newNode.moveToGoodPosition()