Blending in Detail

rnmBlendRepack does not work very well.

Yes, sorry, I gave bad advice earlier; you need to add 0.5 when repacking, not 1 (standard unpacking is nu = n*2 - 1; the reverse is (nu + 1)/2, which is the same as nu/2 + 0.5).
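In other words, the repacking step should look something like this (a minimal sketch; the helper name is mine, and nu is the blended, unpacked normal):

float3 repackNormal(float3 nu)
{
	// Inverse of the standard unpacking nu = n*2 - 1:
	// map [-1, 1] back to texture range [0, 1]
	return nu*0.5 + 0.5;
}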

it has a strange problem ... you can see some random highlights on the unlit side of a sphere

It’s a bit hard to debug remotely, but there are a couple of potential issues that could creep in due to negative z values. Try the following functions (one at a time):

float3 UnpackNormalSafer(float4 packednormal)
{
	// DXT5nm-style packing: x is stored in alpha, y in green
	float3 normal;
	normal.xy = packednormal.wy*2 - 1;

	// Reconstruct z, guarding against negative values under the sqrt
	// that can arise from compression and filtering errors
	float d = dot(normal.xy, normal.xy);
	normal.z = (d <= 1) ? sqrt(1 - d) : 0;
	return normalize(normal);
}

float3 rnmBlendUnpackedClampZ(float3 n1, float3 n2)
{
	n1 += float3( 0, 0, 1);
	n2 *= float3(-1, -1, 1);
	float3 n = n1*dot(n1, n2)/n1.z - n2;

	// Clamp the result to the upper hemisphere, so z is never negative
	if (n.z < 0)
		n = normalize(float3(n.x, n.y, 0));
	return n;
}

I tried both rnmBlendUnpackedClampZ and UnpackNormalSafer (piping the latter into rnmBlendUnpackedClampZ), but that didn't really get rid of all the highlights in the dark area.

However, if I turn on shadows, they attenuate the highlights on the dark side, so it's probably not a real issue, or maybe a Unity issue?

I think the highlights visible with RNM are actually extra normal detail that is lost with UDN, because they depend on the light angle: moving the light around slightly, you can see them come in and out.

I'm still a bit confused about it, though. Thank you for all the help; I learned a lot!
I can post links to a Unity project if you would like to debug/see it for yourself; it should run in the free version. Let me know.

Sure, I'd like to see it, plus it'd be a good excuse to finally try Unity.

Here are the links:

Install Unity, then:

If you start from a new blank project, import this package:
http://www.inversethought.com/...

Or just unzip this complete project and double-click on test_secene.unity to open it:
http://www.inversethought.com/...

Thanks. I'll try and take a look later this week.

Thank you for sharing!
I recently had to do 2-layer material blending, with an additional base map on top. RNM worked perfectly for keeping the high-frequency details of the normal maps!

RNM worked perfectly for keeping the high-frequency details of the normal maps!

Nice!

Just tried it in my engine, and it looks pretty great! But is there any way to have a blend factor [0-1] in there?

Do you mean to control the 'strength' of the detail normal map? (If yes, a quick and dirty way to achieve that would be by scaling the x and y components of u, i.e.: u.xy *= strength).
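For example, here's how that tweak could look folded into the unpacked blend (a sketch; the function name and strength parameter are my own additions):

float3 rnmBlendUnpackedStrength(float3 n1, float3 n2, float strength)
{
	n1 += float3( 0, 0, 1);
	n2 *= float3(-1, -1, 1);
	// strength = 0 leaves the base normal untouched; 1 gives the full blend
	n2.xy *= strength;
	return normalize(n1*dot(n1, n2)/n1.z - n2);
}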

Thanks, that seems to do the trick.

More elegant than my lerp(Tex0.Sample..., r, blendFactor) ;)

Let's start off with: awesome blog! This and the specular showdown are two things I find really, really nice here!
I'm thinking of making this into a Photoshop filter/blend mode. I've never done anything like that before, but it would be fun and useful for artists. However:

As I couldn't find any terms for how your code is allowed to be used (I'm assuming you haven't patented it?), I was wondering if it would be allowed to use it as-is, without having to take the “idea” and create something similar. (Copyright applies if nothing to the contrary is written.)

Let's start off with: awesome blog! This and the specular showdown are two things I find really, really nice here!

Thanks, it’s great to hear that!

I was wondering if it would be allowed to use it as-is

You’re free to use the code. Create away!

Thanks for the post. The main thing is not to argue about it, but to think about how to implement the code in your own shader. I'll post the finished code for Unity 5: your formula, plus a little work.

float3 n1 = UnpackNormal(tex2D(_BumpMap, IN.uv_BumpMap));
float3 n2 = UnpackNormal(tex2D(_DetailNormas, IN.uv_DetailNormas));

// UnpackNormal already returns normals in [-1, 1],
// so no further range expansion is needed here

float3 r;
r.x = dot(n1.zxx, n2.xyz);
r.y = dot(n1.yzy, n2.xyz);
r.z = dot(n1.xyz, n2.xyz);
o.Normal = normalize(r);

Belated thanks for this very nice article!

For the case of input normals in partial-derivative format (stored in BC5 format, for example), I’ve found that the following is faster by 1 full-rate instruction (on GCN) than normalizing first:

float3 blend_rnm_pd(float4 n1, float4 n2)
{
	// Unpack; t and u are left non-normalized (z = 1 in PD form)
	float3 t = float3(n1.xy * 2 - 1, 1);
	float3 u = float3(-2 * n2.xy + 1, 1);
	float q = dot(t, t);	// |t|^2
	float s = sqrt(q);	// |t|
	t.z += s;		// t* = [t_x, t_y, |t| + t_z]
	float3 r = t * dot(t, u) - u * (q + s);
	return normalize(r);
}

Not a huge win, but I figured I’d mention it. Here’s the derivation in the case of a general non-normalized \mathbf{t} vector (where t_z = 1 in the PD case).

\mathbf{t}' = \frac{[t_x, t_y, \|\mathbf{t}\| + t_z]}{\|\mathbf{t}\|}

The reoriented vector \mathbf{r} is given by the RNM formula \mathbf{r} = \left( \mathbf{u}' \cdot \mathbf{t}' \right) \mathbf{t}' / t'_z - \mathbf{u}'; substituting t'_z = (\|\mathbf{t}\| + t_z)/\|\mathbf{t}\|, this becomes:

\mathbf{r} = \frac{\|\mathbf{t}\| \left( \mathbf{u}' \cdot \mathbf{t}' \right) \mathbf{t}' }{\|\mathbf{t}\| + t_z} - \mathbf{u}'
\mathbf{r} = \frac{ \left( \mathbf{u}' \cdot \mathbf{t}' \right) [t_x, t_y, \|\mathbf{t}\| + t_z]}{\|\mathbf{t}\| + t_z} - \mathbf{u}'

We now let:
\mathbf{t}^* = [t_x, t_y, \|\mathbf{t}\| + t_z]

and after multiplying both sides by \|\mathbf{t}\|(\|\mathbf{t}\| + t_z), we obtain:
\mathbf{r}^* = \left( \mathbf{u}' \cdot \mathbf{t}^* \right) \mathbf{t}^* - (\|\mathbf{t}\|^2 + t_z \|\mathbf{t}\|) \mathbf{u}'

(Note that \mathbf{r}^* is not normalized.)

Hey Jasmin!

Belated thanks for this very nice article!

Cheers! Likewise, thanks for taking the time to post this. Perhaps Colin and I should take a fresh look at instruction costs for the other variants as well.

Out of curiosity, are you using the PD format in conjunction with implicit tangent-space normal mapping, or is it for other reasons? (E.g. simplifies PD-blending of normal maps to form the ‘base’ layer, before adding detail.) I’d also be interested to hear if you’ve run into any precision issues with steep normal maps.

As you mention, we’re mainly using it because it’s amenable to runtime blending, and the limited range has not been an issue for us in practice.

No, not that I’m aware; typically it’s the opposite, where we need better precision for subtle normal maps on smooth surfaces. We have an artist-adjustable normal map strength parameter, which has a maximum value of 2.0, but artists mainly use values below 1.0 to increase precision (e.g., to give some variation to window normals, which in reality are slightly – but noticeably – non-planar).

By the way, I experimented some more with optimizing the blend_rnm_pd() from my last post. It turns out that if the non-normalized normal length is not too large (as ours are constrained to be), you can avoid the sqrt() when computing s, replacing it with a linear approximation. I’ve put together a shadertoy here which demonstrates the difference:

The error is not noticeable for the actual normal maps that I’ve tested, and really is only significant for very steep sections, where it flattens the normal slightly. In practice it seems fine, I think.
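To sketch the kind of approximation I mean: since q = dot(t, t) stays close to 1 for nearly-unit normals, sqrt(q) can be replaced with its tangent line at q = 1 (the first-order Taylor expansion):

	float s = 0.5*(q + 1);	// ~= sqrt(q) for q near 1

Being an overestimate away from q = 1 (sqrt is concave), it inflates t.z a little, which is where the slight flattening of steep normals comes from.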

typically it’s the opposite, where we need better precision for subtle normal maps on smooth surfaces. We have an artist-adjustable normal map strength parameter, which has a maximum value of 2.0, but artists mainly use values below 1.0 to increase precision

Just to be sure we’re 100% on the same page, what I was alluding to earlier was that, with naive PD maps, you can’t represent angles > 45 degrees. Of course you can fix this in an automated way by rescaling the texture values so they fit into 0…1 (and store a per-texture inv. scale factor), but this probably isn’t ideal if you have a mix of very steep and very subtle gradients.
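To sketch what I mean by that (invScale being a hypothetical per-texture constant):

	// PD map rescaled at build time so that xy fits [0, 1]; undo it with
	// the stored per-texture inverse scale after expanding to [-1, 1]
	float3 t = float3((n1.xy*2 - 1)*invScale, 1);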

Is your artist-adjustable strength parameter a kind of manual alternative or in addition to this?

I experimented some more with optimizing the blend_rnm_pd() from my last post. It turns out that if the non-normalized normal length is not too large (as ours are constrained to be), you can avoid the sqrt() when computing s, replacing it with a linear approximation

Nice!

We don’t do any automatic rescaling (although that’s been on my todo list for quite some time – primarily motivated by color maps); we just have the manual adjustment. What I was trying to convey in my last response was that the default 45 degree[*] limit hasn’t caused any issues in practice, and we’ve been doing it this way since before I started here. (Very steep normal maps tend to not look very good anyway.)

It would be interesting to go through our source assets and count the % of normal map pixels which fall outside the [-1,1] range in X & Y; maybe I’m wrong about it not being an issue! :)

[*] We clamp to the unit square, so it’s 45 degrees along the X/Y axes, but ~55 degrees along the diagonals.
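(For the record, the diagonal figure is just \arctan\sqrt{2} \approx 54.7^\circ, since the corner of the unit square lies at distance \sqrt{2} from the origin.)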

We don’t do any automatic rescaling (although that’s been on my todo list for quite some time – primarily motivated by color maps); we just have the manual adjustment.

Thanks for clarifying!

It would be interesting to go through our source assets and count the % of normal map pixels which fall outside the [-1,1] range in X & Y; maybe I’m wrong about it not being an issue!

Please let me know the result if you get a chance to test this out.

Hi, great work with this post btw, lots of great information!

I’ve implemented the UnpackNormalSafer and rnmBlendUnpackedClampZ method in Unity, and it works great, but I noticed one issue with it: it inverts the height of the detail normal map (as if you had inverted the green channel of a regular normal map in Photoshop). Was this intended?

I tried to fix it myself, and while it seems to work, I don’t understand all the math well enough yet to know whether what I did will give poorer results. In “rnmBlendUnpackedClampZ”, I changed:

n2 *= float3(-1, -1, 1);

to

n2 += float3(0, 0, 1);

Any input would be great, thanks!