Interesting stuff. Thanks for sharing.

How about treating the detail normal just as a displacement vector from [0, 0, 1]:

```
float3 n1 = tex2D(base, uv)*2 - 1;
float3 n2 = tex2D(detail, uv)*2 - float3(1, 1, 2);
return n1 + n2;
```
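For clarity, here's the same blend as a CPU-side NumPy sketch (the function name is mine; inputs are packed [0, 1] texture values, as in the HLSL above):

```python
import numpy as np

def displacement_blend(base, detail):
    """Treat the detail normal as a displacement from [0, 0, 1].

    `base` and `detail` are packed normals in [0, 1] (as sampled from a
    texture); the result is an unpacked, unnormalised blended normal.
    """
    n1 = base * 2.0 - 1.0                          # unpack base
    n2 = detail * 2.0 - np.array([1.0, 1.0, 2.0])  # unpack detail, subtract [0, 0, 1]
    return n1 + n2

# A flat detail map (packed [0.5, 0.5, 1]) contributes a zero displacement,
# so it leaves the base normal unchanged.
base = np.array([0.6, 0.5, 1.0])
flat_detail = np.array([0.5, 0.5, 1.0])
assert np.allclose(displacement_blend(base, flat_detail), base * 2.0 - 1.0)
```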

Interesting idea! Care to give it a name? :)

From a quick test in the WebGL sandbox, it seems to perform well and should be the same cost as UDN (with `normalize`). We'll add it to the article once we've had the chance to do more analysis. Thanks!

May I ask why you used a quaternion rotation? Of course it isn’t wrong, but I would just have written it as a matrix transformation, with the matrix columns being the basis vectors \mathbf{t}, \mathbf{r} and \mathbf{t} \times \mathbf{r}. This transforms the (1, 0, 0)^T-(0, 1, 0)^T-(0, 0, 1)^T-basis of the detail normal map into the \mathbf{t}-\mathbf{r}-(\mathbf{t} \times \mathbf{r})-basis of the base normal map.

I guess that the description wasn’t clear, because \mathbf{r} is the reoriented detail normal that we want to calculate (\mathbf{u} rotated by the transform that brings \mathbf{s} onto \mathbf{t})! Jeppe’s code has the same transform in matrix form (`b1`, `b2`, `b3`), but this ends up being more instructions, at least as written.

I think formula (3) should read

\mathbf{r} = \mathbf{u}(q_w^2 - \mathbf{q}_v \cdot \mathbf{q}_v) + 2\mathbf{q}_v(\mathbf{q}_v \cdot \mathbf{u}) + 2q_w\left[\mathbf{q}_v \times \mathbf{u}\right]

(i.e. the second - (minus) and second to last \mathbf{q}_v have changed). This is only a typing mistake, as the following formulas are still correct. Also the reference to Watt’s book is referring to page 360 and 361, if anyone wonders.
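As a quick numerical check of the corrected formula, here's a NumPy sketch (names are mine; `rotate_sandwich` is just the reference product q·u·q̄):

```python
import numpy as np

def rotate_formula(q, u):
    """Rotate u by the unit quaternion q = (qv, qw) using the corrected
    formula: r = u(qw^2 - qv.qv) + 2 qv (qv.u) + 2 qw (qv x u)."""
    qv, qw = q[:3], q[3]
    return (u * (qw*qw - qv.dot(qv))
            + 2.0 * qv * qv.dot(u)
            + 2.0 * qw * np.cross(qv, u))

def rotate_sandwich(q, u):
    """Reference rotation: r = q * (u, 0) * conj(q)."""
    def qmul(a, b):
        av, aw = a[:3], a[3]
        bv, bw = b[:3], b[3]
        return np.append(aw*bv + bw*av + np.cross(av, bv), aw*bw - av.dot(bv))
    p  = np.append(u, 0.0)          # u as a pure quaternion
    qc = np.append(-q[:3], q[3])    # conjugate of q
    return qmul(qmul(q, p), qc)[:3]

# A 0.5 rad rotation about a tilted axis; both forms agree.
axis = np.array([1.0, 2.0, 3.0]); axis /= np.linalg.norm(axis)
q = np.append(axis * np.sin(0.25), np.cos(0.25))
u = np.array([0.3, -0.4, 0.866])
assert np.allclose(rotate_formula(q, u), rotate_sandwich(q, u))
```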

I’ve recalculated everything myself and can confirm the correctness of your results. However, I couldn’t figure out how you arrived at the simplified formula (4); I could verify its correctness, but not derive it elegantly. Would you mind writing down the simplification steps after substitution?

Thank you! You're absolutely right and I apologise for any confusion. I really don't know how those errors crept in! I do know that our original derivation didn't use the Watt formula, but then we found a reference to it in a paper and realised it would bypass some steps (which I can see I checked in Mathematica with the correct maths).

I have the simplification steps as a series of shader modifications, but the whole thing is rather verbose, so bear with me whilst I work up a more succinct version. In the meantime, I'll update the article with your correction.

Update: I've added the steps to the Appendix. Hopefully there aren't any mistakes! :)

This is a great post. I've already integrated it into our engine for wrinkle map blending and the artists love the results. Previously they were used to baking the fine pores into the wrinkle map data, which is a workflow headache, whereas now we only need to store the large-scale bumps. This also helps massively with retargeting the maps over multiple characters.

For completeness I would love to see the ALU comparisons when compiled to scalar operations only as this can be a better comparison for some GPU architectures.

As Stephen said, the unfolding ended up being fewer instructions. :)

Moreover, we chose quaternions because they make it easy to represent the unique rotation from one vector to another (here, between the reference normal [0, 0, 1] and the base normal) via the shortest arc. Once this quaternion is defined, we can easily transform the detail normal and obtain the resulting (here, reoriented) normal.
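A NumPy sketch of that shortest-arc construction (helper names are mine), verifying that the quaternion does carry the reference normal onto the base normal:

```python
import numpy as np

def shortest_arc(s, t):
    """Unit quaternion (qv, qw) rotating unit vector s onto unit vector t
    about the axis s x t (i.e. via the shortest arc)."""
    q = np.append(np.cross(s, t), 1.0 + s.dot(t))
    return q / np.linalg.norm(q)

def rotate(q, u):
    """Rotate u by the unit quaternion q = (qv, qw)."""
    qv, qw = q[:3], q[3]
    return u*(qw*qw - qv.dot(qv)) + 2*qv*qv.dot(u) + 2*qw*np.cross(qv, u)

s = np.array([0.0, 0.0, 1.0])   # reference normal
t = np.array([0.6, 0.0, 0.8])   # base normal (unit length)
q = shortest_arc(s, t)
assert np.allclose(rotate(q, s), t)  # q carries s onto t
```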

I needed this just a few weeks ago; it's great that I found this article! It's an awesome post.

Well, the code for RNM turned out to be much simpler and faster than I expected from a method based on quaternions! :) I see that your derivation assumes t and u come in as normalized vectors; but what if you store your normal maps in partial-derivative format? Is it fastest to just normalize them first and apply RNM as usual, or is there an alternate simplification that uses the partial derivatives directly?

Possibly. I'd have to think about this! I'm a bit doubtful since some simplifications came from the assumption that t is a unit vector, but we'll see.

If you're normalising up front, then at least you don't need the normalise at the end (if you use the original form with the division by t.z), so it's probably not that bad.
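As a sketch of the "normalise up front" route for partial-derivative maps, assuming the map stores slopes with the usual convention n = normalize([-dx, -dy, 1]) (the helper names and that convention are my assumptions, not from the article):

```python
import numpy as np

def pd_to_normal(d):
    """Reconstruct a unit normal from stored partial derivatives (dx, dy),
    assuming the convention n = normalize([-dx, -dy, 1])."""
    n = np.array([-d[0], -d[1], 1.0])
    return n / np.linalg.norm(n)

def rnm_unpacked(n1, n2):
    """RNM blend on unpacked unit normals; since a rotation preserves
    length, no final normalize is needed."""
    n1 = n1 + np.array([0.0, 0.0, 1.0])
    n2 = n2 * np.array([-1.0, -1.0, 1.0])
    return n1 * n1.dot(n2) / n1[2] - n2

base   = pd_to_normal(np.array([0.2, -0.1]))
detail = pd_to_normal(np.array([-0.3, 0.4]))
r = rnm_unpacked(base, detail)
assert np.isclose(np.linalg.norm(r), 1.0)  # result is already unit length
```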

How to replicate UDN in Photoshop?

I’ve tried to implement this code in a Unity3D surface shader: overlay looks outright wrong, and RNM seems to blend the normals correctly but the lighting looks totally wrong. All the other ones look correct and pretty similar, but that might be because my test normal maps are pretty simple and applied to a sphere.

Here is the code for overlay:

```
float overlay(float x, float y)
{
    if (x < 0.5)
        return 2.0*x*y;
    else
        return 1.0 - 2.0*(1.0 - x)*(1.0 - y);
}

// Overlay Blending
float3 overlayBlend(float3 n1, float3 n2)
{
    float3 n = float3(overlay(n1.x, n2.x), overlay(n1.y, n2.y), overlay(n1.z, n2.z));
    return normalize(n*2.0 - 1.0);
}
```

RNM:

```
float3 rnmBlend(float3 n1, float3 n2)
{
    n1 = n1*float3( 2, 2, 2) + float3(-1, -1, 0);
    n2 = n2*float3(-2, -2, 2) + float3( 1, 1, -1);
    return normalize(n1*dot(n1, n2)/n1.z - n2);
}
```

and here is the pertinent surface shader part:

```
void surf (Input IN, inout SurfaceOutput o)
{
    float3 main_normal = UnpackNormal(tex2D(_MainNormalTex, IN.uv_MainTex));
    float3 target_normal = UnpackNormal(tex2D(_TargetNormalTex, IN.uv_MainTex));
    // float3 targetBlendedNormal = linearBlend(main_normal, target_normal);
    // float3 targetBlendedNormal = overlayBlend(main_normal, target_normal); //not working correctly
    // float3 targetBlendedNormal = partialDerivativeBlend(main_normal, target_normal);
    // float3 targetBlendedNormal = whiteoutBlend(main_normal, target_normal);
    float3 targetBlendedNormal = udnBlend(main_normal, target_normal); //using this for now
    // float3 targetBlendedNormal = rnmBlend(main_normal, target_normal); //not working correctly
    // float3 targetBlendedNormal = unityBlend(main_normal, target_normal);
    o.Normal = lerp(main_normal, targetBlendedNormal, blendAmount);
}
```

Not sure why it's not working well; I can post pics if required.

Sorry, I somehow missed your comment at the time. You should be able to replicate UDN in PS through layer/channel operations (mask and add the appropriate channels – with pre-scaling by 0.5 if working in 8 bits), followed by normalisation using, say, the NVIDIA Texture Tools (https://developer.nvidia.com/n... ). You could probably automate this via an action script.

Ideally someone would make a plugin to do all of this (particularly for RNM, since it's less trivial); unfortunately I have neither the time nor the patience to do this myself at the moment.
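For reference, the per-pixel operation that the layer stack needs to reproduce is the UDN blend itself; a NumPy sketch on unpacked normals (function name is mine):

```python
import numpy as np

def udn_blend(n1, n2):
    """UDN blending on unpacked unit normals: sum the xy components,
    keep the base z, then renormalise."""
    n = np.array([n1[0] + n2[0], n1[1] + n2[1], n1[2]])
    return n / np.linalg.norm(n)

# A flat detail normal is a no-op.
base = np.array([0.6, 0.0, 0.8])
flat = np.array([0.0, 0.0, 1.0])
assert np.allclose(udn_blend(base, flat), base)
```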

All of the examples in the article assume basic packed normals as input (x, y, z scaled and biased by ~ 0.5). Is that how your normals are packed? If so, you'll want to skip those calls to UnpackNormal. If you're using a different encoding (e.g. reconstructing z based on x and y), then you'll need to either repack before doing the blending, or adjust the methods so that they work with unpacked normals. Let me know!

Here is the unpack function from unity (from UnityCG.cginc)

```
inline fixed3 UnpackNormal(fixed4 packednormal)
{
#if defined(SHADER_API_GLES) && defined(SHADER_API_MOBILE)
    return packednormal.xyz * 2 - 1;
#else
    fixed3 normal;
    normal.xy = packednormal.wy * 2 - 1;
    normal.z = sqrt(1 - normal.x*normal.x - normal.y * normal.y);
    return normal;
#endif
}
```

Since I’m on Windows, I’m probably hitting the #else part, which I think is what you mentioned (e.g. reconstructing z based on x and y).

I don’t think I understand the math enough to rewrite all those methods with unpacked normals. =(

Maybe repacking is easier? Something like `unpackedNormal.xyz / 2 + 1`?

~~Yes, that repacking will work~~ Update: sorry, I wasn’t paying attention; you need `+ 0.5`, not `+ 1`. Alternatively, you can use this for RNM with unpacked normals:

```
float3 rnmBlendUnpacked(float3 n1, float3 n2)
{
    n1 += float3( 0, 0, 1);
    n2 *= float3(-1, -1, 1);
    return n1*dot(n1, n2)/n1.z - n2;
}
```

Note: I've skipped `normalize()` at the end, because the normals will already be unit length (due to the reconstruction).
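As a sanity check, here's a NumPy port of both variants (names are mine); feeding the packed and unpacked encodings of the same normals gives the same result:

```python
import numpy as np

def rnm_packed(p1, p2):
    """RNM on packed [0, 1] normals, as in rnmBlend."""
    n1 = p1 * np.array([ 2.0,  2.0, 2.0]) + np.array([-1.0, -1.0,  0.0])
    n2 = p2 * np.array([-2.0, -2.0, 2.0]) + np.array([ 1.0,  1.0, -1.0])
    r = n1 * n1.dot(n2) / n1[2] - n2
    return r / np.linalg.norm(r)

def rnm_unpacked(n1, n2):
    """RNM on unpacked unit normals, as in rnmBlendUnpacked."""
    n1 = n1 + np.array([0.0, 0.0, 1.0])
    n2 = n2 * np.array([-1.0, -1.0, 1.0])
    return n1 * n1.dot(n2) / n1[2] - n2

# Unit-length base/detail normals; pack with (n + 1) / 2.
t = np.array([0.2, -0.1, 0.0]); t[2] = np.sqrt(1 - t[0]**2 - t[1]**2)
u = np.array([-0.3, 0.4, 0.0]); u[2] = np.sqrt(1 - u[0]**2 - u[1]**2)
assert np.allclose(rnm_packed((t + 1) / 2, (u + 1) / 2), rnm_unpacked(t, u))
```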

Here are my three functions:

```
half3 rnmBlend(half3 n1, half3 n2)
{
    n1 = n1*float3( 2, 2, 2) + float3(-1, -1, 0);
    n2 = n2*float3(-2, -2, 2) + float3( 1, 1, -1);
    return normalize(n1*dot(n1, n2)/n1.z - n2);
}

half3 rnmBlendRepack(half3 n1, half3 n2)
{
    n1 = n1.xyz / 2 + 1;
    n2 = n2.xyz / 2 + 1;
    return rnmBlend(n1, n2);
}

float3 rnmBlendUnpacked(float3 n1, float3 n2)
{
    n1 += float3( 0, 0, 1);
    n2 *= float3(-1, -1, 1);
    return n1*dot(n1, n2)/n1.z - n2;
}
```

`rnmBlendRepack` does not work very well. `rnmBlendUnpacked` looks much better, but it has a strange problem that I also notice on the “unity blend” but not on the “udn blend”: in a scene with 1 light and a sphere, you can see some random highlights on the unlit side of the sphere. Here is a screenshot: http://www.inversethought.com/…