Specular Showdown in the Wild West


Yeah, that's the reason why for a long time I used Phong for the lights, to make it match the envmaps. I finally gave in because Blinn just looked better and tossed the idea of perfectly matching. I don't believe I've read this paper, so thanks for the reference.

I've toyed with the idea of dynamically prefiltering an envmap for the current view to create Blinn-shaped highlights and then using it on objects. I haven't implemented it yet, but I think the idea is sound.


Thanks for the detailed reply! The object-space part threw me off a bit, but that makes sense now that you've mentioned virtual texturing.

The LOD stuff is almost clear to me. We similarly used max(glossAsLOD, hwTexLOD) on 360 *in the shader* to select the right MIP level (theoretical accuracy aside). The only thing I'm not following now is how baking this improves the hardware behaviour, since I would expect poor gradients either way. Were you just doing texCUBELod(sampler, float4(dir, glossAsLOD)) before?

Also, going back to the filtering, I'm a little surprised that you didn't find Toksvig helping much with aliasing, since there can still be plenty in the distance without it, though obviously progressively less as normals average out. Let me know how you get on with applying it as a filter to the top level; it's really great that you're putting this stuff through the wringer of production!


Hi, great article!

A quick question: does it make sense to apply the Toksvig factor to the first MIP level of a specular power map? Shouldn't it be left as the artist authored it?



I think it definitely makes sense to apply this to the base level. If you don't, the final appearance will look really different (particularly at higher specular powers) between the base and 2nd MIP levels, leading to a nasty aliasing transition between the two.

I had to cut short the reply earlier. What I should have said is that I understand your way of thinking, as, like Brian, I started out only doing this for the lower MIPs. That looked really wrong though and it was obvious when viewing the gloss factor directly and seeing how it faded to white at the base level. You're basically breaking a core assumption of regular MIP mapping by putting different content into the top MIP. It's also essentially saying that the base level of your normal map has a uniform direction, which is probably a complete lie.
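For reference, the factor being discussed can be sketched in a few lines of Python, where `avg_normal` is the unnormalised mean of the normals over a texel's footprint and `spec_power` is the authored Blinn-Phong exponent (the function name is just for illustration):

```python
import numpy as np

def toksvig_gloss(avg_normal, spec_power):
    """Toksvig factor for an averaged (unnormalised) normal.

    |Na| shrinks as the normals in the footprint diverge, so the
    factor falls away from 1 and tames the specular exponent.
    """
    na = np.linalg.norm(avg_normal)
    return na / (na + spec_power * (1.0 - na))
```

At the base level of an unfiltered map every normal is unit length, so the factor is 1 everywhere; it only starts doing work once filtering (or lower MIPs) shortens the averaged normal, which is exactly the transition being described.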


The hardware gradients were bad enough that I deemed them unusable for our data. I would choose no hardware mipping over the results I got on relatively bumpy surfaces. Precomputing the LOD allows it to be smooth and at pixel res, but with the obvious downsides. It also means spec from lights gets filtered as well, which is nice. It then, as you said, picks the LOD purely from the gloss value.
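As an aside, picking the LOD purely from the gloss value can be sketched like so in Python, under the assumption that each successive cubemap MIP was prefiltered with a quarter of the previous level's specular power (an illustrative convention; the actual mapping in Brian's engine isn't stated here):

```python
import math

def gloss_to_lod(spec_power, max_power, num_mips):
    # Assumes MIP i of the cubemap was convolved with a Blinn-Phong
    # lobe of exponent max_power / 4**i (illustrative convention).
    lod = 0.5 * math.log2(max_power / max(spec_power, 1.0))
    # Clamp to the available MIP range.
    return min(max(lod, 0.0), num_mips - 1.0)
```

The shader would then feed this value straight into texCUBELod, or take the max() with the hardware LOD as mentioned earlier.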

What I'm referring to with the object space comments is that all this prefiltering that I'm doing as well as what you've described is on tangent space normal maps which ignores the mesh's normals. If that could be factored in (which for me is impractical) it would give even better results.


"I’ve toyed with the idea of dynamically prefiltering an envmap for the current view to create blinn shaped highlights and then using it on objects"

Could you elaborate? I don't see how that'll work unless you meant using a stack of filtered maps (or multiple MIP taps) to approximate the changing lobe shape based on the view angle. Jan Kautz talks about this in the same course as [6]: "Reflectance Rendering with Environment Map Lighting".


Okay, this finally clicked. For some bizarre reason I thought there could still be a HW gradient/quad issue using the combined gloss value, but presumably that's just confined to texCUBE, not texCUBELod, where the LOD is being completely overridden. Derp! I'm going to conveniently blame this on the Montreal heat wave. ;)

I get the object-space idea, but when you first brought it up I thought that would be completely unmanageable (storage), and so therefore I was missing something. When you later mentioned virtual texturing, I thought that it might be more possible.


There is still a problem on some hardware that only allows the texLOD setting on a per-quad basis. Even on such hardware the precomputed version looked better; I'm not sure why.

My idea for getting Blinn-shaped highlights from a cubemap is somewhat specific to how they are used in our game. There is only one cubemap applied to everything on screen per frame, and it is done deferred. Because there is only one, I could do some processing on it without too much expense. The idea is to filter it with the current view in mind. Think of a photographed mirrored sphere used for VFX; that can be used as a spherical envmap. The same can be done with a rougher sphere, so you can get physically accurate shading for isotropic materials. I haven't tried this at all, so I could be full of shit, but I think the idea is sound.


Question: what should you do if you want to apply detail normal maps? Will they need their own Toksvig maps? Will we need to somehow combine said maps at runtime, as well as a specular power uniform?

Your article has awakened my curiosity. ;)


Interesting question! In that case, you might be able to get away with stashing an extra roughness term in a spare channel of the detail normal map and then combine in the way that Marc described earlier. I'll look into this.
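One way to sketch such a combination in Python is to sum the lobe variances of the two layers, using the crude approximation that a Blinn-Phong lobe's variance goes as the reciprocal of its exponent (an assumption for illustration; this may differ from what Marc actually suggested):

```python
def combine_powers(s_base, s_detail):
    # variance ~ 1/s, and the variances of the two bump layers add,
    # so 1/s_combined = 1/s_base + 1/s_detail.
    return (s_base * s_detail) / (s_base + s_detail)
```

Note that the combined power is always lower than either input, which matches the intuition that layering bump detail can only roughen the surface.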


Hi, Steve.
Great overview of the subject! You're spot on about how important this is.

At Treyarch we did something very similar for Call of Duty: Black Ops.
I'll be presenting the details next month at SIGGRAPH as part of the "Advances in Real-Time Rendering in Games" course.
We did both gloss map preprocessing and environment map prefiltering/MIP control.
I'll host the slides on my blog after the conference for people who won't be able to go.


Thanks Dimitar! I note from your abstract that you'll be delving into production issues, which is great to see.

We used environment map prefiltering on Conviction, but gloss maps weren't used much, so I was planning to return to the topic in conjunction with anti-aliasing. That probably won't happen before SIGGRAPH though, so I'm looking forward to seeing what you've done there!


Thanks for the texture LOD info. I would have taken a closer look myself, but I'm not in a position to test hardware differences at the moment.

Re cubemaps: the single view-space sphere map is what I thought you had in mind actually. Whilst that ought to accelerate the filtering process, it doesn't change the fact that the lobe shape for Blinn-Phong varies based on the view angle - getting tighter at grazing angles - as I said before, so you'll need to account for that somehow. It's possible that factoring V.N into the MIP adjustment will 'work', but I haven't tried that out yet.


Hi, the WebGL demo does not compile on my Linux box. The problem was a name collision with your pow() function; this fixed it… Linux drivers are stricter, I presume:

vec3 mypow(vec3 v, float p) { return vec3(pow(v.x, p), pow(v.y, p), pow(v.z, p)); }

vec3 ToLinear(vec3 v) { return mypow(v,     gamma); }
vec3 ToSRGB(vec3 v)   { return mypow(v, 1.0/gamma); }

Thanks for reporting this! I had the same problem with a demo in a later post: http://blog.selfshadow.com/201..., but I forgot to fix this one.


Really good stuff. Thanks for the great write up and demos. I have a question about a comment in the filter example shader:
// NOTE: I'm only normalising here for demo purposes,
// because the source data is 8-bit/channel.
// Don't do this for the lower MIP levels!
Could you please shed some light on that?



> Could you please shed some light on that?

At the time that I wrote the code, there was patchy support for floating point formats in WebGL, so I rolled normalisation into that shader. What you really want to do is normalise once upfront, and then work with floating point data throughout the MIP generation process.

Note: if you comment out that line, you'll see that it produces noisier results because the original normals are not *exactly* unit length (due to 8bit quantisation). I was sloppy and skipped this for the main demo, but I could have performed it for the first MIP level, as the comment says.

You definitely don't want to renormalise the normals at every MIP level when computing the 'Toksvig factor'/glossiness since you won't get a true measure of the variance coming from the original normals over the footprint of that texel.

All of this is assuming that you're doing iterative downsampling of normals. You *could* always sample from the base level and grow the filter kernel for smaller MIP levels instead, but that generally doesn't lead to better results, just slower processing.
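To make the iterative route concrete, here's a rough Python sketch, assuming a plain 2x2 box downsample (the demo's actual filter may differ):

```python
import numpy as np

def build_gloss_mips(normals, spec_power):
    """Iteratively 2x2 box-downsample an (h, w, 3) float normal map
    WITHOUT renormalising between levels, and derive a Toksvig gloss
    factor per texel at each level. A sketch of the process described
    above, not the demo's exact code."""
    # Normalise once up front to undo 8-bit quantisation error.
    n = normals / np.linalg.norm(normals, axis=-1, keepdims=True)
    gloss_mips = []
    while n.shape[0] > 1 and n.shape[1] > 1:
        # Average 2x2 blocks; the result shortens where normals diverge.
        n = 0.25 * (n[0::2, 0::2] + n[1::2, 0::2] +
                    n[0::2, 1::2] + n[1::2, 1::2])
        na = np.linalg.norm(n, axis=-1)
        gloss_mips.append(na / (na + spec_power * (1.0 - na)))
        # Note: n is deliberately NOT renormalised here, so that the
        # variance keeps accumulating down the MIP chain.
    return gloss_mips
```

Renormalising inside the loop would reset the accumulated shortening at each level, which is precisely the mistake being warned against.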

I hope that answers your question!


Super late comment, but there is a supersampling technique on Nvidia hardware that can tackle nearly all temporal and specular aliasing (in most games): Sparse Grid Supersampling Anti-Aliasing.

If only something like this were achievable on consoles, I'd be happy.


How would you generate the Toksvig map for the top MIP, since all of the normals are of unit length? This is assuming that the top MIP is the same resolution as the source texture, so you can’t generate a Toksvig map from a higher resolution. An example would be a clear coat on a bumpy surface, where you would get a white smoothness map, and the bumps would become “micro roughness” as you pull away.


Ah yes. I mentioned this briefly in response to Sébastien Lagarde (lagardese) but didn’t go into detail:

This is explained in more detail in a later presentation (slide 35):

Note: it’s a good idea to apply a small (3x3) filter here to simulate hardware texture filtering that will happen at runtime. This further reduces aliasing and leads to smoother mip transitions (particularly from the base mip level).

Basically, at each MIP level, you average the normal with its immediate neighbours, weighted by a Gaussian kernel or similar. In practice I think I've used a standard deviation of 0.5.
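A minimal Python sketch of that filter for a single base-level texel, assuming clamped edges (clamp vs. wrap is an assumption here) and the standard deviation of 0.5 mentioned above:

```python
import numpy as np

def filtered_base_normal(normals, y, x, sigma=0.5):
    """Gaussian-weighted average of a normal with its 3x3
    neighbourhood, approximating runtime bilinear filtering."""
    h, w = normals.shape[:2]
    acc = np.zeros(3)
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            weight = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma * sigma))
            yy = min(max(y + dy, 0), h - 1)  # clamp at edges (assumption)
            xx = min(max(x + dx, 0), w - 1)
            acc += weight * normals[yy, xx]
            total += weight
    # Shortened average; feed its length into the Toksvig factor.
    return acc / total
```

The averaged normal shortens wherever the neighbourhood diverges, so even the base MIP picks up a meaningful gloss adjustment.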

Hope that helps!