Specular Showdown in the Wild West


“You see, in this world there’s two kinds of people, my friend:
Those with loaded guns and those who dig. You dig.” - Blondie

This is a companion discussion topic for the original entry at http://blog.selfshadow.com/2011/07/22/specular-showdown/


Hi Steve, congrats on the new blog and a great first post!

As happy as I am that you're linking RTR, a better link for [5] would be the course page: http://iryoku.com/aacourse/



Thanks Naty! I'd completely forgotten that Jorge had put up a course page, but your post has some nice background so I'll cite them both.

There are, as you know, some other great papers that I've left off as well, but I'm waiting until I've had a chance to try implementing them. Fuel for a future post hopefully!


Nice comparisons! Though I'd argue that the modification of Toksvig to compute a new power in terms of variance is actually different (and more satisfying) than his original. Maybe you should get Dan to come up with a clever name for it :).

And nice sandbox.



Another thought. Maybe you don't want your artists authoring these maps directly, for fear they'd reintroduce aliasing, but you could still bake their spec maps with this one into a single combined texture. Since spec power s == variance 1/s, and adding a base variance to an existing bump combines as v1+v2, you get sCombined = 1/(1/sBase + 1/sBump) = sBase*sBump/(sBase+sBump). You could probably store it in whatever form you already use for spec maps, and assuming you've been normalizing your BRDF right, you should get bump filtering at no extra run-time cost over an existing spec map.
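For concreteness, that combination rule can be sketched in a few lines of Python (the function name is mine, purely illustrative):

```python
def combine_spec_power(s_base, s_bump):
    """Combine a base specular power with one derived from normal-map
    variance. A spec power s corresponds to variance 1/s, and variances
    add, so: s_combined = 1 / (1/s_base + 1/s_bump)."""
    return (s_base * s_bump) / (s_base + s_bump)

# A glossy base (power 100) over a bumpy patch whose variance is
# equivalent to power 25 ends up noticeably rougher:
print(combine_spec_power(100.0, 25.0))  # 20.0
```

Note that the combined power always lands below the smaller of the two inputs, which is what you'd expect: extra variance can only broaden the lobe.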




Do you mean better than the original version with the old scaling? Michael was already doing things in terms of variance; I just used that to make the connection with CLEAN.

Also, how do you feel about using CLEAN to compute a gloss map? It seems to match well (vs. the version Dan presented), although it's not quite as well behaved at grazing angles due to filtering differences. I'll update the demo to include this soon.


Yes! I guess I wasn't as clear as I could have been, but that's where I was already going with:

"The data itself could be packed alongside a specular mask or used to modulate an existing gloss map. I could even imagine the map being used directly as a starting point for artists to paint on top of, but in that case care will be needed to avoid reintroducing aliasing!"


Yeah, I know he was already talking in terms of variance (mentioned that even in the LEAN paper :)). I mean the new scaling. Makes it clear that at its heart it's just a variance estimation technique.

I think using CLEAN to compute the gloss map should work reasonably well, especially if you build all of the levels using the CLEAN math, rather than incrementally. That way you could compute the variance at high precision, but store it in a precision-friendly final representation. Then at an individual level in any of the texels, you'll have exactly the right result. In between texels and in between levels, the interpolation will be wrong, but hopefully not too bad. I'm not surprised it (or any method storing a non-linear value into a texture) would break down most at glancing angles. You just need to look at one of the long cylinder texture filtering tests to see how hard it is to get major anisotropic filtering right. It's not unlike the problems of filtering gamma'd textures vs. filtering linear textures then applying gamma.


Right, but you don't have them paint on top of this map. You have them paint a base specularity map, and have a tool that takes that plus normals to produce a combined texture to use as the game asset.


Curses, the comments system limits the amount of nesting. (With good reason I see!)

Marc 4.22pm:
Indeed. That's what I was doing offline - keeping everything in floating point and quantising/compressing at the end. That's partly why I'm not showing CLEAN in the demo at the moment: the support for FP targets in WebGL isn't great and I wanted to keep the data footprint down.

Marc 4.24pm:
Yes, absolutely. That's what I meant with "used to modulate an existing gloss map" - honest!

Some developers might not like that tying together, but on a previous title we generally authored diffuse, spec and normals together as a single unit.

Anyway, thanks for making that point clearer; I might go and update the post to spell that out more. At the time, I was wary of sounding authoritative on the tools side having not put this into production, so I gave it short shrift. It maybe demands a whole post in its own right.


Nice article Steve! What you describe with baking the Toksvig factor into the gloss map is what I use in Prey 2 so I can say it works in production.

Another thing I use to help with aliasing is something I got from how our environment mapping works, where the gloss value controls the mip of the env map fetch. I precompute what the hardware-calculated LOD would be if the texture were viewed flat at 1:1 resolution. I min this with the Toksvig gloss value and store that as the final gloss map. It would be preferable to do this in screen space instead (using the hardware LOD), but the 2x2 pixel gradients make it look pretty crappy. These two combined helped tremendously with specular aliasing. If I could use object-space normals instead of tangent-space normals, I think it would fix the remaining problems.

Baking CLEAN mapping into gloss is an interesting idea. I'll have to look at that paper again.


I figured that I couldn't be the only one doing this, so thanks for sharing! Is the baking done automatically, entirely under the hood (on top of a painted map) then? I kind of like the idea of being able to post-tweak even if that runs the risk of aliasing.

I'm not sure I'm 100% following the LOD trick. Presumably you're looking at the rate of change of normals to calculate the LOD. What sort of function are you then using to go from LOD to gloss scale factor?

Talking of cubemaps, that's another area I want to look into next. Have you found that using this gloss map works pretty well out-of-the-box with cubemap MIP adjustment? That's my expectation, but I haven't gone and tried it yet.

Edit: Wait, I guess your LOD->gloss_factor function is just the inverse of whatever you're normally doing for cubemap MIP adjustment. Silly me.


By definition, any method that uses just MIP biasing can't be completely correct. Symmetric normal distributions induce an asymmetric distribution of reflection vectors. That's why Blinn-Phong can reproduce elongated "headlight on a wet road" highlights from a point source, while Phong cannot. Nonetheless, MIP biasing the cube map is better than not, and a number of games use it. If you don't already know it, check out Andreas Schilling's "Antialiasing of Environment Maps".


The baking is done when the textures are processed and compressed into the virtual texture file. Input is normal, diffuse, spec, gloss textures. Output is VTEX file that is used by runtime.

Now that I'm looking at my code, I think it's a bit different from what you describe. The Toksvig factor only affects the mipmaps that are generated, because it uses the denormalization from mipping. The LOD trick affects all levels and is applied as a filter kernel. Now that I'm recalling this more (I wrote the code maybe a year ago), the Toksvig factor is for minification accuracy, such as a bumpy surface looking rough at a distance, but doesn't do much to help aliasing. The LOD trick does the heavy lifting to reduce aliasing. I have it set a bit heavy-handed to squash the sparklies because we had a ton. My LOD trick performs a very similar function to your Toksvig filter. I'll have to try this and compare.

To further explain this LOD trick if it isn't clear yet, I'm precalculating what the hardware texLOD would be when the envmap would be looked up by a reflection vector from the normal map. I calculate it as if the view vector is the vertex normal and the normal map is at pixel resolution. Then since I use the gloss to choose the mip level of the envmap I modify the gloss value to use a higher mip if it needs it. In other words min( gloss, glossFromTexLOD ). Doing this means I don't have to rely on the shitty 2x2 pixel hardware LOD for envmaps and can also use it for blinn highlights but I have to sacrifice view dependence.

I use blinn power = exp2( gloss * c0 + c1 ) and envmap mip = ( 1 - gloss ) * 5. The two are eyeballed to match as closely as I can. Yes, so long as you match the amount a light source blurs in each, the same data works well for both. The problems I have left are high geometric curvature causing aliasing not caught by the LOD trick due to not having object-space normals (think metal railings), and not being able to set the LOD on a per-pixel basis, as I've mentioned before.
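Those mappings can be sketched in Python as follows. The c0/c1 values below are placeholders (the actual Prey 2 constants aren't given in the thread), and envmap_mip assumes a 6-level chain per the (1 - gloss) * 5 formula:

```python
import math

def blinn_power(gloss, c0=10.0, c1=1.0):
    """Gloss -> Blinn-Phong exponent: power = exp2(gloss * c0 + c1).
    The c0/c1 defaults here are illustrative, not the shipped values."""
    return math.pow(2.0, gloss * c0 + c1)

def envmap_mip(gloss, num_mips=6):
    """Gloss -> environment-map mip: mip = (1 - gloss) * 5, so fully
    rough (gloss 0) lands on the blurriest of six mip levels."""
    return (1.0 - gloss) * (num_mips - 1)

def final_gloss(toksvig_gloss, gloss_from_tex_lod):
    """The baked 'LOD trick': clamp the stored gloss by what the
    precomputed hardware texLOD would demand, min(gloss, glossFromTexLOD)."""
    return min(toksvig_gloss, gloss_from_tex_lod)
```

Since both the analytic highlight and the envmap fetch are driven by the same stored gloss, blurring them by matching amounts is what lets a single channel serve both.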


Yeah, which is the reason why for a long time I used phong for the lights to make it match the envmaps. Finally I gave in because blinn just looked better and tossed the idea of perfectly matching. I don't believe I've read this paper so thanks for the reference.

I've toyed with the idea of dynamically prefiltering an envmap for the current view to create blinn shaped highlights and then using it on objects. I haven't implemented it yet but I think the idea is sound.


Thanks for the detailed reply! The object-space part threw me off a bit, but that makes sense now that you've mentioned virtual texturing.

The LOD stuff is almost clear to me. We similarly used max(glossAsLOD, hwTexLOD) on 360 *in the shader* to select the right MIP level (theoretical accuracy aside). The only thing I'm not following now is how baking this improves the hardware behaviour, since I would expect poor gradients either way. Were you just doing texCUBELod(sampler, float4(dir, glossAsLOD)) before?

Also, going back to the filtering, I'm a little surprised that you didn't find Toksvig helping much with aliasing, since there can still be plenty in the distance without it, though obviously progressively less as normals average out. Let me know how you get on with applying it as a filter to the top level; it's really great that you're putting this stuff through the wringer of production!


Hi, great article!

A quick question: does it make sense to apply the Toksvig factor to the first mipmap level of a specular power map? Shouldn't it be left as the artist authored it?



I think it definitely makes sense to apply this to the base level. If you don't, the final appearance will look really different (particularly at higher specular powers) between the base and 2nd MIP levels and lead to a nasty, aliasing transition between the two.

I had to cut short the reply earlier. What I should have said is that I understand your way of thinking, as, like Brian, I started out only doing this for the lower MIPs. That looked really wrong though and it was obvious when viewing the gloss factor directly and seeing how it faded to white at the base level. You're basically breaking a core assumption of regular MIP mapping by putting different content into the top MIP. It's also essentially saying that the base level of your normal map has a uniform direction, which is probably a complete lie.
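To make that concrete, here's a minimal sketch of building Toksvig factors for every level, base included, over a toy 1-D strip of normals (Python; the helper names are mine, and a simple 2-texel box filter stands in for whatever kernel a real pipeline would use on the top level):

```python
import math

def toksvig_factor(avg_len, spec_power):
    """Toksvig's attenuation: ft = |Na| / (|Na| + s * (1 - |Na|)),
    where |Na| is the length of the averaged (unnormalized) normal."""
    return avg_len / (avg_len + spec_power * (1.0 - avg_len))

def gloss_mip_chain(normals, spec_power):
    """Toy 1-D mip chain: at every level, *including the base*, average
    unnormalized normals over a 2-texel footprint, then turn the
    shortened average into a Toksvig factor. Skipping the base would
    leave that level uniformly at ft = 1 (pure white)."""
    levels = []
    while True:
        filtered = [tuple((a + b) / 2.0
                          for a, b in zip(n, normals[min(i + 1, len(normals) - 1)]))
                    for i, n in enumerate(normals)]
        lens = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in filtered]
        levels.append([toksvig_factor(l, spec_power) for l in lens])
        if len(normals) == 1:
            break
        # Standard 2:1 box downsample of the (unnormalized) normals.
        normals = [filtered[i] for i in range(0, len(normals), 2)]
    return levels
```

With a perfectly flat normal map every factor comes out as 1, so artist-authored powers are untouched; any divergence within the base texels pulls the top-level factor below 1, avoiding the discontinuity between the base and second MIP levels.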


The hardware gradients were bad enough that I deemed them unusable for our data. I would choose no hardware mipping over the results I got on relatively bumpy surfaces. Precomputing the LOD allows it to be smooth and at pixel res, but with the obvious downsides. It also means spec from lights gets filtered as well, which is nice. It then, as you said, picks the LOD purely from the gloss value.

What I'm referring to with the object space comments is that all this prefiltering that I'm doing as well as what you've described is on tangent space normal maps which ignores the mesh's normals. If that could be factored in (which for me is impractical) it would give even better results.


"I’ve toyed with the idea of dynamically prefiltering an envmap for the current view to create blinn shaped highlights and then using it on objects"

Could you elaborate? I don't see how that'll work unless you meant using a stack of filtered maps (or multiple MIP taps) to approximate the changing lobe shape based on the view angle. Jan Kautz talks about this in the same course as [6]: "Reflectance Rendering with Environment Map Lighting".