Specular Showdown in the Wild West

Okay, this finally clicked. For some bizarre reason I thought there could still be a HW gradient/quad issue using the combined gloss value, but presumably that's just confined to texCUBE, not texCUBELod, where the LOD is being completely overridden. Derp! I'm going to conveniently blame this on the Montreal heat wave. ;)

I get the object-space idea, but when you first brought it up I thought it would be completely unmanageable in terms of storage, so I assumed I was missing something. When you later mentioned virtual texturing, it seemed more feasible.

Steve:
There's still a problem on some hardware that only allows the texture LOD to be set on a per-quad basis. Even on such hardware, the precomputed version looked better; I'm not sure why.

My idea for getting Blinn-shaped highlights from a cubemap is somewhat specific to how they're used in our game. There is only one cubemap applied to everything on screen per frame, and it's done deferred. Because there's only one, I could do some processing on it without too much expense. The idea is to filter it with the current view in mind. Think of a photographed mirrored sphere used for VFX reference; that can be used as a spherical envmap. The same can be done with a rougher sphere, so you can get physically accurate shading for isotropic materials. I haven't tried this at all, so I could be full of shit, but I think the idea is sound.

Question: what should you do if you want to apply detail normal maps? Will they need their own Toksvig maps? Will we need to somehow combine those maps at runtime, along with a specular power uniform?

Your article has awakened my curiosity. ;)

Interesting question! In that case, you might be able to get away with stashing an extra roughness term in a spare channel of the detail normal map and then combining it in the way that Marc described earlier. I'll look into this.
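For illustration, here's a minimal sketch of one way to do the combination, assuming roughly Gaussian lobes so that the lobe variances (~1/power) simply add; the function name is mine, and this may not be exactly what Marc had in mind:

// Sketch: combine base and detail Blinn-Phong powers by adding the
// corresponding lobe variances: 1/sCombined = 1/sBase + 1/sDetail.
float CombineSpecPower(float sBase, float sDetail)
{
    return (sBase * sDetail) / (sBase + sDetail);
}

Note that the combined power is always lower than either input, which matches the intuition that layering bump detail can only roughen the surface.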

Hi, Steve.
Great overview of the subject! You're spot on about how important this is.

At Treyarch we did something very similar for Call of Duty: Black Ops.
I'll be presenting the details next month at SIGGRAPH as part of the "Advances in Real-Time Rendering in Games" course.
We did both gloss map preprocessing and environment map prefiltering/MIP control.
I'll host the slides on my blog after the conference for people who won't be able to go.

Thanks Dimitar! I note from your abstract that you'll be delving into production issues, which is great to see.

We used environment map prefiltering on Conviction, but gloss maps weren't used much, so I was planning to return to the topic in conjunction with anti-aliasing. That probably won't happen before SIGGRAPH though, so I'm looking forward to seeing what you've done there!

Thanks for the texture LOD info. I would have taken a closer look myself, but I'm not in a position to test hardware differences at the moment.

Re cubemaps: the single view-space sphere map is actually what I thought you had in mind. Whilst that ought to accelerate the filtering process, it doesn't change the fact that the lobe shape for Blinn-Phong varies with the view angle - getting tighter at grazing angles - as I said before, so you'll need to account for that somehow. It's possible that factoring V.N into the MIP adjustment will 'work', but I haven't tried that out yet.
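To make the MIP adjustment concrete, here's an untested sketch, assuming (purely for illustration) that MIP level k of the cubemap was prefiltered for a Blinn-Phong power of maxPower/4^k, i.e. the lobe footprint doubles with each level:

// Illustrative sketch: map a Blinn-Phong power to a prefiltered
// cubemap LOD, under the 4x-power-per-level assumption above.
float SpecPowerToMip(float specPower, float maxPower)
{
    // mip 0 = maxPower; each level divides the power by 4
    return 0.5 * log2(maxPower / max(specPower, 1.0));
}

Any V.N correction would then just bias the LOD returned here.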

Hi, the WebGL demo doesn't compile on my Linux box. The problem was a name collision with your pow() function;
this fixed it… Linux drivers are stricter, I presume:

// Renamed to avoid redeclaring the built-in pow(),
// which stricter GLSL compilers reject.
vec3 mypow(vec3 v, float p)
{
    return vec3(pow(v.x, p), pow(v.y, p), pow(v.z, p));
}

vec3 ToLinear(vec3 v) { return mypow(v,     gamma); }
vec3 ToSRGB(vec3 v)   { return mypow(v, 1.0/gamma); }

Thanks for reporting this! I had the same problem with a demo in a later post: http://blog.selfshadow.com/201..., but I forgot to fix this one.

Really good stuff. Thanks for the great write up and demos. I have a question about a comment in the filter example shader:
// NOTE: I'm only normalising here for demo purposes,
// because the source data is 8-bit/channel.
// Don't do this for the lower MIP levels!
Could you please shed some light on that?

Cheers!

> Could you please shed some light on that?

At the time that I wrote the code, there was patchy support for floating point formats in WebGL, so I rolled normalisation into that shader. What you really want to do is normalise once upfront, and then work with floating point data throughout the MIP generation process.

Note: if you comment out that line, you'll see that it produces noisier results, because the original normals are not *exactly* unit length (due to 8-bit quantisation). I was sloppy and skipped this for the main demo, but I could have performed it for the first MIP level, as the comment says.

You definitely don't want to renormalise the normals at every MIP level when computing the 'Toksvig factor'/glossiness since you won't get a true measure of the variance coming from the original normals over the footprint of that texel.
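To put that in code, here's a minimal sketch (names are illustrative) of the standard Toksvig computation from the unnormalised average normal:

float ToksvigFactor(vec3 avgNormal, float specPower)
{
    // len < 1 measures the spread of the source normals over the
    // texel's footprint; renormalising would throw this away.
    float len = length(avgNormal);

    // ft = len / (len + s * (1 - len)); the adjusted power is ft * s.
    return len / mix(specPower, 1.0, len);
}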

All of this is assuming that you're doing iterative downsampling of normals. You *could* always sample from the base level and grow the filter kernel for smaller MIP levels instead, but that generally doesn't lead to better results, just slower processing.

I hope that answers your question!

Super late comment, but there's a supersampling technique on NVIDIA hardware that can tackle nearly all temporal and specular aliasing (in most games): Sparse Grid Supersampling Anti-Aliasing (SGSSAA).

If only something like this were achievable on consoles, I'd be happy.

How would you generate the Toksvig map for the top MIP, since all of the normals are of unit length? This is assuming that the top MIP is the same resolution as the source texture, so you can’t generate a Toksvig map from a higher resolution. An example would be a clear coat on a bumpy surface, where you would get a white smoothness map, and the bumps would become “micro roughness” as you pull away.

Ah yes. I mentioned this briefly in response to Sébastien Lagarde (lagardese) but didn’t go into detail:

This is explained in more detail in a later presentation (slide 35):

> Note: it’s a good idea to apply a small (3x3) filter here to simulate hardware texture filtering that will happen at runtime. This further reduces aliasing and leads to smoother mip transitions (particularly from the base mip level).

Basically, at each MIP level, you average the normal with its immediate neighbours, weighted by a Gaussian kernel or similar. In practice I think I’ve used a standard deviation of 0.5.
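For illustration, that base-level filtering step might look like this (names are mine; the weights are a normalised 3x3 Gaussian for sigma = 0.5):

vec3 FilterNormal3x3(sampler2D normalMap, vec2 uv, vec2 texelSize)
{
    // 1D weights for sigma = 0.5, normalised: [0.1065, 0.787, 0.1065]
    float w[3];
    w[0] = 0.1065; w[1] = 0.787; w[2] = 0.1065;

    vec3 sum = vec3(0.0);
    for (int y = -1; y <= 1; ++y)
    {
        for (int x = -1; x <= 1; ++x)
        {
            vec3 n = texture2D(normalMap, uv + vec2(x, y) * texelSize).xyz * 2.0 - 1.0;
            sum += w[x + 1] * w[y + 1] * n;
        }
    }

    // Deliberately left unnormalised: the shortening feeds
    // the Toksvig factor described above.
    return sum;
}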

Hope that helps!