> Could you please shed some light on that?
At the time I wrote the code, there was patchy support for floating-point texture formats in WebGL, so I rolled normalisation into that shader. What you really want to do is normalise once upfront, and then work with floating-point data throughout the MIP generation process.
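As a minimal sketch of that "normalise once, then stay in float" idea (the function names and the 2x2 box filter are my assumptions, not the demo's actual code):

```python
import numpy as np

def decode_and_normalize(quantized):
    # Decode 8-bit [0, 255] normals to floats in [-1, 1], then
    # renormalise ONCE, before any MIP generation. This removes the
    # 8-bit quantisation error from the base level.
    n = quantized.astype(np.float32) / 255.0 * 2.0 - 1.0
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def downsample(normals):
    # One MIP step: 2x2 box filter on float normals, deliberately
    # NOT renormalising, so the shortened average encodes variance.
    return 0.25 * (normals[0::2, 0::2] + normals[1::2, 0::2] +
                   normals[0::2, 1::2] + normals[1::2, 1::2])
```

Each successive MIP level is then just `downsample` applied to the previous level's float data.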
Note: if you comment out that line, you'll see that it produces noisier results because the original normals are not *exactly* unit length (due to 8-bit quantisation). I was sloppy and skipped this for the main demo, but I could have performed it for the first MIP level, as the comment says.
You definitely don't want to renormalise the normals at every MIP level when computing the 'Toksvig factor'/glossiness: renormalising forces the averaged normal back to unit length, which destroys the shortening that measures the variance of the original normals over the footprint of that texel.
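To make that concrete, here is a sketch of the standard Toksvig correction, which reads the variance straight off the length of the *unnormalised* averaged normal (the function name and Blinn-Phong-style `spec_power` parameter are illustrative assumptions):

```python
import numpy as np

def toksvig_factor(avg_normal, spec_power):
    # |avg_normal| < 1 indicates spread among the source normals in
    # this texel's footprint; renormalising at each MIP level would
    # force this length back to 1 and erase that information.
    length = np.linalg.norm(avg_normal)
    return length / (length + spec_power * (1.0 - length))
```

A perfectly flat region (length 1) gives a factor of 1 (no gloss reduction), while a shortened average normal pulls the factor toward 0, dulling the highlight.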
All of this is assuming that you're doing iterative downsampling of normals. You *could* always sample from the base level and grow the filter kernel for smaller MIP levels instead, but that generally doesn't lead to better results, just slower processing.
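For completeness, the grow-the-kernel alternative can be sketched like this (a hypothetical helper, assuming a plain box kernel and power-of-two dimensions); with a box filter it produces exactly the same averages as iterated 2x2 downsampling, it just redoes the work from the base level each time:

```python
import numpy as np

def mip_from_base(base_normals, level):
    # Compute MIP `level` directly from the base level by averaging
    # a (2^level x 2^level) block of base normals per output texel.
    k = 1 << level
    h, w, c = base_normals.shape
    return base_normals.reshape(h // k, k, w // k, k, c).mean(axis=(1, 3))
```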
I hope that answers your question!