Next Shader Development


DanW58

INTRO:

To recap, the "psychic" shader is abandoned;  it will never happen.  During discussions in that thread, it was agreed that the way to go is to complete the eye-candy "metal detecting" shader (which now detects human skin, as well, and applies dielectric shimmer to it) and keep it as a better shader for existing art assets.  This shader is now complete and ready for review and adoption.  I include here how I described it in the D3555 update:

Quote

See this forum thread for development history:
https://wildfiregames.com/forum/topic/36330-a-psychic-shader-mod-development-begins/?do=findComment&comment=417003
The model_common.fs and model_common.vs shaders are updated; the first extensively.
The shader is now organized into subroutines, and is designed to maximize eye-candy from existing art assets. It includes two ad-hoc detection algorithms that identify metallic materials and human skin respectively. It incorporates science-based Fresnel reflectivity and refraction coefficients, to be able to represent paint, varnishes, etc.
Where the shader decides that a metallic material was intended, it adjusts diffuse and specular colors to better represent metallic reflectivity.
Where the shader decides that human skin is being drawn, it changes the index of refraction from the 1.0 default to 1.5, that of skin; and sets specular power to 17.0, making the arms and legs of human models glow in the sun the way human skin naturally reflects sunlight.
The shader also implements a pseudo environment mapping (using ambient light with skywards anisotropy) as well as a very rudimentary specular occlusion and ground reflection, based on AO value; making the reflections on metallic objects inside patios look like bubbles of light always facing the camera. The sharpness of occlusion boundaries is controlled by the material's specular power, as it should be.
Another attribute of this shader is that it corrects the intensity of specular highlights (sun reflections) on the basis of specular power; --i.e, the smaller the light spots, the brighter they look (even if it goes deeply into saturation), all rigorously math-based.
This shader is NOT intended as a shader to target new art to, which is why the patch is called "prohibited_shader.patch". New art, as discussed in the thread linked above, is to target the next version of this shader, which will have the detections removed, and will instead interface with a new texture stack able to communicate artistic intent clearly.

The Next Shader last mentioned is the subject of this forum topic.

New art assets will NOT need re-interpretation by the shader;  no need for "detections" of materials;  and it will embrace a more comprehensive texture stack capable of specifying all the important parameters needed to describe a material, such as metal/non-metal; metallic specularity in the case of a metal;  index of refraction and surface purity in the case of a non-metal.  We should also make sure that each channel has appropriate bit depth as per the criticality of its value accuracy or resolution.  For example, specular power, in my experience, is a parameter well worthy of its own texture channel, and it needs good resolution, as our eyes are quite sensitive to subtle changes in specular power across a surface.  Other channels are less critical, such as rgb diffuse color.

Another goal I'm setting for myself with the new shader(s) is to reduce their number, by making them capable of serving all the needs currently met by a multitude of shaders.

Another goal of mine is to get rid of as many conditional compilation switches in the shaders as possible, with the following criteria:

   Conditional compilation is okay to have when it pertains to user settings in graphics options, which are settings affecting all materials generally.

   Conditional compilation is NOT okay when it changes compilation per-object or per-draw-call, based on object parameters.  Why?  For one thing, the confusing rats' nest of switches in the shader becomes intractable.  For another, it effectively changes the shader from one call to the next, causing shader reloads, which are expensive in terms of performance.

My tentative ambition is to produce two shaders:  one for solids, and one for transparencies (excluding water;  the current water shader is beautiful;  by "transparencies" I mean glass, literally).

 

SHADER CAPABILITIES (FOR ARTISTS):

Without getting too technical, the "Next Shader", as we call it for now, will be able to depict a wide range of materials realistically.  If you want a vase made of terracotta, there will be a way to produce a terracotta look.  If you want a simple paint, or a high-gloss paint, or a painted surface with a clear varnish on top, you will be able to specify that exactly.  The look of glossy plant leaves, the natural sheen of human skin in the sun:  all of these will be distinctly representable.  Cotton clothes, silk clothes, leather, granite...  And there will be a huge library of materials that all artists can draw from, so that all artists are working on an equal footing vis-à-vis, for example, the albedo of the assets produced.  For an example of where albedo is not consistent at present:  Ptolemaic women have a good skin tone, but Greek and Roman women's skins are so white they saturate when they are in the sun, even as other assets look too dark.  This kind of inconsistency will be avoided.

You may notice I have not mentioned metals.  The reason for that is that metals CAN be represented correctly by the current shaders using diffuse and specular colors, but hardly a soul on this planet knows how to do it right.  So metallic representation will not be a "new" capability, per se; but inclusion of most metals in the materials library will be.

And the most important feature of this shader is that it will be physics- and optics-based;  not a bunch of manually adjusted steampunk data pipes and hacks.

However, it is important to state what it will NOT be capable of:

  1. Although it will try to have some clever tricks to achieve things like specular occlusion (reflections of objects blocking the environment),  it is not intended to be a ray-tracing shader.
  2. It will NOT use auto-updating environment boxes with moving ground reflections or anything of the sort.  There's better things to spend GPU cycles on.  The ground reflection will just be a flat color.
  3. It will NOT feature sub-surface scattering.  Too expensive, and there are some cheap hacks that can be made to look almost like SSS.
  4. It will NOT feature second light bounces.  Too expensive for what it's worth.
  5. It will NOT include things like hair or other complex material-specific shading.
  6. It will NOT support anisotropic specular reflection (think brushed metal or vinyl records).  Vinyl records had not been invented yet in the first century, anyway.

Possible new features:

Not only shadows (as the present shader does) but also environmental shadowing (this may need props).

Detail textures:  These are textures that typically contain tile-able noise, and which are mapped to repeat over large surfaces, such as a map, modulating the diffuse texture, or the normalmap, or the AO, at a very low gain, almost unnoticeable, but making it seem like the textures used are of much greater resolution than they are.  That's why they are called "detail textures":  they seem to "add detail".  Very effective trick, but they need to have their gain modulated from the main texture via a texture channel, otherwise their monotony can make them noticeable.  Long story.
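As a sketch of how a detail texture modulates a base channel (Python rather than GLSL; the 0.5-centered form and the per-texel gain channel are my assumptions of the standard trick, not a spec):

```python
def apply_detail(base, detail, gain):
    # Modulate a base texel by a tileable detail texel at low gain.
    # The detail value is centered on 0.5, so gain 0 leaves the base
    # untouched; the gain itself can come from a texture channel, as
    # suggested above, fading the effect out where its monotony
    # would otherwise become noticeable.
    return base * (1.0 + gain * (detail - 0.5))
```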

Possible test version of the shader, for artists, that detects impossible or unlikely material representations and shows them red or purple on the screen.

Possibly have HDR...

 

One thing this shader will NOT indulge in:

Screen-space ambient occlusion.  These are algorithms that produce a fake hint of ambient occlusion through iterative screen-space blurs and Z-depth tricks.  They are HACKS.

 

In my next post I will discuss texture packing, and give a few random examples for how materials will be represented.

 

OTHER...

I paste below a couple of posts from the "psychic" shader thread, as a way of not losing them, as they are relevant here:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Problem Number 1:

One issue that I seem to be the only guy in the world to ever have pondered is the fact that the Sun is NOT a point source;  it has a size.  I don't mean the real size, but the apparent size:  its diameter, in degrees.  If we were on Mercury, it would be 1.4 degrees.  From Venus (if you could see through the darned clouds), it would be 0.8 degrees.  From Mars it looks a tiny 0.4 degrees in diameter.  From Earth: 0.53 degrees.

This should be taken into account in Phong specular shading.  The Phong light spot distribution (which is a hack;  it is NOT physics-based) is (R dot V)^n, where R is the reflection vector, V is the view vector, the dot operation yields the cosine of the angle between them, and n is the specular power of the surface, where 5 is very rough, 50 is kind of egg-shell, and 500 is pretty smooth.  Given an angle between our eye vector and the ray of light reflecting from the spot we are looking at:  if the alignment is perfect, the cosine of 0 degrees is 1, and so we see maximal reflection.  If the angle is not zero but small, say 1 degree, the cosine of 1 degree is 0.9998477, a very small decrement from 1.0;  but if the specular power of the material finish is 1000 (very polished), the reflection at 1 degree will fall by 14% from the 0-degree spot-on value.  But with a perfect mirror, --a spec power of infinity--, a 1-degree deviation (or any deviation at all) causes the reflected light to fall to zero.  But that is assuming a point-source light...
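To make the numbers concrete, here is the 1-degree example above checked in Python (illustrative only; the shader itself would do this math in GLSL):

```python
import math

def phong(cos_angle, n):
    # Phong specular term: (R dot V)^n, where R dot V is the cosine
    # of the angle between the reflection and view vectors.
    return cos_angle ** n

# 1-degree misalignment on a surface with specular power 1000:
c = math.cos(math.radians(1.0))   # ~0.9998477
drop = 1.0 - phong(c, 1000)       # falls roughly 14% from spot-on
```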

If what is reflecting off a surface is not a point source, however, the minimum specular highlight spot size is the size of our light source.  This can be modeled by limiting our specular power to the power that would produce that same spot size from a point source.  But this limiting should be smooth;  not sharp...
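One simple smooth limiter, using the rational soft-cap form n*n_max/(n+n_max) that the EDIT near the end of this post mentions (a Python sketch; the function name and the 65k ceiling, taken from the Sun-size equivalence derived later in this post, are mine):

```python
def limit_spec_power(n, n_max=65000.0):
    # Smoothly cap specular power so it never exceeds n_max.
    # For n much smaller than n_max the result is approximately n;
    # for n much larger, it approaches n_max asymptotically --
    # a smooth limit rather than a sharp clamp.
    return n * n_max / (n + n_max)
```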

 

Problem Number 2:

This is a horrendous graphical mistake I keep seeing again and again:  as the specular power of a surface finish varies, specular spotlights change in size, and that is correct;  but the intensity of the light should vary with the inverse of the spotlight's size in terms of solid angle.  If the reflected light is not so modulated, it means that a rough surface reflects more light (more Watts or Candelas) than a smooth surface, all other things being equal, which is absurd.  As the specular spots get bigger, they should get dimmer;  as they get smaller, they should get brighter.

But the question will immediately come up:  "Won't that cause saturation for small spot-lights?"

The answer is yes, of course it may.  So what?  That's not the problem of the optics algorithm;  it is a hardware limitation, and there are many ways to deal with it...  You can take a saturated, super-bright reflection of the sun off a sword and spread it around the screen like a lens flare;  you can dim the whole screen to simulate your own temporary blindness...  Whatever we do, it is post-processing;  it is not the business of the rendering pipeline to take care of hardware limitations.  Our light model is linear, as it should be, and as physics-based as we can get away with.  If a light value is 100 times greater than the screen can display, so be it!  Looking at the reflection of the sun off a chromed, smooth surface is not much less bright than looking at the sun directly.  Of course it cannot be represented.  Again, so what?!

So the question is:  how high should we set the brightness multiplier as specular power goes up?  And also, at what specular power should the brightness multiplier be 1?

 

Research:

The two problems have a common underlying problem, namely, finding formulas that relate specular powers to spot sizes, where the latter need to be expressed in terms of conical half-angle and solid angle.

If we define the "size" of a specular highlight as  the angle at which the reflection's intensity falls by 50%, then for spec power = 1, using Phong shading, (R dot V)^n, the angle is where R dot V falls to 0.5.  R dot V is the cosine of the angle, so the angle is,

          SpotlightRadius(power=1) = arccos( 0.5 ) = 60 degrees.

Note that the distribution is equivalent to diffuse shading, except that diffuse shading falls to half intensity at 60 degrees between the surface normal and the light vector, whereas a specular-power-1 spotlight falls to half intensity when the half vector between the light and view vectors is 60 degrees from the surface normal.  But the overall distributions are equivalent.  We can right away answer one of our questions above, and say that,

                 Specular power of 1.0 should have a light adjustment multiplier of 1.0

How this multiplier should increase with specular power is yet to be found...

But so, to continue, what should be our formula for spotlight radius as a function of specular power?  For a perfectly reflective material,

        Ispec/Iincident = (R dot V)^n

 If we care about the 50% fall-off point, we can write,

        0.5 = (R dot V)^n

        (R dot V) = 0.5^(1/n)

So our spot size, expressed as a radius in radians :)

                SpotRadius = arccos( 0.5^(1/n) )

We are making progress!

Now, specular power to solid angle:

Measured in steradians, the formula for solid angle from cone half-angle (radial angle) is,

        omega = 2*pi * (1 - cos(theta))

But there are 2pi steradians in a hemisphere, so, measured in hemispheres, the formula becomes,

        omega = 1 - cos(theta)

If we substitute our spot radius formula above, we get

        omega = 1 - cos( arccos( 0.5^(1/n) ) )

which simplifies to,

                SpotSizeInHemispheres = 1 - 0.5^(1/n)

where n is the specular power.  Now we are REALLY making progress...

Our adjustment factor for specular spotlights should be inversely proportional to the solid angle of the spots, so,

        AdjFactor = k * 1 / ( 1 - 0.5^(1/n) )

with a k to be determined such that AdjFactor is 1 when spec power is 1.  What does our right-hand side yield at power 1?

1/1 = 1.  0.5^1 = 0.5.  1 - 0.5 = 0.5.  1/0.5 = 2.  So k needs to be 0.5

So, our final formula is,

                BrightnessAdjustmentFactor = 0.5 / ( 1 - 0.5^(1/n) )

where n is specular power.
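The whole derivation above can be sanity-checked numerically (a Python sketch; the function names are mine, not the shader's):

```python
import math

def spot_radius(n):
    # Half-intensity radius (radians) of a Phong highlight of power n:
    # SpotRadius = arccos(0.5^(1/n)).
    return math.acos(0.5 ** (1.0 / n))

def spot_solid_angle(n):
    # Spot size in hemispheres: 1 - cos(SpotRadius) = 1 - 0.5^(1/n).
    return 1.0 - 0.5 ** (1.0 / n)

def brightness_adjustment(n):
    # Inverse of the spot's solid angle, with k = 0.5 so that
    # specular power 1 gives an adjustment factor of exactly 1.
    return 0.5 / (1.0 - 0.5 ** (1.0 / n))
```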

Almost done.  One final magic ring we need to find is what is the shininess equivalent for the Sun's apparent size.

We know that its apparent diameter is 0.53 degrees.  So, its apparent radius is 0.265 degrees.

Multiply by pi/180 and...

                SunApparentRadius = 0.004625 radians

Good to know, but we need a formula to translate that into a shininess equivalent.

Well, we just need to flip our second formula around.  We said,

        SpotRadius(radians) = arccos( 0.5^(1/n) )

so,

        cos( SpotRadius ) = 0.5^(1/n)

        ln( cos(SpotRadius) ) = ln( 0.5^(1/n) )

        ln( cos(SpotRadius) ) = (1/n) * ln( 0.5 )

                n = ln( 0.5 ) / ln( cos(SpotRadius) )

Plugging in our value,

        ln( 0.5 ) = -0.69314718

        cos( 0.004625 radians ) = 0.99998930471

        ln( cos( 0.004625 radians ) ) = -1.0695347E-5

Finally,

                SunSizeSpecPwrEquiv = 64808  ...  (make that 65k :brow: )

So, we really don't need to be concerned, except for ridiculously high spec power surfaces;  but it's good to know, finally.
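For the record, here is the same computation in Python (variable names are mine):

```python
import math

sun_radius_deg = 0.53 / 2.0                    # apparent radius, degrees
sun_radius_rad = math.radians(sun_radius_deg)  # ~0.004625 radians

# Invert SpotRadius = arccos(0.5^(1/n)) to solve for n:
#     n = ln(0.5) / ln(cos(SpotRadius))
n_sun = math.log(0.5) / math.log(math.cos(sun_radius_rad))
# n_sun lands near 64,800 -- so only absurdly polished surfaces
# ever need to be limited by the Sun's apparent size.
```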

 

EDIT:  Just so you know, when I worked on this, decades ago, I obviously made a huge math error somewhere, and I ended up with a Sun size derived specular power limitation to about 70 I think it was.  I smooth-limited incoming specular power by computing n = n*70/(n+70).  I knew it was wrong;  the spotlights on flat surfaces were huge.  What was cool about it was the perfect circular shape of those highlights.  It was like looking at a reflection of the Sun, literally;  except that it was so big it looked like I was looking at this reflection through a telescope.

EDIT2:  One thing to notice here is the absurd non-linearity of the relevance of spec power;  maybe we should consider encoding the inverse of the square root of spec power, instead.  This way we have a way to express infinite (perfect surface;  what's wrong with that?) by writing 0.  We can express the maximum shimmery surface as 1, to get power 1.  At 0.5 we get 4.  At 0.1 we get 100.  We could even encode fourth root of 1/n. 
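The inverse-square-root encoding from EDIT2 can be sketched like this (Python; the helper names are hypothetical):

```python
def encode_inv_sqrt(n):
    # Encode spec power n as 1/sqrt(n); 0 is reserved to mean a
    # perfect surface (infinite specular power).
    return n ** -0.5

def decode_inv_sqrt(e):
    # 0 decodes to infinity (perfect mirror); 1 decodes to power 1
    # (maximally shimmery); 0.5 -> 4; 0.1 -> 100, as in the text.
    return float('inf') if e == 0.0 else 1.0 / (e * e)
```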

 


@hyperion  Another way to go about it, that you might care to consider, is to incorporate this shader now (if it works with all existing assets), but to also include a new shader with NO metal detection, and encourage artists to target the new shader.  Different texture stack, channels for spec power, index of refraction and detail texture modulation, etc.  So this shader and the new one would be totally incompatible, textures-wise, and even uniforms-wise.  This path removes the concern about people getting unexpected results with new models.  New models would NOT target the metal detection shader.  It would be against the law.

I could come up with the new shader in a couple of days;  I've got most of the code already.  So, even if there are no assets using it, people can start targeting it before version 25.

In this case, I'd cancel the "psychic" shader project.

 

Channels needed for a good shader:

Specular texture: rgba with 8-bit alpha

  1. specular red
  2. specular green
  3. specular blue
  4. specular_power  (1 to infinity, encoded as fourth root of 1/spec_power)

Diffuse texture: rgba with 1-bit alpha  (rgb encoding <metals> / <non-metals>)

  1. diffuse red /  purity_of_surface ((0.9 means 10% of surface is diffuse particles exposed, for plastics))
  2. diffuse green / index_of_refraction (0~4 range) ((1~4, really, but reserve 0 for metals))
  3. diffuse blue / detail_texture_modulator
  4. is_metal

Normalmap:  rgb(a)

  1. u
  2. v
  3. w
  4. optional height, for parallax

Getting Blender to produce them should be no issue.

Note that I've inverted the first and second textures, specular first, with the diffuse becoming optional...  For metals, the first texture alone would suffice, as diffuse can be calculated from specular color in the shader.  Artists who want to depict dirt or rust on the metal can provide the diffuse texture, of course;  but in the absence of diffuse, the shader would treat specular as metal color and auto-generate the diffuse.  Also, when providing a diffuse texture for a metal, it could be understood to blend with the autogenerated metallic diffuse;  so you only need to paint a bit of rust here, a bit of green mold there, on a black background, and the shader will replace black with the metal's diffuse color.
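As a thought experiment (not a final format), here is how the spec-power alpha channel of the proposed stack might be packed into 8 bits with the fourth-root encoding from the channel list above (Python sketch; function names are mine):

```python
def pack_spec_power(n):
    # Encode spec power n (1..infinity) as fourth root of 1/n,
    # then quantize to an 8-bit byte.
    e = (1.0 / n) ** 0.25
    return round(e * 255)

def unpack_spec_power(byte):
    # Byte 0 decodes to a perfect mirror (infinite power);
    # byte 255 decodes to power 1 (maximally shimmery).
    if byte == 0:
        return float('inf')
    e = byte / 255.0
    return 1.0 / e ** 4
```

Note that the quantization error is relative rather than absolute, which suits the eye's sensitivity to local spec-power variation better than a linear 1-to-256 mapping.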

 

 


The texture stack analysis begins...

As I said to hyperion before, I analyze needs first, then look for how to fulfill them.  Therefore, starting this analysis from looking at what Blender has to offer is anathema to my need to establish what we're looking for in the first place.  Not that I will not look at what Blender, or any parties, have to offer;  and not that I'm unwilling to compromise somewhat for the sake of pre-packaged convenience;  but, to me, looking at what is available without analyzing what's needed first is a no-no.

Let's start with the boring stuff:  we have diffuse.rgb and specular.rgb.  These two trinities MUST be mapped to the rgb channels of textures in the standard manner.  Why?  Because the parties that have come up with various compression and representation algorithms and formats know what they are doing;  they have taken a good look at what is more or less important to color perception;  so, say, a DDS texture typically has different numbers of bits assigned to the three channels for a reason.  We certainly would not want to swap the green and blue channels and send red to the alpha channel, or any such horror.

What I am unpleasantly and permanently surprised by is the lack of a texture format (unless I've missed it) where a texture is saved from high precision (float) RGB to compressed RGB normalized (scaled up in brightness) to make the most efficient use of available bits of precision, but packed together with the scaling factor needed by the shader to put it back down to the intended brightness.  Maybe it is already done this way and I'm not aware of it?  If not, shame on them!

It is clear to me that despite so many years of gaming technology evolution, progress is as slow as molasses.  Age of Dragons, I was just reading, uses its own format for normalmaps, namely a dds file where the Z component is removed (recomputed on the fly), the U goes to the alpha channel, and V goes to all 3 RGB channels.  Curiously, I had come up with that very idea, that exact same solution, 20 years ago, namely to try to get more bits of precision from DDS.  What we decided back then, after looking at such options, was to give up on compression for normal maps;  we ended up using a PNG:  RGB for standard normalmap encoding, and the alpha channel for height, if required.  So, our texture pack was all compressed EXCEPT for normalmaps;  and perhaps we could go that way too, or adopt Age of Dragons' solution, though the latter doesn't give you as much compression as you'd think, considering it only packs two channels, instead of up to four for the PNG solution.  And you KNOW you can trust a PNG in terms of quality.

In brief summary of non-metal requirements:  we need index of refraction and surface purity.  Index of refraction is an optical quality that determines how much light bounces off a clear material's surface (reflects), and how much of it enters (refracts), depending on the angle of incidence;  and how much the angle changes when light refracts into the material.  In typical rendering of paints and other "two layer" materials, you compute how much light reflects first, which becomes your non-metallic specular;  then you compute what happens to the rest of the light, the light refracting.  It presumably meets a colored particle, becomes colored by the diffuse color of the layer underneath, then attempts to come out of the transparent medium again... BUT MAY bounce back in, be colored again, and make another run for it.  A good shader models all this.  Anyways, the surface purity would be 1.0 for high quality car paints, and as low as 0.5 for the dullest plastic.  It tells the shader what percentage of the surface of the material is purely clear, glossy material, versus exposing pigment particles.  In a plastic, pigments and the clear medium are not layered but rather mixed.
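The reflect/refract split at a dielectric surface can be approximated with Schlick's well-known approximation to the Fresnel equations (a Python sketch; whether the shader uses exact Fresnel or Schlick is an implementation choice, and the 1.5 index for skin/varnish comes from earlier in this thread):

```python
def fresnel_reflectance(cos_i, ior):
    # Schlick's approximation for a dielectric surface in air.
    # cos_i is the cosine of the angle of incidence; f0 is the
    # reflectance at normal incidence, derived from the index of
    # refraction: f0 = ((n-1)/(n+1))^2. The remainder (1 - result)
    # refracts into the material and picks up the diffuse color.
    f0 = ((ior - 1.0) / (ior + 1.0)) ** 2
    return f0 + (1.0 - f0) * (1.0 - cos_i) ** 5
```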

ATTENTION ARTISTS, good stuff for you below...

Another channel needed is the specular power channel, which is shared by metallic and non-metallic rendering, as it describes the roughness of a reflecting surface regardless of whether it is reflecting metallically or dielectrically.  The name "specular power" might throw you off...  It has nothing to do with horses, Watts, politics, or even light intensities;  it simply refers to the core math formula used in Phong shading:  cos(angle)^(sp), coded as pow( dot(half_vec,normal), specPower ).  Here, dot(x,y) is a multiplication of two vectors ("dot product"), term by term across, with the results added up, which in the case of unit vectors represents the cosine of the angle between them;  normal is the normal to the surface (for the current pixel, actually interpolated by the vertex shader from nearby vertex normals);  half_vec is the median between the incident light vector and the view vector.

So, if the half vector aligns well with the surface normal, it means we have a well-aligned reflection, and the cosine of the angle is very close to one, which when raised to the 42nd power (multiplied by itself 42 times), or whatever the specular power of the material is, will still be close to one, and give you a bright reflection.  With a low specular power (a very rough surface), you can play with the angle quite a bit and still see light reflected.  If the surface is highly polished (high specular power), even a small deviation of the angle will cause the reflection intensity to drop, due to the high exponent (sharper reflections).  Note, however, that the Phong shading algorithm has no basis in physics or optics.  It's just a hack that looks good.

For non-programmers, you may still be scratching your head about WHEN all this math is happening...  It may surprise you to know it is done per-pixel.  In fact, in my last shader, I have enough code to fill several pages of math, with several uses of the pow(x,y) function, and it is all done per pixel rendered, per render-frame.
The way modern GPUs meet the challenge is by having many complete shader units (a thousand of them is not unheard of) working in parallel, each assigned to one pixel at a time.  If there are 3 million pixels on your screen, they can be covered by 1000 shader engines processing 3,000 pixels each.
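In Python rather than GLSL, the per-pixel formula just described looks like this (a toy CPU version; the GPU evaluates the same math once per pixel):

```python
import math

def normalize(v):
    # Scale a vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def blinn_phong_spec(light, view, normal, spec_power):
    # pow(dot(half_vec, normal), specPower): half_vec is the median
    # between the (unit) light and view vectors; the dot product of
    # two unit vectors is the cosine of the angle between them.
    half_vec = normalize(tuple(l + v for l, v in zip(light, view)))
    cos_angle = max(0.0, sum(h * n for h, n in zip(half_vec, normal)))
    return cos_angle ** spec_power
```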

... especially this:

But back to the specular power channel and why it is so important.  As I've mentioned elsewhere, the simplest way to represent a metal is with some color in specular and black in diffuse.  However, you do that and it doesn't look like metal at all, and you wonder what's going on.  Heck, you might even resort to taking photos of stainless steel sheets, and throwing them into the game, and it would only look good when looking at the metal statically;  the moment you move, you realize it is a photograph.   The look of metal is a dynamic behavior;  not a static one.  Walk by a sheet of stainless steel, looking at the blurry reflections off it as you walk slowly by it.  What do you see?  You see lots of variations in the reflectivity pattern;  but NOT in the intensity.  The intensity of reflectivity,  the specular color of the metal, doesn't change much at all as you walk.  What does change, then?  What changes is small, subtle variations of surface roughness.  And this is how you can represent EXACTLY that:  Add even a very subtle, ultra low gain random scatter noise to the (50% gray, say) specular power of your model of stainless steel;  now, as you walk by it in-game you will see those very subtle light shimmers you see when you walk by a sheet of stainless steel.  In fact, it will look exactly like a sheet of stainless steel.  You will see it in how reflected lights play on it.  THAT IS THE POWER OF SPECULAR POWER !!!

For another example, say you have a wooden table in a room that you have painstakingly modeled and textured, but it doesn't look photorealistic no matter what...  Well, take the texture you use to depict the wood grain, dim it, and throw it into the specular power channel.  Now, when you are looking at a light reflecting off the table, the brighter areas in the specpower texture will also look brighter in reflection, but the image of the light will be smaller for them;  while the darker zones of the wood fiber pattern in specpower will give a dimmer but wider image of the light.  The overall effect is a crawling reflection that strongly brings out the fibers along the edges of light reflections, but not elsewhere, making the fibers look far more real than if you tried to slap a 4k-by-4k normalmap on them, all while using a single channel!

For another example, say you have the same table, and you want to show a round mark from a glass or mug that once sat on that table.  You draw a circle, fudge it, whatever;  but where do you put it?  In the diffuse texture it will look like a paint mark.  In the specular texture it will look like either a silver deposition mark or like black ink.  No matter what you do it looks "too much"...  Well, try throwing it in the specpower channel...  If your circle makes the channel brighter, it will look like a circle of water or oil on the table.  If it makes the channel darker it will look like a sugar mark.  Either way, you won't even see the mark on the table unless and until you see variations in the reflectivity of a light source, as you walk around the table, which is as it should be.  THAT IS THE POWER OF SPECULAR POWER !!!
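The stainless-steel trick described above, adding ultra-low-gain scatter noise to the spec-power channel, might be prototyped offline like this (Python sketch; the gain and seed values are arbitrary illustrative choices):

```python
import random

def add_specpower_shimmer(specpower_channel, gain=0.02, seed=0):
    # Add very subtle, low-gain random scatter to a spec-power
    # texture channel (values 0..1, e.g. 0.5 for 50% gray), clamped
    # back into range. In-game, the resulting roughness variation
    # produces the shimmering reflections of real brushed metal.
    rng = random.Random(seed)
    return [min(1.0, max(0.0, v + gain * (rng.random() - 0.5)))
            for v in specpower_channel]
```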

So, having discussed its use, let me say that the packing of this channel in the texture pack is rather critical.  Specular power can range from as low as 1.0 for an intentionally shimmery material, think Hollywood star dresses, to about a million to honestly represent the polish of a mirror.  The minimum needed is 8 bits of precision.  The good news is that it's not absolute precision critical;  more a matter of ability to locally represent subtle variations.  I'm not looking into ranges, linearities and representations for the texture channels yet,  but I will just mention that the standard way to pack specular power leaves MUCH to be desired.  Mapping specular powers from 1.0 to 256.0 as 0-255 char integers has been done, but it's not very useful.  The difference between 6th and 7th power is HUGE.  The difference between 249th and 250th power is unnoticeable.  Furthermore, a difference between 1000th power and 1050th power is noticeable and useful, but a system limited to 256.0 power can't even represent it.  As I've suggested elsewhere, I think a 1.0/sqrt(sqrt(specpower)) would work much better.  But we'll discuss such subtleties later.

Other channels needed:

Alpha?  Let me say something about alpha:  alpha blending is for the birds.  Nothing in the real world has "alpha".  You don't go to a hardware store or a glass shop and ask for "alpha of 0.7 material, please!"  Glass has transparency that changes with view angle by Fresnel's law.  Only in fiction is there alpha, such as when people are teleporting;  and not only does it not happen in real life, it could never happen.  For transparency, a "glass shader" (transparent materials shader) is what's needed.  Furthermore, the right draw order for transparent materials and objects is from back to front, whereas the more efficient draw order for solid (opaque) materials is front to back;  so you don't want to mix opaque and transparent in the same material object.  If you have a house with glass windows, the windows should be separated from the house at render time, and segregated to the transparent and opaque rendering stages respectively.  All that to say I'm not a big fan of having an alpha channel at all.  However, there's something similar to alpha blending that involves no blending, called "alpha testing";  it typically uses a single-bit alpha, and can be used during opaque rendering from front to back, as areas with alpha zero are not written to the buffer at all, no Z, nothing.  This is perfect for making textures for plant leaf bunches, for example.  And so I AM a big fan of having a one-bit alpha channel somewhere for use as a hole-maker;  and, not to jump the gun here, but just to remind myself of the thought, it should be packed together with the diffuse RGB in the traditional way, rather than be thrown onto another texture, as there is plenty of support in real-time as well as off-line tools for standard RGBA bundling.
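Alpha testing, as opposed to blending, can be sketched as a simple keep-or-discard filter (a Python toy model of what the GPU's alpha test does per fragment; the dict shape is purely illustrative):

```python
def alpha_test_pass(fragments, threshold=0.5):
    # One-bit alpha testing: fragments below the threshold are
    # discarded entirely -- no blending, no Z write -- so the
    # surviving fragments can be drawn in the normal opaque
    # front-to-back pass. Perfect for leaf-bunch textures.
    return [f for f in fragments if f["alpha"] >= threshold]
```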

There is another alpha channel presently being used, I think in the specular texture, that governs "self-lighting", which is a misleading term;  it actually means a reflective to emissive control channel.  I don't know what it is used for, but probably needs to stay there.

Another "single bit alpha channel" candidate that's been discussed and pretty much agreed on is "metal/non-metal".  I've no idea where it could go.  Having this single bit channel helps save a ton of texture space by allowing us to re-use the specular texture for dielectric constant and purity.  There is a third channel left over, however.  Perhaps good for thickness.

One thing we will NOT be able to do with this packing, using the metal/non-metal channel, is representing passivated-oxide metals.  Think of a modded Harley Davidson, with LOTS of chrome.  If the exhaust pipe out of the cylinders, leading to the muffler/tail-pipe, is itself heavily chrome-plated, you will see something very peculiar happen to it.  In areas close to the cylinder, the pipe will shine with rainbow-tinted reflections as you move around it.  The reason for this is that the heat of the cylinder causes chrome to oxidize further and faster than cool chrome parts, forming a thicker layer of chromium oxide, which is a composite material with non-metal characteristics:  transparent, having a dielectric constant.  The angle at which light refracts into the oxide layer varies with angle AND with wavelength;  so each color channel will follow a different path within the dielectric layer, and sometimes come back in phase, sometimes out of phase.  I modeled this in a shader 20 years ago.  Unfortunately, the diffuse-texture-plus-dielectric-layer model is good for paints, plastics, and many things;  but modelling thick passivating rust requires specular plus dielectric.  It would have been good for sun glints off swords giving rainbow-tinted lens flares...
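The in-phase/out-of-phase behavior described above is classic thin-film interference. A toy two-beam model of the modulation looks like this (Python; the film index of 2.4 is an assumed, chromium-oxide-like value, and a real shader would also need the Fresnel terms at both interfaces):

```python
import math

def thin_film_reflectance(thickness_nm, wavelength_nm,
                          n_film=2.4, cos_t=1.0):
    # Two-beam toy model: the ray reflected at the top of the oxide
    # layer interferes with the ray reflected at the metal beneath.
    # The phase difference grows with optical path length, so each
    # wavelength (color channel) peaks at a different thickness --
    # hence the rainbow tints on hot chrome.
    phase = 4.0 * math.pi * n_film * thickness_nm * cos_t / wavelength_nm
    return 0.5 * (1.0 - math.cos(phase))   # 0..1 interference modulation
```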

EDIT:  There is another way we might be able to pull off the above trick: instead of using the metal/non-metal channel to switch the second texture between specular and non-metallic channels, we use it to interpret the FIRST texture as diffuse or as specular.  The reason this is possible is that non-metals don't have a "specular color", and metals don't really have a diffuse color either...  And you might ask: okay, how do I put rust on steel, then?  Well, rust is a non-metal, optically...  Alpha-blending rust onto steel may be a bit of a problem with this scheme, though...  But anyways, if we did this, we'd have our index-of-refraction channel regardless of whether the bottom layer is diffuse or metallic.  This would allow us to show metals like chromium and aluminium, which rust transparently; it would allow us to have rainbow-reflecting steels; and it would even allow us to have a rust-proof lacquer layer applied on top of bronze or copper.

And actually, it is not entirely inconceivable, in that case, for the metal/non-metal channel to have a value between 0 and 1, effectively mixing the two paradigms ... food for thought! ... in which case it could be thrown in the blue channel of the 2nd texture.  For artists, this could boil down to a choice between a simple representation of a diffuse-color material, or a specular metal, or representing both with a mix mask.  Thus, you could represent red iron oxide by itself (diffuse, spec power, index of refraction, whatever), and another set of textures representing the clean metal; then provide a grayscale texture with blotches of black and grey representing rust over a white background representing clean metal, press a button and voila!: the resulting first texture alternates between rust diffuse and grey specular color, and the blue channel in the second texture tells the shader "this red color is rust" and "that gray color is metal".  It could look photo-realistic while using a minimum of channels.
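As a sketch of that scheme (the function name and the 0.04 dielectric specular constant are my assumptions, not anything settled in this thread): one packed RGB texel is reinterpreted by the metal mask, with 0 meaning "read it as diffuse" and 1 meaning "read it as specular color".

```python
# One packed RGB texel, reinterpreted by a metal mask in [0, 1]:
# 0 = treat RGB as diffuse albedo, 1 = treat it as metallic specular color.
# The 0.04 colorless dielectric specular is the usual PBR convention.

def interpret_texel(rgb, metal):
    """Split one packed color into (diffuse, specular) by the metal mask."""
    diffuse  = tuple(c * (1.0 - metal) for c in rgb)
    specular = tuple(0.04 * (1.0 - metal) + c * metal for c in rgb)
    return diffuse, specular

rust  = interpret_texel((0.55, 0.20, 0.10), metal=0.0)  # read as diffuse
steel = interpret_texel((0.77, 0.78, 0.80), metal=1.0)  # read as specular
```

Intermediate mask values blend the two interpretations, which is exactly the "mix mask" idea above.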

 

I have to go, so I'll post this partial analysis for now and continue later.  One other channel that immediately comes to mind is detail-texture modulation.

I think we are already over the limit of what we can pack in three textures; I have a strong feeling this is going to be a four-texture packing.  That's "packing", though; at the artistic level there could be a dozen.  Other texture channels for consideration are the RGB of a damage texture, to alpha-blend as a unit takes hits... blood dripping, etc.

Edited by DanW58

Lots of text, so I will only pick a few points out.

8 hours ago, DanW58 said:

One thing this shader will NOT indulge on:

Screen-space ambient occlusion.  These are algorithms that produce a fake hint of ambient occlusion by iterative screen-space blurs of Z-depth tricks.  They are HACKS.

AO baking is also a hack, and an even older one at that.  There is a reason why SSAO was developed, and why there is even hardware support for it.  I agree this is orthogonal to a new shader/stack project and can be tackled later by someone, if they feel like it, without interfering here.  So leaving it out for now seems reasonable.

 

27 minutes ago, DanW58 said:

Alpha?  Let me say something about alpha:  Alpha is for the birds.  Nothing in the real world has "alpha".  You don't go to a hardware store or a glass shop and ask for "alpha of 0.7 material, please!"  Glass has transparency that changes with view angle by Fresnel law

If you hold a leaf towards the sun, some light leaks through, so "transparency" isn't just about glass.  Whether to use an alpha channel or something else is another question.

 

---

So I did a bit of research; there are basically two workflows for PBR, so the first thing to do is to figure out which to pick for 0ad.  One is specular/gloss, the other metal/roughness.  The latter seems to dominate, and the former is considered somewhat old-school.  Converting between them is reasonably possible, and 3D software vendors seem to support both.  I favour metal/roughness for the simple reason that it seems more common in industry and more intuitive for non-physicists.  I have no strong opinion, but this is something that should be discussed with core developers before jumping into implementing the shader / specifying the stack.

 

Next is what channels/information is needed.

  1. AO map seems gone in your list.
  2. A player_colour bit. If I'm not mistaken alpha is currently used for this.
  3. You mention glass; I mentioned a leaf.  There are fantasy mods, and you talked about a space mod.  So the specification for the stack should handle these sorts of materials; whether to implement them now, later, or never is not that relevant.
  4. Materials that emit light, lava for instance or fluorescent paint/engine on a spaceship.
  5. Clear coating?
  6. Others?

 

Encoding/bit-depth: if there is an industry standard for which channels get packaged together, their order, and their depth, it should be picked.  Requires some research, but will certainly help with tooling / interoperability.

 

UV: Probably should only ever support one UV set.


26 minutes ago, hyperion said:

AO baking is also a hack

How?  And which one?  There are two types of AO baking:  a) where all rays are counted equally, and b) where rays are multiplied by ray-dot-normal before accumulating.  I think both are pretty scientific.  The first represents exact unoccluded visibility.  The second represents diffusely reflected omnidirectional light.  The first is good for things like probabilistic occlusion hacks.  The second looks the most real when applied to models.  Do you mean it's a hack because of the finite samples?  Or because it does not take all of the environment into account?  Or because it isn't dynamic?

I mean, you could say all of graphics is a hack, since photons move from light sources to eyes, but in graphics we project from the eye, back to light sources...

 

26 minutes ago, hyperion said:

If you hold a leaf towards the sun some light leaks trough, so "transparency" isn't just about glass.

Ah, but that's not "transparency"; it's "translucency".  But I get what you mean.  If I hold a piece of tinted glass in front of my face and DON'T change the angle, I get a constant transparency rate, and therefore something similar to alpha.  But in any case, what I meant is that alpha is useless in 3D graphics unless you want to model a teleporter.

26 minutes ago, hyperion said:

AO map seems gone in your list.

No, it is just the fact that it's in a second UV channel.  I'm considering only textures pertaining to material representation, for now.

26 minutes ago, hyperion said:

A player_colour bit. If I'm not mistaken alpha is currently used for this.

Isn't player color something the CPU holds?  Oh, I know what you mean: a channel for where to show it, right?

26 minutes ago, hyperion said:

You mention glass I mentioned a leaf, there are fantasy mods and you talked about a space mod. So the specification for the stack should handle this sort of materials, whether to implement them now, later or never is not that relevant.

Touché.  I'll make sure my glass shader will be able to do glass, leaves, plasma, fire, and alpha things.

26 minutes ago, hyperion said:

Materials that emit light, lava for instance or fluorescent paint/engine on a spaceship.

Sure; the emissive texture was my favorite texture work in my Vegastrike days.  The real value of it is not just glowing things, which can be produced without a texture, but self-illumination.  Having parts of a ship receive light from other parts of the ship adds a TON of realism.

26 minutes ago, hyperion said:

Clear coating?

That's covered already.  Index of refraction, purity of surface and spec power all maxed out will give you the glossiest clear coating in the galaxy.

 

I don't understand the part of

26 minutes ago, hyperion said:

One is specular/gloss, the other metal/roughness.

To me all of that is necessary to describe a material;  I don't see where there is a "choice" to make.

Edited by DanW58

1 hour ago, DanW58 said:
1 hour ago, hyperion said:

AO baking is also a hack

How?

It's static.

 

1 hour ago, DanW58 said:

I don't understand the part of

2 hours ago, hyperion said:

One is specular/gloss, the other metal/roughness.

To me all of that is necessary to describe a material;  I don't see where there is a "choice" to make.

quick google: https://forums.unrealengine.com/development-discussion/rendering/14157-why-did-u4-use-roughness-metallic-vs-specular-glossiness

basically, what can be taken from there for the metal/roughness model:

  • taken from Disney
  • more intuitive
  • saves two channels
  • doesn't permit physically impossible materials

23 hours ago, DanW58 said:
23 hours ago, hyperion said:

If you hold a leaf towards the sun some light leaks trough, so "transparency" isn't just about glass.

Ah, but that's not "transparency"; it's "translucency".  But I get what you mean.  If I grab a piece of tinted glass in front of my face and DON'T change the angle, I get constant transparency rate, and therefore something similar to Alpha.  But in any case, what I meant is that Alpha is useless in 3D graphics unless you want to model a teleporter.

Khronos calls it "transmission":

https://github.com/KhronosGroup/glTF/blob/master/extensions/2.0/Khronos/KHR_materials_transmission/README.md


While my first AO bake in Blender 2.9 is running, I finally have the time to read up on this.

I absolutely love this post by an Unreal engine developer answering questions in that thread you linked me:

Quote

The material model is based off of Disney's. I've had experience in the past with a physically based model that was DiffuseColor, SpecularColor, and Gloss. There is nothing more or less physical about it. It is just a different interface to the artist. In my experience there are things about it that are problematic that I intended to solve.
Gloss to some people isn't obvious in how it works. What does gloss mean? Although high gloss and low gloss are perfectly clear to me on more than one occasion there has been confusion or debate about which produces sharper or blurrier reflections. Roughness is clear to everyone. It is immediately understandable and clear as to what effect it has on light. The unfortunate thing is that it is opposite the intuitive intensity of specular reflectance. This means that roughness maps look inverted visually. For this reason some engines have gone with "smoothness" instead, which if I were to do it again I would strongly consider.
DiffuseColor and SpecularColor have a complex relationship that requires a great deal of artist training and is very error prone. Artists need to be taught that metals have black diffuse and colored specular, nonmetals have noncolored specular of about 4%. What is that in sRGB space? These sound simple but trust me, making sure the textures followed rules like this is a long and difficult process. Having the parameter Metallic is much simpler. Is this metallic, yes, no? Now there is nothing to learn or screw up except setting metallic to something not 0 or 1. The learning process with this material model I have seen go much better.
An additional advantage is storage savings. Materials can often assume constants and not require textures for Metallic or Specular. The GBuffer gains one channel by storing Metallic and Specular instead of SpecularColor.
There is a downside being that it disallows some nonphysical materials with diffuse and colored specular. Occasionally that can cause issues but most of the time this is a good thing.
There are many others going the metallic route, Frostbite and The Order come to mind. The Disney presentation made a really big splash.

So, there is NO difference between specular/gloss and metal/roughness; it is nomenclature.  Pure semantics.  Specular and metal are the same thing, and gloss is the same thing as roughness (though they sound like opposites): both encode specular power.

Anyways, I'm starting to really like the idea of having a single "color" texture, where a "metal" channel causes the color to be read as diffuse or as specular.  Not only does it rule out incorrect materials at 0 and 1 for "metal"; it even makes sense for intermediate values.  A perfect metallic surface would indeed be black in diffuse, but as you introduce the kinds of imperfections that cause photons to bounce more than once before reflecting back out, you introduce a small measure of diffuse color, with higher saturation than the specular color, which the shader can calculate.  So 0.9 for "metal" would give you a less-than-perfect metallic surface.

At the opposite end, diffuse materials are usually colored by metallic content; most dielectrics are colorless.  And while most of the light may take multiple bounces to come out, not all of it does; in fact, telescope makers have long looked for a truly diffuse material to paint the inside of telescopes with, and it doesn't exist.  I can model that in the shader too; it is therefore not ludicrous to have 0.1 for "metal".  And if we treat this as a continuum, then we don't need to worry about filtering artifacts in our metal channel.

And we save a whopping 3 channels!!!

EDIT:  In fact, I think I'm starting to see an even more interesting use of the metal channel.  At middle values, say from .3 to .7, it could represent various types of rock.  The materials used for "stone" and "metal" could come from the .333 and .667 set points of the "metal" channel, in fact...
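The multiple-bounce saturation argument a couple of paragraphs up can be put in numbers.  This little sketch (the copper color and the function name are illustrative) raises a specular color to a power to stand in for light that bounces n times before escaping:

```python
# Light that bounces n times off a colored metal before escaping picks up
# the specular color n times, so the escaping "diffuse" component is the
# specular color raised to a power, which is visibly more saturated.

def n_bounce_color(spec_rgb, bounces):
    return tuple(c ** bounces for c in spec_rgb)

copper = (0.95, 0.64, 0.54)          # plausible copper specular color
two_bounce = n_bounce_color(copper, 2)
print(two_bounce)                    # more saturated than the one-bounce color
```

The red/blue ratio grows with each bounce, which is the saturation increase the shader could compute from the specular color alone.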

 

Edited by DanW58

11 hours ago, DanW58 said:

So, there is NO difference between specular/gloss and metal/roughness;  it is a nomenclature.  Pure semantics.  Specular and Metal are the same thing. Gloss is the same as roughness, though they sound like opposites, namely specular power.

There is no difference in the sense that both can depict materials realistically, but the difference in implementation and ability is obviously more than just semantics.

 

 

11 hours ago, DanW58 said:

EDIT:  In fact, I think I'm starting to see an even more interesting use of the metal channel.  At  the middle values, like from .3 to .7,  it could represent various types of rocks.  The materials used for "stone" and "metal" could come from the .333 and .667 set points of the "metal" channel, in fact...

I wouldn't start assigning random interpretations to channels, but rather stand on the shoulders of giants like Disney, Adobe, etc.  Do as the others do.

 

https://www.khronos.org/blog/art-pipeline-for-gltf

It describes a set of textures and an assignment of channels under "Texture Requirements":

Quote

The glTF format supports the following textures for the Metallic-Roughness PBR model:

  • Base Color
  • Metallic
  • Roughness
  • Normal
  • Ambient Occlusion
  • Emission

The format does expect the textures in a specific format, however. The Base Color, Normal, and Emission textures are saved as individual files, preferably a PNG as the format is lossless. The Ambient Occlusion, Roughness, and Metallic textures are saved in a single channel-packed texture to reduce the number of texture loads. The textures need to be packed into the channels of your texture as follows:

  • Red: Ambient Occlusion
  • Green: Roughness
  • Blue: Metallic

Going forward, I will refer to this channel-packed texture as the ORM texture for Occlusion, Roughness, and Metallic.

The set is obviously sufficient to depict realistic stone, as can be seen at

https://www.turbosquid.com/Search/Index.cfm?file_type=1022&keyword=stone&media_typeid=2

 


Hyperion, I'm not insisting we do things differently, necessarily.  I just want to establish requirements first, even design a packing in full detail, and THEN compare to other solutions.  I like to think things through from first principles, just like Elon Musk; I don't assume others know better than me.  Sometimes they do, but I like to discover it rather than assume it.  If I'm going to stand on Disney's shoulders, I want to know EXACTLY what advantage it's going to give me.  Am I going to get free scripts that generate all the textures, which I would otherwise have to write myself?  Or is it compatibility with rock and stone libraries that go for $50 a virtual bag, when I can get real stone at Dollarama for $1.25?

Quote

The best paths to export a glTF right now are Substance Painter, Blender, and Autodesk 3ds Max. Since we were just looking at creating textures for glTF in Substance Painter, we will start there.

I don't have Substance Painter or 3ds Max.

So all I'm asking is: let me go at this the long but sure way.  Think it through in detail; afterwards we compare to something else and figure out whether we should go with that instead.  I'm frankly tired of the this-is-the-ultimate sort of marketing, which often amounts to 1.5 good ideas under the hood and a dozen new problems.  In electronics there was the EDA revolution in CAD systems, with probably several million hours of sales teams going around to manufacturers explaining what a marvelous new thing EDA was; the entire world jumped on it, all the CAD companies became repeaters of the EDA mantra, and here we are today: CAD systems are the same as they always were.  EDA was complete vaporware.  There was ISO-9000 and how it was going to revolutionize industry with a deep, company-wide quality philosophy, and where does it all stand?  Companies rush once a year to "put something together to show the ISO guy when he comes".  Just a huge pile of hogwash.

The only effect of these "revolutions" is to actually prevent evolution in the very direction they claim to promote.  In the case of ISO-9000, for example, it prevents good documentation practices.  Twice in my career I tried to implement a GOOD documentation system (I do have a working definition), but was told that whatever I did could not conflict with ISO-9000's (dictatorial) ways.

I was writing software, at one time, to automate going between parts lists from my CAD tool and stock control, and was told it was not necessary, since the tools coming soon would have EDA and automate all that... It never happened.

In this case it may have happened already; but I want to see if the solution is exactly right, or not.  And if it isn't exactly right, is there enough advantage in following it to offset the lacks or shortcomings?  But if you want me to NOT think and just blindly follow, it is not going to work.  I've never been, and never will be, a blind follower; and appeal to authority does not appeal to me.

EDIT:  I'm reading through the page you linked, trying to understand where they are at.  One thing that strikes me already is the poor writing, such as "The format does expect the textures in a specific format".  It says that base color, normal, and emission are saved as individual files, but does not explain why.  It insinuates it is due to precision, saying they are PNG, but I can't imagine color or emission being so sensitive to compression.  Normal maps, yes...

 

Quote

 

Texture Requirements

The glTF format supports the following textures for the Metallic-Roughness PBR model:

  • Base Color
  • Metallic
  • Roughness
  • Normal
  • Ambient Occlusion
  • Emission

The format does expect the textures in a specific format, however. The Base Color, Normal, and Emission textures are saved as individual files, preferably a PNG as the format is lossless. The Ambient Occlusion, Roughness, and Metallic textures are saved in a single channel-packed texture to reduce the number of texture loads. The textures need to be packed into the channels of your texture as follows:

  • Red: Ambient Occlusion
  • Green: Roughness
  • Blue: Metallic

Going forward, I will refer to this channel-packed texture as the ORM texture for Occlusion, Roughness, and Metallic.

 

So, if I try to piece together this steaming pile of confusion: they basically have the same thing I was suggesting, as far as a single RGB serving as diffuse or specular, controlled by a fourth channel; call it "metal" or "specular", it matters not.

Then roughness (or gloss) ((or, as that game developer suggested it SHOULD have been named, "smoothness")) is the specular power.  PERIOD.
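For what it's worth, the two vocabularies can be bridged numerically.  One convention (used in several engines, but by no means the only one; the alpha = roughness² perceptual remap and the constants here are assumptions) converts a [0, 1] roughness to a Blinn-Phong specular exponent:

```python
# One common bridge between "roughness" and a Blinn-Phong specular power.
# alpha = roughness^2 is the usual perceptual remap; 2/alpha^2 - 2 is a
# conventional exponent conversion, not the only possible one.

def roughness_to_spec_power(roughness: float) -> float:
    alpha = max(roughness * roughness, 1e-2)  # remap, and avoid division by 0
    return 2.0 / (alpha * alpha) - 2.0

print(roughness_to_spec_power(1.0))   # fully rough: exponent drops to 0
print(roughness_to_spec_power(0.3))   # fairly smooth: exponent in the hundreds
```

Either way, a single scalar channel carries the information; only its name and its curve differ.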

Now, @hyperion , here is the crux of the matter, and why we cannot use their solution.  Why, in fact, it is poorly thought out:

We have two UV layouts in the models.  A great idea.

1)  The first is for material textures that are shared game-wide.  In this layout, islands of unwrap can be freely overlapped, increasing efficiency.

2)  The second is for ambient occlusion, where not so much resolution and detail are needed, but the UV islands MUST NOT overlap each other.  I presume this second UV layout is also the one to use for the normal map, as well as for the emissive texture.  ALL of these must be unique, non-overlapping.

If Disney, or whoever came up with this glTF, had spent some time THINKING, they would have realized that all material-related channels and all non-material-related channels should be kept separate.  Thus you would pack color, metal, and roughness in one texture, and ambient occlusion, normal, and emission in another.  Instead, they mixed things from these two domains randomly:  "The Ambient Occlusion, Roughness, and Metallic textures are saved in a single channel-packed texture".

And where is the Index of Refraction?

This is absurd.

In any case, we cannot use it, as our ambient occlusion is in a separate UV.

 

EDIT:  This is the only reference I can find so far suggesting that metal/roughness vs. spec/gloss is anything but semantics:

Quote

The biggest different though is in the usage and why the naming is important.

Glossiness 0=not glossy, 1=glossy
Roughness 0=not rough(glossy), 1=rough (glossy)

Specular is a not well defined process, you can use it to change specular colour and intensity
Metallness is quite different from specular colour in that it defines where the specular colour inherits from the underlying albedo where the intensity is a function of the energy conservation of PBR

Which is clear as mud to me.  So "Metallness [sic] defines where the specular colour inherits from the underlying albedo", eh?  So where is the texture for the underlying albedo?  And where in physics does specularity need to "inherit" anything?  If anything, albedo should inherit from specularity.  When you speak of the albedo of a comet or asteroid, it combines diffuse and specular.  Diffuse inherits from specular, in a physical sense, as it is a power of the specular color, given multiple bounces.  And albedo inherits from diffuse AND specular, by combining them.  So here is someone who has no idea what he is saying, pretending to explain the unexplainable, and only going halfway...  What about specular?

Edited by DanW58

So, having said all that, let's get to work.  Thanks to analyzing glTF's absurdities, I now have a more clear view of the path ahead.

Texture channels need to be separated into two distinct groups:

 

MATERIAL TEXTURES

Material textures are what defines a material, irrespective of where it is used, irrespective of lighting conditions, irrespective of whether there's a bump on its surface, regardless of whether a blow torch is heating a portion of it and making it glow red.  All the material textures care about is the representation of a material, and should include at least the following eight channels:

  1. Red
  2. Green
  3. Blue
  4. Specularity (metallic)
  5. Smoothness (gloss or ANTI-roughness)
  6. Index of refraction
  7. Purity of surface layer
  8. Thickness of refractive layer (for rainbow tinted metallic reflections through passivated oxide layers)
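The eight material channels above can be sketched as a plain data structure; the field names follow the list, and the 0..1 encodings are the ones discussed later in the thread, so they are assumptions at this point:

```python
# The eight proposed material channels, as one texel's worth of data.
# Field names and ranges follow the forum post; nothing here is final.
from dataclasses import dataclass

@dataclass
class MaterialTexel:
    red: float          # 1. base color, red
    green: float        # 2. base color, green
    blue: float         # 3. base color, blue
    mspec: float        # 4. specularity (metallic) mix, 0..1
    smoothness: float   # 5. anti-roughness; drives specular power
    gloss: float        # 6. index of refraction, encoded 0..1
    purity: float       # 7. purity of the surface layer, 0..1
    thickness: float    # 8. refractive (oxide) layer thickness, encoded 0..1

# Illustrative values for a polished bronze-like material:
bronze = MaterialTexel(0.71, 0.43, 0.18, 1.0, 0.8, 0.5, 1.0, 0.0)
```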

 

OBJECT TEXTURES

Object textures are what defines an object's appearance other than material.  Object textures should be material-independent, to the extent that you could completely change the material(s) an object is made of and NOT have to touch the object textures at all.  Here are 7 to 9 channels to consider:

  1. Ambient Occlusion
  2. Normal U
  3. Normal V
  4. Normal W
  5. Height (for parallax)
  6. Emissive texture (incident light, rather;  NOT material-modulated)
  7. Faction Color Map
  8. Decals and Logos  (option to consider)
  9. Damage  (option to consider)

 

UNDECIDED:

There's one channel I've been mentioning, detail-texture modulation, which I'm now on the fence about: should it be material-oriented or object-oriented?

Nah, one minute of thinking did it.  Object.  Why?  Because material textures can have unlimited detail thanks to the freedom to overlap UV islands; where detail is at a premium is in the second UV set.  So I'm adding it as number 7.

EDIT:  And even if we considered having detail textures for materials, which we still could, it would probably not be necessary to modulate them.

 

OBJECT TEXTURES (revised)

Object textures are what defines an object's appearance other than material.  Object textures should be material-independent, to the extent that you could completely change the material(s) an object is made of and NOT have to touch the object textures at all.  Here are 8 to 10 channels to consider:

  1. Ambient Occlusion
  2. Normal U
  3. Normal V
  4. Normal W
  5. Height (for parallax)
  6. Emissive texture (incident light, rather;  NOT material-modulated)
  7. Detail texture modulation
  8. Faction Color Map
  9. Decals and Logos  (option to consider)
  10. Damage  (option to consider)

 

EDIT:  Another thing to consider, for materials, is generative textures.  Some algorithms can produce 3D textures, such that you could cut a stone and find out what the texture looks like inside.  Not much use for this in 0ad, but worth keeping in mind for future expansion of the engine's uses.

Edited by DanW58

darn!  I forgot to include a single-bit alpha-test channel.  This is surprising: we need alpha testing for both materials AND objects.  Why?  For example, you could consider chicken-wire fencing a "material" and use alpha testing for the spaces between the wires.  Or you could make a whole set of tree leaves into a material and apply it to trees with big leaves made of it; so the material needs alpha testing.  Whereas the object may need alpha testing for all sorts of reasons: windows, bullet-holes, you name it...

 

NOTE:  Some readers might be alarmed by the number of textures being discussed.  Keep in mind this is the full set, which in many cases may not be used.  The only thing I ask, though, is that we use the same shader regardless of whether an object uses this texture or that.  There should be a folder in the game for commonly used textures, and in this folder there should be a BLACK.PNG, WHITE.PNG, NONM.PNG (NO normal map, i.e. flat 0.5, 0.5, 1.0 blue), etcetera, all 1x1 texel in size.  When an object doesn't need a normal map, it just calls for NONM.PNG.  Period!  No need to change shaders for every combination of textures.  If the object doesn't need an emissive texture, it calls for BLACK.PNG.  If it doesn't need an AO, it calls for GRAY50AO.PNG.  Etcetera.
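Such 1x1 fallback textures can be generated with nothing but the standard library.  This is a sketch, not part of any pipeline: the file names follow the post, and the PNG (8-bit RGB, one pixel) is written by hand:

```python
# Minimal hand-written PNG generator for 1x1 fallback textures.
# A PNG is a signature plus length-tagged chunks, each ending in a CRC32
# of (chunk type + chunk data).
import struct, zlib

def write_1x1_png(path, rgb):
    def chunk(tag, data):
        body = tag + data
        return struct.pack(">I", len(data)) + body + struct.pack(">I", zlib.crc32(body))
    # IHDR: width=1, height=1, bit depth 8, color type 2 (truecolor RGB)
    ihdr = struct.pack(">IIBBBBB", 1, 1, 8, 2, 0, 0, 0)
    # IDAT: zlib stream of one scanline = filter byte 0 + the single pixel
    idat = zlib.compress(b"\x00" + bytes(rgb))
    with open(path, "wb") as f:
        f.write(b"\x89PNG\r\n\x1a\n")
        f.write(chunk(b"IHDR", ihdr))
        f.write(chunk(b"IDAT", idat))
        f.write(chunk(b"IEND", b""))

write_1x1_png("BLACK.PNG", (0, 0, 0))
write_1x1_png("NONM.PNG", (128, 128, 255))   # flat normal: (0.5, 0.5, 1.0)
```

At 1x1 texel these cost essentially nothing, which is the whole point of the one-shader-many-fallbacks scheme.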

 

ANOTHER NOTE:  I like clear terms.  I like "smoothness" for specular power much more than "roughness" or "glossiness".  Roughness is the inverse of specular power, so it is counter-intuitive.  Glossiness is a gross misnomer, as it has more to do with reflectivity than with surface geometry; glossiness could be a good name for the index-of-refraction channel.  "Smoothness" is PERFECT.  Number two:  "Metal" or "Specular"?  Both can lead to confusion.  How about "MSpec"?  It has the advantage that you cannot assume you know what it is; you have to RTFM.  Number three:  indeed I really want to use "glossiness" for index of refraction, but the term has been taken over... :angry:  If I use it for IOR now, it will confuse people familiar with the bad systems.  Darn! ... Well, "Gloss" it is.  NOT "Glossiness"; just "Gloss" :brow:  That will be the name for IOR.  (And for anyone about to argue the term, consider this:  you go to the paint store and see a can of "high gloss" varnish.  Does that mean the surface will be very flat?  No; the surface will be as flat as you paint it.  It means it has a top-of-the-range index of refraction, so it reflects a lot.)

 

One thing that looking at that glTF stuff reminded me of is that multi-channel textures exist.  We only have one UV coordinate for material and one UV coordinate for AO per fragment-shader execution, so it is a bit wasteful to have so many separate texture fetches.  If we packed all 8 material channels in one texture, and all 8 object channels in another (AO, normal map, etc.), we could have just two texture fetches (in addition to environment-map fetches, of course).  That might perform better.  I've never used multi-layer textures, however, and I have no idea about their support on older hardware...  It shouldn't be too bad, though, as I remember these double-decker whopper textures already existed when I was in this business 20 years ago.  But for now I will proceed as if targeting a bunch of standard texture formats.

 

In any case, some angles we must look at before moving forward too fast are the precision, dynamic range, and mipmap concerns for each channel.

 

MIPMAPS:

(For those who may not know what mipmaps are:  they are like a set of smaller textures packed together with a texture, where each is half the size of the previous.  Imagine you take your image in GIMP, scale it to 50% and save it under a new name; then scale again to 50% and save under another name, until you get to a 1x1 texture; then package all those scalings in one file.  At game time, if you zoom out or move away from an object, pixels on the screen would fall on multi-texel strides on the texture, which would introduce aliasing problems; so you pick the level of scaling that gives you a stride of one texel.  Or rather, you pick the two scalings with strides closest to one texel and interpolate between them, as well as in the U and V directions in the texture, which is why this technique is called trilinear filtering.)
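The scaling-and-level-selection arithmetic just described can be sketched in a few lines (function names are mine, and the continuous level here is the idealized log2 rule, ignoring hardware details like anisotropy):

```python
# Mip chain sizes, and the continuous mip level picked for a given
# on-screen texel stride (trilinear filtering blends the two integer
# levels bracketing this value).
import math

def mip_chain(size):
    """Sizes of a full mip pyramid for a square power-of-two texture."""
    levels = [size]
    while levels[-1] > 1:
        levels.append(levels[-1] // 2)
    return levels

def mip_level(texel_stride):
    """Continuous mip level whose stride is closest to one texel."""
    return max(0.0, math.log2(texel_stride))

print(mip_chain(256))    # 256 down to 1, halving each level
print(mip_level(4.0))    # a stride of 4 texels selects level 2
```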

Having said that, mipmaps sometimes cause artifacts rather than prevent them.  To give a simple example, normal maps are infamous vis-a-vis mipmapping and filtering artifacts.  Ambient occlusion isn't negatively impacted by mipmapping at all, but it doesn't always benefit from it, either.  A height map (for parallax) can suffer disastrously from mipmapping...

Notice a pattern?  It seems like all the textures in our Object Textures set are mipmap-unfriendly, or at least mipmap-skeptical...  Perhaps it's worth considering NOT having mipmaps for object textures, and sticking to bilinear filtering for them.  Food for thought...

I can't think of any of the material channels having serious problems with mipmapping, on the other hand.  Can you?
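As a concrete illustration of why normal maps are mip-unfriendly: mip generation averages texels, and the average of two unit normals is shorter than unit length, so lighting flattens or darkens unless the result is renormalized (and even then the direction is a compromise).  A quick check:

```python
# Averaging two steep, opposing unit normals (what mip generation does)
# yields a vector noticeably shorter than unit length.
import math

def average(n1, n2):
    return tuple((a + b) / 2.0 for a, b in zip(n1, n2))

def length(v):
    return math.sqrt(sum(c * c for c in v))

steep1 = (0.707, 0.0, 0.707)    # two opposing 45-degree bumps
steep2 = (-0.707, 0.0, 0.707)
avg = average(steep1, steep2)   # (0.0, 0.0, 0.707)
print(length(avg))              # well short of 1.0
```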

 

DYNAMIC RANGE:

I don't mean HDRI here.  Color channels are all 0~1; no material can have an albedo of 2.  HDRI deals with the dynamic range of LIGHT; here we are dealing with MATERIALS.

Big difference.

The dynamic ranges to consider are for parameters that need to be mapped to a 0.0~1.0 range.  So it does not apply to red, green, or blue.  It does not apply to a "boolean" such as "is_metal".  And it does not apply to the "surface purity" channel, which goes from pure diffuse (0.0 purity) to pure dielectric specularity (1.0 purity).

It does, however, apply to...

Gloss, our term for Index of Refraction.  This is going to be a hard choice, though...  Indexes of refraction for materials go, PRACTICALLY, from 1 to 3.  Anything outside this range is for SciFi, really.  But there are exceptions.  Some people argue that metals have negative indices of refraction.  If we were to take that to heart, we'd get rid of the MSpec channel and have Gloss extend down to -1.0.  I'm not sure what results we'd get.  But it is not a good choice, because metals can have a passivated oxide layer that acts as a non-metal, with a positive IOR and some given thickness;  a negative IOR to represent metal would then be unable to represent metals with transparent oxide surface layers.  So the bottom of the range for IOR is 1.0, the IOR of air and vacuum.  Water is about 1.33.  Glass is about 1.5, close enough to water's that it is hard to see a piece of glass under water.  Diamond is about 2.4.  I've heard of some materials going a little over 3.0, and of laboratory-made materials with IOR's as high as 7, or so I've heard.  I think that in the interest of simplicity and compromise (and peace) I'm going to set the range to 1.0~5.0, mapped to 0.0~1.0.  Thus,

IOR = ( Gloss * 4.0 ) + 1.0

Gloss = clamp( ( IOR - 1.0 ) / 4.0, 0.0, 1.0 )

I don't think Gloss will need too many bits of precision, however, for two reasons:  variations in IOR within a material would not be very noticeable, and they don't often occur, whether naturally or artificially.
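The Gloss/IOR mapping above is trivial, but worth pinning down in code so the shader and any tools agree; a quick Python transcription (names are mine):

```python
def gloss_to_ior(gloss):
    # Map the 0..1 Gloss channel onto the 1.0..5.0 IOR range.
    return gloss * 4.0 + 1.0

def ior_to_gloss(ior):
    # Inverse mapping, clamped so exotic IORs can't wrap around.
    return min(max((ior - 1.0) / 4.0, 0.0), 1.0)

print(gloss_to_ior(0.125))   # 1.5, glass/skin territory
print(ior_to_gloss(2.4))     # ~0.35, roughly diamond
```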

Thickness of passivated layer, for rainbow-tinted specularities:  I would tentatively say that the range of interest is up to 10 to 20 wavelengths of red light.  650nm x 10 = 6500nm = 6.5 microns.  So, from 6.5 to 13 microns.  Beyond that thickness we get into a range where the rainbow tint cycles too fast with changing angle to even notice.  So, tentatively, I say 0 to 10 microns maps to the 0.0~1.0 range.

You may be wondering why we need a channel for thickness of this passivated oxide layer.  We do because whenever you see such rainbow reflections they seem to proceed at different speeds, slowly near areas without the effect, faster away from them;  and to produce a realistic effect we'd have to make it do something similar.  So, if you are modelling a magical sword made by Indra, you might give it a thick layer on the sides, but thinning towards the edge, and with multiple thinning blobs along the length, and blurred the heck out of, for good measure.  This should result in fine bands of rainbow color on the sides (as you turn the sword in reflecting a light), slowing to wide and less changing color bands towards the sharp edge and blobs.

Smoothness is a very special case, because here we need to map a range from zero to infinity with a miserable 8 bits of precision.  I tackled this in some other thread and came to the conclusion that a pretty good mapping would be one minus the inverse of the fourth root of specular power.  Let's try it to see if it works:

To texture:  Smoothness = 1.0 - 1.0 / sqrt( sqrt( SpecularPower ) )

From texture:  SpecularPower = pow( 1.0 / ( 1.0 - Smoothness ), 4.0 )

So, an encoding of 255 for maximum smoothness comes to... 1-255/256 = 1/256;  1/(1/256) = 256;  256^4 = 64k^2 = Too much.

In fact, the highest specular power we could ever need is 65000, as that is the specular power that represents a sun reflection of the right diameter, if the sun is modeled as a point source, as per the calculations at the bottom of the first post in this forum thread.  So I'm going to change the formula to just a square, instead of a fourth power.

To texture:  Smoothness = 1.0 - 1.0 / sqrt( SpecularPower )

From texture:  SpecularPower = pow( 1.0 / ( 1.0 - Smoothness ), 2.0 )

This channel, however, is probably THE most precision-sensitive channel in the whole package.
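To double-check that the square mapping lands right at the ~65000 ceiling, here is the encode/decode pair in Python (names are mine):

```python
def specpow_to_smoothness(p):
    # To texture: Smoothness = 1 - 1/sqrt(SpecularPower).
    return 1.0 - 1.0 / (p ** 0.5)

def smoothness_to_specpow(s):
    # From texture: SpecularPower = (1 / (1 - Smoothness))^2.
    return (1.0 / (1.0 - s)) ** 2

# The top 8-bit code, read as 255/256, decodes to the ~65000 ceiling:
top = 255.0 / 256.0
print(smoothness_to_specpow(top))   # 65536.0
print(specpow_to_smoothness(1.0))   # 0.0: a perfectly rough surface
```

Note the encoding convention here reads an 8-bit value v as v/256, so the code 255 never hits the 1.0 singularity.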

 

PRECISION (OR RESOLUTION):

For RGB, I defer to the designers of the DDS texture format(s).  5,6,5 bits, or whatever they did, is fine with me.  Color is overrated, anyways.

Alpha testing by definition needs only 1 bit.

MSpec would probably do well with as little as 5 bits.

Gloss would probably do well with as little as 5 bits.

Purity would probably do well with as little as 5 bits.

Thickness of passivated layer would probably do well with as little as 5 bits.

Smoothness, OTOH, needs a full 8 bits.

Ambient occlusion needs a full 8 bits.

Normalmap also needs 8 bits, but we could save the W channel;  it can be calculated by the shader from U and V.

Emissive would probably do well with as little as 5 bits.

Faction color, decal and logo would do with 1 bit each.

 

In the next post I'm going to summarize all this in a table.

 

Edited by DanW58
  • Like 1
Link to comment
Share on other sites

MATERIAL TEXTURES REVISITED

Red_channel       5 bits   "color.dds"
Green_channel     5 bits        "
Blue_channel      5 bits        "
Alpha_test_ch     1 bit         "
FresnelGloss      5 bits   "optic.dds"
FilmThickness     5 bits        "
SurfacePurity     5 bits        "
MSpecularity      1 bit         "
Smoothness        8 bits   "smooth.dds"

I decided to forgo the idea of using fractional metallicity for rocks, and leave MSpec as a single bit channel,  --"is_metal", in other words.

Single-bit channels are not too friendly to mipmapping and filtering, but on the other hand I can't think of an example where metal and non-metal would be finely mixed in a material,  EXCEPT ROCKS ...  But even in rocks, most metals appear as (optically) non-metallic oxides.  I think we can postpone rocks.  If worse comes to worst, we could conceivably have a separate shader for rocks and soils.  Such a shader could produce the scintillation effect of grain-of-sand-sized crystals with flat faces in random directions.  I once saw a shader that simulated 8 randomly oriented tiny mirror surfaces per pixel, to give a metallized-paint look.  Even as I write this, it is becoming more and more clear to me that rocks and soils indeed deserve a specialized shader.

So, as you see above, our material textures are settling nicely.  Basically, we have two DDS textures 5+5+5+1 bits,  and a monochrome 8-bit,  all with mipmaps.

I think this is it, for now, for our Material Textures package.

Next post I will deal with our Object Textures package.

 


Okay, it's not that easy.  I just went to refresh my memory on DXT texture formats, and the 5,5,5,1 format doesn't exist.

The way it works, which is now coming back to me after so many years, is that DDS takes 4x4 blocks of 16 texels at a time, places the sixteen incoming colors in a 3D color space, and tries to fit a line through all the points;  then finds the closest point on the line from each of the points.  The two end colors of the line are encoded as 5,6,5 bits for rgb, and each texel gets a two-bit index selecting one of the two ends, or one of two points interpolated between them.   So, 16 bits for each end-color is 32 bits, plus a 2-bit index for each of the 16 texels = 32 bits.  32+32 = 64 bits per 16-texel block, which works out to 4 bits per texel (versus 24 bits for uncompressed sRGB).  Of course, only 4 colors are actually present in each group of 16 texels, which is rather pathetic.  Works well if the image is filtered and dull;  but produces a lot of artifacts with sharp images.  Well, not necessarily;  the issue is with color channel correlations or the lack thereof.  The great advantage of this format is that it decompresses locally.  If you are decompressing jpg, for example, you have to decompress the entire image at once, so when using jpg or png formats the videocard driver has to do the decompression, and the image is put in video memory uncompressed.  DDS format decompresses locally, so the whole texture can be loaded into graphics memory compressed, and decompressed on the fly by the gpu as it reads it,  taking a lot less space in video memory (1/6 as much as 24-bit RGB for DXT1;  1/4 as much as 32-bit RGBA for DXT3/5).

The take-away is that in an image, depending on the detail and sharpness, you can have uncorrelated R, G and B data scatter, but DXT compression assumes it can bundle this scatter into a single straight line between two colors to interpolate across.  In other words, it depends on R/G/B correlation.  If you have a linear gradient, whether vertical, horizontal, or at any angle, DXT will capture that very well.  But for UNcorrelated change it can be terrible.  So, for blurry images it may work well ...
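The per-block budgets described above (and for the alpha formats coming next) are easy to tabulate; a small Python sketch of the bit accounting (my names, DXT1/3/5 only):

```python
def dxt_block_bits(fmt):
    """Bit budget for one 4x4 (16-texel) block."""
    color = 2 * 16 + 16 * 2       # two 5:6:5 endpoints + a 2-bit index per texel
    if fmt == "DXT1":
        return color                      # 64 bits
    if fmt == "DXT3":
        return color + 16 * 4             # + explicit 4-bit alpha per texel
    if fmt == "DXT5":
        return color + 2 * 8 + 16 * 3     # + two 8-bit alphas + 3-bit indices
    raise ValueError(fmt)

for fmt in ("DXT1", "DXT3", "DXT5"):
    print(fmt, dxt_block_bits(fmt) // 16, "bits/texel")   # DXT1: 4, DXT3: 8, DXT5: 8
```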

What I just described above is actually the most basic rgb format of dxt, namely DXT1, though.  There's also two formats containing alpha channels:  DXT3 and DXT5.  DXT3 and DXT5 both result in the same size texture from a given RGBA input.  They just express alpha two different ways, better suited to different circumstances:

In DXT5, the minimum and maximum alpha values in the 16-texel "texel" are expressed as 8-bit numbers;  so that's 16 bits.  Then the 16 texels are interpolated between those two extremes via 3-bit indexes, resulting in up to 8 shades of alpha per 16-texel group.  3 bits x 16 texels = 48 bits.  16+48 = 64, so the size of a DXT5 is double the size of a DXT1, but the quality of the alpha channel is much higher than for rgb, though it may suffer a bit in the interpolation.

In DXT3, each of the 16 texels gets an individual alpha value;  no fancy interpolations;  but each value is 4-bits only.  16 x 4 = 64.

DXT5 works better for smoothly changing alpha channels, or for packing smoothly changing data, such as AO.

For images with real detail in alpha, such as holes in a fence, you're much better off with DXT3.  For data-packing:  any smoothly changing data benefiting from higher precision, like AO, is best encoded as DXT5 alpha;  but for rapidly changing, detailed data not needing much precision, DXT3's alpha channel is much better.

Is there a DXT format with 1-bit alpha for alpha-testing?  There is, actually:  DXT1 itself has an optional 1-bit alpha mode, where one ordering of the two endpoint colors signals that the fourth index means fully transparent.  Still, the formats in common use are DXT1, 3 and 5, and I wonder if there's even hardware support for the more exotic entries in DXT's phone-book of formats.


MATERIAL TEXTURES REVISITED (take 2)

Red_channel       5 bits   "albedo.dds" (DXT5)
Green_channel     6 bits        "
Blue_channel      5 bits        "
Smoothness        8 bits        "
MSpecularity      5 bits   "optics.dds" (DXT5)
SurfacePurity     6 bits        "
FresnelGloss      5 bits        "
FilmThickness     8 bits        "

I was stuck at 2.5 textures for a while, finally decided to get rid of alpha, whether alpha blending or alpha testing, from the materials pack.  Leave it to the object pack to do its own alpha.  There's really no need to define chicken wire fence as a material, and trees can instance smaller objects representing small branches full of leaves, with object texture pack level alpha.  Getting rid of alpha allowed me to use the alpha channel for Smoothness.  I was also worried about film thickness precision, as pixelation artifacts in this data can be quite noticeable in the play of rainbow tint reflections;  but then I found I had a second 8-bit alpha channel I could use for it.

I think this packing ROCKS!  (No, not for the rocks;  they will get their own shader, which will also rock...)

Really super-efficient and optimized.  Disney and all those people haven't a clue how to even think ...

 

Next post, this time, will really deal with our Object Textures package.

 

EDIT (couple of days later):

Major channel rearrangement:

Red_channel       5 bits   "albedo.dds" (DXT5)
Green_channel     6 bits        "
Blue_channel      5 bits        "
AgedColor         8 bits        "
MSpecularity      5 bits   "optics.dds" (DXT5)
SurfacePurity     6 bits        "
FresnelGloss      5 bits        "
Smoothness        8 bits        "

"Aged_Color" added;  Thickness (of dielectric rust layer) moved to the Object Texture set, Zones texture, second channel, and changed name to "Ageing".  The reasons for these changes are as follows:

  • Zones of rusting are actually "Zones" in object space;   NOT part of a (shared) material.  The "Ageing" channel, in object UV space, will allow demarcation of zones where rusting or staining occurs for an object.
  • Passivated dielectric oxides that glow with iridescence are just one type of oxide;  there are matte red, black and orange oxides.  So it is clearly advantageous to expand the usefulness of this zoning to include any type of rust or weathering we may want to show;  not just iridescence.  The AgedColor channel in the albedo texture now encodes a color for "rust", from black (0.0), to red (0.25), to orange (0.5), and then clear (1.0).  The 0.75 range is not to be used.  When rust is colored (0.0~0.5), the Ageing channel value alpha-blends this color.  When it is clear (1.0), the value in the Ageing channel encodes thickness of transparent layer.  The alpha-blending of color also reduces MSpec and Purity.  When AgedColor is clear (1.0), Ageing's value increases MSpec and Purity.
  • This arrangement improves clarity even further:  Now the albedo.dds texture has an rgb albedo, plus it encodes an optional "aged" albedo in alpha.  Smoothness, which was rightfully a part of optics, is now in optics.dds.  And Thickness, which was a means of zoning rust effects, is now in Zones.dds.

OBJECT TEXTURES REVISITED (take 2)

Object textures are what defines an object's appearance other than material, as mentioned before.

Normal_U   8 bits  "Forms.png" (PNG sRGBA; no mipmaps)
Normal_V   8 bits       "
Normal_W   8 bits       "
Height     8 bits       "
Emit_red   5 bits  "Light.dds" (DXT5)
Emit_grn   6 bits       "
Emit_blu   5 bits       "
Occlusion  8 bits       "
Faction    5 bits  "Zones.dds" (DXT3)
DetailMod  6 bits       "
????????   5 bits       "
Alpha      4 bits       "

EUREKA!  3 Object Pack textures, with one channel to spare.  What I particularly like about it is how the channels hang together:  Normals and height are interrelated;  they are both form modifiers.  Same thing goes for Emission and Occlusion, both of which have to do with light and are "baked" in similar ways.  (I don't mean that lightbulbs and torches are baked;  I mean how they illuminate the object is something that can be and should be baked.  And this baking can be modulated by the CPU;  thus, a flame sequence for a torch can have an intensity track which in turn can be used to modulate the baked emissive.  And if you worry about actual lights in the emissive texture that should not be modulated, worry not;  I dealt with that problem a long time ago;  found a trick to decouple light sources from baked illumination. )

Finally, the Zones texture has three zonings.  Faction tells where to place faction color.  DetailMod tells where and how to add detail noise (0.5 will be NO detail;  0.0 will be max smoothness detail, 1.0 will be max AO detail.  For a ground or a very rough surface, AO detail can be best.  For a smoother surface, smoothness detail shines).  And alpha, which is also a zoning.
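My reading of the DetailMod encoding just described, sketched in Python (names are mine, purely illustrative):

```python
def detail_mod(v):
    """Decode the DetailMod channel: 0.5 = no detail,
    0.0 = full smoothness detail, 1.0 = full AO detail."""
    smooth_detail = 1.0 - min(v * 2.0, 1.0)  # ramps 1 -> 0 over [0, 0.5]
    ao_detail = max(0.0, v * 2.0 - 1.0)      # ramps 0 -> 1 over [0.5, 1]
    return smooth_detail, ao_detail

print(detail_mod(0.0))   # (1.0, 0.0): full smoothness detail
print(detail_mod(0.5))   # (0.0, 0.0): no detail
print(detail_mod(1.0))   # (0.0, 1.0): full AO detail
```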

The reason I chose DXT3 instead of DXT5 for Zones.dds is that the alpha channel does not need ANY precision, in fact, I would have been happy with a single bit alpha for alpha testing, and instead it really needs to NOT be messed up by correlation expectations of the DXT algorithm.

And all the stuff that doesn't mipmap well at all but requires high precision and low artifacts, namely the normalmap and the height for parallax:  those two are together, uncompressed and bi-linearly filtered.  I'm particularly happy about keeping the normalmap uncompressed, rather than trying the roundabout way of Age of Dragons, of using DXT5 RGB for U, then alpha for V, and computing W on the fly.  Like I said, I came up with that VERY idea 20 years ago, EXACTLY the same idea, and implemented it, but the results were far less than satisfactory, and eventually we decided to go uncompressed and un-mip-mapped.  No need to repeat that long learning experience.  The way it is here, the normalmap format is entirely standard, unproblematic, and the alpha channel has the height channel, which is the way it's been done for ages by engines using parallax.  Why reinvent a wheel that works well?  Those two things, normalmap and height, go well together.

 

To summarize, I have two DXT5 textures for materials, and then one uncompressed, one DXT5 and one DXT3 texture for objects.

5 textures altogether, only one of them is not compressed, and the channels they pack MAKE SENSE together.  Yes!

Compare this with the glTF abomination, which uses 3 UNCOMPRESSED textures, plus one compressed texture that mixes (object) AO with (material) attribute channels, making it useless to any engine that has the good sense to appropriately separate the concerns of objects and materials with separate UV mappings.  DXT3 and DXT5 being 4:1 compressed formats, this boils down to glTF being about TWICE THE SIZE of this packing (in video memory terms), and it doesn't even have such material channels as index of refraction, surface purity or passivated layer thickness;  and it doesn't have object texture channels such as height, faction, detail texture modulator or even a friggin alpha channel !!!

You get not even half the goodness, but have to pay TWICE the price, in video memory consumption, with that glTF stupid garbage ...

( And it pretentiously calls itself "physics based" but doesn't even have a channel for index of refraction.  What a joke! )

 

Here, even the names of textures, and the names of channels, make sense:   albedo.dds,  optics.dds,  forms.png,  light.dds  and  zones.dds.

Where else in the world do you get so much clarity?

"Smoothness" for specular power, "Gloss" for index of refraction,  "Purity" for surface lack of diffuse impurities,  "Thickness" for rust thickness ...

Any system you look into, nowadays, they intentionally misname things to mystify and pretend.  Nobody gives you clarity like this.  NOBODY.

 

Couple of closing notes:

I worked for years with an engine that supported only 1 UV mapping, Vegastrike, and back then I thought it was good.  But it didn't take me 5 minutes around here to realize what a great idea having 2 UV's is.  Separating the Material and Object concerns is a Win-Win-WIN, with the added advantage that pixelation concerns are minimized when two uncorrelated mappings are blended.  The ability to overlap islands in material space is invaluable, as it creates an appearance of great detail at minimal cost, and reduces artistic work enormously.  Imagine if for every building that uses brick or wood you would have to re-create, or at best copy and paste, bricks, or wood.  It is insane.  Don't throw away the best thing your engine has to offer.  Question, instead, the self-proclaimed authorities that came up with that glTF TRASH, --as it will be remembered (if at all) once all the marketing dust settles.

 

Note that materials don't have Emit, in this system;  here temperature is an object-related thing;  not a material attribute;  such that a lava flow would have to be an object, and have emissive color from the object texture pack, and with (non-emissive) volcanic rock as the material.  Which may simply boil down to the terrain, as an object, having an emissive burgundy color painted on the lava flow, to make it glow.

Regarding translucent plant leaves, fires, plasma drives and glass, all that will be served by another shader;  the "fireglass" shader, as I called it 20 years ago (yes, I coded it back then;  but I don't have the files;  will have to code it again).  (Coming after this one ...)

 

Comments?  Opinions?

((But please, keep it to technical merits;  don't start telling me I should be a sheeple and follow what "the big guys" do;  I would rather kill myself.))

 

If nobody has anything to add, subtract or point out, I will start working on the shader tomorrow.

Regarding exporting materials from Blender, I can't see what problems there'd be.  Blender has Index of Refraction, specular power, etc., built in;  Cycles has them too.  I could come up with a set of nodes for rendering to ensure they follow the exact same math as the shader, and therefore be sure renders look exactly like what objects will look like in-game.  Easier to debug in the shader first, THEN transfer to Blender, though.

As for texturing in Blender, we could simplify the workflow by using color codes for materials.  So you select, say, 16 materials you will need, assign them color codes, then in Blender you paint using color codes, but when you hit F-12 the colors are replaced by actual materials for rendering.  Just an idea, and it would probably need a lot of manual touch-up after export due to filtering artifacts.

I'd like to write a tool (in C++) to convert png to DXT1/3/5 better than other tools out there do.  I don't know that they DON'T do what I would do, yet;  but I'm guessing they don't.  You know what dithering is?  It's basically an error-diffusion algorithm that, given a quantization limitation in a texture, and given data of higher precision than is representable in the lesser texture, rather than throwing away data by rounding, tries to spread the errors a little bit among neighboring pixels, so that the overall effect is more precise.  Well, I think the same algorithm could be, and should be, applied to a texture before DXT compression, tailored to the peculiar 5,6,5 bits of DXT color representation.  So you dither down to where you have the three lowest bits of the red and blue channels at zero, and the green channel's two least significant bits at zero, while maintaining texel neighborhood color accuracy as much as possible, then DXT-compress.  Furthermore, I think in some situations it may be better to NOT use nearest points on the line to pick end colors, but allow the end points to move if it helps the overall matching accuracy with two-bit indexes.  I'm sure NOBODY is doing these things...  Furthermore, where the 16 points form a cloud that's very difficult to linearize, a good algorithm should skip the block, continue with the others, and once the neighboring blocks are worked out, perhaps choose the same line orientation as one of the neighbors, or average the orientations of the neighboring blocks.
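The error-diffusion idea above is classic Floyd-Steinberg dithering; a minimal single-channel sketch in Python (my names; a real tool would run this per channel at the 5/6/5 depths before compressing):

```python
def dither_to_bits(rows, bits):
    """Floyd-Steinberg error diffusion of one channel (0..255 floats)
    down to the given bit depth, e.g. 5 bits for DXT's red/blue endpoints."""
    step = 255.0 / (2 ** bits - 1)
    img = [list(r) for r in rows]           # work on a copy
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = round(old / step) * step  # nearest representable value
            img[y][x] = new
            err = old - new
            # Spread the rounding error to not-yet-visited neighbors:
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return img
```

On a flat mid-gray input this produces a mix of the two nearest representable levels whose average stays close to the original, instead of a uniform rounding bias.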

 

EDIT (day or two later):

Zones texture rearranged:

Faction    5 bits  "Zones.dds" (DXT3)
Ageing     6 bits       "
DetailMod  5 bits       "
Alpha      4 bits       "

The reason for this change is that Microns (oxide layer thickness for iridescent reflections) is actually a "zone" on an "object", rather than a material characteristic.  On the other hand, it is but one type of rust metal can exhibit.  Rusts can be black, red, orange, greenish for copper, or clear.  If we use the albedo alpha channel to encode a rust color, then this "Ageing" rust mask channel can tell where to apply it and how thick.  For colored rusts, Ageing acts like an alpha-blend.  For clear rust, it acts as thickness of dielectric film.  Perhaps it could be used to also add staining to non-metals.  The idea is that if Ageing is fully on (1.0), and AgedColor is a color (below 0.75), it will overwrite albedo and set MSpec and Purity to zero.  But if AgedColor is clear (1.0), MSpec and Purity will be maxed out.  How this would look on non-metals I don't know, and at least for now I don't care.
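A sketch of how I read the AgedColor/Ageing convention, in Python (function name, structure and the actual rust RGB values are mine, purely illustrative):

```python
def aged_material(aged_color, ageing, albedo, mspec, purity):
    """Sketch of the AgedColor/Ageing convention.
    aged_color: 0.0 black, 0.25 red, 0.5 orange rust; 1.0 = clear film.
    ageing: blend amount (colored rust) or film thickness (clear rust)."""
    if aged_color < 0.75:                     # colored, matte rust
        rust_rgb = {0.0: (0.05, 0.05, 0.05),  # black  (illustrative values)
                    0.25: (0.45, 0.10, 0.05), # red
                    0.5: (0.70, 0.35, 0.05)}  # orange
        rgb = rust_rgb.get(aged_color, (0.5, 0.25, 0.05))
        albedo = tuple(a * (1.0 - ageing) + r * ageing
                       for a, r in zip(albedo, rgb))
        mspec *= (1.0 - ageing)               # colored rust kills the metallic look
        purity *= (1.0 - ageing)
        film_microns = 0.0
    else:                                     # clear oxide: iridescence
        film_microns = ageing * 10.0          # 0..1 maps to 0..10 microns
        mspec = max(mspec, ageing)
        purity = max(purity, ageing)
    return albedo, mspec, purity, film_microns
```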


Changing the name of color.dds to albedo.dds, because it will actually represent the sum of diffuse and specular.

Here's the shader's texture input declarations:

uniform sampler2D TS_albedo;
uniform sampler2D TS_optics;
varying vec2 UV_material;  // First UV set.
//~~~~~~~~~~~
uniform sampler2D TS_forms;
uniform sampler2D TS_light;
uniform sampler2D TS_zones;
varying vec2 UV_object;  // Second UV set.
//~~~~~~~~~~~
uniform samplerCube skyCube;

Those "varying"s are interpolated texture coordinates coming from the vertex shader.  The uniform sampler2D things are texture reader units.  The last uniform is a cubemap reader, for the environment;  and coordinates for it don't come in a varying because they have to be calculated right here in the fragment shader.

Once in main(), the first thing we want to do is read those textures, as the operations will take a few cycles and we want to minimize dependencies.  The shader compiler would optimize dependencies anyways, but I like to keep my brains close to the silicon.

void main()
{
  vec4 temp4;
  vec3 temp3;
  vec2 temp2;
  float temp;

  // MATERIAL TEXTURES:

  // Load albedo.dds data:
  temp4 = texture2D( TS_albedo, UV_material );
  vec3 Mat_RGB_albedo = temp4.xyz; // To be split into diffuse and specular...
  float Mat_SpecularPower = 1.0 / ( 1.0 - temp4.w ); // Decode Smoothness...
  Mat_SpecularPower = Mat_SpecularPower * Mat_SpecularPower; // ...SpecularPower = (1/(1-S))^2.

  // Load optics.dds data:
  temp4 = texture2D( TS_optics, UV_material );
  float Mat_MSpec = temp4.x; // Metallic specularity % (vs diffuse).
  float Mat_Purity = temp4.y;
  float Mat_IOR = ( temp4.z * 4.0 ) + 1.0; // Gloss to IOR.
  float Mat_Microns = 10.0 * temp4.w; // Thickness of oxide film.

  // OBJECT TEXTURES:

  // Load forms.png data:
  temp4 = texture2D( TS_forms, UV_object );
  vec3 Obj_NM_normal = normalize( vec3(2.0, 2.0, 1.0) * ( temp4.xyz - vec3(0.5, 0.5, 0.0) ) );
  float Obj_ParallaxHeight = temp4.w;  // Any scaling needed?

  // Load light.dds data:
  temp4 = texture2D( TS_light, UV_object );
  vec3 Obj_RGB_emmit = temp4.xyz; // Emissive + self-lighting.
  float Obj_AO = temp4.w;  // Occlusion.
  vec3 Obj_RGB_ao = vec3( temp4.w ); // AO as color, for convenience.

  // Load zones.dds data:
  temp4 = texture2D( TS_zones, UV_object );
  float Obj_is_Faction = temp4.x; // Where to put faction color.
  float Obj_SP_detailMod = 1.0 - min( temp4.y * 2.0, 1.0 ); // DetailMod 0.0 = max smoothness detail.
  float Obj_AO_detailMod = max( 0.0, temp4.y * 2.0 - 1.0 ); // DetailMod 1.0 = max AO detail; 0.5 = none.
  //temp4.z is vacant for now.
  float Obj_Alpha = temp4.w;

  // VECTORS:
  .......................

  // ENVIRONMENT CUBE FETCHES:
  .......................

Deriving diffuse and specular from albedo is not as simple as multiplying by MSpec and its complement.  In any given material, the diffuse has higher saturation than the specular color due to light making multiple bounces before reflecting back;  so, technically, diffuse color is a power of specular color.  So we need an algorithm that can yield this type of relationship, while keeping the sum of the two colors equal to albedo.  Furthermore, if the material is 90% metallic reflectivity (by which we mean single-bounce reflectivity), we'd want the smaller diffuse component to be more saturated than the albedo, and specular to be the same as albedo, more or less.  But if the material is 0.1 MSpec, that is, 10% single bounce, 90% multi-bounce, we'd want the now smaller specular component to have less saturation than albedo, while the diffuse component remains about equal to albedo.  How do we achieve all of this?  Well, it took me a few trials and errors on paper, but finally I think I got it, --subject to debugging later, of course.

  // Separate albedo into diffuse and specular components:
  float Mat_MDiff = 1.0 - Mat_MSpec;
  vec3 Mat_RGB_diff = Mat_RGB_albedo * Mat_RGB_albedo; // Boost saturation.
  vec3 Mat_RGB_spec = sqrt( Mat_RGB_albedo ); // Wash saturation.
  Mat_RGB_diff = ( (Mat_MDiff * Mat_RGB_diff) + (Mat_MSpec * Mat_RGB_albedo) ) * Mat_MDiff;
  Mat_RGB_spec = ( (Mat_MSpec * Mat_RGB_spec) + (Mat_MDiff * Mat_RGB_albedo) ) * Mat_MSpec;
  temp3 = Mat_RGB_albedo / (Mat_RGB_diff + Mat_RGB_spec); // Renormalize results.
  Mat_RGB_diff = Mat_RGB_diff * temp3;
  Mat_RGB_spec = Mat_RGB_spec * temp3; // Done!
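The split can be sanity-checked off-line.  A quick Python transcription (my variable names) confirms that diffuse + specular sums back to albedo, and that the bigger component goes to whichever side MSpec favors:

```python
def split_albedo(albedo, mspec):
    """Per-channel transcription of the shader's albedo split above."""
    mdiff = 1.0 - mspec
    diff = [a * a for a in albedo]      # boosted saturation
    spec = [a ** 0.5 for a in albedo]   # washed saturation
    diff = [(mdiff * d + mspec * a) * mdiff for d, a in zip(diff, albedo)]
    spec = [(mspec * s + mdiff * a) * mspec for s, a in zip(spec, albedo)]
    # Renormalize so the two components sum back to albedo:
    scale = [a / (d + s) for a, d, s in zip(albedo, diff, spec)]
    diff = [d * t for d, t in zip(diff, scale)]
    spec = [s * t for s, t in zip(spec, scale)]
    return diff, spec

diff, spec = split_albedo([0.6, 0.3, 0.1], 0.9)  # a coppery, mostly metallic albedo
print([d + s for d, s in zip(diff, spec)])       # sums back to the albedo (up to float noise)
```

(Note the renormalization divides by diff + spec, so a pure-black albedo channel would need guarding against division by zero in the real shader.)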

 

In the next post, we'll process all the needed vectors;  stay tuned.

 

EDIT:  Another way of looking at the glTF stuff is this:  What exactly is the difference between it and the traditional way?  The ONLY difference I see is the fact that they got rid of the diffuse and specular colors, replaced it by a "color", plus a boolean "metal" or "spec" parameter.  This is GOOD STUFF;  there's no denying that;  but is it enough to call their entire system "Physics-Based"?   How do hype and pretense compare to substance?

  • I have made that change (1), PLUS:
  • The diff and spec components are separated from albedo with a natural counter-saturation relationship, physics-based (2).
  • The "MetallicSpecularity" I have is a continuous parameter that CAN be filtered, interpolated and adjusted;  it makes sense at 0.9;  physics-based (3).
  • The naming of parameters I have is far more intuitive, with "Smoothness" for spec power and "Gloss" for index of refraction.
  • In the shader, I modulate intensity of reflections inversely to spot size by a physics-based formula (4) --see first post.
  • I DO have index of refraction (Gloss), which they don't, and it is good for skin and a million things, not just water;  all physics-based (5).
  • I have a way to specify surface purity (freedom from diffuse particles) to distinguish, physics-based (6), paint from plastic, for example.
  • I have a way to specify dielectric rust film thickness for physics-based chromatic reflectivity (7).
  • I separate material from object concerns, and rightfully serve dual UV paradigms.
  • Textures occupy about half as much space in video memory as in glTF.
  • Channels are combined into textures in ways that make sense;  textures are named intuitively.

Final score:  7 to 1  (vis-a-vis physics-based feature count alone).

Not really the final score;  there's a lot more to it ...

In other words, THIS SYSTEM is Physics-Based;  NOT glTF.   THIS is the ONLY Physics-Based system there is, in all likelihood.


One question you might be pondering is why I haven't put the texture data loading stuff into subroutines.  Good question!  It would be nice to do so, but glsl doesn't support pointers or references.  It does have "out" and "inout" function parameters, which can return multiple values;  but scattering each texture fetch across half a dozen out-parameters reads worse, to me, than just inlining it.  Subroutines in glsl shine for functions that funnel a bunch of data into a single result;  not for functions that generate multiple outputs.  All those texture data loads are like inverted funnels:  they take data from the texture and spread it to different variables.  But don't worry, we will have plenty of subroutines soon enough.

Here's our incoming vector declarations:

uniform vec3 v3_sunDir;
varying vec4 v4_normal;
varying vec3 v3_eyeVec;
varying vec3 v3_half;
varying vec4 v4_tangent;
varying vec3 v3_bitangent;

Note that some are vec4's;  that's because there are additional parameters tagged onto the fourth float, which are used for some tangent space calculations in the current shader that are way beyond my head;  probably parallax related.  I don't want to mess with it, so I will transfer the xyz parts to vec3's in due course.

So now, back in main...

vec3 renormalize( vec3 input ) // For small errors only, faster than normalize().
{
  return input * vec3( 1.5 - (0.5 * dot(input, input)) );
}

void main()
{
  // ...TEXTURE STUFF...
  .........................

  // VECTORS AND STUFF:

  // Sanitize and normalize:
  // v3_sunDir should not need renormalization
  vec3 v3_raw_normal = renormalize( vec3( v4_normal ) );
  vec3 v3_eye_vector = renormalize( v3_eyeVec );
  vec3 v3_half_vec   = renormalize( v3_half );
  // Tangent stuff ... I know nothing about it.
  // Normal-map-modulated normal (NOTE: a full normal map application would
  // transform through the TBN matrix; this component-wise product is a placeholder):
  vec3 v3_mod_normal = renormalize( v3_raw_normal * Obj_NM_normal );
  vec3 v3_refl_view = -reflect( v3_half_vec, v3_mod_normal );
  // These numbers are precomputed, as they will be needed more than once:
  float upwardsness = v3_raw_normal.y;
  float rayDotNormal = max( 0.0, dot( -v3_sunDir, v3_mod_normal ) );
  float eyeDotNormal = max( 0.0, dot( v3_eye_vector, v3_mod_normal ) );
  vec3 fresnel_refl_color = SchlickApproximateReflectionCoefficient( eyeDotNormal, 1.0, Mat_IOR );
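For reference, here is the Schlick computation transcribed into Python exactly as the shader writes it --including the 0.9 damping factor on the angle term, which the textbook Schlick formula omits (I'm reproducing the shader's variant, not correcting it):

```python
def schlick_rc(ray_dot_normal, from_ior, to_ior):
    # Schlick's approximation to the Fresnel reflection coefficient,
    # with the shader's extra 0.9 factor on the angle term.
    r0 = ((to_ior - from_ior) / (from_ior + to_ior)) ** 2
    angle_part = (0.9 * (1.0 - ray_dot_normal)) ** 5
    return r0 + (1.0 - r0) * angle_part

head_on = schlick_rc(1.0, 1.0, 1.5)   # R0 = (0.5/2.5)^2 = 0.04 for skin (IOR 1.5)
grazing = schlick_rc(0.0, 1.0, 1.5)   # climbs steeply at grazing angles
```

So skin viewed head-on reflects about 4% of the light, and far more at grazing angles;  that is the glow on arms and legs mentioned in the intro.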
Edited by DanW58
  • Like 1
Link to comment
Share on other sites

There's another math issue to resolve before proceeding to the next stuff, namely the environment cube fetches.

There are two main reasons to read the env_cube:  specular reflections, and ambient light.

Ambient light?!  Well, yes;  the env_cube IS our ambient.  Ambient light reflected at any point on a surface is nothing but the light it reflects diffusely from the part of the sky its normal points to, multiplied by the AO.  Naturally!  So the AmbientLight uniform is not needed when we have an env_cube;  the slot can be used for Ground Color... ;-)

So, we have two fetches:  one for reflection, and one for ambient lighting.  But now, obviously we don't want the color of an exact point in the sky for ambient;  do we?

We want a very, VERY blurred sum of a portion of sky, --up to half of it, if the ao is 1.0.

But how do we read half the sky?

Well, that's easy, actually.  We simply read the env_cube with a lot of LOD bias.  The deepest LOD represents the entire sphere of sky with 6 texels.

If we don't have LOD's in the cube_maps, no problem, there are free tools that can generate them.  It will only take an afternoon to add LOD's to the cube-maps.

So then you read the cubemap with an LOD parameter, which is a floating point number;  the fetch is tri-linearly filtered.

 

In the case of specular reflections, we also want to calculate a bias, as such reflections are blurred by the roughness of the surface (the inverse of smoothness, i.e. of specular power).  To each specular power there corresponds a blur radius, and to each LOD in the env_cube there corresponds a blur radius.  We need to compute a bias that matches them.  Now, the problem is:  how do we compute the bias?

In the first post in this forum thread I derived a formula to relate specular power to solid angle.  So, we need a formula that relates solid angle to cube-map LOD.

If we were to assume no filtering, the solid angle for an LOD would be the solid angle of a single texel.  However, on a cube the solid angle of a texel in the middle of a face is larger than that of a texel near a corner, which is why cube-maps MUST --ABSOLUTELY MUST-- have spherical blurring applied.  And I believe the absolute minimum effective blur diameter is 2.5 texels --of the ones in the middle of a face;  a radius of 1.25.  So, assuming our cube-maps are 1024 x 1024 x 6 at LOD zero, what is the angle of 1.25 texels?  Hmmm...

It would be the arctan of 1.25/512 = 0.0024414014 radians.

From the first post, my formula relating shininess to radius was  n = ln( 0.5 ) / ln( cos(SpotRadius) )...

cos(0.0024414014) = 0.99999701978

ln(0.99999701978) = -0.0000029802244

ln(0.5) = -0.6931471806

0.6931471806 / 0.0000029802244 = 232582.2

Yes;  that's a specular power equivalent of 232,000 and change.  Which is not impossible, in principle;  a good mirror probably has specular power in the millions, if we were to calculate it;  funny they don't use that for marketing;  it's just that we cannot represent it in the game.  I think I designed the Smoothness mapping to get 64k at 255 channel value, so no reflective surface in-game will come even close to the most detailed LOD (level 0).
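The arithmetic above can be checked in a couple of lines of Python (this is just my verification of the numbers, not shader code):

```python
import math

# Spot radius subtended by a 1.25-texel blur radius on a 1024-texel cube face
# (half a face spans 512 texels at tan(45 deg) = 1):
spot_radius = math.atan(1.25 / 512.0)                 # ~0.0024414 rad

# Specular power whose Phong lobe falls to 1/2 at that radius,
# from the formula n = ln(0.5) / ln(cos(SpotRadius)):
n = math.log(0.5) / math.log(math.cos(spot_radius))   # ~232,582
```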

How about we go the other way?  Let's find out what blur radius we need for a given spec power.  Maybe we can get away with much smaller environment cubes...

At spec power of 65536, SpotRadius = arccos( 0.5^(1/n) ) = arccos( 0.99998942347 ) = .00459924962 radians.

tan( 0.00459924964 ) = 0.00459928207

The inverse is 217.426771381 blur radiuses per half a side, or 434.85 radiuses for full side.  Okay, so we can use 1024 cube maps, just keeping the blur radius to 2.35 texels, which is good;  I was worried about not being able to blur the cube-map enough to prevent DXT compression artifacts;  it seems we HAVE to do a nice blur.  EXCELLENT!
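Again, checking the numbers (my verification in Python):

```python
import math

n = 65536.0
spot_radius = math.acos(0.5 ** (1.0 / n))            # ~0.0045992 rad
# Half a cube face spans tan(45 deg) = 1 in tangent units:
radii_per_half_side = 1.0 / math.tan(spot_radius)    # ~217.4
texels_per_radius = 1024.0 / (2.0 * radii_per_half_side)  # ~2.35 texels
```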

So, I was going to say that the shader needed to be passed the cubemap size and blur radius as uniforms, but I think there is not much room to play with those parameters;  I think what these calculations are saying is that cubemaps HAVE to be 1024, AND HAVE to have a blur radius of 2.35 texels, or 0.0046 radians, at LOD 0.  So, let the shader assume these numbers, and I'll make sure our cube maps meet these specifications.

Okay, so now we know that for a specular power of 65,536 we need an LOD bias of zero.  But what is the general formula?

Stay tuned.


Someone might ask, "is it necessary to derive exact formulas for things like LOD bias?"  The answer is a qualified yes.  What most engines that even use LOD bias do is just fudge it.  Just like they fudge a rough formula for Fresnel.  The problem with fudging things is that eventually you arrive at a visible inconsistency.  One possible inconsistency here is between environment mapping's reaction to specular power, and sunlight's reaction to specular power.  One blurs by LOD biasing the cubemap with trilinear filtering.  The other one blurs by computing the Phong formula.  And the two have to agree.  If they don't, your visual cortex will know it, even if your prefrontal cortex denies it.

The same goes for the light intensity relationship between diffuse and specular.  Everybody fudges that.  But the day you see a shader that doesn't fudge it, that matches them mathematically, something about it seems cinematic in quality, and you don't even know what it is.  And the light intensities of diffuse and specular match perfectly when specular power is 1.0 and reflection falls to 1/2 at 60 degrees from the half angle, just as diffuse falls to 1/2 at 60 degrees between the light and normal vectors.
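That matching point is easy to verify:  at a specular power of 1.0, the Phong cosine lobe and Lambert's diffuse cosine are literally the same function:

```python
import math

angle = math.radians(60.0)
diffuse_falloff = math.cos(angle)          # Lambert: falls to 1/2 at 60 degrees
specular_falloff = math.cos(angle) ** 1.0  # Phong with n = 1: identical curve
```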

Yes, these things are important, both in relative and absolute terms.  I know because 20 years ago I worked on all this and came up with a shader that was utterly unbelievable.  I had been working on all the math on faith that it would make a difference;  and in the end it far exceeded my wildest expectations.

THIS IS PHYSICS-BASED RENDERING, by the way;  unlike all those other pretentious parties out there with far more money than brains.

Anyways, let us continue:

How about we calculate blur radius for four other specular powers, namely 4096, 256, 16 and 1.0,  and see if we see a pattern?

Using the formula  Radius = arccos( 0.5^(1/n) )  where n is the spec power,

SpecPWR    radius (rads)   /2.35 blur  2/x=rez  log2(x) 10-log2x=bias
================================================================
 65,536    0.00459924964   0.00195713   1021    ~9.99     0.0
  4,096    0.01839651273   0.00782830    255.5  ~8.00     2.0
    256    0.07355492296   0.03129997     63.9   6.00     4.0
     16    0.29223187184   0.12435399     16.0   4.00     6.0
      1    1.04719755125   0.44561598      4.5  ~2.22    ~7.8

These calculations were painful, but certainly not wasted.  How in the world an arccos of a funny power comes so close to a much simpler formula, I don't know, but seeing is believing.   Bias appears to be equal to 8 minus the log2( sqrt( spec_power ) ).  But the log of a square root is 1/2 the log of the thing, so this boils down to ...

float LODbias_from_spec_power( float spec_power )
{
  return clamp( 8.0 - ( 0.5 * log2(spec_power) ), 0.0, 7.78 );
}
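Here is a quick check (in Python, my own verification) that the fitted formula tracks the exact chain of calculations from the table --spot radius, per-texel angle at a 2.35-texel blur, equivalent cube resolution, bias:

```python
import math

def lod_bias_from_spec_power(spec_power):
    # The closed-form fit used in the shader.
    return min(max(8.0 - 0.5 * math.log2(spec_power), 0.0), 7.78)

def exact_bias(spec_power):
    # The exact chain from the table's columns.
    radius = math.acos(0.5 ** (1.0 / spec_power))
    rez = 2.0 / (radius / 2.35)
    return 10.0 - math.log2(rez)
```

For every specular power in the table the fit lands within about 0.01 LOD of the exact value, which is far below anything the eye could notice.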

Now we need to get a formula from AO.  Remember that the AO is basically a measure of solid angle expressed in hemispheres.  White, in AO, means a full hemisphere of visibility, which happens to be 2 pi (6.28) steradians.  Next post ...

 


Another question I might get:  "why not just use HDRI and forget about matching Phong to LOD bias and all that?"  Good question, again.  Because HDRI cube maps don't really solve any problem except laziness, and they do so at a very high price:  needing great bit depth to express a dynamic range of light spanning millions.  Besides, having the sun in the sky or in the sky texture is basically the same thing.  If you want it in the texture, you have to put it there, though.  If you want to show the same scene at different times of day, are you going to manipulate the environment texture to move the sun?  I'm not the biggest fan of OpenGL, but lights were created for a reason.

Anyways, calculating LOD bias from AO:

From the first post in this thread we have formulas relating AO and blur radius, namely,

hemispheres = 1 - cos( radius )

radius = arccos( 1 - hemispheres )

Hemispheres IS the AO.  So we can write AO = 1 - cos( radius ).

In the table above I have a column for radius;  maybe I can calculate AO for each radius, and add it as a new column ...

SpecPWR    radius (rads)   /2.35 blur  2/x=rez  log2(x) 10-log2x=bias     AO     fudged AO
==========================================================================================
 65,536    0.00459924964   0.00195713   1021    ~9.99     0.0           0.00...    0.00...
  4,096    0.01839651273   0.00782830    255.5  ~8.00     2.0           0.00017    0.00024
    256    0.07355492296   0.03129997     63.9   6.00     4.0           0.0027     0.00390
     16    0.29223187184   0.12435399     16.0   4.00     6.0           0.042      0.06250
      1    1.04719755125   0.44561598      4.5  ~2.22    ~7.8           0.5        1.00000

Well, this is interesting... For values of AO greater than 0.5, the LOD bias saturates!   What could this mean?

Oh, I think I know what it means:  the deepest LOD of a cubemap has 6 texels, each representing a third of a hemisphere.  Solid angles larger than a third of a hemisphere are simply not represented.  So, what do we do?  I'd like the bias to be continuous, not just clamped once AO exceeds 0.5;  I'd like the bias to reach 8 when AO is 1.0.

Okay, let me derive a continuous fit that gives a bias of 8 when AO is 1 and follows the same powers as specular power, which is what it seems to do (0.042/0.0027 ~= 16;  0.0027/0.00017 ~= 16).  So, let me write a first fudge like,

float LOD_bias_from_AO( float AO )
{
  return clamp( 8.0 + (0.5 * log2(AO)), 0.0, 7.8 );
}

I added the results as an extra column titled "fudged AO".  I think the result is pretty good.

Of course the question will come up:  why am I fudging a value after ranting about fudging so much?  Well, in this case I find it close enough to the true value, and it is a very economical solution.  To do better, I'd need a conditional testing for AO greater than 0.5, and in that case do my own filtering by interpolating between the LOD-bias-8 fetch and "AmbientLight" (the cubemap's average);  and that would be an incorrect calculation anyway.  More correct still would be to take, say, 7 samples in a hexagonal pattern with a center and average them;  but that's expensive.  In any case, the numbers are pretty close except at bias 8, and here continuity will look better than absolute precision.  One other thing I can do is scale the bias coefficient so that LOD 6 matches the true number.  To get an AO of 0.042 to give me a bias of 6 --in other words, to get 2 subtracted from 8 at 0.042-- with log2(0.042) being -4.573466862, the coefficient is 2/4.573466862 = 0.437305016, so,

float LOD_bias_from_AO( float AO )
{
  return clamp( 8.0 + (0.4373 * log2(AO)), 0.0, 7.8 );
}

Now I still get 8.0 for ao of 1.0, and for AO of 0.042 I get 6.00000845.  BINGO!!!!
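The same check in Python (my verification, using the unclamped fit so the AO = 1.0 endpoint is visible):

```python
import math

# Unclamped version of the fit; the 0.4373 coefficient is just
# 2 / -log2(0.042), chosen so that AO = 0.042 lands exactly on bias 6:
def ao_bias(ao):
    return 8.0 + 0.4373 * math.log2(ao)

bias_at_full_ao = ao_bias(1.0)    # log2(1) = 0, so exactly 8.0
bias_at_lod6 = ao_bias(0.042)     # ~6.0000
```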


We're making progress!!!

So, add these functions before main:

float LODbias_from_spec_power( float spec_power )
{
  return clamp( 8.0 - ( 0.5 * log2(spec_power) ), 0.0, 7.78 );
}

float LOD_bias_from_AO( float AO )
{
  return clamp( 8.0 + ( 0.4373 * log2(AO) ), 0.0, 7.78 );
}

And now we can finish what we started.  These environment map fetches have to be done as early as possible, as many things depend on them.

void main()
{
  // MATERIAL TEXTURES
  ...................................
  // OBJECT TEXTURES
  ...................................
  // VECTORS ETC.
  ...................................
  vec3 RGBlight_reflSky = vec3( textureCube( skyCube, v3_refl_view, LODbias_from_spec_power( Mat_SpecularPower ) ) );
  vec3 RGBlight_normSky = vec3( textureCube( skyCube, v3_raw_normal, LOD_bias_from_AO( Obj_AO ) ) );

 

This is FANTASTIC!  We have initialized everything that needs initializing;  now we are ready for the actual guts of the shader.

Before I go on, however, I will put all we've done so far into a code snippet, as the next post.


#version 120

#include "common/fog.h"
#include "common/los_fragment.h"
#include "common/shadows_fragment.h"

#if USE_OBJECTCOLOR
  uniform vec3 objectColor;
#else
#if USE_PLAYERCOLOR
  uniform vec3 playerColor;
#endif
#endif

// Textures:
uniform sampler2D TS_albedo;
uniform sampler2D TS_optics;
varying vec2 UV_material;  // First UV set.
//~~~~~~~~~~~
uniform sampler2D TS_forms;
uniform sampler2D TS_light;
uniform sampler2D TS_zones;
uniform sampler2D TS_detail;
varying vec2 UV_object;  // Second UV set.
//~~~~~~~~~~~
uniform samplerCube skyCube;

// Vectors:
uniform vec3 v3_sunDir;
varying vec4 v4_normal;
varying vec3 v3_eyeVec;
varying vec3 v3_half;
varying vec4 v4_tangent;
varying vec3 v3_bitangent;
varying vec4 v4_lighting;

// Colors:
uniform vec3 sunColor;
uniform vec3 gndColor;


// SUBROUTINES:

vec3 renormalize( vec3 input ) // For small errors only, faster than normalize().
{
  return input * vec3( 1.5 - (0.5 * dot(input, input)) );
}

float LOD_bias_from_spec_power( float spec_power )
{
  return clamp( 8.0 - ( 0.5 * log2(spec_power) ), 0.0, 7.78 );
}

float LOD_bias_from_AO( float AO )
{
  return clamp( 8.0 + (0.4373 * log2(AO)), 0.0, 7.78 );
}

vec3 SchlickApproximateReflectionCoefficient( float rayDotNormal, float From_IOR, float To_IOR )
{
  float R0 = (To_IOR - From_IOR) / (From_IOR + To_IOR);
  float angle_part = pow( 0.9*(1.0-rayDotNormal), 5.0 );
  R0 = R0 * R0;
  float RC = R0 + (1.0 - R0) * angle_part;
  return vec3(RC, RC, RC); // Returns Fresnel refl coeff as gray-scale color.
}

void main()
{
  vec4 temp4;
  vec3 temp3;
  vec2 temp2;
  float temp;


  // MATERIAL TEXTURES:

  // Load albedo.dds data:
  temp4 = texture2D( TS_albedo, UV_material );
  vec3  Mat_RGB_albedo = temp4.rgb; // To be split into diffuse and specular...
  float Mat_alpha = temp4.a;

  // Load optics.dds data:
  temp4 = texture2D( TS_optics, UV_material );
  float Mat_MSpec = temp4.r; // Metallic specularity % (vs diffuse).
  float Mat_Purity = temp4.g;
  float Mat_IOR = ( temp4.b * 4.0 ) + 1.0; // Gloss to IOR.
  float Mat_SpecularPower = 1.0 / max( 1.0 - temp4.a, 1.0/256.0 ); // 1 to 256.
  Mat_SpecularPower = Mat_SpecularPower * Mat_SpecularPower; // Smoothness squared: 1 to 65536.


  // OBJECT TEXTURES:

  // Load forms.png data:
  temp4 = texture2D( TS_forms, UV_object );
  vec3 Obj_NM_normal = normalize( vec3(2.0, 2.0, 1.0) * ( temp4.rgb - vec3(0.5, 0.5, 0.0) ) );
  float Obj_ParallaxHeight = temp4.a;  // Any scaling needed?

  // Load light.dds data:
  temp4 = texture2D( TS_light, UV_object );
  vec3 Obj_RGB_emmit = temp4.rgb; // Emissive + self-lighting.
  float Obj_AO = temp4.a;  // Occlusion.
  vec3 Obj_RGB_ao = vec3( temp4.a ); // AO as color, for convenience.

  // Load zones.dds data:
  temp4 = texture2D( TS_zones, UV_object );
  float Obj_is_Faction = temp4.r; // Where to put faction color.
  float Obj_Microns = 10.0 * temp4.g; // Thickness of oxide film.
  float Obj_AO_detailMod = 1.0 - min( temp4.b * 2.0, 1.0 );
  float Obj_SP_detailMod = max( 0.0, temp4.b * 2.0 - 1.0 );
  float Obj_Alpha = temp4.a;

  // Load detail.dds data, and apply it:
  temp3 = texture2D( TS_detail, UV_object * vec2(11.090169945) ).rgb;
  temp = dot( temp3, vec3(1.0) );
  Obj_RGB_ao = Obj_RGB_ao + vec3(0.0625 * Obj_AO_detailMod) * (temp3 - vec3(0.5));
  Mat_SpecularPower = Mat_SpecularPower * ( 1.0 + ( 0.0625 * Obj_SP_detailMod * (temp-0.5) ) );


  // VECTORS AND STUFF:

  // Sanitize and normalize:
  // v3_sunDir should not need renormalization
  vec3 v3_raw_normal = renormalize( vec3( v4_normal ) );
  vec3 v3_eye_vector = renormalize( v3_eyeVec );
  vec3 v3_half_vec   = renormalize( v3_half );
  // Tangent stuff ... I know nothing about it.
  // Normal-map-modulated normal:
  vec3 v3_mod_normal = renormalize( v3_raw_normal * Obj_NM_normal );
  vec3 v3_refl_view = -reflect( v3_half_vec, v3_mod_normal );
  // These numbers are precomputed, as they will be needed more than once:
  float upwardsness = v3_raw_normal.y;
  float rayDotNormal = max( 0.0, dot( -v3_sunDir, v3_mod_normal ) );
  float eyeDotNormal = max( 0.0, dot( v3_eye_vector, v3_mod_normal ) );
  vec3 fresnel_refl_color = SchlickApproximateReflectionCoefficient( eyeDotNormal, 1.0, Mat_IOR );


  // STUFF I KNOW NOTHING ABOUT:

  #if (USE_INSTANCING || USE_GPU_SKINNING) && (USE_PARALLAX || USE_NORMAL_MAP)
    vec3 bitangent = vec3(v4_normal.w, v4_tangent.w, v4_lighting.w);
    mat3 tbn = mat3(v4_tangent.xyz, bitangent, v4_normal.xyz);
  #endif
  #if (USE_INSTANCING || USE_GPU_SKINNING) && USE_PARALLAX
  {
    float h = Obj_ParallaxHeight;
    vec2 coord = UV_object;
    vec3 eyeDir = normalize(v3_eyeVec * tbn);
    float dist = length(v3_eyeVec);
    vec2 move;
    float height = 1.0;
    float scale = effectSettings.z;
    int iter = int(min(20.0, 25.0 - dist/10.0));
    if (iter > 0)
    {
      float s = 1.0/float(iter);
      float t = s;
      move = vec2(-eyeDir.x, eyeDir.y) * scale / (eyeDir.z * float(iter));
      vec2 nil = vec2(0.0);
      for (int i = 0; i < iter; ++i)
      {
        height -= t;
        t = (h < height) ? s : 0.0;
        temp2 = (h < height) ? move : nil;
        coord += temp2;
        h = texture2D(TS_forms, coord).a;
      }
      // Move back to where we collided with the surface.
      // This assumes the surface is linear between the sample point before we
      // intersect the surface and after we intersect the surface.
      float hp = texture2D(TS_forms, coord - move).a;
      coord -= move * ((h - height) / (s + h - hp));
    }
  }
  #endif


  // ALBEDO AND THINGS:

  // Separate albedo into diffuse and specular components:
  float Mat_MDiff = 1.0 - Mat_MSpec;
  vec3 Mat_RGB_diff = Mat_RGB_albedo * Mat_RGB_albedo; // Boost saturation.
  vec3 Mat_RGB_spec = sqrt( Mat_RGB_albedo ); // Wash saturation.
  Mat_RGB_diff = ( (Mat_MDiff * Mat_RGB_diff) + (Mat_MSpec * Mat_RGB_albedo) ) * Mat_MDiff;
  Mat_RGB_spec = ( (Mat_MSpec * Mat_RGB_spec) + (Mat_MDiff * Mat_RGB_albedo) ) * Mat_MSpec;
  temp3 = Mat_RGB_albedo / (Mat_RGB_diff + Mat_RGB_spec); // Renormalize results.
  Mat_RGB_diff = Mat_RGB_diff * temp3;
  Mat_RGB_spec = Mat_RGB_spec * temp3; // Done!
  Mat_RGB_diff = mix( Mat_RGB_diff, playerColor, Obj_is_Faction );
                                
  // CUBE MAP FETCHINGS:
  
  vec3 RGBlight_reflSky = vec3( textureCube( skyCube, v3_refl_view, LOD_bias_from_spec_power( Mat_SpecularPower ) ) );
  vec3 RGBlight_normSky = vec3( textureCube( skyCube, v3_raw_normal, LOD_bias_from_AO( Obj_AO ) ) );


  // THE REAL STUFF BEGINS ...

I forgot the detail textures.  Adding them above.

I just had a change of heart about where to place the dielectric film thickness channel.  I had it as part of the material textures, but there's a problem with that:  it is not really a material property.  If you want some parts of a sword to reflect more iridescently than others, you are not adding this feature to the material;  you are modifying the sword itself, NOT the material it is made of, which could be shared by many other objects.  You are also marking a zone where passivated rusting occurs.  It seems to belong in the Zones texture, in the Object Textures pack.

 


Stay tuned ...


 Major channel rearrangement:

Red_channel       5 bits   "albedo.dds" (DXT5)
Green_channel     6 bits        "
Blue_channel      5 bits        "
Aged_Color        8 bits        "               <<<<<<<<<<<----------***** new ******
MSpecularity      5 bits   "optics.dds" (DXT5)
SurfacePurity     6 bits        "
FresnelGloss      5 bits        "
Smoothness        8 bits        "               <<<<<<<<<<<----------***** moved ******

"Aged_Color" added;  Thickness (of dielectric rust layer) moved to the Object Texture set, Zones texture, second channel, and changed name to "Ageing".  The reasons for these changes are as follows:

  • Zones of rusting are actually "Zones" in object space;   NOT parts of a (shared) material.  The "Ageing" channel, in object UV space, will allow demarcation of zones where rusting or staining occurs for an object.
  • Passivated dielectric oxides that glow with iridescence are just one type of oxide;  there are also matte red, black and orange oxides.  So it is clearly advantageous to expand this zoning to cover any type of rust or weathering we may want to show, not just iridescence.  The AgedColor channel in the albedo texture now encodes a color for "rust", from black (0.0), to red (0.25), to orange (0.5), and then clear (1.0).  The 0.75 range is not to be used.  When the rust is colored (0.0~0.5), the Ageing channel value alpha-blends this color.  When it is clear (1.0), the Ageing channel value encodes the thickness of the transparent layer.  The alpha-blending of color also reduces MSpec and Purity;  when AgedColor is clear (1.0), Ageing's value increases MSpec and Purity.
  • This arrangement improves clarity even further:  Now the albedo.dds texture has an rgb albedo, plus it encodes an optional "aged" albedo in alpha.  Smoothness, which was rightfully a part of optics, is now in optics.dds.  And Thickness, which was a means of zoning rust effects, is now in Zones.dds.

 

Zones texture rearranged:

Faction    5 bits  "Zones.dds" (DXT3)
Ageing     6 bits       "               <<<<<<<<<<<----------****** new ******
DetailMod  5 bits       "               <<<<<<<<<<<----------***** moved ******
Alpha      4 bits       "

The reason for this change is that Microns (oxide layer thickness for iridescent reflections) is really a "zone" on an "object" rather than a material characteristic.  On the other hand, it is but one type of rust a metal can exhibit.  Rusts can be black, red, orange, greenish for copper, or clear.  If we use the albedo alpha channel to encode a rust color, then this "Ageing" rust-mask channel can tell where to apply it and how thickly.  For colored rusts, Ageing acts like an alpha-blend;  for clear rust, it acts as the thickness of the dielectric film.  Perhaps it could also be used to add staining to non-metals.  The idea is that when Ageing is fully on (1.0):  if AgedColor is a color (below 0.75), it overwrites the albedo and sets MSpec and Purity to zero;  but if AgedColor is clear (1.0), MSpec and Purity are maxed out.  How this would look on non-metals I don't know, and at least for now I don't care.
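To make the intended encoding concrete, here is a hypothetical decode of the two channels sketched in Python.  The oxide tint RGB values and the band thresholds are placeholders of my own invention, not anything decided above:

```python
def decode_ageing(aged_color, ageing, albedo, mspec, purity):
    """Hypothetical decode of the AgedColor/Ageing channel pair.

    aged_color selects the rust type (0.0 black, 0.25 red, 0.5 orange,
    1.0 clear); ageing is the blend amount for colored rusts, or the
    dielectric film thickness for clear rust. Tints are placeholders.
    Returns (albedo_rgb, mspec, purity, film_thickness).
    """
    if aged_color >= 0.75:
        # Clear oxide: Ageing acts as film thickness; MSpec and Purity
        # increase with it, per the description above.
        return albedo, min(mspec + ageing, 1.0), min(purity + ageing, 1.0), ageing
    # Colored oxide: pick a tint band and alpha-blend it over the albedo,
    # knocking down MSpec and Purity as the rust takes over.
    if aged_color < 0.125:
        tint = (0.02, 0.02, 0.02)   # black oxide (placeholder value)
    elif aged_color < 0.375:
        tint = (0.45, 0.10, 0.05)   # red oxide (placeholder value)
    else:
        tint = (0.70, 0.35, 0.05)   # orange oxide (placeholder value)
    blended = tuple(a * (1.0 - ageing) + t * ageing for a, t in zip(albedo, tint))
    return blended, mspec * (1.0 - ageing), purity * (1.0 - ageing), 0.0
```

With Ageing at 1.0, a colored AgedColor fully overwrites the albedo and zeroes MSpec and Purity, while a clear AgedColor maxes them out, matching the behavior described above.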


@DanW58

Honestly, at this moment a lot of what you post is overwhelming for a lot of us. You go into a lot of (exquisite) detail that will (eventually) be very useful if you continue on your course and improve the renderer as you plan. I like your "fall back" ideas, so that we need fewer materials and map types "default" down to a 1-texel image. Your documentation will be very helpful to us modders and artists. Please continue, and I encourage you to try the IRC channel and join discussions regarding other aspects of the game. First and foremost, recognize that in this development environment it is a marathon, not a sprint. Personally, I have been involved with the development of this game in one way or another since 2005. I know this isn't the technical feedback you want, but I hope it helps regardless. (y)

Edited by wowgetoffyourcellphone

@DanW58 as @wowgetoffyourcellphone said, I also very much like how you describe everything in such detail that it is easy to read. It is a bit unfortunate that the founding programmers are no longer active, and, as the previous post said, you need to think long-term here, as well as have patience and nerves of steel (or whatever is even more resilient or durable). Some presentations and videos from @vladislavbelov might be interesting to read or watch: https://fosdem.org/2021/schedule/event/0ad/attachments/slides/4767/export/events/attachments/0ad/slides/4767/fosdem2021_0ad_graphics.pdf  and the previous FOSDEM,

and some mentions are here 

https://www.youtube.com/watch?v=9QcDZvpAM5Q

 

