
Post-processing effects test (SSAO/HDR/Bloom)



Or even for WFG to expand upon the game for 0 A.D. Part 2 or anything that comes after.

True :) I'd imagine overly detailed effects wouldn't necessarily be that useful for Part 2, at least, but sure, if they're there they might well be put to use. Definitely beats wanting to have them but not having them :)


Hey all,

Could someone with access to Maya or similar software please upload a sample Collada file that has been exported with TEXTANGENT (and/or TEXBINORMAL) info included? It looks like Blender's exporter can't handle that yet.

I'm going to delve into the mesh-loading code and see how I can grab that info from FCollada. Once that's done, the rest should hopefully just fall into place.

Cheers,

Myc


IMO we shouldn't rely on features that Blender doesn't support since many (all?) of our modelers use Blender. Is there any way of doing what you want to do without that feature of COLLADA? Unfortunately that issue has been a Blender TODO for over a year, which is not surprising. Requiring Maya is pretty unrealistic and means this great new feature will never be used :(


IMO we shouldn't rely on features that Blender doesn't support since many (all?) of our modelers use Blender. Is there any way of doing what you want to do without that feature of COLLADA? Unfortunately that issue has been a Blender TODO for over a year, which is not surprising. Requiring Maya is pretty unrealistic and means this great new feature will never be used :(

Which feature are we talking about, though? I see about 5 or 6 separate features being discussed in this thread. :) If it's normal and parallax mapping, I can live without those. AO, bloom, and HDR are enough to make the game look nearly AAA.

What I'm working on is needed for both normal and parallax mapping. I need the tangents that define the direction of the surface of objects (which unfortunately can't be derived from UV coords or anything else normally available to the shaders).

The most elegant solution would be to use Blender to export them in the Collada files and then the game takes those and uses them directly. Since Blender can't export tangents yet, I have to either find another program that can calculate them (any program, not necessarily Maya), or simply calculate them myself on the fly.

At this point, the last option seems best... Which is great, because I take pleasure in messing with things like this!
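
For the curious, the calculation isn't black magic: for each triangle you solve a small 2x2 system relating the triangle's edges to its UV deltas, then average and renormalise per vertex. A rough sketch of the per-triangle maths, written with GLSL-style vectors to match the other snippets in this thread (purely illustrative - the real thing would live in the C++ mesh-loading code):

vec3 triangleTangent(vec3 p0, vec3 p1, vec3 p2,
                     vec2 uv0, vec2 uv1, vec2 uv2)
{
    // Triangle edges in object space and in UV space
    vec3 e1 = p1 - p0;
    vec3 e2 = p2 - p0;
    vec2 duv1 = uv1 - uv0;
    vec2 duv2 = uv2 - uv0;

    // Solve for the direction that follows the U axis of the texture
    float r = 1.0 / (duv1.x * duv2.y - duv1.y * duv2.x);
    vec3 tangent = (e1 * duv2.y - e2 * duv1.y) * r;

    // Per vertex, these would be accumulated over all adjacent
    // triangles, orthogonalised against the normal and renormalised
    return normalize(tangent);
}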

So, patience and I'll get there soon enough. :)


What I'm working on is needed for both normal and parallax mapping. I need the tangents that define the direction of the surface of objects (which unfortunately can't be derived from UV coords or anything else normally available to the shaders).

The most elegant solution would be to use Blender to export them in the Collada files and then the game takes those and uses them directly. Since Blender can't export tangents yet, I have to either find another program that can calculate them (any program, not necessarily Maya), or simply calculate them myself on the fly.

At this point, the last option seems best... Which is great, because I take pleasure in messing with things like this!

So, patience and I'll get there soon enough. :)

Indeed that sounds best :) Also, I don't know if you follow our IRC chat, but there was some brief discussion about SSAO a few days ago in #0ad-dev, and one suggestion was that we calculate an AO texture in advance when models are loaded and cache the result, I guess to avoid the overhead of doing that every frame in screen space. You might be interested in reading that conversation, it seems relevant to your interests.


What I'm working on is needed for both normal and parallax mapping. I need the tangents that define the direction of the surface of objects (which unfortunately can't be derived from UV coords or anything else normally available to the shaders).

The most elegant solution would be to use Blender to export them in the Collada files and then the game takes those and uses them directly. Since Blender can't export tangents yet, I have to either find another program that can calculate them (any program, not necessarily Maya), or simply calculate them myself on the fly.

At this point, the last option seems best... Which is great, because I take pleasure in messing with things like this!

So, patience and I'll get there soon enough. :)

Ah I see. Calculating them yourself sounds like fun to me as well :).

Mythos: normal and displacement mapping can make a huge difference. Left is normal and displacement mapped, right is not. Though it would be quite a lot of work generating the maps for them.

cJoCN.jpg


Indeed that sounds best :) Also, I don't know if you follow our IRC chat, but there was some brief discussion about SSAO a few days ago in #0ad-dev, and one suggestion was that we calculate an AO texture in advance when models are loaded and cache the result, I guess to avoid the overhead of doing that every frame in screen space. You might be interested in reading that conversation, it seems relevant to your interests.

Thanks for the heads up.

Since people are wondering, here's what the filters look like in motion: http://www.2shared.com/file/GMrSa79V/out-7.html

(sorry for the quality, my crappy laptop can't handle screen capturing)

I don't know much about precomputed AO or its advantages/disadvantages, and I don't know if there are any games that use it (doesn't mean such games don't exist). It definitely does sound like an interesting experiment for another day, though.


Normal/Displacement maps would look great if applied to the existing low-poly models. The models' UV maps would have to be reworked - unless a separate UV mapping could be applied to them. The problem with our existing UV maps is that there is shared texture space. For example, on most of the unit maps the leg and arm portions of the texture are shared by both the left and right sides. If I recall correctly, this goofs up the normal maps. Every tri must have its own portion of the UV map. The same story applies to buildings. It's too bad the models weren't set up this way already - but back in the day, when these models were first made, normal mapping wasn't even on the radar.

I think the biggest benefit for the least cost would be terrain normal/displacement maps, followed by units, then buildings (the bump mapping looks great on them - and the art team is going high-poly on the static structures anyway, so I don't think there would be much benefit in displacement maps on most flat/rectangular buildings).

This capability would be awesome though. Modders who use the engine for different purposes as well as the WFG developers could really do a lot with this - both now and in the future :)


Terrain would definitely benefit from normal mapping and parallax mapping. For units I still wonder, because the camera is always fairly far away (though it could be used there too, of course). I think buildings could use it for the tiles/blocks of stone/whatever to great effect.

To me, AO should be baked beforehand for units and buildings. There's no real point in computing it dynamically in 0 A.D.


Normal/Displacement maps would look great if applied to the existing low-poly models.

Hmm, I'd probably expect the opposite - the look of low-poly models will be dominated by their silhouettes, which no kind of mapping can help with. (Especially units, which are very small during normal gameplay, so there aren't enough pixels to represent any fancy lighting effects, and the silhouette and broad texture detail are all you'll notice.) It'd probably be more effective to increase the polygon budget by 10x or more (plus some kind of LOD system) so the silhouettes can contain more detail. (And even more effective to simply stop using the same mesh and animations for pretty much every single unit :))


Mythos: normal and displacement mapping can make a huge difference. Left is normal and displacement mapped, right is not.

cJoCN.jpg

I can see it working nicely for, say, terrain and some stone walls, but at our zoom it would largely be useless for units and even many structures.

Though it would be quite a lot of work generating the maps for them.

Indeed, this is my main concern. Our active artists are Pureon, Enrique, Eggbird, and myself. We're all busy doing essential work already... Pureon and I are even doing essential non-art work. I'm stuck doing technologies for another alpha or two, and Pureon is working on sound and other stuff. I'm more excited about AO, HDR, bloom, etc., because it could take one programmer a few weeks or a month to get it all going, and then BOOM, it dramatically increases the look of the ENTIRE game. Normal and parallax mapping will take our artists (I'm not kidding) hundreds of hours (folks need to appreciate the sheer number of models and textures we have now), and then it would only look good on one object at a time. An artist could spend 3 hours making bump and displacement maps for an object, and it would barely be noticeable in-game by most players. A programmer could spend 3 hours and get bloom implemented, and every single player would notice the difference.

On the other hand... Having these mapping features could attract additional talent to the project. Additional art talent means more content over time, which is always good. And Indian architecture is so richly detailed, it could really benefit from these mapping techniques. And we don't have to drop everything just to add normal maps for everything in the game. If we just focus on terrain textures first, it would make a greater impact than if we, say, did units first.


Well, I know it is easy for me to just comment (as I don't have to do all that work), but I think it could be useful for, say, in-game cut-scenes.

(Weren't there plans to redo the unit models with a higher poly count?)

0 A.D. is not just a game, it is THE leading open-source RTS engine :) Who knows what other mod teams could do with those possibilities (yes, I want to see more Rise of the East screenies).


Well, I know it is easy for me to just comment (as I don't have to do all that work), but I think it could be useful for, say, in-game cut-scenes.

(Weren't there plans to redo the unit models with a higher poly count?)

0 A.D. is not just a game, it is THE leading open-source RTS engine :) Who knows what other mod teams could do with those possibilities (yes, I want to see more Rise of the East screenies).

Yep, by all means, let's include as many features as possible. myconid's work is very exciting. (y) Just saying the art dept will likely be focusing on things other than making normal and displacement maps for the foreseeable future. :) Though, I'll likely play around with terrain textures to see what I can do, if we add these features. ;)


The models' UV maps would have to be reworked - unless a separate UV mapping could be applied to them. The problem with our existing UV maps is that there is shared texture space. For example, on most of the unit maps the leg and arm portions of the texture are shared by both the left and right sides. If I recall correctly, this goofs up the normal maps.

Nope. That's true for object-space mapping; however, the thing I'll work on next should overcome that limitation.

A suggestion: OK-looking maps can be generated automatically from the textures (e.g. with the GIMP), while the existing UV coords will be reused. If they choose to do so, the artists could batch-generate maps for various models, bake portions of particular types of texture (e.g. rooftiles) and simply replace parts of the batch-generated maps with the baked maps. While the sculpting/baking work for the texture-types will need to be done by the artists themselves, the rest can be done by any volunteer who can use an image editor, and the artists will just do the QA. Not the best solution, perhaps, but I hope that at least it makes sense.
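
For reference, those generators mostly just treat the texture's luminance as a height field and take central differences to get a normal. A minimal sketch of the idea as a GLSL-style helper (names are made up for illustration - the plugins do this offline on the image, not in a shader):

vec3 normalFromHeight(sampler2D heightMap, vec2 uv, vec2 texelSize, float strength)
{
    // Heights of the four neighbouring texels
    float hL = texture2D(heightMap, uv - vec2(texelSize.x, 0.0)).r;
    float hR = texture2D(heightMap, uv + vec2(texelSize.x, 0.0)).r;
    float hD = texture2D(heightMap, uv - vec2(0.0, texelSize.y)).r;
    float hU = texture2D(heightMap, uv + vec2(0.0, texelSize.y)).r;

    // Central differences give the slope; 'strength' exaggerates the relief
    return normalize(vec3((hL - hR) * strength, (hD - hU) * strength, 1.0));
}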

A question for programmers: Am I right in assuming that all skeletal objects are handled by the stuff in "renderer/HWLightingModelRenderer.*" and all statics are handled by "renderer/InstancingModelRenderer.*"? If so, I might concentrate on just the latter, which will make things much easier...

A note for those holding their breaths: I'm busy with IRL stuff until the end of the week. Hopefully there'll be some more progress over the weekend.


Nope. That's true for object-space mapping; however, the thing I'll work on next should overcome that limitation.

A suggestion: OK-looking maps can be generated automatically from the textures (e.g. with the GIMP), while the existing UV coords will be reused. If they choose to do so, the artists could batch-generate maps for various models, bake portions of particular types of texture (e.g. rooftiles) and simply replace parts of the batch-generated maps with the baked maps. While the sculpting/baking work for the texture-types will need to be done by the artists themselves, the rest can be done by any volunteer who can use an image editor, and the artists will just do the QA. Not the best solution, perhaps, but I hope that at least it makes sense.

That's how I thought it would be: use a normal map generator plugin in PS or GIMP on the existing diffuse textures, then tweak the parts that may look goofy or are going to be more noticeable (like roof tiles). These plugins usually do a good job of producing decent normal maps without too much effort (in most cases). Having to do hand-made sculpts to bake normals for every model/texture in the game is just out of scope. (I'm speaking of normal maps; I don't know what kind of maps are needed for parallax.)

For example, on most of the unit maps the leg and arm portions of the texture are shared by both the left and right sides. If I recall correctly, this goofs up the normal maps.

Is this true? I've never heard that normal maps can't be shared between different parts of the model without looking goofy. If that's the case, then I don't know of a feasible way to create normal maps for the existing content.


A question for programmers: Am I right in assuming that all skeletal objects are handled by the stuff in "renderer/HWLightingModelRenderer.*" and all statics are handled by "renderer/InstancingModelRenderer.*"?

Roughly, yes. To be more precise:

ShaderModelVertexRenderer (in HWLightingModelRenderer.h) is used for:

* All models when using the 'fixed' renderpath (fixed-function pipeline, no programmable shaders, CPU lighting).

* All skinned models when using the 'shader' renderpath and not using the GPU skinning option.

InstancingModelRenderer (in InstancingModelRenderer.h) is used for:

* All unskinned models when using the 'shader' renderpath.

* All skinned models when using the 'shader' renderpath and using the GPU skinning option.

ShaderModelRenderer (in ModelRenderer.h) and ShaderRenderModifier (in RenderModifiers.h) are used for all models, to do the batching and the per-batch shader setup.

The GPU skinning option requires GLSL, and is highly experimental and broken, and nobody should ever use it. (I just added it to do performance comparisons, and it mostly lost.) The class names and filenames are very misleading, so ignore words like "Shader" and "HWLighting". (They really need renaming.)


Nope. That's true for object-space mapping; however, the thing I'll work on next should overcome that limitation.

Is this true? I've never heard that normal maps can't be shared between different parts of the model without looking goofy. If that's the case, then I don't know of a feasible way to create normal maps for the existing content.

I did some digging around and my memory failed me. It appears that mirroring UVs is only a problem when you are developing the normal map and baking the map. Once that is complete, you're OK, it seems. Two good tuts I've come across are here:

http://www.chrisalbe...p_Tutorial.html

http://wiki.polycount.com/NormalMap

A note for those holding their breaths: I'm busy with IRL stuff until the end of the week. Hopefully there'll be some more progress over the weekend.

I would like to do a demonstration of how this could affect some objects in the game. Looking forward to seeing your future work :)


I did some digging around and my memory failed me. It appears that mirroring UVs is only a problem when you are developing the normal map and baking the map. Once that is complete, you're OK, it seems. Two good tuts I've come across are here:

I thought you were talking about a Pyrogenesis rendering limitation or something I wasn't aware of, but now I know what you mean :)

Also, I don't know if you follow our IRC chat, but there was some brief discussion about SSAO a few days ago in #0ad-dev, and one suggestion was that we calculate an AO texture in advance when models are loaded and cache the result, I guess to avoid the overhead of doing that every frame in screen space. You might be interested in reading that conversation, it seems relevant to your interests.

I read the discussion you linked. I have some questions about it.

Correct me if I'm wrong, but does that mean the models in the game allow more than one set of UV coordinates?

How is the precomputed AO mapped onto the model to show the effect?

And the last question: if the effect can also be applied to the terrain, will the game re-bake the AO each time a structure is built/destroyed?


I read the discussion you linked. I have some questions about it.

Correct me if I'm wrong, but does that mean the models in the game allow more than one set of UV coordinates?

Not currently, but it's easy to implement multiple UV coordinates per mesh. I don't know how hard it is to create the new set of UVs though, either manually or with some automated unwrapping tool. (It mustn't use the same UV value for multiple points on the mesh (so it can't be the same as we currently use for the diffuse texture), and it ought to be biased to give more texels in the areas of highest-frequency lighting variation.)
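
To illustrate, on the shader side a second UV set is just another attribute/varying pair; the hard part is producing the second unwrap itself, not plumbing it through the renderer. (The names below are made up for the example, not the engine's real ones.)

attribute vec3 a_vertex;
attribute vec2 a_uv0;   // existing diffuse-texture UVs
attribute vec2 a_uv1;   // hypothetical second set, unwrapped for the AO texture
uniform mat4 transform;
varying vec2 v_uv0;
varying vec2 v_uv1;

void main()
{
    // Pass both UV sets through to the fragment shader
    v_uv0 = a_uv0;
    v_uv1 = a_uv1;
    gl_Position = transform * vec4(a_vertex, 1.0);
}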

How is the precomputed AO mapped onto the model to show the effect?

The colour of a pixel is simply the AO texture value plus the diffuse lighting factor, all multiplied by the diffuse texture. (...and modified by specular and shadows etc)
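
In shader terms that would be just a few lines (the texture and variable names here are hypothetical; get_shadow() is the existing shadow lookup):

// Baked low-resolution AO, sampled via the second UV set
float ao = texture2D(aoTex, v_uv1).r;

// Standard Lambert term for the sun, attenuated by the shadow map
float diffuseLight = max(dot(normalize(v_normal), sunDir), 0.0) * get_shadow();

// (AO + diffuse lighting factor) * diffuse texture
vec3 colour = (ao + diffuseLight) * texture2D(baseTex, v_uv0).rgb;
gl_FragColor = vec4(colour, 1.0);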

The idea is that it should be equivalent to baking AO into the diffuse texture in Blender, except that we combine the AO and diffuse components in the engine's renderer instead of when exporting from Blender. The advantage would be that we can still share the high-res diffuse textures between multiple buildings (minimising memory usage and download size etc), while having a unique low-resolution lighting texture per building (or even per combination of randomised props on a building), and without making life hard for inexperienced artists/modders or for those who don't use Blender. The advantage compared to SSAO would be that it's computationally cheaper (it should be usable on even the lowest-end hardware), and (as far as I'm aware, not having actually tested any of this in practice) it should give a higher-quality appearance (since SSAO is fundamentally a total hack and can suffer from ugly artifacts). I think the main disadvantage compared to SSAO is that it's much more work to implement, but probably not infeasibly so.

And the last question: if the effect can also be applied to the terrain, will the game re-bake the AO each time a structure is built/destroyed?

The effect of buildings occluding nearby terrain could probably be approximated adequately by having a decal underneath the building, which just darkens the terrain around the building.


Thanks for your detailed answer, Philip :) - much clearer now. I was aware of both methods, but I didn't know how the process works on the engine/precomputed side.

At the standard zoom level in 0 A.D., I think SSAO may look grainy/full of artifacts, so precomputed seems to be the ideal option.

For those who want to see the difference between precomputed AO and SSAO, here's a video from Overgrowth's dev blog that shows both effects:

I think the main disadvantage compared to SSAO is that it's much more work to implement, but probably not infeasibly so.

Then can we stick with SSAO as a toggle option while working on higher-priority features, until the precomputed AO implementation starts? :)


Not currently, but it's easy to implement multiple UV coordinates per mesh. I don't know how hard it is to create the new set of UVs though, either manually or with some automated unwrapping tool. (It mustn't use the same UV value for multiple points on the mesh (so it can't be the same as we currently use for the diffuse texture), and it ought to be biased to give more texels in the areas of highest-frequency lighting variation.)

Hey, thanks for the info about the model renderers. :)

This literally took 5 minutes using Blender:

http://imgur.com/a/MSmGU

The final image is from the vanilla game without any shaders, though obviously the diffuse textures had to be baked quite large for this to work.

I think the precomputed texture method you are suggesting would definitely work and would be easy to implement, so it'd be silly of us not to implement it...

Edit: Oh, before I forget, there seems to be a bug in the get_shadow() function in the model/terrain_common.fs shaders. On my system (Linux/ATI), the LOS texture gets painted where the shadowmap should be. I haven't investigated the causes of this, though a "hunch" led me to replace this:


// Four-tap PCF average, reading the comparison result from the alpha channel
return 0.25 * (
    shadow2D(shadowTex, vec3(v_shadow.xy + shadowOffsets1.xy, v_shadow.z)).a +
    shadow2D(shadowTex, vec3(v_shadow.xy + shadowOffsets1.zw, v_shadow.z)).a +
    shadow2D(shadowTex, vec3(v_shadow.xy + shadowOffsets2.xy, v_shadow.z)).a +
    shadow2D(shadowTex, vec3(v_shadow.xy + shadowOffsets2.zw, v_shadow.z)).a
);

with this:


// Same four taps, but reading the .b swizzle instead of .a
return 0.25 * (
    shadow2D(shadowTex, vec3(v_shadow.xy + shadowOffsets1.xy, v_shadow.z)).b +
    shadow2D(shadowTex, vec3(v_shadow.xy + shadowOffsets1.zw, v_shadow.z)).b +
    shadow2D(shadowTex, vec3(v_shadow.xy + shadowOffsets2.xy, v_shadow.z)).b +
    shadow2D(shadowTex, vec3(v_shadow.xy + shadowOffsets2.zw, v_shadow.z)).b
);

to fix it.

