Post-processing effects test (SSAO/HDR/Bloom)



  On 10/07/2012 at 8:33 PM, wraitii said:

Very good then... I'll just wait :)

(and no, I indeed didn't need real raycasting, just some info about the heightmap, or the depthbuffer, though I hadn't thought of that).

I feel bad because everything you want to do depends on stuff I need to do first. :(

  Quote

Would this (particularly points 4/5) require rendering the water twice?

Nope. You render it once and you can get the depth straight from the vertex shader.
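To make the idea concrete, here is a minimal sketch of the arithmetic in plain C++; the near/far planes and the sampled depth value are made up, and the real version would of course live in the water shader itself.

#include <cstdio>

// Convert a [0,1] depth-buffer sample back to an eye-space distance,
// for a standard perspective projection with the given near/far planes.
float LinearizeDepth(float depthSample, float zNear, float zFar)
{
    float zNdc = 2.0f * depthSample - 1.0f;   // [0,1] -> [-1,1]
    return 2.0f * zNear * zFar / (zFar + zNear - zNdc * (zFar - zNear));
}

int main()
{
    const float zNear = 2.0f, zFar = 4096.0f;

    // The terrain depth comes from the depth buffer, bound as a texture.
    float terrainDist = LinearizeDepth(0.997f, zNear, zFar);

    // The water plane's own depth comes straight from its vertex shader
    // (interpolated as a varying), so no second render is needed.
    float waterDist = 150.0f;

    // A - B = the distance light travels through the water.
    std::printf("underwater distance: %.1f\n", terrainDist - waterDist);
}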


  On 10/07/2012 at 4:37 PM, myconid said:

Maybe. I'll have to look into it.

Ah, ok. I created a new map using a 3x3 Sobel filter, as opposed to the 2x2 filter you used, via GIMP's normalmap plugin. Still pretty subtle.

http://imgur.com/a/Siwn4

Don't worry, I don't mind at all. After all, I want to test this as well! :)

In fact, do you have any other elevation maps I can play with?

That Celtic texture looks very nice; I think that one did well. I was hoping for slightly better results with the unit texture, but I'm afraid I can't do much about it because I didn't make that one. I also don't have any more elevation maps, though I could make another as a demo. Is there one you have in mind that you'd like to see? I think all the shield textures would benefit from both the specular and normal mapping.


  On 10/07/2012 at 8:47 PM, Wijitmaker said:

That Celtic texture looks very nice; I think that one did well. I was hoping for slightly better results with the unit texture, but I'm afraid I can't do much about it because I didn't make that one. I also don't have any more elevation maps, though I could make another as a demo. Is there one you have in mind that you'd like to see? I think all the shield textures would benefit from both the specular and normal mapping.

Well, when all this is done and committed I guess we'll probably want to do a "tech demo" for PR. Maybe first check if whoever is in charge of promotional materials has a specific preference. Personally, I'd love to see what the Roman buildings look like, especially the Mars temple and civil centre that I've been testing with all this time. If not those, pick whichever buildings you think will come out looking the coolest.

  On 10/07/2012 at 8:49 PM, wraitii said:

Was parallax activated on the unit?

@Don't overstress yourself: the less I can work on shaders, the more I'll work on my bot :)

No parallax, it's just a normalmap. Nope, not stressing at all, I just like to be efficient.


  On 10/07/2012 at 4:37 PM, myconid said:

Ah, ok. I created a new map using a 3x3 Sobel filter, as opposed to the 2x2 filter you used, via GIMP's normalmap plugin. Still pretty subtle.

http://imgur.com/a/Siwn4

I personally don't think that's subtle at all; in fact I'd say it looks a bit extreme on the stone walls =) Perhaps it needs a higher-resolution texture to make any difference on the straw roof/wood, though on the other hand it may never make a big enough difference in such areas at the zoom level we're using :unsure:


  On 10/07/2012 at 11:07 AM, wraitii said:

It could also be used on buildings (modifying their XML to add the baked texture). This would help with modding support, too, as modders would not be required to bake the AO themselves.

I was thinking about this. Currently our texture manager can do real-time texture conversion from PNG to DDS, and that's integrated with the cache system. So if a texture is referenced that isn't already in the cache but we have a source file, there's no error; it simply uses a temporary "default" texture and marks the desired texture as needing conversion in a separate thread. This is what's happening when you clear the cache and see all grey textures the first time they load. When we build the release package, we run the archive builder, which does all the caching at once and stores the converted files in a zip archive, so users don't have to wait for the textures to convert.

We could do something very similar with per-model AO. Each time a model is loaded (with a material that wants AO), some manager would search for the cached AO map texture. If it's present, it uses that; if not, it needs to either a) load the AO map from a PNG, or b) generate a new one. If an artist created an AO map by hand or in Blender, there would be a suitably named PNG to find. Otherwise, instead of an error (there's no source file), the manager would mark the AO texture for generation in a separate thread, cache the result, and load the cached texture for rendering. The generation process doesn't sound hard: basically, look at every face of the model and send out random rays, counting how many collide with the model. It could be done very quickly on the GPU, but maybe it's best to have a CPU path as well for the archive building? While the AO map was being generated, the model would still be visible, only without AO. Most users would never see this, because the maps would be pre-generated in the archive builder process.
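To illustrate the ray-casting part, here is a minimal, self-contained sketch of what a CPU path could look like, using the standard Möller-Trumbore ray/triangle test; all names and types are illustrative, and nothing here is the actual texture-manager API.

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Möller-Trumbore ray/triangle test: does the ray (origin o, direction d)
// hit the triangle (v0, v1, v2) somewhere in front of its origin?
static bool RayHitsTriangle(Vec3 o, Vec3 d, Vec3 v0, Vec3 v1, Vec3 v2)
{
    const float eps = 1e-5f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(d, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;        // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 tv = sub(o, v0);
    float u = dot(tv, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(tv, e1);
    float v = dot(d, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    return dot(e2, q) * inv > eps;                 // hit is in front of the origin
}

// Shoot random rays in the hemisphere around the normal and count hits;
// the ambient term is the fraction of rays that escape the model.
static float EstimateAO(Vec3 point, Vec3 normal, const std::vector<Vec3>& tris, int samples)
{
    std::mt19937 rng(42);
    std::uniform_real_distribution<float> unit(-1.0f, 1.0f);
    int hits = 0;
    for (int i = 0; i < samples; ++i)
    {
        Vec3 d;                                    // rejection-sample a direction...
        do { d = {unit(rng), unit(rng), unit(rng)}; } while (dot(d, d) > 1.0f || dot(d, d) < 1e-8f);
        if (dot(d, normal) < 0.0f) d = {-d.x, -d.y, -d.z};   // ...in the upper hemisphere
        for (size_t t = 0; t + 2 < tris.size(); t += 3)
            if (RayHitsTriangle(point, d, tris[t], tris[t+1], tris[t+2])) { ++hits; break; }
    }
    return 1.0f - float(hits) / float(samples);    // 1 = fully open, 0 = fully occluded
}

int main()
{
    // A single large triangle hovering above the query point.
    std::vector<Vec3> tris = { {-5, 1, -5}, {5, 1, -5}, {0, 1, 5} };
    std::printf("ambient term: %.2f\n", EstimateAO({0, 0, 0}, {0, 1, 0}, tris, 256));
}

In practice this would run once per texel of the AO map, using each texel's surface position and normal, and the result would be cached exactly like the converted textures.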

I think this is flexible and mirrors how our textures already work. I would like the process to be as easy as possible for artists; in fact, I would like to make it so that even non-artists and people who have no idea what AO is can still create/import models for the game that look great. It would be nice if we could even generate non-overlapping UV sets at load time for models that only have one -- but that's more of a wish for maximum flexibility than a requirement ;)


  On 10/07/2012 at 8:29 PM, zoot said:

Cool! I happened upon this page yesterday. They use water depth in a seemingly simple way to do foam (I don't know how good it looks, though). Another interesting point they make is about the phenomenon of 'color extinction', which may be related to what you are doing with murkiness.

fig4.jpg

I love how in that water, instead of the ocean floor just being visible, it steadily becomes harder to see with depth.

@HistoricBruno: technically, AO could require a lot of "rays" to be precise. It can probably be approximated fairly well, for example with SSAO. I'm fairly sure we can't get a result as good as Blender's, but for most models an approximation should be enough. However, I agree that it would be the most straightforward way.

Edited by wraitii

  On 11/07/2012 at 5:06 AM, wraitii said:

@HistoricBruno: technically, AO could require a lot of "rays" to be precise. It can probably be approximated fairly well, for example with SSAO. I'm fairly sure we can't get a result as good as Blender's, but for most models an approximation should be enough. However, I agree that it would be the most straightforward way.

If we use Blender's AO code, it should be just as good, and it would be done by the autobuilder, so most users wouldn't have to wait for it.


  On 11/07/2012 at 12:05 AM, historic_bruno said:

We could do something very similar with per-model AO. Each time a model is loaded (with a material that wants AO), some manager would search for the cached AO map texture. If it's present, it uses that; if not, it needs to either a) load the AO map from a PNG, or b) generate a new one. If an artist created an AO map by hand or in Blender, there would be a suitably named PNG to find. Otherwise, instead of an error (there's no source file), the manager would mark the AO texture for generation in a separate thread, cache the result, and load the cached texture for rendering. The generation process doesn't sound hard: basically, look at every face of the model and send out random rays, counting how many collide with the model. It could be done very quickly on the GPU, but maybe it's best to have a CPU path as well for the archive building? While the AO map was being generated, the model would still be visible, only without AO. Most users would never see this, because the maps would be pre-generated in the archive builder process.

I don't think high-quality AO raytracing is practical at runtime. Blender traces hundreds of rays per texel; even for an average model (say 10k vertices) with a reasonably-sized texture (512x512) it can take several minutes to render.

That said, we may be able to do something on the GPU, if shadowmapping is supported by the user's hardware: we can set up the model with a few hundred directional lights around it to simulate ambient "sky" light coming from all directions, and render a combination of all the shadows to the coordinates of its second UV map. This requires rendering the model once for each light, though it'll be hardware-accelerated, so it's much faster than CPU raytracing. As for the quality of the results, I'm sure it won't be as good as what we could get from offline raytracing (shadowmapping artifacts!), but it'll be good enough for prototyping, and still better-looking than SSAO.
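To sketch how those light directions could be chosen (nothing here is from the actual renderer): a Fibonacci spiral spreads N directions quasi-uniformly over the upper hemisphere, and each direction would drive one shadowmap pass whose shadow test is averaged into the AO texture.

#include <cmath>
#include <cstdio>

int main()
{
    const int kLights = 256;                       // "a few hundred directional lights"
    const float kGoldenAngle = 3.14159265f * (3.0f - std::sqrt(5.0f));

    for (int i = 0; i < kLights; ++i)
    {
        float y = (i + 0.5f) / kLights;            // uniform in height = uniform area on the hemisphere
        float r = std::sqrt(1.0f - y * y);         // radius of the circle at that height
        float phi = kGoldenAngle * i;
        float x = r * std::cos(phi), z = r * std::sin(phi);
        // For each direction (x, y, z): render the model's shadowmap from
        // there, then render the model into its second UV set, adding
        // the shadow test result divided by kLights into the AO texture.
        std::printf("light %3d: (% .3f, % .3f, % .3f)\n", i, x, y, z);
    }
}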

For the final game release, we probably want a build script, perhaps separate from the main build scripts, that detects which meshes have been added or changed and calls Blender to bake their AO automatically.

Edited by myconid

  On 10/07/2012 at 9:06 PM, myconid said:

Personally, I'd love to see what the Roman buildings look like, especially the Mars temple and civil centre that I've been testing with all this time. If not those, pick whichever buildings you think will come out looking the coolest.

Alright, I've started working on the Romans. The roofs are really important because they are a large part of what people see. Could you show me how this looks?

Also, I was looking at that Brennus texture a little bit, and it appears to me that it is inverted (what sticks down should stick up and vice versa). I'm going to see if there is some way I can convert it back to a height/elevation map.

post-3-0-52208800-1342032805_thumb.png


  On 11/07/2012 at 6:55 PM, Wijitmaker said:

Alright, I've started working on the Romans. The roofs are really important because they are a large part of what people see. Could you show me how this looks?

Here: http://imgur.com/a/gzO9Q (normalmapping + parallax)

Looks good!

  On 11/07/2012 at 7:00 PM, zoot said:

What is needed before the modelmapping branch can go into SVN?

I need to check what's been changed (if anything) in the multi-texture and multi-UV stuff and submit new patches. Then those need to be reviewed. Then the rest goes in as patches. Then that needs to be reviewed. Do forever.


Let's not forget that Blender uses two types of ambient occlusion. There's the raytraced kind, which needs a significant amount of time to calculate, depending on how big the model is (triangle count) and the number of samples you set (higher means better quality, but more time). And there's the approximated kind, which is faster, but its quality is not that great.

I can get some examples if you guys want.

I strongly believe that AO should be dynamic. Most of today's AAA games and engines already work with dynamic AO rendering, or some pre-rendered AO. It would look much better, especially with the integration of different objects in the scene, such as a big city with darker streets, etc. If it's not possible :( ... it's my dream.

Also, have you guys considered adding light bouncing? :)


  On 10/07/2012 at 8:29 PM, myconid said:

You don't need real raycasting to calculate that. Here's a clever little algorithm I've come across:

  1. First render the terrain.
  2. Get the depth buffer which tells us the distance between the camera and the ground at each fragment.
  3. Pass the depth buffer as a texture into the water shader.
  4. Transform the water plane and calculate the depth value of each vertex, then interpolate with a varying.
  5. We now have, for each fragment, the depth of the terrain behind the water (A) and the depth of the water plane itself (B).
  6. A - B = the distance light travels through the water.

And what's more, when I implement the Postprocessing manager, the depth buffer will always be available as a texture.

Haha, is that http://www.digitalartform.com/archives/2006/05/faking_simple_v.html from page 8 of this thread? Also, for ultra-realism, wouldn't what you're describing need the light-ray-in-water distance computed from both the camera's perspective and the light source's?


I get this on the myconid/biwater branch:

ERROR: Failed to compile shader 'shaders/glsl/water_high.fs':
0:83(33): error: Could not implicitly convert operands to arithmetic operator
0:0(0): error: no matching function for call to `mod(, float)'
0:0(0): error: candidates are: float mod(float, float)
0:0(0): error: vec2 mod(vec2, float)
0:0(0): error: vec3 mod(vec3, float)
0:0(0): error: vec4 mod(vec4, float)
0:0(0): error: vec2 mod(vec2, vec2)
0:0(0): error: vec3 mod(vec3, vec3)
0:0(0): error: vec4 mod(vec4, vec4)
0:83(45): error: Operands to arithmetic operators must be numeric
0:0(0): error: no matching function for call to `mix(vec3, vec3, )'
0:0(0): error: candidates are: float mix(float, float, float)
0:0(0): error: vec2 mix(vec2, vec2, vec2)
0:0(0): error: vec3 mix(vec3, vec3, vec3)
0:0(0): error: vec4 mix(vec4, vec4, vec4)
0:0(0): error: vec2 mix(vec2, vec2, float)
0:0(0): error: ...

Edited by zoot

  On 12/07/2012 at 4:41 AM, Sonarpulse said:

Haha, is that http://www.digitalartform.com/archives/2006/05/faking_simple_v.html from page 8 of this thread? Also, for ultra-realism, wouldn't what you're describing need the light-ray-in-water distance computed from both the camera's perspective and the light source's?

It is! :) If we want to be ultra-realistic, we can use the shadowmap texture/transform to do that from the perspective of the light source.

zoot, please ignore that for now.

Edited by myconid

I'm not sure we'd need that level of realism; though of course it could be "easily" implemented, it's just a matter of computing power.

BTW, Myconid, you/we'd have to start thinking about adding new graphical stuff as options that can be activated/deactivated easily. Some of this stuff is CPU/GPU intensive.

(And if you check this thread, you'll see I finally managed to solve my glTexImage2D issues.)


  On 12/07/2012 at 11:53 AM, wraitii said:

BTW, Myconid, you/we'd have to start thinking about adding new graphical stuff as options that can be activated/deactivated easily. Some of this stuff is CPU/GPU intensive.

(And if you check this thread, you'll see I finally managed to solve my glTexImage2D issues.)

Okay, we have three things that are relevant:

  1. The LOD idea we were talking about yesterday: effects that recede into the background should be able to replace themselves with less resource-intensive alternatives.
  2. Material-based objects already have a simple system where the user can scale the number of effects that are loaded based on their hardware.
  3. We also need to allow users to configure which effects they want to activate or deactivate, or at least control the effect distance from point 1.

Add to this the possibility of hardware detection and automatic enabling/disabling of some effects. I don't like the idea of denying a user access to an effect entirely just because it would be too slow, though.
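Purely as a hypothetical sketch of how those three mechanisms could surface as user-facing settings (none of these names are real config keys):

#include <cstdio>
#include <map>
#include <string>

// Hypothetical settings bundle covering the three mechanisms above.
struct GraphicsOptions
{
    float effectLodDistance = 200.0f;   // 1. distance where effects swap to cheap fallbacks
    int   materialQuality   = 2;        // 2. caps which material effects get loaded
    std::map<std::string, bool> effectToggles = {   // 3. per-effect switches
        { "normalmapping", true },
        { "parallax",      true },
        { "ssao",          false },
    };
};

int main()
{
    GraphicsOptions opts;               // defaults could come from hardware detection
    for (const auto& e : opts.effectToggles)
        std::printf("%-14s %s\n", e.first.c_str(), e.second ? "on" : "off");
}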

Edited by myconid

  On 12/07/2012 at 12:13 PM, zoot said:

The effects XML has something called "contexts" aka "modes". I suspect this would be used for binary effect on/effect off stuff.

I think that would require us to recompile each shader for every frame. Not really efficient.
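A minimal sketch of the underlying issue, with made-up names rather than the actual shader manager: if "contexts" become preprocessor defines, each combination is a separate compiled program, so per-frame toggling is only cheap when every variant is compiled once and cached.

#include <cstdio>
#include <map>
#include <string>

std::map<std::string, int> g_programCache;   // defines -> compiled program handle (stubbed)

int GetProgram(const std::string& defines)
{
    auto it = g_programCache.find(defines);
    if (it != g_programCache.end())
        return it->second;                   // cache hit: no recompile this frame
    std::printf("compiling variant [%s]\n", defines.c_str());  // stands in for compile+link
    int handle = (int)g_programCache.size() + 1;
    g_programCache[defines] = handle;
    return handle;
}

int main()
{
    GetProgram("#define MODE_REFLECTION 1"); // first frame: compiled
    GetProgram("#define MODE_REFLECTION 1"); // later frames: reused, not recompiled
    GetProgram("#define MODE_REFLECTION 0"); // flipping the mode = another full variant
}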

