myconid Posted July 10, 2012

> Very good then... I'll just wait (and no, I indeed didn't need real raycasting, just some info about the heightmap, or the depth buffer, though I hadn't thought of that).

I feel bad, because everything you want to do depends on stuff I need to do first.

> Would this (particularly points 4/5) require rendering the water twice?

Nope. You render it once, and you can get the depth straight from the vertex shader.
Wijitmaker Posted July 10, 2012

> Maybe. I'll have to look into it.
> Ah, ok. Created a new map using a 3x3 Sobel filter, as opposed to the 2x2 filter you used. Using GIMP's normalmap plugin. Still pretty subtle. http://imgur.com/a/Siwn4
> Don't worry, I don't mind at all. After all, I want to test this as well! In fact, do you have any other elevation maps I can play with?

That Celtic texture looks very nice; I think that one did well. I was hoping for slightly better results with the unit texture, but I'm afraid I can't do much about it because I didn't make that one. I also don't have any more elevation maps, though I could make another as a demo. Is there one you had in mind that you would like to see? I think all the shield textures benefit from both the specular and the normal mapping.
wraitii Posted July 10, 2012

Was parallax activated on the unit?

Don't overstress yourself: the less I can work on shaders, the more I'll work on my bot.
myconid Posted July 10, 2012

> That Celtic texture looks very nice; I think that one did well. [...] Is there one you had in mind that you would like to see?

Well, when all this is done and committed, I guess we'll probably want to do a "tech demo" for PR. Maybe first check whether whoever is in charge of promotional materials has a specific preference. Personally, I'd love to see how the Roman buildings look, especially the Mars temple and the civil centre that I've been testing with all this time. If not those, pick whichever buildings you think will come out looking the coolest.

> Was parallax activated on the unit? Don't overstress yourself: the less I can work on shaders, the more I'll work on my bot.

No parallax, it's just a normal map. And nope, not stressing at all, I just like to be efficient.
feneur Posted July 10, 2012

> Ah, ok. Created a new map using a 3x3 Sobel filter, as opposed to the 2x2 filter you used. Using GIMP's normalmap plugin. Still pretty subtle. http://imgur.com/a/Siwn4

I personally don't think that's subtle at all; in fact, I'd say it looks a bit extreme on the stone walls =) Perhaps it needs a higher-resolution texture to make any difference for the straw roof/wood, though. On the other hand, perhaps it's never going to make a big enough difference on such areas at the zoom level we're using.
historic_bruno Posted July 11, 2012

> It could also be used on buildings (modifying their XML to add the baked texture). This would help with modding support, too, as they would not be required to bake the AO themselves.

I was thinking about this. Currently our texture manager can do realtime texture conversion from PNG to DDS, and that's integrated with the cache system. So if a texture is referenced that isn't already present in the cache but we have a source file, there's no error; it simply uses a temporary "default" texture and marks the desired texture as needing conversion in a separate thread. This is what's happening when you clear the cache and see all grey textures the first time they load. When we build the release package, we run the archive builder, which does all the caching at once and stores the converted files in a zip archive, so users don't have to wait for the textures to convert.

We could do something very similar with per-model AO. Each time a model is loaded (with a material that wants AO), some manager would search for the cached AO map texture. If present, it uses that; if not, it needs to either a) load the AO map from a PNG, or b) generate a new one. If an artist did create an AO map by hand or in Blender, there would be a suitably named PNG to find. Otherwise, instead of an error (there's no source file), the manager would mark the AO texture for generation in a separate thread (the process doesn't sound hard: basically, look at every face of the model and send out random rays, counting how many collide with the model -- it could be done very quickly on the GPU, but maybe it's best to have a CPU path as well for the archive building?), cache the results, and the cached texture would be loaded and used for rendering. While the AO map was being generated, the model would still be visible, only without AO. Most users would never see this, because the maps would be pre-generated in the archive builder process.

I think this is flexible and mirrors how our textures already work. I would like the process to be as easy as possible for artists; in fact, I would like to make it so that even non-artists and people who have no idea what AO is can still create/import models for the game that look great. It would be nice if we could even generate multiple non-overlapping UV sets at load time, for models with only one -- but that's more of a wish for maximum flexibility than a requirement.
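To make the ray-counting idea concrete, here is a hypothetical GLSL sketch of the GPU variant, done per texel rather than per face. None of these names exist in the engine: it assumes a prior pass has rasterized the model's world positions and normals into its second UV layout (positionTex, normalTex), and that the mesh triangles have been packed into a float texture (triTex).

```glsl
// --- fragment shader: one fragment per AO-map texel (hypothetical sketch) ---
#version 120

uniform sampler2D positionTex;  // world position of each texel (pre-rasterized)
uniform sampler2D normalTex;    // surface normal of each texel (pre-rasterized)
uniform sampler2D triTex;       // mesh vertices, three RGB32F texels per triangle
uniform int numTris;            // triangle count
uniform int triTexWidth;        // width of triTex in texels

varying vec2 uv;  // this texel's coordinates in the AO map

vec3 fetchVert(int i)
{
    // One row of texels for brevity; a real packing would be 2D.
    return texture2D(triTex, vec2((float(i) + 0.5) / float(triTexWidth), 0.5)).xyz;
}

// Standard Moeller-Trumbore ray/triangle intersection test.
bool hitsTriangle(vec3 orig, vec3 dir, vec3 v0, vec3 v1, vec3 v2)
{
    vec3 e1 = v1 - v0;
    vec3 e2 = v2 - v0;
    vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (abs(det) < 1e-6) return false;   // ray parallel to triangle
    float inv = 1.0 / det;
    vec3 t = orig - v0;
    float u = dot(t, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    vec3 q = cross(t, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    return dot(e2, q) * inv > 1e-4;      // intersection in front of the origin
}

void main()
{
    vec3 pos = texture2D(positionTex, uv).xyz;
    vec3 nrm = normalize(texture2D(normalTex, uv).xyz);

    const int SAMPLES = 64;
    int hits = 0;
    for (int s = 0; s < SAMPLES; ++s)
    {
        // Crude deterministic direction; a real bake would use a proper
        // random or low-discrepancy sequence.
        vec3 d = normalize(vec3(sin(float(s) * 12.9898),
                                fract(float(s) * 0.618) * 2.0 - 1.0,
                                cos(float(s) * 78.233)));
        if (dot(d, nrm) < 0.0) d = -d;   // keep rays in the upper hemisphere

        // Brute force over every triangle, purely for clarity.
        for (int i = 0; i < numTris; ++i)
        {
            if (hitsTriangle(pos + nrm * 0.01, d,
                             fetchVert(3 * i), fetchVert(3 * i + 1), fetchVert(3 * i + 2)))
            {
                hits++;
                break;
            }
        }
    }

    // Fraction of unoccluded rays = the baked AO value for this texel.
    float ao = 1.0 - float(hits) / float(SAMPLES);
    gl_FragColor = vec4(vec3(ao), 1.0);
}
```

The brute-force loop over every triangle (and the dynamic loop bound) is for clarity only and isn't guaranteed to work on all GLSL 1.20 hardware, which is another argument for keeping a CPU path around.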
wrod Posted July 11, 2012

> Cool! I happened upon this page yesterday. They use water depth in a seemingly simple way to do foam (don't know how well it looks, though). Another interesting point they make is about the phenomenon of 'color extinction', which may be related to what you are doing with murkiness.

I love how in that water, instead of the ocean floor being plainly visible, it steadily becomes harder to see with depth.
wraitii Posted July 11, 2012

@HistoricBruno: technically, AO could require a lot of "rays" to be precise. It can probably be approximated fairly well, for example with SSAO. I'm fairly sure we can't get a result as good as Blender's, but for most models an approximation should be enough. However, I agree that it would be the most straightforward way.
zoot Posted July 11, 2012

> @HistoricBruno: technically, AO could require a lot of "rays" to be precise. It can probably be approximated fairly well, for example with SSAO. I'm fairly sure we can't get a result as good as Blender's, but for most models an approximation should be enough.

If we use Blender's AO code, it should be just as good, and it would be done by the autobuilder, so most users wouldn't have to wait for it.
myconid Posted July 11, 2012

> We could do something very similar with per-model AO. Each time a model is loaded (with a material that wants AO), some manager would search for the cached AO map texture. [...] Most users would never see this, because the maps would be pre-generated in the archive builder process.

I don't think high-quality AO raytracing is practical at runtime. Blender traces hundreds of rays per texel; even for an average model (say 10k vertices) with a reasonably sized texture (512x512), it can take several minutes to render.

That said, we may be able to do something on the GPU, if shadowmapping is supported by the user's hardware: we can set up the model with a few hundred directional lights around it to simulate ambient "sky" light coming from all directions, and render a combination of all the shadows to the coordinates of its second UV map. This requires rendering the model once for each and every light, though it'll be hardware-accelerated, so it's much faster than CPU raytracing. As for the quality of the results, I'm sure it won't be as good as what we could get from offline raytracing (shadowmapping artifacts!), but it'll be good enough for prototyping, and still better-looking than SSAO.

For the final game release, we probably want a build script, perhaps separate from the main build scripts, that detects which meshes have been added/changed and calls Blender to bake their AO automatically.
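A hypothetical sketch of that shadowmap-accumulation bake: the model is rendered once per "sky" light into the AO map with additive blending (GL_ONE, GL_ONE), so the per-light visibilities sum to an average. All names here (uv2, modelMatrix, shadowTransform, lightWeight) are illustrative, not existing engine uniforms.

```glsl
// --- vertex shader (one pass per "sky" light) ---
#version 120

attribute vec3 position;  // model-space vertex position
attribute vec2 uv2;       // second, non-overlapping UV set

uniform mat4 modelMatrix;      // model -> world
uniform mat4 shadowTransform;  // world -> this light's shadowmap space (bias included)

varying vec4 shadowCoord;

void main()
{
    shadowCoord = shadowTransform * (modelMatrix * vec4(position, 1.0));

    // Rasterize into the AO texture: the second UV set *is* the output position.
    gl_Position = vec4(uv2 * 2.0 - 1.0, 0.0, 1.0);
}

// --- fragment shader ---
#version 120

uniform sampler2DShadow shadowTex;  // depth map rendered from this light
uniform float lightWeight;          // 1.0 / (number of lights)

varying vec4 shadowCoord;

void main()
{
    // 1.0 if this texel sees the current light, 0.0 if it is occluded.
    float visible = shadow2DProj(shadowTex, shadowCoord).r;

    // With additive blending (GL_ONE, GL_ONE), the contributions of all
    // lights sum to the average visibility, i.e. the baked AO term.
    gl_FragColor = vec4(vec3(visible * lightWeight), 1.0);
}
```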
wraitii Posted July 11, 2012

The better solution here is always Blender rendering. But for mod support, it could be practical either to include a script that calls Blender's AO baking, or to have something in the game itself. It's probably easier to rely on Blender, though.
Wijitmaker Posted July 11, 2012

> Personally, I'd love to see how the Roman buildings look, especially the Mars temple and the civil centre that I've been testing with all this time. If not those, pick whichever buildings you think will come out looking the coolest.

Alright, I've started working on the Romans. The roofs are really important, because they are a large part of what people see. Could you show me how this looks?

Also, I was looking at that Brennus texture a little bit, and it appears to me that it is inverted (what is sticking down should be up, and vice versa). I'm going to see if there is some way I can convert it back to a height/elevation map.
zoot Posted July 11, 2012

What is needed before the modelmapping branch can go into SVN?
myconid Posted July 11, 2012

> Alright, I've started working on the Romans. The roofs are really important, because they are a large part of what people see. Could you show me how this looks?

Here: http://imgur.com/a/gzO9Q (normalmapping + parallax). Looks good!

> What is needed before the modelmapping branch can go into SVN?

I need to check what's been changed (if anything) in the multi-texture and multi-UV stuff and submit new patches. Then those need to be reviewed. Then the rest goes in as patches. Then that needs to be reviewed. Do forever.
Wijitmaker Posted July 11, 2012

Cool, thanks, that's not bad. I'll carry on. I'll make a specular and an illumination map too; I think those would look nice with the marble and the windows.
quantumstate Posted July 11, 2012

Out of curiosity, how did you make that map, Wijitmaker?
Gen.Kenobi Posted July 11, 2012

Let's not forget that Blender has two types of ambient occlusion. There's the raytraced kind, which needs a significant amount of time to calculate, depending on how big the model is (triangles) and on the number of samples you set (the higher, the better the quality, and the more time it takes). And there's the approximated kind, which is faster, but its quality is not as good. I can get examples if you guys want.

I strongly believe that AO should be dynamic. Most of today's AAA games and engines already work with dynamic AO rendering, or some pre-rendered AO. It would look much better, especially with the integration of different objects in the scene, such as a big city with darker streets, etc. If it's not possible... well, it's my dream.

Also, have you guys considered adding light bouncing?
Loki1950 Posted July 12, 2012

We at Vega Strike have found xNormal, from http://www.xnormal.net, to be useful for generating various texture types; it also has a built-in model viewer.

Enjoy the Choice
Sonarpulse Posted July 12, 2012

> You don't need real raycasting to calculate that. Here's a clever little algorithm I've come across:
> 1. First, render the terrain.
> 2. Get the depth buffer, which tells us the distance between the camera and the ground at each fragment.
> 3. Pass the depth buffer as a texture into the water shader.
> 4. Transform the water plane and calculate the depth value of each vertex, then interpolate with a varying.
> 5. We now have the depth of each point on the plane (B) and the depth of each point on the terrain exactly behind it (A). A - B = the distance light travels through the water.
> And what's more, when I implement the Postprocessing manager, the depth buffer will always be available as a texture.

Haha, is that http://www.digitalartform.com/archives/2006/05/faking_simple_v.html from page 8 of this thread? Also, for ultra-realism, wouldn't what you are saying need the light-ray-in-water distance computed both from the camera's perspective and from the light source's?
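For reference, a minimal GLSL fragment sketch of the depth-difference trick, assuming the terrain pass depth buffer is bound as depthTex and the vertex shader passes down the clip-space position and the water surface's view-space depth (all names are illustrative):

```glsl
#version 120

uniform sampler2D depthTex;   // depth buffer from the terrain pass
uniform float zNear;          // camera near plane
uniform float zFar;           // camera far plane

varying vec4 screenPos;        // clip-space position (gl_Position, passed down)
varying float waterViewDepth;  // view-space depth of the water surface

// Convert a nonlinear depth-buffer value back to view-space distance.
float linearizeDepth(float d)
{
    float ndc = 2.0 * d - 1.0;
    return 2.0 * zNear * zFar / (zFar + zNear - ndc * (zFar - zNear));
}

void main()
{
    // Project to [0,1] texture space and sample the terrain depth behind this fragment.
    vec2 uv = (screenPos.xy / screenPos.w) * 0.5 + 0.5;
    float terrainViewDepth = linearizeDepth(texture2D(depthTex, uv).r);

    // A - B: the distance light travels through the water column.
    float thickness = max(terrainViewDepth - waterViewDepth, 0.0);

    // Fade towards a murky colour with increasing thickness (toy example).
    vec3 shallow = vec3(0.2, 0.6, 0.6);
    vec3 deep    = vec3(0.0, 0.1, 0.2);
    gl_FragColor = vec4(mix(shallow, deep, clamp(thickness / 10.0, 0.0, 1.0)), 1.0);
}
```

The linearizeDepth step matters because the depth buffer stores nonlinear values; subtracting raw depth-buffer samples would not give a usable distance.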
zoot Posted July 12, 2012

I get this on the myconid/biwater branch:

```
ERROR: Failed to compile shader 'shaders/glsl/water_high.fs':
0:83(33): error: Could not implicitly convert operands to arithmetic operator
0:0(0): error: no matching function for call to `mod(, float)'
0:0(0): error: candidates are: float mod(float, float)
0:0(0): error: vec2 mod(vec2, float)
0:0(0): error: vec3 mod(vec3, float)
0:0(0): error: vec4 mod(vec4, float)
0:0(0): error: vec2 mod(vec2, vec2)
0:0(0): error: vec3 mod(vec3, vec3)
0:0(0): error: vec4 mod(vec4, vec4)
0:83(45): error: Operands to arithmetic operators must be numeric
0:0(0): error: no matching function for call to `mix(vec3, vec3, )'
0:0(0): error: candidates are: float mix(float, float, float)
0:0(0): error: vec2 mix(vec2, vec2, vec2)
0:0(0): error: vec3 mix(vec3, vec3, vec3)
0:0(0): error: vec4 mix(vec4, vec4, vec4)
0:0(0): error: vec2 mix(vec2, vec2, float)
0:0(0): error: ...
```
myconid Posted July 12, 2012

> Haha, is that http://www.digitalartform.com/archives/2006/05/faking_simple_v.html from page 8 of this thread? Also, for ultra-realism, wouldn't what you are saying need the light-ray-in-water distance computed both from the camera's perspective and from the light source's?

It is! If we want to be ultra-realistic, we can use the shadowmap texture/transform to do that from the perspective of the light source.

zoot, please ignore that for now.
wraitii Posted July 12, 2012

I'm not sure we'd need that level of realism. Though of course it could be "easily" implemented, it's just a matter of computing power.

BTW, myconid, you/we'd have to start thinking about adding new graphical stuff as options that can be activated/deactivated easily. Some of this stuff is CPU/GPU intensive.

(And if you check this thread, you'll see I finally managed to solve my glTexImage2D issues.)
myconid Posted July 12, 2012

> BTW, myconid, you/we'd have to start thinking about adding new graphical stuff as options that can be activated/deactivated easily. Some of this stuff is CPU/GPU intensive.

Okay, we have three things that are relevant:

1. The LOD thing we were talking about yesterday: effects that recede into the background should be able to replace themselves with less resource-intensive alternatives.
2. Material-based objects already have a simple system where the user can scale the number of effects that are loaded based on their hardware.
3. We also need to allow users to configure which effects they want to activate or deactivate, or at least to control the effect distance in 1.

Add to this the possibility of hardware detection and automatic enabling/disabling of some effects. I don't like the idea of not allowing a user to even access an effect because it would be too slow, though.
zoot Posted July 12, 2012

The effects XML has something called "contexts", aka "modes". I suspect this would be used for binary effect-on/effect-off stuff.
myconid Posted July 12, 2012

> The effects XML has something called "contexts", aka "modes". I suspect this would be used for binary effect-on/effect-off stuff.

I think that would require us to recompile each shader for every frame. Not really efficient.
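One common alternative is to bake the on/off choice in at compile time with preprocessor defines and cache one compiled variant per combination, so toggling an option selects a different precompiled shader rather than recompiling every frame. A minimal sketch; USE_SPECULAR is just an example define, not necessarily the engine's actual flag:

```glsl
#version 120

uniform sampler2D baseTex;
varying vec2 texCoord;

void main()
{
    vec3 colour = texture2D(baseTex, texCoord).rgb;

#ifdef USE_SPECULAR
    // The specular term would only be compiled into variants where the
    // option is enabled; disabled variants skip the cost entirely.
    colour += vec3(0.1);  // placeholder for the real specular computation
#endif

    gl_FragColor = vec4(colour, 1.0);
}
```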