myconid

WFG Retired

  • Posts: 790
  • Joined
  • Last visited
  • Days Won: 11

Everything posted by myconid

  1. I think that would require us to recompile each shader for every frame. Not really efficient.
  2. Looks quite nice. Are you using the heightmap to calculate the direction of the waves?
  3. Okay, we have three things that are relevant. First, the LOD thing we were talking about yesterday: effects that recede into the background should be able to replace themselves with less resource-intensive alternatives. Second, material-based objects already have a simple system where the user can scale the amount of effects that are loaded based on their hardware. Third, we need to allow users to configure which effects they want to activate or deactivate, or at least control the effect distances from the first point. Add to this the possibility of hardware detection and automatic enabling/disabling of some effects. I don't like the idea of not allowing a user to even access an effect because it would be too slow, though.
  4. It is! If we want to be ultra-realistic, we can use the shadowmap texture/transform to do that from the perspective of the light source. zoot, please ignore that for now.
  5. Here: http://imgur.com/a/gzO9Q (normalmapping + parallax) Looks good! I need to check what's been changed (if anything) in the multi-texture and multi-UV stuff and submit new patches. Then those need to be reviewed. Then the rest as patches. Then that needs to be reviewed. Do forever.
  6. Agreed. I think the massive water plane we have at the moment is too inflexible (pun intended) and you can't really do much with it. Maybe eventually we can replace that with multiple small water planes, each of which has its own elevation, materials, etc. and can be placed in Atlas as objects. I hear you. Updated my biwater branch with some hacky and inefficient code to give you access to the depth buffer so you can experiment in the shader. (not confident that it works, btw, haven't tested it) Nice, looking forward to seeing what you do!
  7. Looks promising! Are you adding foam to low-depth areas, basically? Is it a new texture or are you calculating it in the shader? Btw, I had an idea: https://github.com/myconid/0ad/tree/watertest (needs recompile)
  8. I don't think high quality AO raytracing is practical at runtime. Blender traces hundreds of rays per texel; even for an average model (say 10k vertices) with a reasonably-sized texture (512x512) it can take several minutes to render. That said, we may be able to do something on the GPU, if shadowmapping is supported by the user's hardware: we can set up the model with a few hundred directional lights around it to simulate ambient "sky" light coming from all directions, and render a combination of all the shadows to the coordinates of its second UV map (see the sketch below). This will require rendering the model once per light, though it'll be hardware-accelerated, so it's much faster than CPU raytracing. As for the quality of the results, I'm sure it won't be as good as what we could get from offline raytracing (shadowmapping artifacts!), but it'll be good enough for prototyping, and still better-looking than SSAO. For the final game release, we probably want a build script, perhaps separate from the main build scripts, that detects which meshes have been added/changed and calls Blender to bake their AO automatically.
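     Here's a rough illustration of that idea, in case it helps. This is not code from the engine: all names (a_position, a_uv2, u_lightShadowMatrix, u_shadowMap, u_numLights) are made up, and the shadow bias/setup is hand-waved. The point is one pass per "sky" light, rasterising the mesh at its second-UV coordinates and accumulating the results into the AO texture with additive blending.

     // --- vertex shader: rasterise the mesh at its second-UV coordinates ---
     attribute vec3 a_position;          // model-space position
     attribute vec2 a_uv2;               // second UV set (the AO unwrap)
     uniform mat4 u_lightShadowMatrix;   // model space -> shadow-map space for this light
     varying vec4 v_shadowCoord;

     void main()
     {
         v_shadowCoord = u_lightShadowMatrix * vec4(a_position, 1.0);
         gl_Position = vec4(a_uv2 * 2.0 - 1.0, 0.0, 1.0);  // write into the AO texture
     }

     // --- fragment shader: 1/N if this texel sees the current light, 0 otherwise ---
     uniform sampler2D u_shadowMap;
     uniform float u_numLights;          // e.g. a few hundred hemisphere directions
     varying vec4 v_shadowCoord;

     void main()
     {
         vec3 coord = v_shadowCoord.xyz / v_shadowCoord.w;
         float lit = (texture2D(u_shadowMap, coord.xy).r + 0.001 >= coord.z) ? 1.0 : 0.0;
         gl_FragColor = vec4(vec3(lit / u_numLights), 1.0);  // summed via additive blending
     }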
  9. Well, when all this is done and committed I guess we'll probably want to do a "tech demo" for PR. Maybe first check if whoever is in charge of promotional materials has some specific preference. Personally, I'd love to see what the Roman buildings look like, especially the Mars temple and civil centre that I've been testing with all this time. If not those, pick whichever buildings you think will come out looking the coolest. No parallax, it's just a normalmap. Nope, not stressing at all, I just like to be efficient.
  10. I feel bad because everything you want to do depends on stuff I need to do first. Nope. You render it once and you can get the depth straight from the vertex shader.
  11. You don't need real raycasting to calculate that. Here's a clever little algorithm I've come across: first render the terrain and grab the depth buffer, which tells us the distance between the camera and the ground at each fragment. Pass the depth buffer as a texture into the water shader. Transform the water plane and calculate the depth value of each vertex, then interpolate it with a varying. We now have the depth of each point on the water plane and the depth of the terrain exactly behind it; subtracting the water depth from the terrain depth gives the distance light travels through water (see the sketch below). And what's more, when I implement the Postprocessing manager, the depth buffer will always be available as a texture.
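     A minimal fragment-shader sketch of the depth-difference part, assuming a standard perspective projection; the names (u_depthTex, u_zNear/u_zFar, v_waterDepth, v_screenPos) are placeholders, not the engine's actual uniforms/varyings:

     uniform sampler2D u_depthTex;   // terrain depth buffer, bound as a texture
     uniform float u_zNear, u_zFar;  // camera clip planes
     varying float v_waterDepth;     // eye-space depth of the water plane, computed per vertex
     varying vec4 v_screenPos;       // clip-space position, for the projective lookup

     // Convert a [0,1] depth-buffer value back to an eye-space distance.
     float linearize(float z)
     {
         return 2.0 * u_zNear * u_zFar / (u_zFar + u_zNear - (2.0 * z - 1.0) * (u_zFar - u_zNear));
     }

     void main()
     {
         vec2 uv = v_screenPos.xy / v_screenPos.w * 0.5 + 0.5;
         float terrainDepth = linearize(texture2D(u_depthTex, uv).r);
         float waterDist = terrainDepth - v_waterDepth;   // distance light travels through water
         // ... use waterDist to attenuate the water colour, add foam where it nears zero, etc.
         gl_FragColor = vec4(vec3(clamp(waterDist * 0.1, 0.0, 1.0)), 1.0);  // visualise for testing
     }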
  12. It's possible, yes. You can access a raw 2D array of vertex heights by using g_Game->GetWorld()->GetTerrain()->GetHeightMap(); with lineskip being m_VertsPerSide = g_Game->GetWorld()->GetTerrain()->GetVerticesPerSide(); Values are unsigned 16-bit ints. What on earth do you need raycasting for??
  13. Wijit, I removed the bones from the Hero model and attached your textures. Here's what I get: http://imgur.com/a/ZsvjU
  14. Maybe. I'll have to look into it. Ah, ok. Created a new map using a 3x3 Sobel filter, as opposed to the 2x2 filter you used. Using GIMP's normalmap plugin. Still pretty subtle. http://imgur.com/a/Siwn4 Don't worry, I don't mind at all. After all, I want to test this as well! In fact, do you have any other elevation maps I can play with?
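     For reference, here's the same 3x3 Sobel idea expressed as a standalone fragment shader, in case anyone wants to experiment with generating the normals at load time instead of in GIMP. This is not what the GIMP plugin does internally, and u_heightMap, u_texelSize and u_strength are made-up names.

     uniform sampler2D u_heightMap;
     uniform vec2 u_texelSize;    // 1.0 / texture dimensions
     uniform float u_strength;    // bump strength; larger = more pronounced normals
     varying vec2 v_uv;

     float h(vec2 offset)         // sample the heightmap at a neighbouring texel
     {
         return texture2D(u_heightMap, v_uv + offset * u_texelSize).r;
     }

     void main()
     {
         // 3x3 Sobel gradients in x and y.
         float gx = (h(vec2( 1.0, -1.0)) + 2.0 * h(vec2( 1.0, 0.0)) + h(vec2( 1.0, 1.0)))
                  - (h(vec2(-1.0, -1.0)) + 2.0 * h(vec2(-1.0, 0.0)) + h(vec2(-1.0, 1.0)));
         float gy = (h(vec2(-1.0,  1.0)) + 2.0 * h(vec2( 0.0, 1.0)) + h(vec2( 1.0, 1.0)))
                  - (h(vec2(-1.0, -1.0)) + 2.0 * h(vec2( 0.0,-1.0)) + h(vec2( 1.0,-1.0)));

         vec3 n = normalize(vec3(-gx * u_strength, -gy * u_strength, 1.0));
         gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);  // pack into [0,1] like a normalmap texture
     }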
  15. Wijitmaker, tried your Celt normalmap. It may be a bit too subtle (or, I chose the wrong building to test it on).
  16. Of course that's just my opinion. I do see the usefulness of what you're suggesting and it's certainly an improvement from a usability point of view. You know what, I'll do it in a separate branch, just to see how messy (or not) it turns out.
  17. From a coding perspective it's easy to add as we already have most of the code for it, but it feels messy to create a special case for something that doesn't really need it.
  18. That would be a PITA for the programmers, though. It doesn't save any memory (when many things use the same texture, it's automatically loaded only once). Maybe we could have an external tool to combine the alphamaps to a texture.
  19. If rectangular maps exist they must work like the circular maps, by hiding the parts of a square heightmap that fall outside a predefined shape. That probably happens somewhere in the simulation code...
  20. Uh, I should remind people that the modelmapping stuff is limited to instanced objects, like buildings and ships, but not people or animals! No doubt I'll get around to adding support for that too in a later patch, but for now I've sort of put it on the backburner because it won't make that huge of an impact visually, but would require hacking of SSE assembly on the CPU side to transform the tangents. On that note, perhaps it would be worth revising the GPU skinning code to transform animated objects on the GPU, which would remove the need for big changes to the engine... It would be possible, but I'm doubtful whether it would be desirable. SSAO can add haloing artifacts that are most visible on dynamic objects. If we use SSAO, it would be better applied to terrain, but not models. It's really slow, too. All the same, I'm planning to make an SSAO filter available once I properly add the "Postproc Manager", and we'll see what tricks we can use from there. I'm wondering whether the best use of SSAO would be for dynamically "baking" a top-down approximation of the terrain's AO at load time/when it's modified...
  21. We might as well. The better our framework, the easier it will be to maintain and expand later.
  22. I've already added something like that. The user-defined materialmgr.quality setting in the config determines which materials/shaders are loaded. For example, the player_trans_ao_parallax_spec.xml material has this line: <alternative material="player_trans_ao_spec.xml" quality="8"/> that tells it to load an alternative material without parallax if the user/engine set the quality factor to less than 8. Unfortunately, this is determined statically at load time, so we can't use it for LOD during rendering. Come to think of it, this could still be combined with the new stuff quite elegantly.
  23. If you look at the public/shaders/effects folder, you'll see how the multipass shader architecture works. It should be pretty easy to hack it so each pass has two optional parameters that define a range where that pass is active.
  24. Yeah, and it should all be abstracted as much as possible, so everything is defined in the xml and config.
  25. Actually, dynamic branching is inefficient and best avoided in all shaders, even when it's supported, and that includes GLSL shaders. Ideally, I want to unroll the loop in the GLSL shader as well. I'd prefer we find a unified solution that evaluates the condition just once on the CPU, based on eye distance and a user-defined parameter, and then chooses the appropriate shaders, rather than executing the condition on the GPU for each and every fragment with predetermined parameters (see the sketch below).
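     For what it's worth, a tiny sketch of how that could look on the shader side: the engine compiles two variants of the same shader with different #defines and binds the cheap one beyond some eye distance, so no per-fragment branching ever happens. USE_PARALLAX and the uniform/varying names are hypothetical, not the engine's actual ones.

     uniform sampler2D u_baseTex;
     varying vec2 v_uv;
     varying vec3 v_eyeVec;   // tangent-space eye vector from the vertex shader

     void main()
     {
     #ifdef USE_PARALLAX
         // Expensive path, compiled only into the "near" variant of the shader.
         float height = texture2D(u_baseTex, v_uv).a;   // height stored in alpha (assumption)
         vec2 uv = v_uv + normalize(v_eyeVec).xy * (height * 0.04 - 0.02);
     #else
         // Cheap path: the choice was made at shader-selection time, not per fragment.
         vec2 uv = v_uv;
     #endif
         gl_FragColor = texture2D(u_baseTex, uv);
     }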