myconid

Everything posted by myconid

  1. Ok. I'll also have to separate the code from the game data in the commits, because they've become somewhat mixed together. Btw, would it be possible for people to review the code straight from the git repo? It's very cumbersome to have to generate patches, get feedback, generate the patches again, get other feedback...
  2. I think I see what you mean. Yeah, that should be possible. It may not even require changes to the engine, just some tweaks in the shaders.
  3. You can't add normalmapping to an alphamap (it makes no sense). You can add normalmapping to the grass, and you can add normalmapping to the path. Actually, now you can even add normalmapping to decals (finished it today!). I'd say we could have a system where you can tile decals to create paths, like you'd tile models to create walls. Something of that sort would look a lot more detailed and much less "blocky".
  4. I think they're quite nice, though I'd say both the specular and the normal maps look a bit too flawless. If you look at the second pic (where I rotated the camera to face the sun), the reflected light looks way too even. Where there are scratches and imperfections in the stone walls or the marble floors (in the diffuse texture), the light could behave slightly differently to look more realistic. For example, in this pic there are some self-shadows, which would affect the surface normals/specularity.
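One cheap way to get the kind of imperfection described above is to let the diffuse texture itself drive variation in an otherwise flat specular value. This is only an illustrative plain-Python sketch of the idea, not 0 A.D. code; all names are made up.

```python
# Illustrative sketch: break up a too-even specular map by modulating it with
# the luminance of the diffuse texture, so scratches and stains in the diffuse
# also dim the reflected highlight.

def luminance(rgb):
    r, g, b = rgb
    return 0.2126 * r + 0.7152 * g + 0.0722 * b  # Rec. 709 weights

def perturbed_specular(base_spec, diffuse_rgb, strength=0.5):
    """Scale a flat specular value by the diffuse brightness.

    strength = 0 keeps the flat specular; 1 makes it follow the
    diffuse luminance completely."""
    lum = luminance(diffuse_rgb)
    return base_spec * ((1.0 - strength) + strength * lum)

# A dark scratch in the diffuse now also dulls the highlight:
clean   = perturbed_specular(0.8, (0.9, 0.9, 0.9))  # bright marble
scratch = perturbed_specular(0.8, (0.2, 0.2, 0.2))  # dark scratch
```

In a real shader this would be a one-line multiply on the sampled specular colour; the point is just that the highlight stops being perfectly uniform.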
  5. Right now it doesn't reflect buildings, just the sky. I guess in my first attempt I'll make it so everything is reflected (including dynamic objects) and updated every frame. This is actually easiest from a dev point of view, uses very little memory but a lot of processing power, and will give us an idea of how feasible this is. But anyway, as interesting as this is, I don't plan to tackle it right away. Just FYI, my priorities list atm looks somewhat like this:
       • Finish off the new materials system for the terrain renderer.
       • Extend this materials system to terrain decals.
       • Wrap up the custom terrain alphamaps loader.
       • Finish the postprocessing manager: add correct LOS rendering, expose effects parameters in config.
       • Add passing cloud shadows on terrain.
       • Add basic terrain erosion in Atlas.
       • Support for animated textures (parametric, like translation for a waterfall).
       • Support for animated textures (from files, like the water normalmaps).
       • Experiment with GPU raytracing of AO.
       • Experiment with cloud rendering, particle effects.
       • Experiment with reflection cubemap rendering.
       • Experiment with more realistic water effects (waves, ship trails etc).
       • Experiment with LOS obstructions and multiple pathfinder obstructions.
       • Et cetera, ad infinitum.
  6. Okay, we could do something like this: there are two cubemaps per reflective object. One contains all the static objects and is rarely updated, while the second contains all the dynamic objects and is updated every frame. The alpha channels of the two cubemaps contain depth information. When rendering the reflective object, we pass in both cubemaps, sample the same direction from both cubes and decide which one to use based on depth. Don't know if the alpha channel of a texture has good enough precision for depth testing, but this seems like a very interesting experiment. 32f textures, maybe? This is starting to sound way too wasteful.
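The per-fragment decision described above is just "take whichever sample is closer". A minimal sketch in plain Python standing in for the shader code (illustrative only; the encoding of "no hit" as a huge depth is an assumption):

```python
# Each cubemap sample is (r, g, b, depth), with depth stored in the alpha
# channel. The static and dynamic cubemaps are sampled in the same direction,
# and the closer of the two samples wins.

def resolve_reflection(static_sample, dynamic_sample):
    """Return whichever sample is nearer to the reflective surface."""
    return static_sample if static_sample[3] <= dynamic_sample[3] else dynamic_sample

building = (0.6, 0.5, 0.4, 12.0)  # static cubemap: a wall 12 units away
unit     = (0.8, 0.1, 0.1, 4.0)   # dynamic cubemap: a unit 4 units away
sky      = (0.3, 0.5, 0.9, 1e9)   # "no hit" encoded as a very large depth

assert resolve_reflection(building, unit) == unit   # the unit occludes the wall
assert resolve_reflection(building, sky) == building  # nothing dynamic in the way
```

The precision worry in the post is exactly about that fourth component: an 8-bit alpha gives only 256 depth levels, which is why a 32f format comes up.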
  7. I think that's equivalent to my solution, but uses more memory. Btw, I don't think we can do this with units that "walk into view" as for that we'd have to re-render all (or most) cubemaps every frame. I'm suggesting we render only when static objects like trees, buildings and terrain change, so we only rarely need to re-render the cubes. (or maybe I misunderstood your suggestion)
  8. Yeah, I've been thinking about that. I'm concerned that the cost of culling may exceed the actual cost of rendering... Okay, here's what we can do for efficient culling: we divide each game map into regions and store the static models in a 2d array of hashmaps, where each array element corresponds to a separate region. Every time the reflections need to change, we use the 9 regions around the model to re-render its cubemap. Sounds workable to me! On hardware that supports MRT, we can actually render all sides in a single pass! The only remaining question is the size of the cubemaps in memory. For reflections, 128x128 is probably enough, which I think comes out at around 400k per model. Not bad.
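Both the 3x3 region lookup and the memory estimate above are easy to sanity-check. A minimal plain-Python sketch (illustrative only, not engine code; names are made up):

```python
def affected_regions(rx, ry, grid_w, grid_h):
    """The 3x3 block of regions around region (rx, ry), clamped to the grid,
    i.e. the '9 regions around the model' used to re-render its cubemap."""
    return [(x, y)
            for x in range(max(0, rx - 1), min(grid_w, rx + 2))
            for y in range(max(0, ry - 1), min(grid_h, ry + 2))]

def cubemap_bytes(face_size, bytes_per_pixel=4):
    """Memory for one cubemap: 6 square faces at RGBA8."""
    return face_size * face_size * 6 * bytes_per_pixel

assert len(affected_regions(5, 5, 16, 16)) == 9   # interior of the map
assert len(affected_regions(0, 0, 16, 16)) == 4   # map corner: clamped
assert cubemap_bytes(128) == 393216               # 384 KiB, i.e. the "400k"
```

So a 128x128 RGBA8 cubemap is 393,216 bytes, which matches the rough 400k-per-model figure in the post.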
  9. That's the limitation, and what wraitii said is correct. The water on the objects is completely moddable and separate from the ocean water. You can edit the parameters by creating as many materials as you need. The missing reflections aren't too noticeable, tbh, but you can tell they're missing if you're looking for them. The reflections are basically the only hardcoded thing I use, which are generated using a cubemap of the skybox textures. We do have a couple of options to improve this... A worthwhile side-project would be to add support for loading cubemaps through the usual texture manager (so basically custom cubemaps loaded from the actor xml). This would allow us to at least have self-reflections in the water, but not reflections of other objects or terrain. It would also let us fake materials that look like this. A more challenging, but higher quality, solution would be to actually render our own cubemaps when there are large changes in the game-world (e.g. only after a new building is completed). Each reflective object instance would need its own low-resolution cubemap (i.e. somewhat large memory usage), and would need 6 renderings of the game-world for each update. This would give us accurate results, and may not be too performance/memory intensive since there probably won't be too many of these reflections.
  10. I'm using a cubemap to simulate sky reflections on arbitrary water planes. Works pretty well, I think. http://imgur.com/a/vgOsb Code: https://github.com/myconid/0ad/tree/skycubemap
  11. We might! Seriously though, just look at Skyrim! That sort of effect can be done with particles, which can be generated procedurally in Atlas. A while ago, I linked to a paper that explains how they did awesome-looking clouds in MS Flight Simulator '04. It's on the list!
  12. Yeah, I figured that was the case. I believe you can check if you are in Atlas by testing this: if (g_AtlasGameLoop) {} Not sure how reliable this is.
  13. Very nice! Tested it and it works great. It is a bit too slow in Atlas - maybe the superfancywater stuff could be turned off when not in the game proper? (after all, the water isn't even animated in Atlas)

      As for your git questions, you have a number of options. You can make a temporary copy of your water branch in git, squash the commits you created, write the squashed commit to a file to create a patch, check out the svn code and patch it with the "patch" command. Make sure everything is merged properly and then "diff" a new patch for submission. Remember not to include any of my model/terrainmapping code when squashing, though! It may be easier if you add the official 0ad git repo to your local git (which I believe is up to date enough, since the terrain renderer isn't changing much), make a copy of its master branch and cherry-pick/apply the commits you want from your water branch. You could then squash and create a patch to test with the svn code, though I'm fairly sure you won't need any further changes. Another option would be for me to update my master repo from upstream and have you pull the updates through me, though I can't be bothered to regenerate the actor/terrain xml files, so I'd prefer we avoid this for now.

      Btw, that trick I suggested for getting the accurate water depth looks even better than I expected! We should use it to do volumetric fog as well! Just think "fog planes" that apply that sort of effect to their surfaces... Come to think of it, the depth texture should be accessible to any material that requests it. On that note, if you're interested in future changes that may help speed up your code, look in my "postproc" branch and you'll find the (still unfinished) PostprocManager that has the depth buffer already available for binding as a texture, which can save you the overhead of copying the depth all the time. If you want to try it, remember to turn depth writes off if they're used in the water renderer (you can't read and write to the same buffer in the same shader). Anyway, nice work, wraitii!
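The volumetric fog idea above boils down to attenuating the colour by how much water (or fog) the view ray passes through, which the depth trick gives you. A toy plain-Python sketch, assuming a simple exponential extinction model (illustrative only, not actual 0 A.D. code):

```python
import math

def fog_factor(surface_depth, bottom_depth, density=0.15):
    """Exponential extinction over the stretch of water the view ray crosses.

    surface_depth: scene depth of the water (or fog) plane at this fragment.
    bottom_depth:  scene depth read from the depth texture behind it.
    Returns 0.0 for no fog, approaching 1.0 for fully fogged."""
    thickness = max(0.0, bottom_depth - surface_depth)
    return 1.0 - math.exp(-density * thickness)

shallow = fog_factor(10.0, 11.0)   # 1 unit of water: nearly clear
deep    = fog_factor(10.0, 30.0)   # 20 units of water: heavily fogged
assert shallow < deep < 1.0
```

In the shader, the result would blend the water (or fog plane) colour over the refracted scene; the density constant here is an arbitrary placeholder.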
  14. The effect hasn't changed - it's because I'm using Wijitmaker's hand-crafted heightmap. Thanks for reporting. That's probably due to my laziness (i.e. I type 1 instead of 1.0), combined with the fact that ATI's GLSL compiler is very permissive so I don't get those errors. Imho, since GLSL describes itself as a C-like language (instead of Java or such), the compilers should be very permissive. But I digress. I've changed the water shader GLSL version from 1.10 to 1.20 to match all the other shaders, and it should work now.
  15. fabio, it worked! Some odd seams at the polygon edges, but I don't think we'll get any closer on your hardware. (and thanks for responding/testing!)
  16. Yup, already did that and also reworked the math a bit. Hopefully it won't mind the crapload of uniforms that are getting merged from the modelmapping branch, though. fabio, please try it now! (and fingers crossed)
  17. Thanks fabio! We've seen something similar before, if you recall. It means there are too many inputs to the fragment shader. I'll try to reduce them, though I don't know if that'll be possible.
  18. Not very specific, though I think it should be there now. Closer now! http://imgur.com/sCObb Added a test map for you.
  19. Well, for reflections at least, everything needs to be rendered upside down. The more reflections you have, the more times you need to re-render everything for each frame. For refractions you need to "cut out" the bottom of the water and render it separately to a texture. Not sure if there's a clever way to do all of these at the same time (nothing comes to mind), but there must be a way; I just haven't thought of it yet. The alternative I was thinking of is to simply have some prerendered textures that we use for reflections/refractions. For example, we could just reflect the sky texture and refract a static "reservoir-bottom" texture. As long as they roughly look like they're moving correctly relative to the camera, it may hide the fact that they aren't real reflections.
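The "rendered upside down" part is just mirroring the camera across the water plane before the reflection pass. A tiny plain-Python sketch of that mirroring for a horizontal water plane (illustrative only; a real renderer would reflect the whole view matrix and add a clip plane):

```python
def reflect_across_water(point, water_height):
    """Mirror a world-space point across the horizontal plane y = water_height.
    Applying this to the camera position gives the viewpoint for the
    reflection render-to-texture pass."""
    x, y, z = point
    return (x, 2.0 * water_height - y, z)

cam = (10.0, 25.0, -4.0)           # camera 5 units above water at y = 20
mirrored = reflect_across_water(cam, 20.0)
assert mirrored == (10.0, 15.0, -4.0)   # same distance below the surface

# Mirroring is an involution: reflecting twice restores the original camera.
assert reflect_across_water(mirrored, 20.0) == cam
```

This is also why each extra reflective plane costs a full scene re-render: every distinct plane needs its own mirrored pass.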
  20. Aqueducts yes, lakes no(t yet). Yup, basically it attaches a new material type to a transparent plane that is then attached as a prop to a model. The material uses a normalmap/specular shader + some new "renderquery" stuff in the engine (a system I've been working on that lets materials query the renderer for specific info, such as the game-time, the water texture etc). For lakes I want to modify the "global" water instead, which is completely separate from this. Specifically, I want to make it possible for the global water plane to be deformed just like the terrain plane.
  21. Myconid's random experiment #2842: Animated water material that can be attached to models. http://imgur.com/QfhzH No reflections or refractions. Not sure how efficient those would be to render. Maybe they can be faked with static simple/cube textures instead...
  22. Please ignore the fog thing for now, it's not properly implemented yet so it gets drawn over the LOS. In the prototype I used multi-target rendering to draw the LOS to a different buffer and then recombine it with the output after the fog, though that feels like a massive kludge and I want to avoid it. Maybe there's something clever I can do with the renderbuffer's alpha channel, if you see what I mean... As for the other error, fixed. Man, ATI's GLSL compiler will let me get away with anything.
  23. Let's not get ahead of ourselves, people. Here my focus was on the postprocessing manager, not on the effects themselves. You'll get a chance to tinker with the effects settings yourselves eventually, so for now I'll just stick to coding.