Everything posted by aeonios

  1. For particle clouds we're talking a few KB of vertex data at most. For procedural volumetric clouds it's more like 14MB worth of 3D textures and other data, and probably a lot more expensive to render. Rendering cost depends a lot on how realistic you want the clouds to look. Extra-realistic clouds would also be considerably more expensive to render in water reflections, which require rendering and lighting them a second time from a different angle. Personally I prefer to lean towards the side of cheapness.
  2. I doubt it'd be an issue if we used unit cube clipping. Most of the particles would be frustum culled by the GPU automatically so the cost of texture reads would be minimal.
  3. Ah, you're right, that's only if you pass vertex data in a uniform. If you use instanced arrays you can pass as much vertex data as will fit in GPU memory. That depends on a lot of things. Since the clouds are moving every frame you probably couldn't cache them meaningfully without it looking wrong. It also heavily depends on what optimizations are used for rendering the shadow map, i.e. whether any sort of shadow map focusing is used.
  4. I think you can only pass 800 sets of instance attributes at a time. That's still 800 times fewer draw calls, though. Ah, I wasn't thinking of rendering the clouds into the skybox, but rather having them be particles in world space like the cloud particle effect used in maps. They could even cast shadows. In fact they should cast shadows! Drawing them that way would allow them to move and be visible in the camera as objects above the ground, which looks ridiculously cool.
  5. Well, you could always draw the sky with one draw call. You can technically draw up to 800 particles at once with an instanced draw call, but that's OpenGL 3 (there's a rough instancing sketch at the end of this list). Also, lots of maps use the existing cloud particle effects anyway, so it wouldn't be much more expensive (if at all) as long as you didn't use a super-sophisticated lighting system or millions of clouds. It'd certainly be nice to have more variety in cloud textures and more particle effects for making random-ish cloud-looking blobs. A dynamic sky wouldn't actually cost more unless you wanted the sun to move and create a dynamic day/night cycle. Even then you can redraw the six sides of the skybox separately and spread the work out over several frames; depending on how quickly the sun moves you would probably only need to re-render the sky once every couple of seconds, and only update one side of the skybox per second.
  6. I noticed that in this image the skybox looks severely stretched at the horizon. That's not really good. I've been thinking of using a dynamic sky, but that would require dynamic clouds, and particles are expensive due to draw calls. It can still be done, but it'd be a lot of work producing an OpenGL 3.x rendering pipeline to make it affordable.
  7. Yeah, a p2p model would make it irrelevant who the host is after the game starts. Spring uses p2p and anyone can leave the game without consequence. Most Spring games also handle redistributing a player's stuff if they crash or disconnect, and can even give it back if they rejoin the game (using server-side login credentials to verify the user's identity). A client-server model is not so smart for an RTS.
  8. Well, adding exposure and gamma controls would be pretty trivial. Setting up the options in Atlas would be more difficult than the actual shader code. I guess the options in Atlas need to be updated anyway, so why not?
  9. Using floats for color data is ridiculously expensive and unnecessary unless you want to write a professional image editing application or something. Floats do give you greater precision, but that's not really needed in games. Gamma and exposure are probably not things an artist should need to worry about. We already have full control over lighting and everything else. If the "exposure" is too low then make the lighting brighter, and if it's too high then tone it down. Simple stuff. You can control both the direct lighting and the ambient, plus sun overbrightness, which gives you a lot of options already. That screenshot above was taken on Bactria, but I modified all the lighting and fog settings to get it to look the way I wanted. If you have a complaint, complain about the settings, not the totally unrelated shader.
  10. No. That is an incorrect implementation of Reinhard. Proper Reinhard applies the curve to luminance rather than to each color channel (see the sketch at the end of this list); otherwise you get nasty non-linear hue shifting. With proper Reinhard tone mapping, if you input an LDR image in the 0..1 color range you get the exact same image back out, but with your implementation you don't. If you simplify the luminance formula for exposure == 1 you get ilum = (lum * (1.0 + lum)) / (lum + 1.0), which is just lum again, and gamma is optional.
  11. It'll more than likely get its own option in graphics settings one way or the other. The only reason I've got it set to always on right now is because I don't know how to configure user options.
  12. Have you ever actually written an HDR renderer? :|
  13. No, if you implement tone mapping correctly then shifting from HDR to LDR will give you the exact same result as if you'd used only LDR. In fact the Reinhard tone mapping algorithm in that link you gave was actually wrong. Exponential tone mapping causes non-linear hue shifting which cannot be corrected by any means, and leads to the sort of ugly "looking at the scene through polarized sunglasses" look that everyone associates with HDR. People claim that it has magical powers to make the lighting better or to "bring out details", but really all it does is distort the colors beyond recognition.
  14. Bloom is actually on a really low setting in that screenshot, and has little to nothing to do with the lighting of those houses. I don't think HDR does what you think it does.
  15. HDR honestly does nothing when it comes to lighting unless you're stacking lights that might cause a color overflow. Screen pixels are 0..255 intensity and you cannot exceed that no matter how big your delusional ambitions are. Also I still don't see what you're talking about. The side of that building is red, and the sides of the houses are white, which is why they show up relatively bright even in shadow.
  16. I think the problem with the roofs is more of an art problem. You don't see super-bright, super-clean whites like that in nature, and they're kind of obnoxious. Also, Lambertian diffuse lighting is a bad model for rough surfaces like a plaster roof; that would need Oren-Nayar and a better material system. I don't really see any issue with the shadows there, though.
  17. I actually got into graphics programming in 0ad as a break from AI/unitAI programming in Zero-K. Making shinies is fun! DoF can be disabled by disabling postproc, although making DoF and bloom separately toggleable would be pretty trivial. Anti-aliasing would indeed be nice; DoF especially makes aliasing painfully obvious (and you'd think a blur would hide it, but no). FXAA is probably a bad choice, though. It has a bad tendency to blur everything into oblivion, especially for small units or anything with a complicated texture, and it's also potentially very expensive, especially when it hits false positives and blurs things it shouldn't. MSAA would be nice, though. I guess I'll look into that soonish. I don't think it should be that difficult; afaik all you have to do is set up multisampled color and depth attachments and then enable multisampling in the GL state. Bonus screenshot (it begged to be taken):
  18. I just got finished tuning up the new combined postproc shader. I posted some screenshots up here: https://code.wildfiregames.com/D1454 It's pretty epic.
  19. Well yeah, it'd break 2.x support. Cascades are awful and don't even do what they're intended to do. If you want something like that, then light-space perspective shadow mapping is a far better solution, but it makes shadow filtering trickier due to the non-constant scaling. Cascades just turn the shadow map code into unreadable, unmanageable spaghetti, and they're expensive for little to no benefit. Unit cube clipping would also be nice for shadows, as I don't think we're doing that already. I haven't really implemented any of that myself though, so I don't know how to make it work. It also requires a depth prepass.

      R11G11B10 is kind of bad since it results in undesirable hue shifts, in particular yellowing. If there's a 10/10/10 format that'd be better, but as I said we have no real use for HDR, and if we did then a 10-bit format would probably not have sufficient precision. Trust me, I've done work with that and there were overflows that resulted in some really weird artifacts. You can fix that by conditionally applying tone rescaling during lighting, but I don't know if you can get away with less than 16 bits even then. Again, though, that's not anything we have real use for, since we're not stacking lights so we'll never see >1.0. Real HDR is seriously overrated even for bloom, and bloom looks better when it's applied in LDR anyway.

      Deferred rendering is also only useful if you're using lots of lights, and forward+ is generally better since it lets you use MSAA and translucency. I don't see much of a case for having tons of lights in 0ad; there are more than zero cases of flaming projectiles that could possibly use dynamic lights, but it's not exactly common. I also did some camera responsiveness tests for comparison and didn't notice much difference between filtering and no filtering.
  20. Yeah, I got it. I built wxWidgets the last time I cloned the repo, but that was a good while ago so I forgot. I'll just copy wxWidgets over from my old repo and then rebuild. edit: correction, I'm going to have to update and rebuild wxWidgets because the version I'm using is outdated and Atlas needs a newer one. :[
  21. Ah, wxWidgets. I totally forgot about that. GL3 would also make soft particles practical, which 0ad badly needs.
  22. Win7. Not really. Most of the performance improvement comes from various rendering functions that reduce the number of draw calls, and all of those are present in GL3.x. The speedups that are possible with GL4.x don't really apply to 0ad, since the situations that would allow such speedups simply don't occur. For reference, my bargain graphics card supports GL3.3.
  23. Not really an issue, but I don't think the shader cares what version of GL the game is using anyway.

      Yes, it doesn't run. That's a pain because it prevents me from making good test maps to demonstrate the difference. I still need to figure out how to adjust the camera in game to get low-angle shots. I don't know the specific error, though; I'd have to dig around to figure out where that's recorded.

      This does seem to be an improvement. I need to reduce the reflection wobble to account for it, though, because it ends up being ridiculous. I wonder if this fixes the black water issue that occurs with high-waviness ocean waves at certain wind angles.

      True HDR can be done with RGB16F as well, which is my preferred format. HDR doesn't make sense for 0ad, though, because there aren't any stacking projectile lights or anything else that would push the brightness above 1.0.

      IMO you shouldn't need to select effects anyway. I'm not really aware of any other postproc effects that would be good for 0ad, and the ones we've got are probably best combined.

      I've been testing that, but it's difficult without the ability to adjust map settings to compensate for the changes or to eliminate interfering factors like fog.

      I'm running an old Core i3-2120 with 4GB of DDR3 RAM and an NVIDIA GT 240 with 1GB of DDR3 RAM. It's literally the cheapest graphics card you can buy. The in-game fps meter shows a pretty consistent 30-50 fps even with all the bells and whistles I've been working on turned on, and the general responsiveness is good.

      I'm not a big fan of GL4. There are some interesting things you can do with GL4.x, but none of them really apply to 0ad. Occlusion culling is pretty useless in 0ad since basically nothing is ever occluded enough to get culled, and fancy stuff like motion blur is kind of pointless. I don't really know how to mix shadow sampling modes in GLSL without copying the shadow map anyway, if that's even possible.
  24. Filterable shadow maps are a lot more expensive than PCF filtering. I did a lot of testing using the Intel shadow explorer and that was my basic conclusion. Filterable shadow maps also create nasty light-bleeding artifacts that are unavoidable if you actually use any filtering. I had considered even implementing PCSS for real soft shadows, since I know how to make that exceptionally cheap, but doing that in OpenGL is a lot harder than in Direct3D and I honestly have no idea how to make it work. At the very least it'd require upgrading the shaders to GL3.x. It's not much more expensive than the existing filter at any rate. I did replace the box filter with a simple 1px Gaussian blur as well.

      I'm also considering merging the HDR and DoF shaders into one and having them be always active as long as postproc is enabled. Not only is it easy to do, but since the two shaders read the same textures at the same positions the memory locality would be high, so the cost of applying both effects rather than just one or the other is minimal. Also, I don't think map makers should be able to tell people what postproc effects to use. If you check the 'postproc' box under graphics you don't generally expect to get nothing for it, and map makers already have all the HDR sliders they can use to adjust bloom and such (down to nothing at all, if desired).

      No, but my patch doesn't touch the vertex shader. I don't know anything about that.

      The map editor is broken in the current git snapshot, which makes it difficult to get good screenshots. I can't seem to manage to adjust the camera angle in game to take low-angle shots from replays. I'm working on it anyway. I'll probably do that once I've finished testing and pushing the DoF/HDR changes to Trac.
  25. It's been a while since I've done any work on this, but I've actually made quite a few improvements locally that I never turned into patches. I had to update to the latest development snapshot, but now I'm starting to push my backed-up changes to Trac. Some of the things I've got coming up:

      Shadow Filtering
      I replaced the old box filter with an 8-sample Poisson disk filter. This should yield significantly smoother shadows with soft edges. (A rough sketch of this kind of filter follows after this list.)

      Depth of Field
      The existing DoF shader is unusable. The blur it uses is not correctly implemented, and it tries (and fails miserably) to approximate a camera's focal distance, which isn't terribly appropriate for an RTS. I made a much simpler DoF shader that just interpolates with a blurred/downscaled version of the screen image based on depth, approximating the focal behavior of the human eye. It basically softens far-away objects slightly. It's subtle but noticeable, works properly, and is also a lot cheaper to apply than the current filter. (There's a sketch of this below as well.)

      Water
      As a minor tweak to my previous work on the water shader, I increased the contrast of water reflections to accentuate shadows a bit more. Currently objects reflected in water tend to have a "washed out" look due to the lack of contrast. This simply color-balances them a little better.
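
A rough sketch of the instanced cloud-billboard idea from the particle discussion above, assuming a GL 3.x context. The attribute names, the camera-facing quad expansion, and the use of per-instance arrays are my assumptions, not the engine's actual particle code:

    #version 330 core
    // Vertex shader for instanced cloud billboards (sketch only).
    // Each instance supplies a world-space centre and a half-size; the quad
    // corners come from a shared unit quad in [-1, 1].
    layout(location = 0) in vec2 a_corner;        // per-vertex quad corner
    layout(location = 1) in vec3 a_instancePos;   // per-instance cloud centre
    layout(location = 2) in float a_instanceSize; // per-instance half-size

    uniform mat4 u_view;
    uniform mat4 u_proj;

    out vec2 v_uv;

    void main()
    {
        // The camera's right/up vectors in world space are the first two rows
        // of the view matrix (GLSL matrices index as [column][row]).
        vec3 right = vec3(u_view[0][0], u_view[1][0], u_view[2][0]);
        vec3 up    = vec3(u_view[0][1], u_view[1][1], u_view[2][1]);

        vec3 worldPos = a_instancePos
                      + right * (a_corner.x * a_instanceSize)
                      + up    * (a_corner.y * a_instanceSize);

        v_uv = a_corner * 0.5 + 0.5;
        gl_Position = u_proj * u_view * vec4(worldPos, 1.0);
    }

On the CPU side the two instance attributes would be marked with glVertexAttribDivisor(attrib, 1) and the whole cloud layer drawn with a single glDrawArraysInstanced call.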
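The luminance-based Reinhard mentioned in the tone mapping posts above (the snippet those posts pointed at isn't preserved in this archive), written out as a minimal sketch. The uniform names and the white-point term are my assumptions; the point from the posts is that the curve is applied to luminance and the color is rescaled by the luminance ratio, so an LDR image with exposure == 1 comes back unchanged:

    #version 330 core
    // Luminance-based (extended) Reinhard tone mapping, applied as a
    // full-screen postprocess (sketch only; names are illustrative).
    uniform sampler2D u_scene;
    uniform float u_exposure;   // 1.0 = no exposure change
    uniform float u_white;      // smallest luminance that maps to 1.0

    in vec2 v_uv;
    out vec4 fragColor;

    void main()
    {
        vec3 color = texture(u_scene, v_uv).rgb;
        float lum = dot(color, vec3(0.2126, 0.7152, 0.0722)); // Rec. 709 luminance
        float l = lum * u_exposure;

        // Tone-map the luminance only. With u_exposure == 1.0 and u_white == 1.0
        // this reduces to ilum == lum, so LDR input passes through untouched.
        float ilum = (l * (1.0 + l / (u_white * u_white))) / (1.0 + l);

        // Rescale the color by the luminance ratio so hues are not shifted.
        fragColor = vec4(color * (ilum / max(lum, 1e-6)), 1.0);
    }

An optional gamma step would go after this, as noted in the post.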
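A sketch of the 8-sample Poisson disk shadow filter described in the shadow-filtering note above. The disk offsets, the uniform names, and the plain depth comparison (rather than a hardware comparison sampler) are illustrative assumptions, not the actual patch:

    #version 330 core
    // 8-sample Poisson disk PCF over a depth-format shadow map (sketch only).
    uniform sampler2D u_shadowMap;
    uniform vec2 u_shadowTexelSize;   // 1.0 / shadow-map resolution

    in vec3 v_shadowCoord;            // xy = shadow-map UV, z = receiver depth
    out vec4 fragColor;

    const vec2 POISSON[8] = vec2[](
        vec2(-0.613, -0.617), vec2( 0.170, -0.040),
        vec2(-0.299,  0.792), vec2( 0.645,  0.493),
        vec2( 0.421, -0.907), vec2(-0.817,  0.356),
        vec2( 0.940, -0.200), vec2(-0.248, -0.380)
    );

    void main()
    {
        float lit = 0.0;
        for (int i = 0; i < 8; ++i)
        {
            vec2 uv = v_shadowCoord.xy + POISSON[i] * u_shadowTexelSize;
            lit += (v_shadowCoord.z <= texture(u_shadowMap, uv).r) ? 1.0 : 0.0;
        }
        fragColor = vec4(vec3(lit / 8.0), 1.0);   // fraction of samples that are lit
    }

In practice the result would be folded into the lighting rather than written out directly, and a small depth bias would be added to the comparison to avoid shadow acne.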
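And a sketch of the depth-of-field approach from the same post: blend toward a blurred/downscaled copy of the screen based on depth, so distant objects soften slightly. The uniform names, the linear-depth assumption, and the simple far-field ramp are mine:

    #version 330 core
    // Depth-based blend between the sharp scene and a blurred copy (sketch only).
    uniform sampler2D u_scene;        // full-resolution scene color
    uniform sampler2D u_sceneBlurred; // blurred / downscaled copy of the scene
    uniform sampler2D u_depth;        // scene depth, assumed already linearised
    uniform float u_focusDepth;       // depth at which softening starts
    uniform float u_blurRange;        // depth range over which it ramps to full

    in vec2 v_uv;
    out vec4 fragColor;

    void main()
    {
        float depth = texture(u_depth, v_uv).r;

        // 0.0 near the camera, ramping to 1.0 in the far distance.
        float blend = clamp((depth - u_focusDepth) / u_blurRange, 0.0, 1.0);

        vec3 sharp = texture(u_scene, v_uv).rgb;
        vec3 soft  = texture(u_sceneBlurred, v_uv).rgb;
        fragColor = vec4(mix(sharp, soft, blend), 1.0);
    }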