Posts posted by aeonios

  1. 10 hours ago, vladislavbelov said:

    But how big will the attribute buffer be if they aren't all moving in the same direction? Producing natural clouds requires a lot of particles.

    There is a good approach using volumetric clouds, but it's too new to support older video cards.

    The simplest way is to use a flat height texture (i.e. an analytic modification of Perlin noise) with a parallax-like shader. But it also looks simple.

    For particle clouds we're talking a few KB of vertex data at most. For procedural volumetric clouds it's more like 14MB worth of 3D textures and such, and probably a lot more expensive to render. Rendering cost depends a lot on how realistic you want the clouds to look. Extra-realistic clouds would also be considerably more expensive to render for water reflections, which require rendering and lighting them a second time from a different angle. Personally I prefer to lean towards the side of cheapness.
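    For concreteness, here's the sort of back-of-envelope math involved (the per-particle attribute layout and the volume size are illustrative assumptions, not actual engine values):

```python
# Back-of-envelope memory comparison between particle clouds and a
# volumetric approach. All layout/size choices here are assumptions.
# Per-particle instance attributes: vec3 position + size + rotation + age
# = 6 floats = 24 bytes.
BYTES_PER_PARTICLE = 6 * 4
particles = 800
particle_vbo_kb = particles * BYTES_PER_PARTICLE / 1024  # under 20 KB

# A volumetric renderer needs 3D noise textures, e.g. one 128^3 RGBA8 volume:
volume_mb = (128 ** 3 * 4) / (1024 ** 2)  # 8 MB for a single volume
print(f"particle attributes: {particle_vbo_kb:.2f} KB")
print(f"one 128^3 RGBA8 volume: {volume_mb:.0f} MB")
```

    A couple of volumes like that, plus lookup tables, lands in the "more like 14MB" territory, versus kilobytes for the particle data.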

  2. 35 minutes ago, vladislavbelov said:

    Yeah, in case we're rendering clouds and their shadows. But usually the player shouldn't see clouds in game, because they may block the view; that's only an issue at far zoom. For the usual zoom we can do simpler and cheaper things.

    I doubt it'd be an issue if we used unit cube clipping. Most of the particles would be frustum culled by the GPU automatically so the cost of texture reads would be minimal.

  3. 1 hour ago, vladislavbelov said:

    Where the 800 value is from? Number of attributes may be limited by VBO size, texture size or whatever you use for the instancing. I.e. you can get attributes from a texture in the fragment/vertex shader, so the maximum number of particles would be MAX_TEXTURE_SIZE / ATTR_STRUCT_SIZE.

    Ah you're right, that's only if you pass vertex data in a uniform. If you use instanced arrays you can pass as much vertex data as will fit in GPU memory.

    1 hour ago, vladislavbelov said:

    I think that too. Also we don't have to render particles for the shadow map every frame, if all clouds are generated procedurally and moving in the same direction.

    That depends on a lot of things. Since the clouds are moving every frame you probably couldn't cache them meaningfully without it looking wrong. It also heavily depends on what optimizations are used for rendering the shadow map, i.e. whether any sort of shadow map focusing is used.

  4. 22 minutes ago, vladislavbelov said:

    Why only 800? You can draw 1e3, 1e4, even 1e5 particles (if it's a simple quad), and it works well.

    I think you can only pass 800 sets of instance attributes at a time. That's still 800 times fewer draw calls though.
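    For what it's worth, ~800 is the right order of magnitude if the instance attributes go through a uniform array. A rough budget calculation (the component limit and per-instance layout here are assumptions, not measured values):

```python
# Hypothetical instance budget when packing attributes into a uniform array.
# GL_MAX_VERTEX_UNIFORM_COMPONENTS is commonly 4096 on desktop GPUs
# (the GL 3.x guaranteed minimum is 1024); each vec4 costs 4 components.
MAX_VERTEX_UNIFORM_COMPONENTS = 4096  # assumed typical value
reserved = 64 * 4      # components kept back for matrices and other uniforms
per_instance = 4       # one vec4 (xyz position + size) per particle, assumed
max_instances = (MAX_VERTEX_UNIFORM_COMPONENTS - reserved) // per_instance
print(max_instances)  # 960 with these assumptions -- same order as 800
```

    With instanced arrays instead of uniforms, the limit is effectively just GPU memory, as noted above.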

    23 minutes ago, vladislavbelov said:

    You don't have to cache the rendered sky; only prerender the atmospheric scattering of the sky and the subsurface scattering of the clouds, and then render them in real time.

    Ah I wasn't thinking of rendering the clouds into the skybox, but rather having them be particles in worldspace like the cloud particle effect used in maps. They could even cast shadows. In fact they should cast shadows! Drawing them that way would allow them to move and be visible in the camera as an object above the ground, which looks ridiculously cool.

    screenshot0023.png

     

  5. 1 minute ago, vladislavbelov said:

    It can be done in a few draw calls (1 for the sky, 1 for particles), but yes, it costs noticeably more than the current solution.

    Well you could always draw the sky with one draw call. You can technically draw up to 800 particles at once with instanced drawing, but that requires OpenGL 3. Also, lots of maps use the existing cloud particle effects anyway, so it wouldn't be much more expensive (if at all) as long as you didn't use a super sophisticated lighting system or millions of clouds. It'd certainly be nice to have more variety in cloud textures and more particle effects for making random-ish cloud-looking blobs.

    Dynamic sky wouldn't actually cost more unless you wanted the sun to move and create a dynamic day/night cycle. Even then you could redraw the six sides of the skybox separately and spread the work out over several frames. Depending on how quickly the sun moves, you'd probably only need to re-render the sky once every couple of seconds, updating one side of the skybox per second.
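    The staggered-update idea can be sketched as a trivial round-robin schedule (toy code, not engine code):

```python
# Sketch: re-render at most one of the six skybox cubemap faces per second,
# so a full sky refresh completes every 6 seconds.
def faces_to_update(second: int) -> list:
    """Return which cubemap face index (0..5) to redraw this second."""
    return [second % 6]

# Over any 6 consecutive seconds, every face gets refreshed exactly once.
updated = [f for s in range(6) for f in faces_to_update(s)]
print(sorted(updated))  # [0, 1, 2, 3, 4, 5]
```

    The same scheme works at frame granularity if the sun needs to move faster; only the modulus changes.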

  6. On 2/5/2018 at 5:31 PM, stanislas69 said:

    I can lower the skybox.

    I adjusted the height to the new default height in Atlas, but all old maps are actually lower.

    screenshot0207.png

    I noticed that in this image the skybox looks severely stretched at the horizon. That's not really good. I've been thinking of using a dynamic sky, but that'd require dynamic clouds, and particles are expensive due to draw calls. It can still be done, but it'd be a lot of work producing an OpenGL 3.x rendering pipeline to make it affordable.

  7. Yeah, a p2p model would make it irrelevant who the host is after the game starts. Spring uses p2p, and anyone can leave the game without consequence. Most Spring games also handle redistributing a player's stuff if they crash or disconnect, and can even give it back if they rejoin the game (using server-side login credentials to verify the user's identity). A client-server model is not so smart for an RTS.

    • Like 1
  8. 9 hours ago, vladislavbelov said:

    I like the changes, but there is a problem: we have many maps, and artists/modders may want to use different settings. So it'd be ideal to make the shader fully customisable.

    Well adding exposure and gamma controls would be pretty trivial. Setting up the options in atlas would be more difficult than the actual shader code. I guess the options in atlas need to be updated anyway so why not?

  9. You understood me incorrectly. I didn't say that my implementation is the correct Reinhard one; I only mentioned that Reinhard is simple and could easily be used here too.

    Let's return to HDR. I said that I prefer HDR > LDR over LDR > LDR for a multi-pass pipeline, because losing high color values stops artists/modders from changing exposure/gamma freely. Please look at the example I sent above: http://marcinignac.com/blog/pragmatic-pbr-hdr/305-exposure-basic/. With gamma disabled (whatever that means in this context) and certain exposure values, all dark or light areas lose detail. So my point is to use float textures for g-buffer-like data when possible.

    Using floats for color data is ridiculously expensive and unnecessary unless you want to write a professional image editing application or something. Floats do give you greater precision but that's not really needed in games. Gamma and exposure are probably unnecessary things for an artist to need to worry about. We already have full control over lighting and everything else. If the "exposure" is too low then make the lighting brighter, and if it's too high then tone it down. Simple stuff. You can control both the direct lighting and the ambient, plus sun overbrightness, which gives you a lot of options already.

    That screenshot above was taken on Bactria, but I modified all the lighting and fog settings to get it to look the way I wanted. If you have a complaint, complain about the settings, not the totally unrelated shader.

  10. 57 minutes ago, vladislavbelov said:

    Yes. And I think we're talking from different sides and we don't understand each other. What I mean:

    Rendering the scene into a float (i.e. RGB32F) buffer with the original brightness (vec3 color) > selecting a gamma (by a pregame setting or on the fly) (float gamma) > a simple tone mapping (like the Reinhard one) (color = color / (color + vec3(1.0))) > a gamma correction (color = pow(color, vec3(1.0 / gamma))).

    No. That is an incorrect implementation of Reinhard. Proper Reinhard uses luminance rather than color, like this:


    vec3 toneMapReinhard(vec3 color){
        // float whitePoint = 1.0/exposure;
        const float whitePoint = 1.0;
        float lum = dot(color, vec3(0.2990, 0.5870, 0.1140));
        float ilum = (lum * (1.0 + (lum/(whitePoint * whitePoint))))/(lum + 1.0);
        return color * ilum/lum;
    }

    otherwise you get nasty non-linear hue shifting. With proper Reinhard tone mapping, if you input an LDR image in the 0..1 color range you get the exact same image back out, but with your implementation you don't.

    If you simplify the above formula for exposure == 1 you get ilum = (lum * (1.0 + lum))/(lum + 1.0), and gamma is optional.
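    To make the "LDR in, LDR out" claim concrete, here's a Python port of the shader above; with whitePoint == 1.0 the math reduces to the identity:

```python
def tone_map_reinhard(color, white_point=1.0):
    """Python port of the GLSL toneMapReinhard above, for checking the math."""
    r, g, b = color
    lum = 0.2990 * r + 0.5870 * g + 0.1140 * b  # Rec. 601 luminance weights
    ilum = (lum * (1.0 + lum / (white_point * white_point))) / (lum + 1.0)
    scale = ilum / lum  # uniform scale on all channels preserves hue
    return (r * scale, g * scale, b * scale)

# With white_point == 1.0: ilum = lum * (1 + lum) / (lum + 1) = lum,
# so scale == 1 and LDR input comes back (essentially) unchanged:
out = tone_map_reinhard((0.25, 0.5, 0.75))
print(out)  # approximately (0.25, 0.5, 0.75)
```

    Because every channel is multiplied by the same scale factor, channel ratios (and thus hue) are preserved for any white point, which is exactly what the per-channel `color / (color + 1)` version fails to do.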

  11. No, if you implement tone mapping correctly then shifting from HDR to LDR will give you the exact same result as if you'd used only LDR. In fact the Reinhard tone mapping algorithm in that link you gave was actually wrong. Exponential tone mapping causes non-linear hue shifting which cannot be corrected by any means, and leads to the sort of ugly "looking at the scene through polarized sunglasses" look that everyone associates with HDR. People claim that it has magical powers to make the lighting better or to "bring out details", but really all they've done is distort the colors beyond recognition.

  12. HDR honestly does nothing when it comes to lighting unless you're stacking lights that might cause a color overflow. Screen pixels are 0..255 intensity and you cannot exceed that no matter how big your delusional ambitions are. Also I still don't see what you're talking about. The side of that building is red, and the sides of the houses are white, which is why they show up relatively bright even in shadow.

  13. I think the problem with the roofs is more of an art problem. You don't see super bright, super clean whites like that in nature, and they're kind of obnoxious. Also, Lambertian diffuse lighting is a bad model for rough surfaces like a plaster roof; that'd need Oren-Nayar and a better material system. I don't really see any issue with the shadows there though.

  14. I actually got into graphics programming in 0ad as a break from AI/unitAI programming in zero-k. :P Making shinies is fun!

     DoF can be disabled by disabling postproc, although making DoF and bloom separately toggleable would be pretty trivial.

    Anti-aliasing would indeed be nice. DoF especially makes aliasing painfully obvious (and you'd think that a blur would hide it, but no). FXAA is probably a bad choice though. It has a bad tendency to blur everything into oblivion, especially for small units or anything with a complicated texture. It's also potentially very expensive, especially when it's hitting false positives and blurring things that it shouldn't be.

    MSAA would be nice though. I guess I'll look into that soonish. I don't think it should be that difficult though, afaik all you have to do is set up a multisampled depth texture and then enable MSAA rendering in GL state.

    Bonus screen shot (it begged to be taken):

     

    screenshot0018.png

    • Like 1
  15. 28 minutes ago, vladislavbelov said:

    Shaders don't, but the video driver does. I.e. we still support Intel video cards, which may only support version 3.0 or lower, and those lack many nice features.

    Well yeah, it'd break 2.x support.

    28 minutes ago, vladislavbelov said:

    I want to change the current shadow mapping approach, because at low angles we get low resolution per terrain unit. I'd like to use PSM or CSM, and I'm leaning toward CSM, i.e. 2 cascades.

    Cascades are awful and don't even do what they're intended to. If you want something like that then lightspace perspective shadow mapping is a far better solution, but it makes shadow filtering trickier due to the non-constant scaling. Cascades just turn the shadow map code into unreadable unmanageable spaghetti, and are also expensive for little to no benefit. Unit cube clipping would also be nice for shadows, as I don't think we're doing that already. I haven't really implemented any of that myself though so I don't know how to make it work. It also requires depth prepass.

    28 minutes ago, vladislavbelov said:

    I agree about HDR. Also we don't have deferred rendering yet. Regarding formats, RGB16F or RGB32F have enough precision, but they're expensive. So if R11G11B10F is enough, we should use it, because it's 4 times cheaper.

    R11G11B10 is kind of bad since it results in undesirable hue shifts, in particular yellowing. If there's a 10/10/10 format that'd be better, but as I said we have no real use for HDR, and if we did, a 10-bit format would probably not have sufficient precision. Trust me, I've done work with that, and there were overflows that resulted in some really weird artifacts. Of course you can fix that by conditionally applying tone rescaling during lighting, but I don't know if you can get away with less than 16 bits even then.
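    The yellowing follows from the bit layout: in R11G11B10F, red and green each get a 6-bit mantissa while blue only gets 5, so blue quantizes with a larger step. A rough model (ignoring the format's exponent-handling details):

```python
import math

def quantize_float(x: float, mantissa_bits: int) -> float:
    """Round x to a float with the given mantissa width -- a crude model of
    how R11G11B10F packs each channel (sign and denormals ignored)."""
    if x == 0.0:
        return 0.0
    exp = math.floor(math.log2(x))
    step = 2.0 ** (exp - mantissa_bits)  # spacing of representable values
    return round(x / step) * step

# Red/green get 6 mantissa bits, blue only 5, so a neutral grey picks up a
# larger error in blue, which reads as a warm (yellowish) tint.
grey = 0.73
err_rg = abs(grey - quantize_float(grey, 6))
err_b = abs(grey - quantize_float(grey, 5))
print(err_rg, err_b)  # blue error is larger
```

    The same asymmetry is why a 10/10/10 layout (equal bits per channel) wouldn't tint, even though it has less total precision.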

    Again though, that's not anything we have real use for since we're not stacking lighting so we'll never see >1.0. Real HDR is seriously overrated even for bloom, and bloom looks better when it's applied in LDR anyway.

    Deferred rendering is also only useful if you're using lots of lights, and forward+ is generally better since it lets you use MSAA and translucency. I don't see much of a case for having tons of lights in 0ad; although there are more than zero cases of flaming projectiles that could possibly use dynamic lights, it's not exactly common.

    28 minutes ago, vladislavbelov said:

    The FPS meter isn't enough; we need stricter tests, because the FPS meter has a noticeable margin of error.

    I also did some camera responsiveness tests for comparison and didn't notice much difference between filtering and no filtering.

  16. Yeah I got it. I built wxwidgets the last time I cloned the repo but that was a good while ago so I forgot. I'll just copy over wxwidgets from my old repo and then rebuild.

    edit: correction, I'm going to have to update and rebuild wxwidgets because the version I'm using is outdated and atlas needs newer. :[

  17. 4 minutes ago, stanislas69 said:

    What OS are you running?

    Win7

    4 minutes ago, stanislas69 said:

    I believe the idea behind the switch was this: https://trac.wildfiregames.com/ticket/3054

    Isn't there any performance gain in switching to OGL4 when possible?

    Not really. Most of the performance improvement comes from various rendering functions that reduce the number of draw calls, and all of those are present in GL3.x. The speedups that are possible with GL4.x don't really apply to 0ad since the situations that would allow such speedups simply don't occur. For reference my bargain gfx card supports GL3.3.

  18. 35 minutes ago, vladislavbelov said:

    It's not possible yet, because we haven't dropped GL2 yet. But it'd be good to have a switchable renderer in the future.

    Not really an issue, but I don't think the shader cares what version of GL is being used by the game anyway.

    36 minutes ago, vladislavbelov said:

    What's broken? It doesn't run? You can make screenshots from the game.

    Yes, it doesn't run. That's a pain because it prevents me from making good test maps to demonstrate the difference. I still need to figure out how to adjust the camera in game to get low-angle shots. I don't know the specific error though; I'd have to dig around to figure out where that's recorded.

    37 minutes ago, vladislavbelov said:

    This does seem to be an improvement. I need to reduce the reflection wobble to account for that though, because it ends up being ridiculous. I wonder if this fixes the black water issue that occurs with high waviness ocean waves at certain wind angles.

    1 hour ago, vladislavbelov said:

    The current postprocessing isn't complete; i.e. we can't select multiple effects. Also the HDR isn't actually HDR, it's the usual LDR, but with color corrections. We need to use R10G10B10F or RGB32F for true HDR.

    True HDR can be done with RGB16F as well, which is my preferred format. HDR doesn't make sense for 0ad though because there aren't any stacking projectile lights or anything that would cause the brightness to exceed 1.0. IMO you shouldn't need to select effects anyway. I'm not really aware of any other postproc effects that would be good for 0ad, and the ones we've got are probably best combined. I've been testing that but it's difficult without the ability to adjust map settings to compensate for the changes or to eliminate interfering factors like fog.

    1 hour ago, vladislavbelov said:

    Any filtering is noticeable on low-end hardware. It'd be good to know your hardware, plus some charts to compare results. I'd like to see MSM.

    I'm running an old Core i3 2120 with 4GB of DDR3 RAM and an nvidia 240GT with 1GB of DDR3 RAM. It's literally the cheapest gfx card you can buy. The in-game fps meter shows 30-50fps pretty consistently even with all the bells and whistles I've been working on turned on, and the general responsiveness is good.

     

    1 hour ago, stanislas69 said:

    One could have a look at: This, which is a hacky OGL4 implementation for 0ad.

    Also, maybe we could use GLEW to get rid of those version considerations: Here

    I'm not a big fan of GL4. There are some interesting things you can do with GL4.x but none of those things really apply to 0ad. Occlusion culling is pretty useless in 0ad since basically nothing is ever occluded to get culled, and fancy stuff like motion blur is kind of pointless. I don't really know how to mix shadow sampling modes in GLSL without copying the shadow map anyway, if it's even possible.

    • Like 1
  19. 9 minutes ago, vladislavbelov said:

    If we make the shadows more expensive, we can use moment shadow maps.

    Filterable shadow maps are a lot more expensive than PCF filtering. I did a lot of testing using the Intel shadow explorer and that was my basic conclusion. Filterable shadow maps also create nasty light bleeding artifacts that are unavoidable if you actually use any filtering. I had even considered implementing PCSS for real soft shadows, since I know how to make that exceptionally cheap, but doing that in OpenGL is a lot harder than in Direct3D and I honestly have no idea how to make it work. At the very least it'd require upgrading the shaders to GL3.x.

    It's not much more expensive than the existing filter at any rate.

    13 minutes ago, vladislavbelov said:

    That's good, but our blur is pretty simple (just linear); it'd be good to replace it with a Gaussian one.

    I did replace the box filter with a simple 1px Gaussian blur as well. I'm also considering merging the HDR and DoF shaders into one and having them always active as long as postproc is enabled. Not only is it easy to do, but since the two shaders read the same textures at the same positions, memory locality would be high, so the cost of applying both effects rather than just one or the other is minimal. Also, I don't think map makers should be able to tell people what postproc effects to use. If you check the 'postproc' box under graphics you don't generally expect to get nothing for it, and mapmakers already have all the hdr sliders they can use to adjust bloom and such (to nothing at all, if desired).

    17 minutes ago, vladislavbelov said:

    It's pretty hard to make the water realistic and beautiful at the same time, especially without real references (i.e. for the contrast). Also, did you use my patch for the water vertex shader fix? The current one is broken, so the specular light is calculated wrongly for heights != 15.

    No, but my patch doesn't touch the vertex shader. I don't know anything about that.

    19 minutes ago, vladislavbelov said:

    It'd be good to have screenshots to compare before/after states.

    The map editor is broken in the current git snapshot, which makes it difficult to get good screenshots. I can't seem to manage to adjust the camera angle in game to take low angle shots from replays. :( I'm working on it anyway. I'll probably do that once I've finished testing and pushing the dof/hdr changes to trac.

  20. It's been a while since I've done any work on this but I've actually had quite a few improvements that I've made locally that I never made patches for. I had to update to the latest development snapshot but now I'm starting to push my backed up changes to trac.

    Some of the things I've got coming up:

    Shadow Filtering

    I replaced the old box filter with an 8 sample poisson disk filter. This should yield significantly smoother shadows with soft edges.
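    As a sketch of the idea (the offsets below are illustrative, not the ones in the patch, and shadow_lookup stands in for the depth-compare texture fetch):

```python
# 8-tap Poisson-disk PCF sketch. The disk offsets are made-up example values.
POISSON_OFFSETS = [
    (-0.613, 0.617), (0.170, -0.040), (-0.299, -0.791), (0.645, 0.493),
    (-0.651, -0.717), (0.421, -0.272), (-0.817, -0.272), (0.244, 0.930),
]

def pcf_poisson(shadow_lookup, uv, radius=1.5 / 1024.0):
    """Average 8 shadow-map depth comparisons on a Poisson disk around uv."""
    u, v = uv
    total = sum(shadow_lookup((u + dx * radius, v + dy * radius))
                for dx, dy in POISSON_OFFSETS)
    return total / len(POISSON_OFFSETS)

# A fragment straddling a shadow edge gets a soft ~0.5 instead of a hard 0/1:
half_shadow = lambda uv: 1.0 if uv[0] >= 0.5 else 0.0
print(pcf_poisson(half_shadow, (0.5, 0.5)))  # 0.5 with these offsets
```

    The irregular sample pattern is what kills the blocky stair-stepping a box filter leaves behind.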

    Depth of Field

    The existing dof shader is unusable. The blur it uses is not correctly implemented and it tries (and fails miserably) to approximate a camera's focal distance, which isn't terribly appropriate for an RTS. I made a much simpler dof shader that just interpolates with a blurred/downscaled version of the screen image based on depth to approximate the focal behavior of the human eye. It basically softens far away objects slightly. It's subtle but noticeable, works properly, and is also a lot cheaper to apply than the current filter.
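    The core of this approach fits in a few lines; the focus thresholds below are made-up values for illustration:

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth between."""
    t = max(0.0, min(1.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def dof_blend(sharp, blurred, depth, focus_near=0.2, focus_far=0.8):
    """Depth-based DoF: lerp toward the blurred image for far fragments.
    The focus_near/focus_far thresholds are assumed example values."""
    k = smoothstep(focus_near, focus_far, depth)
    return sharp * (1.0 - k) + blurred * k

print(dof_blend(1.0, 0.0, 0.0))  # near/in focus -> 1.0 (fully sharp)
print(dof_blend(1.0, 0.0, 1.0))  # far away -> 0.0 (fully blurred)
```

    Since the blurred image is a downscaled copy that only has to be produced once per frame, the per-pixel cost is just one extra texture read and a lerp.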

    Water

    As a minor tweak to my previous work on the water shader I increased the contrast of water reflections to accentuate shadows a bit more. Currently objects reflected in water tend to have a "washed out" look due to the lack of contrast. This will simply color balance them a little better.
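    The adjustment amounts to scaling around a mid-grey pivot (the factor here is an assumed value, not the one in the patch):

```python
def adjust_contrast(c, contrast=1.2, pivot=0.5):
    """Simple contrast boost around a mid-grey pivot, clamped to [0, 1].
    The contrast factor is an illustrative assumption."""
    return max(0.0, min(1.0, (c - pivot) * contrast + pivot))

# Darks get darker, brights get brighter, the pivot is unchanged:
print(adjust_contrast(0.25))  # ~0.2
print(adjust_contrast(0.50))  # 0.5
print(adjust_contrast(0.75))  # ~0.8
```

    Applied to the reflection color, this pushes shadowed areas of reflected objects darker and lit areas brighter, countering the washed-out look.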

    • Like 2