Everything posted by vladislavbelov

  1. Implementing LOD on top of the current renderer architecture would be a bit hacky, so to go in the LOD direction we first need to improve the renderer, as @Yves mentioned some time ago.
  2. Geomipmapping or ROAM can't produce art by itself either, but it makes adding it easy. Tessellation isn't only for smoothing: it can also produce sharper geometry, or whatever the original model was (yes, it's hard to match the exact sharpness of geomipmapping, but it's possible in a geometry shader). It's expensive, yes, but it's possible to add some artistic control to tessellation too: just add a texture with a variable density (see the sketch below). More complicated things are possible in the geometry shader. So the GPU can do the same things, but at a different cost.
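A minimal sketch of what that density texture could look like at the tessellation control stage (GLSL 4.0; `densityMap`, `maxTessLevel` and the patch layout are my assumptions, not existing pyrogenesis code):

```glsl
#version 400 core
// Sketch: choose per-patch tessellation levels from an artist-painted
// density texture, so artists control where the geometry gets refined.
layout(vertices = 4) out;

uniform sampler2D densityMap; // hypothetical artist-authored density map
uniform float maxTessLevel;   // e.g. 32.0

in vec2 vUV[];
out vec2 tcUV[];

void main()
{
    tcUV[gl_InvocationID] = vUV[gl_InvocationID];
    gl_out[gl_InvocationID].gl_Position = gl_in[gl_InvocationID].gl_Position;

    if (gl_InvocationID == 0)
    {
        // Sample the density at the patch center; denser areas get more triangles.
        vec2 center = (vUV[0] + vUV[1] + vUV[2] + vUV[3]) * 0.25;
        float level = max(1.0, textureLod(densityMap, center, 0.0).r * maxTessLevel);

        gl_TessLevelInner[0] = level;
        gl_TessLevelInner[1] = level;
        gl_TessLevelOuter[0] = level;
        gl_TessLevelOuter[1] = level;
        gl_TessLevelOuter[2] = level;
        gl_TessLevelOuter[3] = level;
    }
}
```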
  3. Yeah, and it works well, but it costs bandwidth. Still, I like it: it's simple and pretty effective (like most tree-based optimizations). Just as an example of what's possible, we could pass only one quad and the heightmap texture, and the rest of the work would be done by the GPU at the tessellation step (I don't know how expensive it is, but on something like a GTX 1080 it should work fine); see the sketch below.
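Continuing the sketch above, the matching evaluation stage for the "one quad + heightmap" idea could displace every generated vertex like this (again, all names are illustrative assumptions):

```glsl
#version 400 core
// Sketch of the "one quad + heightmap" idea: the GPU subdivides a single
// patch and this stage displaces every generated vertex by the heightmap.
layout(quads, fractional_odd_spacing, ccw) in;

uniform sampler2D heightMap;
uniform mat4 viewProjection;
uniform float heightScale;

in vec2 tcUV[];
out vec2 teUV;

void main()
{
    // Bilinearly interpolate the corner attributes at the tessellated coordinate.
    vec2 uv0 = mix(tcUV[0], tcUV[1], gl_TessCoord.x);
    vec2 uv1 = mix(tcUV[3], tcUV[2], gl_TessCoord.x);
    teUV = mix(uv0, uv1, gl_TessCoord.y);

    vec4 p0 = mix(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_TessCoord.x);
    vec4 p1 = mix(gl_in[3].gl_Position, gl_in[2].gl_Position, gl_TessCoord.x);
    vec4 pos = mix(p0, p1, gl_TessCoord.y);

    // Displace along Y by the heightmap value.
    pos.y += textureLod(heightMap, teUV, 0.0).r * heightScale;
    gl_Position = viewProjection * pos;
}
```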
  4. Yeah, going to 4.0+ is too hasty for us, because we want to support average players too. I'm not sure it'd be useful for the terrain, but it's not required. For old cards - yes; for new cards - it works. At least some AAA games/engines support tessellation.
  5. Nobody was talking about that. I'm talking at a higher level, about the choice between multiple renderers (modern and legacy) and a single renderer (one that supports most popular video cards), i.e. about the renderer architecture in general. With a good architecture we could support multiple renderers very easily. But we don't have one (probably just not yet). But do we really need it? E.g. OpenGL 3.3 was released in 2010 (8 years ago) and is widely available these days, so why not drop OpenGL 2 and go to OpenGL 3.3 (why 3.3 and not 3.0: because it's the closest to 4.0 that old video cards still support)? There is a problem that we will always have modern things and legacy things side by side. Probably not as hacky, but still (e.g. GL3.3 vs GL4.0 for tessellation).
  6. So we need to test how it works and looks, and then discuss it with the artists. Adding support for a newer OpenGL version without dropping cards that aren't really old is a complicated task (btw, that's one of the reasons why we need actual feedback statistics). Also, we have GLES "support", so we probably need to account for it too.
  7. It's not necessary to use ray-* techniques for volumetrics, but other techniques like direct volume rendering or volumetric particles have their own cost. Btw, we use parallax mapping, and it's a kind of ray marching. Yeah, that's what I'm talking about.
  8. If we're using volumetric particles it should be ok. But for flat/smooth particles I'm not sure that number will be enough for all cases. There is also the question of how many different types of clouds we want to support.
  9. But how big will the attribute buffer be if they're not all moving in the same direction? Producing natural clouds requires a lot of particles. Volumetric clouds are a good approach, but the technique is too recent to run on older video cards. The simplest way is to use a flat height texture (e.g. an analytic modification of Perlin noise) with a parallax-like shader (see the sketch below). But it looks simplistic too.
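A minimal fragment-shader sketch of that idea, assuming a precomputed 2D noise texture and a handful of parallax-style samples (all names and constants are illustrative):

```glsl
#version 330 core
// Sketch: fake cloud thickness by marching a few steps through a 2D
// height/density texture (e.g. tiled Perlin noise) along the view direction.
uniform sampler2D cloudHeightMap; // precomputed noise, R = cloud density
uniform vec3 viewDirTS;           // view direction in the cloud plane's tangent space
uniform float parallaxScale;

in vec2 uv;
out vec4 fragColor;

void main()
{
    const int STEPS = 8;
    vec2 offset = viewDirTS.xy / max(viewDirTS.z, 0.1) * parallaxScale / float(STEPS);

    float density = 0.0;
    vec2 p = uv;
    for (int i = 0; i < STEPS; ++i)
    {
        density += texture(cloudHeightMap, p).r;
        p += offset;
    }
    density /= float(STEPS);

    // White clouds over a transparent sky; alpha from the accumulated density.
    fragColor = vec4(vec3(1.0), smoothstep(0.3, 0.8, density));
}
```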
  10. Yeah, in case we're rendering clouds and their shadows. But usually the player shouldn't see clouds in game, because they may block the view; they only matter at a far zoom. For the usual zoom we can do simpler and cheaper things.
  11. Where does the 800 value come from? The number of attributes may be limited by the VBO size, the texture size, or whatever you use for the instancing. E.g. you can fetch attributes from a texture in the fragment/vertex shader, so the maximum number of particles would be MAX_TEXTURE_SIZE / ATTR_STRUCT_SIZE (see the sketch below). I think that too. Also, we don't have to re-render the particles into the shadow map every frame if all clouds are generated procedurally and move in the same direction.
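A sketch of where that bound comes from: per-particle attributes packed into the texels of a float texture and fetched by `gl_InstanceID` in the vertex shader (the two-texel layout and all names are my assumptions):

```glsl
#version 330 core
// Sketch: per-particle attributes stored in an RGBA32F texture instead of a
// per-instance VBO. With ATTR_STRUCT_SIZE = 2 texels per particle, the
// particle count is bounded by the texture size divided by that struct size.
uniform sampler2D attributeTex; // texel 0 = position+size, texel 1 = velocity+spawn time
uniform mat4 viewProjection;
uniform float time;

layout(location = 0) in vec2 corner; // quad corner in [-0.5, 0.5]

out vec2 uv;

void main()
{
    int width = textureSize(attributeTex, 0).x;
    int base = gl_InstanceID * 2; // 2 texels per particle (ATTR_STRUCT_SIZE)
    ivec2 t0 = ivec2(base % width, base / width);
    ivec2 t1 = ivec2((base + 1) % width, (base + 1) / width);

    vec4 posSize = texelFetch(attributeTex, t0, 0); // xyz = spawn position, w = size
    vec4 velAge  = texelFetch(attributeTex, t1, 0); // xyz = velocity, w = spawn time

    // All clouds drift the same way, so the position is purely procedural.
    vec3 center = posSize.xyz + velAge.xyz * (time - velAge.w);
    gl_Position = viewProjection * vec4(center, 1.0);
    gl_Position.xy += corner * posSize.w; // crude screen-aligned quad expansion
    uv = corner + 0.5;
}
```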
  12. Why only 800? You can draw 1e3, 1e4, even 1e5 particles (if each is a simple quad), and it works fine. You don't have to cache the rendered sky; only prerender the atmospheric scattering of the sky and the subsurface scattering of the clouds, and then render them in real time.
  13. I think there should be a version/about menu item.
  14. It can be done in a few draw calls (1 for the sky, 1 for the particles), but yes, it costs noticeably more than the current solution.
  15. 2.6.0 is the version of TortoiseGit. What's the output of the "git --version" command?
  16. The latest version is 2.16.2 for Windows.
  17. It's a pretty new error as I remember (mid 2017). So you could try updating your Git. What's your current Git version (use the "git --version" command)?
  18. Ah, you meant cheats. The user was the server, so he controls his own state, but any player that tries to connect to him while his memory is modified will get an OOS error. Also, Cheat Engine is real overkill in this case, since a user can simply modify the JS files, particularly the simulation ones. So the game isn't actually hacked.
  19. Only if you can't use it. Most modern games use it (e.g. GTA V uses a 16-bit float per component, and it's a pretty fast game). No, we don't have "full" control in pyrogenesis yet, because in the first pass we render the scene into a simple RGBA buffer (8 bits per component) and the "HDR" shaders work in the next pass on already clamped values. So the precision is low. I like changes, but there is a problem: we have many maps, and artists/modders may want to use different settings. So it'd be ideal to make the shader fully customisable.
  20. You understood me incorrectly. I didn't say that my implementation is the correct Reinhard one, I only mentioned that Reinhard is simple and could easily be used here too. Let's return to HDR. I said that I prefer HDR > LDR over LDR > LDR for a pipeline with more than one pass, because losing the high color values stops artists/modders from changing exposure/gamma freely. Please look at the example I sent above: http://marcinignac.com/blog/pragmatic-pbr-hdr/305-exposure-basic/. With gamma disabled (whatever that means in this context) and some exposure values, all dark or light areas lose detail. So my point is to use float textures for g-buffer-like data when possible.
  21. Yes. And I think we're approaching this from different sides and don't understand each other. What I mean: render the scene into a float (e.g. RGB32F) buffer with the original brightness (vec3 color) > select a gamma (by a pregame setting or on the fly) (float gamma) > apply a simple tone mapping (like the Reinhard one): color = color / (color + vec3(1.0)); > apply gamma correction: color = pow(color, vec3(1.0 / gamma));. Put together, it could look like the sketch below.
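The same pipeline as a single post-process fragment shader (a sketch; `hdrScene` and the in/out names are my assumptions, the two formulas are from the post above):

```glsl
#version 330 core
// The pipeline described above in one fragment shader: read the HDR scene
// color from a float buffer, apply Reinhard tone mapping, then gamma-correct.
uniform sampler2D hdrScene; // RGB32F (or RGB16F) buffer with the original brightness
uniform float gamma;        // chosen in a pregame setting or on the fly

in vec2 uv;
out vec4 fragColor;

void main()
{
    vec3 color = texture(hdrScene, uv).rgb;

    // Simple Reinhard tone mapping: maps [0, inf) into [0, 1).
    color = color / (color + vec3(1.0));

    // Gamma correction.
    color = pow(color, vec3(1.0 / gamma));

    fragColor = vec4(color, 1.0);
}
```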
  22. That's not correct. Just do the simple math: if you have a very bright color (R:1e6, G:1e3, B:1e3), then with LDR it becomes (R:255, G:255, B:255), but with HDR it keeps a different distribution depending on the gamma. So with LDR only, you're losing the color itself. (Assuming you're using HDR/tone mapping as a separate step, and not doing everything (model lighting, HDR, color correction, etc.) together in one shader.) A worked version of this arithmetic is sketched below.
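A worked version of that arithmetic as a tiny shader, using a hypothetical exposure factor before the Reinhard step:

```glsl
#version 330 core
// Worked version of the example above (the exposure value is hypothetical).
out vec4 fragColor;

void main()
{
    // The bright color from the post. Clamped LDR: min(color, 1.0) * 255
    // = (255, 255, 255) -- pure white, the 1000:1 R:G ratio is gone.
    vec3 color = vec3(1e6, 1e3, 1e3);

    // HDR path: exposure, then Reinhard tone mapping.
    float exposure = 1e-5;               // chosen for illustration only
    color *= exposure;                   // (10.0, 0.01, 0.01)
    color = color / (color + vec3(1.0)); // ~(0.909, 0.0099, 0.0099)

    // Quantized to 8 bits: about (232, 3, 3) -- clearly red, not white.
    fragColor = vec4(color, 1.0);
}
```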
  23. I didn't mean the lighting from bloom, I mean the visual effect for players. HDR allows you to store the true brightness of each pixel, so at the HDR > LDR step you get more correct lighting.
  24. Bloom simulates very bright things, but you can still see all the details in the shadows. That's wrong and looks weird. Obviously a pixel color usually can't exceed the 255 limit, so HDR shouldn't just be rendered as is; it should be converted to LDR with color correction. FYI: http://marcinignac.com/blog/pragmatic-pbr-hdr/