Posts posted by myconid

  1. Also, would this update keep support for OpenGL 2.0?

    Yup. Also, llvmpipe has software emulation for up to OpenGL 2.1, though I can't imagine it being particularly fast.

    Maybe you could also start to offer optional features on newer OpenGL 3.0 cards (though I noticed in a recent commit that something requiring OpenGL 3.0 was adapted to be compatible with 2.1 as well).

    For OpenGL 3.0 we need SDL 2.0, though it hasn't been released yet. It's definitely something I'm very interested in, though.

  2. Just a thought, but perhaps you could consider splitting the shaders and the code in TerrainRenderer into fancywater and superfancywater, with separate methods and shaders for each. It might lead to some code duplication, but I think it's worth it, as it would make things much easier to read and debug.

  3. The player colours are drawn on the buildings/units, so that means the model textures are being loaded correctly. The problem is with the rendering. This looks suspicious:


    GL_MAX_TEXTURE_UNITS 3

    Historic, remember when we changed the fixed renderpath to draw LOS on models? That might have set the number of samplers above his hardware limit...

    Then again, this probably doesn't explain the crash, does it? :P

  4. If I'm reading this correctly, your issue is caused by the ntdll.dll library... Which is weird. I'm not even sure what that library is.

    ntdll.dll provides low-level OS functions. Our vertex arrays use memory-aligned blocks, so they might be calling stuff in there (most likely indirectly). Maybe you are passing a wrong parameter somewhere? Otherwise, this could be good old memory corruption: writing past the end of an array or dereferencing dead pointers, that sort of thing.

  5. I'm not sure what the conflict is, and I don't remember what your code does exactly, but if it contains a line similar to this:


    pglBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

    then you need to do this instead:


    // first save postproc fbo:
    GLint fbo;
    glGetIntegerv(GL_FRAMEBUFFER_BINDING_EXT, &fbo);

    // do stuff in another fbo
    pglBindFramebufferEXT(GL_FRAMEBUFFER_EXT, wraitiifbo);
    ...

    // rebind previous fbo
    pglBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);

  6. You don't need to dissect the attributes yourselves, as you can tell OpenGL to do it for you. The idea is to use the parameters of CShaderProgram::VertexAttribPointer in a CShaderProgram::TexCoordPointer call. You can do this straight from the InstancingModelRenderer::PrepareModelDef, with a fixed texture index (e.g. 2).

    You can view a patch with the text editor (where it'll look like instructions to add/remove lines in specific methods/files). You could also try to apply it (google for a tutorial), though since it's out of date you'll probably run into conflicts.

    Ignore the GPU skinning stuff, for now.

  7. While translating, we noticed some uniform variables (e.g. shadowScale, windData) that are used in the vertex shader, but we were not able to find where these variables are set (to get the index into program.local). So, where are the following variables set: cameraPos, sunDir, sunColor, losTransform, shadowTransform, instancingTransform, shadowScale, sim_time, windData?

    Nice work, great to see some progress!

    The values of some of those variables are set in renderer/RenderModifiers.cpp, while others are set dynamically through the materials system and bound in renderer/ModelRenderer.cpp. However, you don't need to worry about what sets those values, as the mapping from identifiers to indices is stored explicitly in each shader's xml, i.e. you should only need to change shaders/arb/model_common.xml.

    Btw, I don't know if there've been any changes to our shadow code recently, so make sure you aren't undoing any optimisations by modifying the "USE_SHADOW" stuff.

  8. The game reportedly works well with ARB on Intel 3000, which I believe is worse than what you have. Since we load 99% of our graphics data very early in the game, there would be errors from the start, so I don't think you're hitting a graphics memory limit (don't have the data to rule it out, though).

    This is a complete guess, but maybe for some reason your graphics driver is trying to run the game in software emulation mode? Just in case, you might want to update your graphics card driver. If possible, get the official Intel driver, even if there's an HP-branded driver for your laptop.

  9. 1. Do ARB shaders read parameters passed in from the C++ program in the same way that GLSL shaders do? Where in the C++ code are the parameters defined and passed into the shaders?

    Yes, both types of shaders use the same interfaces. In graphics/ShaderProgram.cpp, there's a parent CShaderProgram class and also derived classes for CShaderProgramGLSL and CShaderProgramARB (there's also some related stuff for the "FFP" fixed function pipeline, though you won't need to worry about those). The code to load/parse the shader xml files is in graphics/ShaderManager.cpp. The parameters are passed to the shaders from the renderers. I think the only renderer file you might need to modify is renderer/InstancingModelRenderer.cpp.

    Now, the challenge you'll need to overcome on the C++ side is that ARB shaders can't receive parameters through VertexAttribPointer. Instead, you'll need to pass them as texcoords or some other basic type. wraitii had a solution for this in his code which you may be able to use/improve, so I recommend checking what changes he made to ShaderProgram and InstancingModelRenderer.

    2. Is there a way to test if our translation compiles/works/runs?

    In your config file, set preferglsl=false, gentangents=true, materialmgr.quality=10. This will select the ARB shaders, generate tangents for instanced objects (necessary for normal and parallax mapping) and try to compile shaders for advanced effects.
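    In the user config file (local.cfg, assuming the usual key = value syntax), those settings would read:

```
preferglsl = false
gentangents = true
materialmgr.quality = 10
```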

    3. How do we submit our code to the development team for inspection after we are done?

    You can open a ticket on Trac and submit a patch there. If you want, you can also fork our GitHub repo and push your changes there as you work (this will make it easier for you to collaborate, and that's some Good Software Engineering right there ;) ).

  10. If you don't mind me saying, I'm a former veteran of 3D C&C modding, and even there 800-poly buildings were never popular :huh:

    The art design documents you're looking at seem to be way out of date. There are no limits on texture sizes (though power-of-two sizes are still recommended for backwards compatibility with really old hardware). Our building models are usually upwards of 5000 polys, somewhere near 10k on average I'd say, and I think our largest building is currently something like 100k polys. Our units are much simpler, but the reason for that is we want to max the number of units instead. I'm getting started on a LOD implementation, so these numbers won't even matter too much in the future (hopefully!).

    The problem is not with Collada, which works, but with Prometheus 3ds Max propping tools, which do not launch

    Unless I'm mistaken, those tools are obsolete. Prop points can be set in your 3D software as null objects (prefixed with "prop_" IIRC) and saved straight in the Collada files (our artists use Blender).

    That kinda sucks :\ The absence of rotating turrets alone makes a modern warfare game impossible. Browsing the renders I posted above, you can judge how many units have rotating weapons, i.e. almost all of them.

    FWIW, from a graphics/scenegraph perspective at least, our engine can do turrets. It's a matter of adding a simple armature to the turret (a single bone would probably suffice), and then rotation of the turret is a matter of controlling the animations applied to its armature. You'd need to add some code for that in the game's "Simulation" portions, where you're simulating the behaviour of your tank units. Maybe our simulation coders can tell you more about that.

    If programming is not an option for you, there are some other free and open-source engines out there that may be more suited to your needs. There's Spring (which is Total Annihilation-ish), and whatever engine Warzone 2100 is using. (If memory serves, their licenses are compatible with ours, so in theory you could use some of their tank-driving code in our engine...).

  11. Question: Would it be possible to make particles (smoke, etc., but not fire) cast shadows and be affected by sun/ambient color? I know this is a relatively new thing and I'm not sure if possible with OpenGL (I remember seeing screenshots of it for the latest version of DirectX).

    Affected by lighting: yes. Cast shadows: yes, maybe (the shadowmapping technique we use can't handle transparent objects - you're right that DirectX 11/OpenGL 4.3 have some tricks for that - though actually, newer APIs aren't a requirement, and there are techniques that we could try!).

  12. I see. The bit we care about is the "tex_count < 8" test. Without looking too deep into that code, I assume it's a hardcoded limit on the number of texcoord vectors (i.e. hardware-interpolated 4-float registers). That's probably 4*8 = 32 floats. It could well be the case that with all the effects enabled the shaders compile to more than 32 varyings (worth trying lower values for materialmgr.quality in your config, to see if that helps).

    Also, you mention that these errors are caused by preferglsl=true, not the postproc; note, however, that the postproc shaders always use GLSL (and we shouldn't discount that there may be something broken with the GLSL implementation you use). Let's see if other people experience similar issues...

    Anyway, I'm afraid I can't for the life of me guess what might be going wrong from your screenshot. Maybe it could help a bit if you described how things change as you change the postproc settings. :unknw:

  13. On Mesa, the R500 has complete OpenGL 2.1 support (the hardware doesn't support NPOT textures, but they are implemented in software; the only problem is a slowdown when they are used). It's likely a driver bug, however.

    IIRC, the 2.1 standard requires at least 32 register slots, and according to those errors your hardware doesn't have enough. The driver may provide 2.1 functionality, and maybe some of the higher-end models in the same GPU series have the hardware support for it. I guess that's what they mean by the "it's not a bug, do not report it" thing.

    Anyhow, if your driver doesn't report the number of slots, you can find it by sticking this somewhere in the engine's graphics code:


    GLint v;
    glGetIntegerv(GL_MAX_VARYING_FLOATS, &v);
    std::cout << "SLOTS: " << v << std::endl;

  14. The distance fog looks pretty nice. Can we get this implemented as the default for maps, please, and apply it to all current maps?

    That would be very easy to do, but I'm not sure it's the best solution here. Instead, what if we had a "presets" system? I think the AoE3 editor had something like that.

    Postproc effects are not working properly on my Radeon X1600 with Mesa drivers, see screenshots below. If I also enable preferglsl I get a similar screen, but also with these:

    Looks like the X1600 doesn't support OpenGL 2.1.

  15. Screenshot added in the first post. As zoot said, I also only get it when the wonder is on the screen.

    There's some unfinished/untested code being used in that wonder that could easily be the cause of this. Does this happen on an empty map with just the wonder, and if yes, which prop is causing the problem?
