wraitii
WFG Programming Team
Everything posted by wraitii

  1. Could you try first loading test.pmp and then adding the temple? I get the same crash, and I'm not sure what's causing it (I also get it in GLSL, so it should come from something else).
  2. Isn't it 7? Looking here, it appears to be, so it shouldn't crash.
  3. Mh. Then that shader shouldn't crash. What do you have for "MAX_TEXTURE_IMAGE_UNITS_ARB"? To debug, you might try changing that line to a lower number, such as "5". It will perhaps crash, but it should crash somewhere else.
4. Yeah, hadn't thought of that. Let's try it differently. Go to HWDetect.cpp, and change this:

         #define INTEGER(id) do { \
             GLint i = -1; \
             glGetIntegerv(GL_##id, &i); \
             if (ogl_SquelchError(GL_INVALID_ENUM)) \
                 scriptInterface.SetProperty(settings.get(), "GL_" #id, errstr); \
             else \
                 scriptInterface.SetProperty(settings.get(), "GL_" #id, i); \
         } while (false)

     to this (you'll also need <iostream> included at the top of the file):

         #define INTEGER(id) do { \
             GLint i = -1; \
             glGetIntegerv(GL_##id, &i); \
             if (ogl_SquelchError(GL_INVALID_ENUM)) \
                 scriptInterface.SetProperty(settings.get(), "GL_" #id, errstr); \
             else \
                 scriptInterface.SetProperty(settings.get(), "GL_" #id, i); \
             std::cout << #id << ":" << i << std::endl; \
         } while (false)

     Then when you start the game you'll get, in the console, MAX_TEXTURE_COORDS_ARB: ... (along with a lot of other info). I might be able to find a way to reduce the number of texture coords I use, but I'm afraid I won't be able to change that much.
5. That won't help much, actually... I need to know the value of GL_MAX_TEXTURE_COORDS for your graphics card (on mine it's eight, which is fairly standard, but if you have an Intel card it may be way lower). If you can compile 0 A.D., you can add something like:

         GLint maxCoords = 0;
         glGetIntegerv(GL_MAX_TEXTURE_COORDS, &maxCoords);
         std::cout << maxCoords << std::endl;

     I can also access it from some headers in my OpenGL framework, so you might be able to do a search for "GL_MAX_TEXTURE_COORDS" and get the info.
6. What's your graphics card? There might be a limitation on the number of texture coordinates it can pass... Otherwise I dunno what could cause this.
7. Well, then, I don't understand why it would be wrong (mathematically, I mean) to multiply the normals using:

         vec3 normal = (instancingTransform * vec4(a_normal, 0.0)).rgb;

     instead of:

         vec3 normal = mat3(instancingTransform) * a_normal;

     (which is what you used, with success). And while the second one doesn't work on my computer, the first does, for some reason, so I'd rather both were correct. (Same with the tangents, using:

         vec4 tangent = vec4((instancingTransform * vec4(a_tangent.xyz, 0.0)).rgb, a_tangent.w);

     .) I've pushed the ARB parallax shader (here), in a version that doesn't require the extension (I commented out that code). Could anybody report whether it works?
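     For reference, the two forms should be mathematically identical whenever instancingTransform is a standard affine model matrix (upper-left 3x3 plus a translation column): multiplying by a vector with w = 0.0 zeroes out the translation. A minimal sketch of why, under that assumption:

         // instancingTransform assumed affine:
         //   [ M  t ]   with M = upper-left 3x3, t = translation
         //   [ 0  1 ]
         // A direction with w = 0 never touches the translation column:
         //   instancingTransform * vec4(n, 0.0) == vec4(M * n, 0.0)
         vec3 viaMat4 = (instancingTransform * vec4(a_normal, 0.0)).xyz;
         vec3 viaMat3 = mat3(instancingTransform) * a_normal;  // same result
         // If these two differ on some driver, the mat3() construction
         // from a mat4 is the likely suspect.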
8. Well, no. I multiply the unmodified instancingMatrix by a_normal, and the normals that come out are right, and parallax works properly and everything. Didn't notice those operators, thanks.
9. Thing is, I'm pretty sure I can't even do this sort of small check without the extension (IF doesn't exist in basic ARB)... in which case, I might just as well use it. But then I need a way to check that the computer has the extension, or to deactivate parallax mapping (else it'd crash). By "correct", do you mean that the other parameters are still important, or not?
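     For the record, the standard workaround when there is no IF is to evaluate both branches and select between them arithmetically (ARB assembly does this with CMP/SGE-style instructions). A GLSL-flavoured sketch of the pattern, with made-up variable names (threshold, height, a, b):

         // Branch-free "if (height >= threshold) color = a; else color = b;"
         // Both branches are always evaluated, exactly like branchless ARB code.
         float useA = step(threshold, height);  // 1.0 when height >= threshold
         vec4 color = mix(b, a, useA);          // picks a when useA == 1.0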
10. Yeah, that's the most logical answer. BTW, using my loop and if implementations, I have managed to get parallax running with the ARB shader, using the same wrong math as in the GLSL shader. I'm not sure how to unroll the loop, however, if I want to avoid the NVIDIA-only extension. Edit: well, I think it's the same hack anyway... Let me get the maths straight for a second: InstancingTransform is a 4x4 matrix. The top-left 3x3 matrix is the rotation matrix, right? That's what we want to use to create the normal matrix, right? If I understood this all correctly, the normal matrix is the transpose of the inverse of InstancingTransform. However, since the 3x3 matrix is a rotation, it's orthogonal, and thus inverse = transpose. Thus, the normal matrix would be InstancingTransform itself, right? Or are the other parameters still important?
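     Spelling that reasoning out, assuming the upper 3x3 block really is a pure rotation (possibly with uniform scale):

         // Normal matrix N = transpose(inverse(M)), M = upper-left 3x3.
         // If M is a pure rotation R: inverse(R) == transpose(R) (orthogonal),
         // so N = transpose(transpose(R)) = R, the matrix itself.
         // If M = s * R (uniform scale s): N = (1/s) * R, i.e. the same
         // direction, so normalizing afterwards still gives the right normal.
         vec3 normal = normalize(mat3(instancingTransform) * a_normal);
         // Non-uniform scale or shear is the only case where this goes wrong.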
11. From a clean download of your modelMapping branch, I can confirm that parallax only works when using the fix I posted three or four posts up. I wonder whether it comes from my computer, or if I for some reason get a wrong matrix.
12. I should probably redownload your fork and start over, to be sure. That, or OS X's GLSL implementation is really messed up. Anyway, I should have just committed a fix for the GLSL version of the water shader.
13. I must stop editing, as you can't read everything I posted... The black lines actually come from the AO. I think the AO texture uses another UV set, and I'm not completely sure that works... Removing that and adapting the texture, I have everything working properly. If you try it and it works too, I think that'd be the best solution.
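     For context, this is roughly what a second UV set for AO looks like; the names here (a_uv1, v_uv1, aoTex) are illustrative, not necessarily the actual ones in model_common:

         // Baked AO usually lives on its own non-overlapping unwrap,
         // hence a second UV set next to the diffuse one.
         attribute vec2 a_uv0;   // diffuse UVs
         attribute vec2 a_uv1;   // AO UVs -- the set I suspect is broken
         varying vec2 v_uv0;
         varying vec2 v_uv1;
         // ...and in the fragment shader:
         //   float ao = texture2D(aoTex, v_uv1).r;
         //   color.rgb *= ao;   // a wrong v_uv1 shows up as dark streaks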
  14. Well, if it does, it's quite well camouflaged. It seems to me like it's working on any part of the terrain, on any rotation. It doesn't mess up buildings that use these textures. Shadows, specular, diffuse lighting and parallax are working (I'd upload a picture but my internet is messy right now). I have no idea why it works, but it works.
15. I dunno, it seems to work really well using this:

         vec3 normal = (instancingTransform * vec4(a_normal, 0.0)).rgb;
         //vec3 normal = nm * a_normal;
         #if (USE_NORMAL_MAP || USE_PARALLAX_MAP)
         vec4 tangent = vec4((instancingTransform * vec4(a_tangent.xyz, 0.0)).rgb, a_tangent.w);

     I do have weird black lines. Edit: lines caused by the AO, actually. The UV sets are not working as intended, I think.
16. Nah, it just looks wrong. Seriously wrong, but "crashes" was more of a metaphor than a literal statement. Though I do get segmentation faults when using the GLSL shaders and adding the temple to the map if I haven't previously loaded a map, which is kind of weird. I'll try manually multiplying the two. Interestingly, I also get the blinking effect if I set the normal to be any component of mat3(InstancingTransform). And it does not blink if I set my normal to be (instancingTransform * vec4(a_normal, 0.0)).rgb; though then the effect is completely messed up. I'll also try changing the tangents/everything. Edit: looks like that one works, except for a weird stretching effect over some textures... I'll look into it.
17. I've read about some "scaling" stuff... Is there no problem with that? I've also found out about gl_NormalMatrix, but it seems kind of unreliable (it makes the normals change when I rotate the camera; I have no idea whether this is normal or not). Edit: anyway, it still crashes with InstancingTransform, which is probably not normal.
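     (Side note on gl_NormalMatrix: it's the inverse transpose of the upper 3x3 of the modelview matrix, so it includes the view transform; the normals it produces are in eye space and are supposed to change as the camera rotates. That part is probably not a bug. A quick comparison, mixing the builtin and engine-style names just for illustration:)

         // Eye-space normal: rotates with the camera (expected behaviour).
         vec3 eyeNormal = gl_NormalMatrix * gl_Normal;
         // World-space normal: camera-independent.
         vec3 worldNormal = mat3(instancingTransform) * a_normal;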
18. Looks like OS X doesn't support GLSL versions higher than 1.2... I'd have to invert the matrix by hand, then.
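     Since inverse() only exists from GLSL 1.40 onwards, "by hand" would look something like this untested sketch (the classic adjugate-over-determinant formula):

         // Inverse of a 3x3, for GLSL 1.20 which lacks inverse().
         mat3 inverse3(mat3 m)
         {
             vec3 a = m[0], b = m[1], c = m[2];  // columns
             vec3 r0 = cross(b, c);
             vec3 r1 = cross(c, a);
             vec3 r2 = cross(a, b);
             float det = dot(a, r0);             // == determinant(m)
             // r0, r1, r2 are the rows of the inverse (before scaling)
             return mat3(r0.x, r1.x, r2.x,
                         r0.y, r1.y, r2.y,
                         r0.z, r1.z, r2.z) / det;
         }
         // Normal matrix for arbitrary (even non-uniform) scale:
         //   mat3 normalMat = transpose(inverse3(mat3(instancingTransform)));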
19. I'll try that. I have no idea what the instancing matrix is... In the ARB shader, the calculations work and the normals don't look completely wrong, but they might be. I also have the modelview matrix accessible; I'll try to compare the two.
  20. Yeah, I realized that later, but I posted too early in excitement. So yes, I believe my problem is with the instancing matrix. Only it seems to work for the position.
21. Thanks for answering. As you can see from my edit above, I've found why it didn't work... Edit: need to change v_tangent = tangent to v_tangent = a_tangent too. Edit n°2: though that may break other things... Edit n°3: so there actually is something wrong with the matrix calculation…
22. Complete edit: got it! Line 92 of model_common.vs: change v_normal = normal; to v_normal = a_normal;.
23. That's the basic idea. Going back to the original issue: I have a problem with GLSL on my GeForce GT 120. It also happens with regular SVN; model_common bugs. I've tracked it down to v_normal (and incidentally v_tangent/v_bitangent). This is sent to the fragment shader by the vertex shader. The problem can't be with InstancingTransform, since the position is OK and it uses that. And since v_normal only reads a_normal, the problem resides in a_normal.

     Now what is a_normal? It's defined as an attribute, asking for gl_Normal. Let's investigate Shader_manager.cpp. gl_Normal is there defined to have location 2, as documented by Nvidia; afaik, this is right. When creating the shader, Shader_manager.cpp will set this a_normal as an attribute in vertexAttribs["a_normal"] = 2 (I believe). Vertex attribs are then passed to the GLSL shader. This is where it starts to differ from the ARB shader (which, remember, works, so we know the problem is not in reading the files/passing the normals).

     Going into Shader_program.cpp, it calls CShaderProgramGLSL, which sets m_VertexAttribs as the vertex attributes. Lines of interest are 347/348, where pglBindAttribLocationARB is called; lines 489-505, where Bind and Unbind are defined; and lines 633-653, where VertexAttribPointer and VertexAttribIPointer are defined. Based on previous research, I also know that InstancingModelRenderer.cpp has something to do with this: when stream_normal is set (and it is), it uses NormalPointer (an overridden function that calls pglVertexAttribPointerARB) at position 2, the same as defined by Nvidia. According to glexts_funcs.h, this is equivalent to glVertexAttribPointer. No reason to believe this bugs, in particular because TexCoordPointer and VertexPointer seem to work just fine. There are other function calls that do other things; I'm not completely sure, as everything calls everything else. But I have no idea why this doesn't work for GLSL when it works for ARB.
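     One thing worth double-checking in that chain: glBindAttribLocation only takes effect when the program is (re)linked, so a binding done after glLinkProgram silently does nothing. On the shader side, the setup boils down to this (sketch):

         // a_normal is a generic attribute that the engine binds to
         // location 2 *before linking* -- the slot NVIDIA aliases to
         // gl_Normal. If location 2 isn't actually fed by the renderer,
         // v_normal receives garbage while positions still look fine.
         attribute vec3 a_normal;
         varying vec3 v_normal;
         // ...
         //   v_normal = a_normal;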
24. The basic solution would be to have bigger textures, and use bigger blending masks.
25. I may be wrong, but this looks like an actual vertex-displacement effect and not a texture simulation (which would explain the "backyard water" effect too, as simulating an ocean perfectly would require huge computing power).