wraitii

WFG Programming Team
Everything posted by wraitii

  1. That won't help much actually... I need to know the value of GL_MAX_TEXTURE_COORDS for your graphics card (on mine it's eight, which is fairly standard, but if you have an Intel card it may be way lower). If you can compile 0 A.D., you can add GLint n; glGetIntegerv(GL_MAX_TEXTURE_COORDS, &n); std::cout << n << std::endl; I can also access it from some headers in my OpenGL framework, so you might be able to do a search for 'GL_MAX_TEXTURE_COORDS' and get the info.
  2. What's your graphics card? There might be a limitation on the number of texture coordinates that it can pass... Otherwise I dunno what could cause this.
  3. Well, then, I don't understand why it would be wrong (mathematically, I mean) to multiply the normals using vec3 normal = ((instancingTransform) * vec4(a_normal, 0.0)).rgb; instead of vec3 normal = mat3(instancingTransform) * a_normal; (which is what you successfully used). And while the second one doesn't work on my computer, the first does, for some reason, so I'd rather both were correct. (Same with the tangents: vec4 tangent = vec4(((instancingTransform) * vec4(a_tangent.xyz, 0.0)).rgb, a_tangent.w);). I've pushed the ARB parallax shader (here), in a version that doesn't require the extension (I commented out that code). Could anybody report whether it works?
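The two GLSL expressions in the post above are, for the top-left 3x3 block, the same arithmetic: multiplying by vec4(n, 0.0) makes the w component zero, so the translation column of the 4x4 matrix never contributes. A minimal standalone C++ sketch (helper names are mine; column-major storage, as in GLSL):

```cpp
#include <cassert>
#include <cmath>

// Column-major 4x4, as GLSL stores matrices: m[column][row].

// Equivalent of (M * vec4(n, 0.0)).xyz — with w = 0, the translation
// column m[3] drops out of the sum entirely.
void mulVec4W0(const float m[4][4], const float n[3], float out[3]) {
    for (int row = 0; row < 3; ++row)
        out[row] = m[0][row] * n[0] + m[1][row] * n[1] + m[2][row] * n[2];
}

// Equivalent of mat3(M) * n — take the top-left 3x3 block and multiply.
void mulMat3(const float m[4][4], const float n[3], float out[3]) {
    for (int row = 0; row < 3; ++row)
        out[row] = m[0][row] * n[0] + m[1][row] * n[1] + m[2][row] * n[2];
}
```

The two loop bodies are literally identical, so on a conforming GLSL implementation both expressions must agree; when one works and the other doesn't, the driver rather than the maths is the suspect.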
  4. Well, no. I multiply the unmodified instancingMatrix with a_normal, and the normals that come out are right, and parallax works properly and everything. Didn't notice those operators, thanks.
  5. Thing is, I'm pretty sure I can't even do this sort of small check without the extension (IF doesn't exist in basic ARB)... in which case, I might just as well use it. But then I need a way to check that the computer has the extension, or to deactivate parallax mapping (else it'd crash). By "correct", do you mean that the other parameters are still important, or not?
  6. Yeah, that's the most logical answer. BTW, using my loop and if implementations, I have managed to get parallax running with the ARB shader, using the same (wrong) maths as in the GLSL shader. I'm not sure how to unroll the loop, however, if I want to avoid the Nvidia-only extension. Edit: well, I think it's the same hack anyway... Lemme get the maths straight for a second: InstancingTransform is a 4x4 matrix. The top-left 3x3 matrix is the rotation matrix, right? That's what we want to use to create the normal matrix, right? If I understood this all correctly, the normal matrix is the transpose of the inverse of InstancingTransform. However, since the 3x3 matrix is a rotation, it's orthogonal, and thus its inverse equals its transpose. Thus, the normal matrix would be InstancingTransform itself, right? Or are the other parameters still important?
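The reasoning above can be checked numerically: a matrix R is orthogonal exactly when R times its transpose is the identity, in which case inverse(R) = transpose(R) and the normal matrix transpose(inverse(R)) collapses back to R. A small self-contained C++ check (helper names are mine):

```cpp
#include <cassert>
#include <cmath>

// out = a * b for 3x3 row-major matrices.
void mul3(const float a[3][3], const float b[3][3], float out[3][3]) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j) {
            out[i][j] = 0.0f;
            for (int k = 0; k < 3; ++k)
                out[i][j] += a[i][k] * b[k][j];
        }
}

// True if m is within eps of the identity matrix — i.e. the product
// it was built from cancelled out, proving orthogonality.
bool isIdentity3(const float m[3][3], float eps) {
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            if (std::fabs(m[i][j] - (i == j ? 1.0f : 0.0f)) > eps)
                return false;
    return true;
}
```

Feeding mul3 a pure rotation and its transpose yields the identity, so the shortcut holds; fold a non-uniform scale into the matrix and it no longer does, which is exactly why "the other parameters" (scaling) would matter.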
  7. From a clean download of your modelMapping branch, I can confirm that parallax only works when using the fix I put 3/4 posts higher. I wonder if it comes from my computer or if I for some reason get a wrong matrix.
  8. I should probably redownload your fork and start over, to be sure. That, or the OS X GLSL implementation is really messed up. Anyway, I should have just committed a fix for the GLSL version of the water shader.
  9. I must stop editing, as you can't read everything I posted... The black lines actually come from the AO. I think the AO texture uses another UV set, and I'm not completely sure it works... Removing that and adapting the texture, I have everything working properly. If you try it and it works too, I think it'd be the best solution.
  10. Well, if it does, it's quite well camouflaged. It seems to me like it's working on any part of the terrain, on any rotation. It doesn't mess up buildings that use these textures. Shadows, specular, diffuse lighting and parallax are working (I'd upload a picture but my internet is messy right now). I have no idea why it works, but it works.
  11. I dunno, it seems to work really well using this: vec3 normal = ((instancingTransform) * vec4(a_normal, 0.0)).rgb; //vec3 normal = nm * a_normal; #if (USE_NORMAL_MAP || USE_PARALLAX_MAP) vec4 tangent = vec4(((instancingTransform) * vec4(a_tangent.xyz, 0.0)).rgb, a_tangent.w); I do get weird black lines, though. Edit: the lines are caused by the AO, actually. The UV sets are not working as intended, I think.
  12. Nah, it just looks wrong. Seriously wrong, but "crashes" was more of a metaphor than a literal statement. Though I do get segmentation faults by using the GLSL shaders and adding the temple to the map if I haven't previously loaded a map, which is kind of weird. I'll try manually multiplying the two. Interestingly, I also get the blinking effect if I set the normal to be any component of mat3(InstancingTransform). And it does not blink if I set my normal to (instancingTransform * vec4(a_normal, 0.0)).rgb; though the effect is completely messed up. I'll also try changing the tangents/everything. Edit: looks like that one works, except for a weird stretching effect over some textures... I'll look into it.
  13. I've read about some "scaling" stuff... Is there no problem with that? I've also found out about gl_NormalMatrix, but it seems kind of unreliable (it makes the normals change when I rotate the camera; I have no idea whether this is normal or not). Edit: anyway, it still crashes with InstancingTransform, which is probably not normal.
  14. Looks like OS X doesn't support GLSL versions higher than 1.20... I'd have to invert the matrix by hand, then.
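For reference, GLSL only gained a built-in inverse() in version 1.40, so under a 1.20 cap the inverse really does have to be written out by hand. Here is the classic cofactor/adjugate formula for a 3x3, as a standalone C++ sketch that transliterates directly into a GLSL 1.20 function (the function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Invert a row-major 3x3 matrix via the adjugate divided by the
// determinant. Returns false for a (near-)singular matrix.
bool inverse3(const float m[3][3], float out[3][3]) {
    float det = m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    if (std::fabs(det) < 1e-12f)
        return false;
    float inv = 1.0f / det;
    // Each entry is the transposed cofactor, scaled by 1/det.
    out[0][0] =  (m[1][1] * m[2][2] - m[1][2] * m[2][1]) * inv;
    out[0][1] = -(m[0][1] * m[2][2] - m[0][2] * m[2][1]) * inv;
    out[0][2] =  (m[0][1] * m[1][2] - m[0][2] * m[1][1]) * inv;
    out[1][0] = -(m[1][0] * m[2][2] - m[1][2] * m[2][0]) * inv;
    out[1][1] =  (m[0][0] * m[2][2] - m[0][2] * m[2][0]) * inv;
    out[1][2] = -(m[0][0] * m[1][2] - m[0][2] * m[1][0]) * inv;
    out[2][0] =  (m[1][0] * m[2][1] - m[1][1] * m[2][0]) * inv;
    out[2][1] = -(m[0][0] * m[2][1] - m[0][1] * m[2][0]) * inv;
    out[2][2] =  (m[0][0] * m[1][1] - m[0][1] * m[1][0]) * inv;
    return true;
}
```

In a shader the singularity branch would normally be dropped (a model matrix should never be singular), leaving nine multiplies-and-adds plus one division.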
  15. I'll try that. I have no idea what the instancing matrix is... In the ARB shader, the calculations work and the normals don't look completely wrong, but they might be. I also have the modelview matrix accessible; I'll try and compare the two.
  16. Yeah, I realized that later, but I posted too early in excitement. So yes, I believe my problem is with the instancing matrix. Only it seems to work for the position.
  17. Thanks for answering. As you can see from my edit above, I've figured out why it didn't work... Edit: need to change v_tangent = tangent to v_tangent = a_tangent too. Edit n°2: though that may break other things... Edit n°3: so there actually is something wrong with the matrix calculation…
  18. Complete edit: got it! Line 92 of model_common.vs: change v_normal = normal; to v_normal = a_normal;.
  19. That's the basic idea. Going back to the original issue: I have a problem with GLSL on my GeForce GT 120. It also happens with regular SVN: model_common bugs out. I've tracked it down to v_normal (and incidentally v_tangent/v_bitangent). This is sent to the fragment shader by the vertex shader. Here, the problem can't be with InstancingTransform, as the position is OK and it uses that. Since v_normal only copies a_normal, the problem resides in a_normal. Now what is a_normal? It's defined as an attribute, asking for gl_Normal. Let's investigate Shader_manager.cpp. gl_Normal is there defined to have location "2", as documented by Nvidia. AFAIK, this is right. Now, when creating the shader, Shader_manager.cpp will set this "a_normal" as an attribute via vertexAttribs["a_normal"] = 2 (I believe). Vertex attribs are then passed to the GLSL shader. This is where it starts to differ from the ARB shader (which, remember, works, so we know the problem is not in reading the files/passing the normals). Going into Shader_program.cpp, it calls CShaderProgramGLSL. This sets "m_VertexAttribs" as the vertex attributes. Lines of interest are 347/348, where "pglBindAttribLocationARB" is called; lines 489-505, where "Bind" and "Unbind" are defined; and lines 633-653, where VertexAttribPointer and VertexAttribIPointer are defined. Based on previous research, I also know that InstancingModelRenderer.cpp has something to do with this. When "stream_normal" is set (and it is), it will use NormalPointer (an overridden function that calls pglVertexAttribPointerARB) with position 2 (the same as defined by Nvidia). According to "glexts_funcs.h", this is equivalent to "glVertexAttribPointer". No reason to believe this bugs, in particular because TexCoordPointer and VertexPointer seem to work just fine. There are other function calls that do other things; I'm not completely sure, as everything calls everything else.
But I have no idea why this doesn't work for GLSL when it works for ARB.
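The invariant being chased in the post above is that the index handed to glBindAttribLocationARB before linking must be the very same index later passed to glVertexAttribPointerARB for that stream. Stripped of the GL calls, the bookkeeping reduces to a name-to-location map consulted on both sides; a toy C++ model of it (all names and the exact location values are illustrative, mirroring the Nvidia aliasing convention where gl_Normal is index 2):

```cpp
#include <cassert>
#include <map>
#include <string>

// Mimics the vertexAttribs map built at shader-creation time, where
// a_normal is hard-wired to location 2 as the Nvidia docs describe.
std::map<std::string, int> makeVertexAttribs() {
    return { {"a_vertex", 0}, {"a_normal", 2}, {"a_texcoord0", 8} };
}

// Stands in for the renderer side: the index NormalPointer would pass
// to glVertexAttribPointerARB. Returns -1 for an unknown attribute —
// the kind of mismatch that silently breaks one stream while position
// and texcoords keep working.
int locationFor(const std::map<std::string, int>& attribs,
                const std::string& name) {
    auto it = attribs.find(name);
    return it == attribs.end() ? -1 : it->second;
}
```

When the two sides disagree (or one side falls back to a default), the symptom is exactly what the post describes: the position stream works, but the normal stream receives garbage.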
  20. Basic solution would be to have bigger textures, and use bigger blending masks.
  21. I may be wrong, but this looks like an actual vertex displacement effect and not a texture simulation. (which would explain the "backyard water" effect too, as this would require huge computing power to simulate an ocean perfectly.)
  22. It's probably easier to add a transparent "wave" texture like most games probably do.
  23. I dunno, boat waves are fairly small things that would require precision. Perhaps we could trick it by using a 1024x1024 texture to say "where" we want the waves to be, and then have some smaller textures to actually represent the waves.
  24. Wouldn't that require a huge texture, though? The normals are basically a repetition of a small one, but using this efficiently would change that, or am I mistaken here?
  25. I don't think the game has enough vertices to support that on a per-vertex basis. It could be done with fairly big waves, but then we would not require that kind of precision (I actually gave that a small shot 3 days ago; you can achieve a nice effect for a bargain cost right now). No idea how it would work as a fragment program, but I think it would require a (way too big) texture, unless I'm mistaken (how do shadows work?). For boats and waves, the easiest system right now (with water as a flat plane) is probably to use a stencil over the water and bump mapping/parallax on it to achieve the effect of waves. Then again, I say "easiest", but it's also quite some work. @zoot: pulled your fix, thanks! And I'm glad to know it's working.
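The "fairly big waves for a bargain cost" idea usually boils down to a sum of a few sines evaluated from world position and time: cheap per vertex, and the displacement is bounded by the summed amplitudes. A minimal C++ sketch of such a height function (all constants are illustrative, not values from the game):

```cpp
#include <cassert>
#include <cmath>

// Sum-of-sines water height at world position (x, z) and time t.
// Each wave has an amplitude, a direction (dx, dz), a spatial
// frequency, and a speed; two waves are enough to break up the
// obvious repetition of a single sine.
float waveHeight(float x, float z, float t) {
    struct Wave { float amp, dx, dz, freq, speed; };
    const Wave waves[2] = {
        { 0.30f, 1.0f, 0.0f, 0.8f, 1.2f },
        { 0.15f, 0.6f, 0.8f, 2.1f, 0.7f },
    };
    float h = 0.0f;
    for (const Wave& w : waves)
        h += w.amp * std::sin((w.dx * x + w.dz * z) * w.freq + t * w.speed);
    return h; // bounded by the amplitude sum, here ±0.45
}
```

In a vertex shader this would displace the water plane's y coordinate; the same formula's analytic derivative can also supply the normal, so no extra texture is needed, which is why the per-vertex route is so cheap when the waves are big enough to be resolved by the mesh.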