Everything posted by myconid

  1. wraitii, it really looks like a compiler bug! Other people have reported similar problems. The suggested fix is to construct the matrix manually, i.e. mat3(matrix[0].xyz, matrix[1].xyz, matrix[2].xyz), where matrix would be the 4x4 instancing matrix.
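     A minimal sketch of what that workaround might look like in the vertex shader (assuming the 4x4 instancing matrix is called instancingTransform, as elsewhere in this thread; adjust the names to the actual shader):

         // Build the 3x3 rotation part column by column instead of relying on
         // the mat3(mat4) constructor, which some GLSL compilers miscompile.
         mat3 rotation = mat3(instancingTransform[0].xyz,
                              instancingTransform[1].xyz,
                              instancingTransform[2].xyz);
         vec3 normal = rotation * a_normal;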
  2. Are you running the latest drivers? If it's really a bug (and it's a pretty huge bug, if it is), maybe it's been fixed.
  3. Something like that may be possible, if we use the direction of the terrain normal. Don't know if it'll be any faster, but it may be worth experimenting with that. You could try skipping the mat3 function and just cutting out the 3x3 portion of the matrix manually. If that helps, this is definitely caused by a bug in your GLSL compiler. To me, this looks really nice:

         vec3 ww = mix(texture2D(normalMap, gl_TexCoord[0].st).xzy,
                       texture2D(normalMap, gl_TexCoord[0].st * 1.33).xzy,
                       0.5);
         n = normalize(ww - vec3(0.5, 0.5, 0.5));

     (based on the vanilla shader from master, though)
  4. Debugging is always useful. Clear your cache? Whoops, saw it^^ The problem must be on your end. Make sure your local copy is in sync. You did recompile, right?
  5. Nope, it can be done straight from the water shader. Just access the water texture twice, multiplying the UVs by the scaling factor the second time, and then add or multiply the two values together before doing anything else. Or "mix" them: mix(col1, col2, amount), where amount = 0.5 mixes them equally.
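     For example, a minimal sketch of the idea (the texture and UV names here are placeholders, not necessarily what the water shader actually uses):

         vec4 col1 = texture2D(waterTex, uv);        // first layer, normal scale
         vec4 col2 = texture2D(waterTex, uv * 1.3);  // second layer, scaled UVs
         vec4 colour = mix(col1, col2, 0.5);         // blend the two layers equally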
  6. Ok. Best way to do it is to overlay two different layers of the water texture, one with a slightly scaled UV mapping (e.g. 1.3x the normal one). Should be pretty easy, if you'd like to give it a try. Not sure how that could work...
  7. Nice, fixed here too. The model's much too bright, though! Something must be wrong with your normal mapping.
  8. Been AFK, just compiled. I also got a segfault when loading the temple model. Fixed by setting gentangents to true. Now it almost works, though things look a bit slanted to the side:
  9. Oh, you mean the rest of the 4x4 instancingMatrix. No, those aren't needed.
  10. You can. CMP, SLT, SGE are comparison instructions. Look at the reference. Not sure which other parameters you are referring to. You transformed the normal in the vertex shader and then drew that exact result in the fragment shader and it looks wrong, yes?
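      For reference, a quick way to "draw" a normal for debugging is to output it as a colour from the fragment shader (just an illustration, with a hypothetical varying name):

          // Map the normal's [-1, 1] range into the [0, 1] colour range so it's visible.
          gl_FragColor = vec4(normalize(v_normal) * 0.5 + 0.5, 1.0);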
  11. The loop always executes a fixed number of times. After the loop "exits", it continues to get executed, but the calculations' results are discarded. For example:

          float h = texture2D(normTex, coord).a;
          float sign = v_tangent.w;
          mat3 tbn = mat3(v_tangent.xyz, v_bitangent * sign, v_normal);
          vec3 eyeDir = normalize(v_eyeVec * tbn);
          float dist = length(v_eyeVec);

          float s;
          vec2 move;
          vec2 nil = vec2(0.0);
          float height = 1.0;
          float scale = 0.0075;

          s = 1.0 / 8.0;
          move = vec2(-eyeDir.x, eyeDir.y) * scale / (eyeDir.z * 8.0);

          {
              vec2 temp = (h < height) ? move : nil;
              height -= s;
              coord += temp;
              h = texture2D(normTex, coord).a;
          }
          ..... x8

      Yup, that's correct.
  12. My code didn't touch that matrix, so I'd say this is a pre-existing bug (if it's even a bug). We know the transform matrix works correctly for the vertices. We also know that the normals, if left untransformed, are accessed correctly. So I can only conclude that unless your computer can't multiply, your problem must be elsewhere.
  13. It doesn't work for me, I'm afraid. Which is actually a good thing, because it's mathematically incorrect. I'm starting to get the impression that there's something broken with your GLSL setup, not with the code. If one thing is failing, it's probably the engine's fault, but if everything is failing...
  14. If that's the case, see if doing the same thing for the tangents fixes the black lines.
  15. If you move the building to a different part of the terrain, doesn't it mess up the lighting?
  16. Replace model_common.xml with this, it might help:

          <?xml version="1.0" encoding="utf-8"?>
          <program type="glsl">
            <vertex file="glsl/model_common.vs">
              <stream name="pos"/>
              <stream name="normal"/>
              <stream name="uv0"/>
              <stream name="uv1" if="USE_AO"/>
              <attrib name="a_vertex" semantics="gl_Vertex"/>
              <attrib name="a_normal" semantics="gl_Normal"/>
              <attrib name="a_uv0" semantics="gl_MultiTexCoord0"/>
              <attrib name="a_uv1" semantics="gl_MultiTexCoord1" if="USE_AO"/>
              <attrib name="a_skinJoints" semantics="CustomAttribute0" if="USE_GPU_SKINNING"/>
              <attrib name="a_skinWeights" semantics="CustomAttribute1" if="USE_GPU_SKINNING"/>
              <attrib name="a_tangent" semantics="CustomAttribute2" if="USE_INSTANCING"/>
            </vertex>
            <fragment file="glsl/model_common.fs"/>
          </program>

      That won't work. You're translating the normals, which makes no sense. And of course it's blinking when you set the normal to a random value. I'm starting to think the problem is on the CPU side, not in the shader. Maybe the matrix that gets passed in contains invalid values (e.g. infinity) or something like that.
  17. There's no scaling, I believe. I think gl_NormalMatrix isn't used (and it's deprecated), as the engine is passing the transforms in manually. What? It crashes? I thought it just looked wrong.
  18. So I've been looking at the math... The top 3x3 portion of the instance matrix is the rotation bit (we don't care about translation for normals), which makes it orthonormal by default. The inverse of an orthonormal matrix is equal to its transpose. The transpose of the transpose is equal to the original matrix. Thus, what I suggested above does nothing at all. Sorry.
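      In symbols (with R being that orthonormal 3x3 rotation block):

          R^{-1} = R^{\top} \;\Rightarrow\; (R^{-1})^{\top} = (R^{\top})^{\top} = R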
  19. Matrix inverse is a really slow operation that should be done on the CPU side (and I think a function to invert a matrix should already exist somewhere). If you just want it for testing, then look online for some code. No point in writing your own, as we really shouldn't be doing an inverse in the shader.
  20. Ok, I've tried it and "it works". By "it works" I mean it looks like it always does, for me.

          #if USE_INSTANCING
            vec4 position = instancingTransform * vec4(a_vertex, 1.0);

            //vec3 normal = mat3(instancingTransform) * a_normal;   <-- replacing this
            mat3 nm = mat3(transpose(inverse(instancingTransform)));
            vec3 normal = nm * a_normal;

            #if (USE_NORMAL_MAP || USE_PARALLAX_MAP)
              vec4 tangent = vec4(nm * a_tangent.xyz, a_tangent.w);
            #endif
          #else

      Tangents etc. should also use this transform, btw.
  21. Now that I think about it, why are we multiplying a normal by the instance matrix, which I assume contains the modelview matrix? Surely we'd need a normal matrix for that? You could try calculating a normal matrix from the instance matrix and checking if that works for you. It should be the transpose of the inverse of the instance matrix. You may need to change the GLSL version to 140 (temporarily) to check if this helps.
  22. Not so sure about that. a_normal is in object space, so the lighting (and other things) won't work. Does this mean your problem is with the instancing transform matrix?
  23. You have two vectors, the normal and the tangent, which are at 90 degrees. You want to use those vectors to create a coordinate space (tangent space), and to do that you need a third vector, the bitangent, which is orthogonal to both. But in which direction does the bitangent go? That's what sign determines. Eyevec should be unnormalised; it's the difference between the camera and the vertex. Eyedir is normalised, and it's the direction from the camera to the vertex. I can get a screenshot, but of what exactly? The transpose can be gotten rid of like I said earlier: transpose(M) * V = V * M. Length is just a measure of distance (i.e. unnormalised) from the vertex to the camera (and it's actually a bit redundant, but nevermind). You shouldn't need/use that if you are unrolling the loop!
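      As a rough illustration of the bitangent and the sign value (this shows one common way to derive the bitangent with a cross product; the actual shader may pass it in as a separate varying instead):

          // The w component of the tangent ("sign") selects between the two possible
          // bitangent directions, so the basis has a consistent handedness.
          vec3 bitangent = cross(v_normal, v_tangent.xyz) * v_tangent.w;
          mat3 tbn = mat3(v_tangent.xyz, bitangent, v_normal);

          // Multiplying the vector on the left of the matrix is the same as
          // multiplying by the transpose, i.e. transpose(tbn) * v_eyeVec == v_eyeVec * tbn.
          vec3 eyeDir = normalize(v_eyeVec * tbn);   // eye direction in tangent space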