Post-processing effects test (SSAO/HDR/Bloom)



Thing is, I'm pretty sure I can't even do this sort of small check without the extension (IF doesn't exist in basic ARB)... In which case, I might just as well use it. But then I need a way to check that the computer has the extension, or deactivate parallax mapping (otherwise it would crash).
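For the extension check, a minimal sketch at the GL level, assuming a GL 2.x-era context where the extension-string approach applies (the extension name in the usage comment is a placeholder, not the actual one needed):

#include <cstring>
#include <GL/gl.h>

// true if the driver advertises the named extension
// (GL 2.x style: GL_EXTENSIONS is one space-separated string;
// note a plain substring test can false-positive on prefixes)
bool HasExtension(const char* name)
{
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != nullptr && std::strstr(exts, name) != nullptr;
}

// e.g.: enableParallax = HasExtension("GL_NV_fragment_program2"); // placeholder name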

By "correct", do you mean that the other parameters are still important, or not?


Thing is, I'm pretty sure I can't even do this sort of small check without the extension (IF doesn't exist in basic ARB)... In which case, I might just as well use it. But then I need a way to check that the computer has the extension, or deactivate parallax mapping (otherwise it would crash).

You can. CMP, SLT, and SGE are comparison instructions; have a look at the reference.
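For illustration, a minimal ARB fragment program sketch of a branchless "if" (the colors and threshold are made up for the example): CMP selects per component based on the sign of its first operand.

!!ARBfp1.0
# CMP dst, a, b, c computes dst = (a < 0) ? b : c per component,
# so a select works without any branching instruction
PARAM colorA = { 1.0, 0.0, 0.0, 1.0 };
PARAM colorB = { 0.0, 0.0, 1.0, 1.0 };
PARAM half = { 0.5, 0.5, 0.5, 0.5 };
TEMP diff;
SUB diff, fragment.texcoord[0], half;       # diff.x < 0 iff texcoord.x < 0.5
CMP result.color, diff.x, colorA, colorB;   # colorA below the threshold, colorB above
END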

By "correct", do you mean that the other parameters are still important, or not?

Not sure which other parameters you are referring to. You transformed the normal in the vertex shader, then drew that exact result in the fragment shader, and it looks wrong, yes?


Well, then, I don't understand why it would be wrong (mathematically, I mean) to multiply the normals using:

vec3 normal = (instancingTransform * vec4(a_normal, 0.0)).rgb;

instead of

vec3 normal = mat3(instancingTransform) * a_normal; (which is what you used, successfully)

And while the second one doesn't work on my computer, the first one does, for some reason, so I'd rather both be correct.

(Same with the tangents, using: vec4 tangent = vec4((instancingTransform * vec4(a_tangent.xyz, 0.0)).rgb, a_tangent.w); )
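For reference, the two forms compute exactly the same vector: the 0.0 in the w component zeroes out the translation column, leaving just the upper-left 3x3 of the matrix times the normal, which is what mat3() extracts. A minimal sketch:

// both lines compute the upper-left 3x3 of instancingTransform times a_normal
vec3 n1 = mat3(instancingTransform) * a_normal;              // needs the mat3(mat4) constructor, GLSL 1.20+
vec3 n2 = (instancingTransform * vec4(a_normal, 0.0)).xyz;   // also fine in GLSL 1.10
// caveat for both: this is only correct for normals if the matrix
// has no non-uniform scaling; otherwise the inverse-transpose is needed

That the first form fails on some machines is plausibly a version issue: matrix-from-matrix constructors such as mat3(mat4) only arrived in GLSL 1.20, so a driver exposing only GLSL 1.10 would reject that line.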

I've pushed the ARB parallax shader (here), in a version that doesn't require the extension (I've commented out that code). Could anybody report whether it works?

Edited by wraitii

I've pushed the ARB parallax shader (here), in a version that doesn't require the extension (I've commented out that code). Could anybody report whether it works?

Which model should I use? I tried with structures/romans/temple_mars2 and got an error message, but now I can't reproduce it.

Does anyone know how I can reset the cache thingy used by the game?


Thanks.

This is the error I get:

ERROR: Failed to compile vertex program 'shaders/arb/model_common.vp' (line 45): line 35, char 39: error: invalid texture coordinate unit selector
ERROR: Failed to compile vertex program 'shaders/arb/solid.vp' (line 1):
ERROR: CRenderer::EndFrame: GL errors occurred

Edited by zoot

That won't help much, actually... I need to know the value of "GL_MAX_TEXTURE_COORDS" for your graphics card (on mine it's eight, which is fairly standard, but if you have an Intel card, it may be way lower). If you can compile 0 A.D., you can add a std::cout << glGet(GL_MAX_TEXTURE_COORDS) << std::endl; line.

I can also access it from some headers in my OpenGL framework, so you might be able to do a search for 'GL_MAX_TEXTURE_COORDS' and get the info.


That won't help much, actually... I need to know the value of "GL_MAX_TEXTURE_COORDS" for your graphics card (on mine it's eight, which is fairly standard, but if you have an Intel card, it may be way lower). If you can compile 0 A.D., you can add a std::cout << glGet(GL_MAX_TEXTURE_COORDS) << std::endl; line.

I can also access it from some headers in my OpenGL framework, so you might be able to do a search for 'GL_MAX_TEXTURE_COORDS' and get the info.

Where exactly would I add that? I tried in main():

../../../source/main.cpp: In function ‘int main(int, char**)’:
../../../source/main.cpp:554:41: error: ‘glGet’ was not declared in this scope
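For what it's worth, that error is expected: glGet on its own is spec shorthand rather than a real symbol; the typed entry point is glGetIntegerv, and it needs the GL headers in scope and a current GL context. A minimal sketch (the 0 A.D. header path is an assumption):

#include <iostream>
#include "lib/ogl.h"  // assumption: the header 0 A.D. uses for GL entry points

// query the limit directly; must run after the GL context is created
void PrintMaxTexCoords()
{
    GLint maxTexCoords = -1;
    glGetIntegerv(GL_MAX_TEXTURE_COORDS_ARB, &maxTexCoords);
    std::cout << "GL_MAX_TEXTURE_COORDS_ARB: " << maxTexCoords << std::endl;
}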


Yeah, hadn't thought of that. Let's roll differently. Go to HWDetect.cpp, and change this

#define INTEGER(id) do { \
    GLint i = -1; \
    glGetIntegerv(GL_##id, &i); \
    if (ogl_SquelchError(GL_INVALID_ENUM)) \
        scriptInterface.SetProperty(settings.get(), "GL_" #id, errstr); \
    else \
        scriptInterface.SetProperty(settings.get(), "GL_" #id, i); \
} while (false)

to this

#define INTEGER(id) do { \
    GLint i = -1; \
    glGetIntegerv(GL_##id, &i); \
    if (ogl_SquelchError(GL_INVALID_ENUM)) \
        scriptInterface.SetProperty(settings.get(), "GL_" #id, errstr); \
    else \
        scriptInterface.SetProperty(settings.get(), "GL_" #id, i); \
    std::cout << #id << ":" << i << std::endl; \
} while (false)

Then when you start the game you'll get MAX_TEXTURE_COORDS_ARB: ... in the console (along with a lot of other info).

I might be able to pack data together to reduce the number of texture coords I use, but I'm afraid I won't be able to change that much.

Edited by wraitii

Mh. Then that shader shouldn't crash.

As far as I can tell from the code, you are using a coordinate index of 8 on the line where it crashes. Wouldn't that exceed the limit by one, since the array begins at index 0?
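A minimal vertex program sketch of the off-by-one, assuming a limit of 8 (the attribute names are made up):

!!ARBvp1.0
# with GL_MAX_TEXTURE_COORDS = 8, the valid unit selectors are 0 through 7
ATTRIB tc_ok  = vertex.texcoord[7];  # highest legal index
ATTRIB tc_bad = vertex.texcoord[8];  # rejected: invalid texture coordinate unit selector
MOV result.position, vertex.position;
END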

What do you have for "MAX_TEXTURE_IMAGE_UNITS_ARB"?

I get this from the command line:

$ glxinfo -l | grep MAX_TEXTURE_IMAGE_UNITS_ARB
GL_MAX_TEXTURE_IMAGE_UNITS_ARB = 16
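The same query could presumably report the coordinate limit directly, without recompiling the game (assuming the driver exposes it):

$ glxinfo -l | grep MAX_TEXTURE_COORDS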

Edited by zoot
