Important: Please read if you have an older computer or graphics card


Recommended Posts

  • 2 months later...

My brother and I just downloaded 0 A.D. and have started playing it. He uses a Mac with an Intel GMA 950.

Since you're talking about pre-2003 hardware, I assume you're going to use ARB shaders, not GLSL. IIRC, Mac OS X drivers have software emulation for ARB shaders (and a quick Google search suggested that the GMA 950 does pixel shading, with vertex shading handled by the CPU).

So go ahead and make the switch!

Edited by ummonk

  • 1 month later...

If everyone agrees, I would propose creating small, simple steps/tickets so that people know where things stand and can start contributing without colliding.

Here's a modest proposal of tickets that could be created, in order and with dependencies:

  1. Wipe all non-shader and ARB code.
  2. These could be done somewhat in parallel with each other ("somewhat" because using SVN instead of Git is a pain... for any merging):

  • Get rid of all deprecated OpenGL immediate-mode calls (deprecated, and removing them is mandatory for OpenGL ES support), turning them into VBO calls (yes, even drawing a texture using a quad; this should lead to faster rendering). See the sketch after this list.
  • Remove the current SDL "OS window" handling and handle it directly. (Makes 0 A.D. able to select different OpenGL profiles.)
  • Get rid of fixed-function GL matrix calls (deprecated, and removing them is mandatory for OpenGL ES and OpenGL 4.0 support); we already compute most matrices anyway (in selection/pathfinding/etc.). It's just a matter of passing those matrices as uniforms (world matrix, world-view matrix, model matrix, etc.; note that discussing/defining a uniform naming scheme shared by all shaders would ease things here, see next point).
  • Add GLSL high-level handling code: parsing GLSL using regexes to get 0 A.D. #defines, #pragma directives, etc. (handle debug lines, #import/#include pragmas to concatenate files and make GLSL code much more DRY, change #precision and #version pragmas at runtime, add #defines at runtime, etc.), and add the ability to reload/validate shaders (faster shader debugging/coding). The idea is to be able to have a shared, reusable GLSL code library (easier to maintain, smaller codebase).
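To make the first and third bullets above concrete, here is a minimal sketch (not 0 A.D.'s actual code; the uniform and attribute names u_transform, a_pos and a_uv are invented for the example) of drawing a textured quad from a VBO, with the transform passed as a uniform instead of going through glBegin/glEnd and the fixed-function matrix stack:

```cpp
// Sketch: textured quad drawn from a VBO, transform passed as a uniform.
// Shader interface names (u_transform, a_pos, a_uv) are made up for the example.
#include <GL/glew.h>

struct QuadRenderer
{
    GLuint vbo = 0;

    void Init()
    {
        // Interleaved position (x, y) and texcoord (u, v) for two triangles.
        const GLfloat verts[] = {
            -1.f, -1.f, 0.f, 0.f,
             1.f, -1.f, 1.f, 0.f,
             1.f,  1.f, 1.f, 1.f,
            -1.f, -1.f, 0.f, 0.f,
             1.f,  1.f, 1.f, 1.f,
            -1.f,  1.f, 0.f, 1.f,
        };
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

    void Draw(GLuint program, const GLfloat worldViewProjection[16])
    {
        glUseProgram(program);

        // The matrix arrives as a uniform; no glMatrixMode/glLoadMatrix involved.
        glUniformMatrix4fv(glGetUniformLocation(program, "u_transform"),
                           1, GL_FALSE, worldViewProjection);

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        const GLint posLoc = glGetAttribLocation(program, "a_pos");
        const GLint uvLoc = glGetAttribLocation(program, "a_uv");
        glEnableVertexAttribArray(posLoc);
        glEnableVertexAttribArray(uvLoc);
        glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE,
                              4 * sizeof(GLfloat), (void*)0);
        glVertexAttribPointer(uvLoc, 2, GL_FLOAT, GL_FALSE,
                              4 * sizeof(GLfloat), (void*)(2 * sizeof(GLfloat)));

        // One draw call replaces the whole glBegin/glVertex/glEnd sequence.
        glDrawArrays(GL_TRIANGLES, 0, 6);

        glDisableVertexAttribArray(posLoc);
        glDisableVertexAttribArray(uvLoc);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
};
```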

A very good tool for those steps is the gremedy OpenGL profiler/debugger, as its analyzer gives a nice and precise list of deprecated calls per frame or per run (plus lots of other nice OpenGL helpers).

Once 1 and 2 are done, a much easier next move would be:

  • Total simulation2/graphics separation using command buffers. In 0 A.D.'s case, this could be done at a higher level than raw OpenGL commands: something like taking advantage of "renderSubmit", with the list of submitted render entities ending up as the "command buffer" handed to a graphics/render worker thread (faster rendering as soon as 2 cores are available, which is pretty standard nowadays). See the sketch after this list.
  • Add new renderers: different OpenGL profiles, OpenGL ES, a debug/test mock renderer, deferred, forward, etc. (The hidden cost here is defining a scheme to handle per-renderer materials/shaders in a nice way; deferred and forward don't use the same shaders.)
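A rough sketch of the command-buffer idea from the first bullet, assuming the simulation thread only records lightweight items each frame and a render worker thread consumes them; RenderItem and FrameQueue are invented names for illustration, not the actual renderSubmit interface:

```cpp
// Sketch of simulation/graphics separation via a per-frame command buffer:
// the simulation thread records plain-data render items, the render thread
// (the only code that touches OpenGL) consumes the finished buffer.
#include <condition_variable>
#include <mutex>
#include <utility>
#include <vector>

struct RenderItem
{
    unsigned modelId;     // handle into graphics-side data, no simulation pointers
    float transform[16];  // precomputed world matrix
};

class FrameQueue
{
public:
    // Called from the simulation thread once per frame.
    void SubmitFrame(std::vector<RenderItem> items)
    {
        std::lock_guard<std::mutex> lock(m_Mutex);
        m_Pending = std::move(items);
        m_HasFrame = true;
        m_Cond.notify_one();
    }

    // Called from the render thread; blocks until a frame is available.
    std::vector<RenderItem> WaitForFrame()
    {
        std::unique_lock<std::mutex> lock(m_Mutex);
        m_Cond.wait(lock, [this] { return m_HasFrame; });
        m_HasFrame = false;
        return std::move(m_Pending);
    }

private:
    std::mutex m_Mutex;
    std::condition_variable m_Cond;
    std::vector<RenderItem> m_Pending;
    bool m_HasFrame = false;
};

// Render worker loop: draws whatever the simulation submitted last.
void RenderThreadMain(FrameQueue& queue)
{
    for (;;)
    {
        std::vector<RenderItem> frame = queue.WaitForFrame();
        for (const RenderItem& item : frame)
        {
            // Issue the actual GL draw calls for item here.
            (void)item;
        }
        // SwapBuffers(), etc.
    }
}
```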


Add GLSL high-level handling code: parsing GLSL using regexes

This is, um, a really bad idea. GLSL is a Chomsky type-2 grammar, while regexes can only parse type-3 grammars. In other words, you can't do this correctly, for the same reason you can't parse HTML with regexes. You might, maybe, get something that looks like it works, sometimes, but it's just fundamentally a bad idea.

I'm not necessarily averse to parsing GLSL with a real GLSL parser, but I don't think that would be worth the trouble.


This is, um, a really bad idea. GLSL is a Chomsky type-2 grammar, while regexes can only parse type-3 grammars. In other words, you can't do this correctly, for the same reason you can't parse HTML with regexes. You might, maybe, get something that looks like it works, sometimes, but it's just fundamentally a bad idea.

I'm not necessarily averse to parsing GLSL with a real GLSL parser, but I don't think that would be worth the trouble.

"parsing using regex" is a bad way top describe it, sorry. It's more a "preprocessor": you find #define, #pragma, #include. inside glsl and replace it with values, other glsl content file, etc. Here's an example of glsl include support using boost of what I meant. I do agree that real parsing and compiling is not really useful in runtime, only for offline tools like aras_p's glsl optimizer

Edited by tuan kuranes

"parsing using regex" is a bad way top describe it, sorry. It's more a "preprocessor": you find #define, #pragma, #include. inside glsl and replace it with values, other glsl content file, etc. Here's an example of glsl include support using boost of what I meant. I do agree that real parsing and compiling is not really useful in runtime, only for offline tools like aras_p's glsl optimizer

For just that case I suppose you can sort of get away with regular expressions. It's still ugly and incorrect though.


  • 4 weeks later...

Short: The old fixed-function pipeline should have been removed already!

The oldest graphics card I use is an ATI Radeon X1600, supporting OpenGL 2.1.

(It will be replaced this year with a computer that has an ATI Radeon HD 7770, supporting OpenGL 4.3.)

Very old hardware in this case means something approaching 15 years old.

The percentage of people who would no longer be able to play is very small.

It will help create a good game and weed out very old graphics cards: setups that would hold relatively computation-intensive features back.

The old fixed-function pipeline is deprecated already. Using it for new development is technically a bad idea in all situations, end of story!

You need to make something that is great when released.

It's important that 0 A.D. runs great on most hardware.

Not on absolutely every setup out there; that is not going to happen anyway.

Features in newer OpenGL versions, like instancing, can be really helpful for improving performance.
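For example, here is a minimal sketch of instanced drawing (OpenGL 3.3+ or ES 3; the helper names and attribute slots below are made up for illustration, not 0 A.D. code): one draw call submits many copies of a mesh, with per-instance data such as the transform fed through attributes that advance once per instance.

```cpp
// Sketch: instanced drawing, one draw call renders many copies of a mesh.
#include <GL/glew.h>

// Assumes 'instanceVbo' already holds one 4x4 matrix (16 floats) per instance
// and the mesh's VAO is bound, with attribute slots 3..6 reserved for the matrix.
void SetupInstanceMatrices(GLuint instanceVbo)
{
    glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
    for (int col = 0; col < 4; ++col)
    {
        GLuint attrib = 3 + col;
        glEnableVertexAttribArray(attrib);
        glVertexAttribPointer(attrib, 4, GL_FLOAT, GL_FALSE,
                              16 * sizeof(GLfloat),
                              (void*)(col * 4 * sizeof(GLfloat)));
        // Advance this attribute once per instance instead of once per vertex.
        glVertexAttribDivisor(attrib, 1);
    }
}

void DrawMeshInstanced(GLsizei indexCount, GLsizei instanceCount)
{
    // One call instead of 'instanceCount' separate draw calls.
    glDrawElementsInstanced(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT,
                            nullptr, instanceCount);
}
```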

SDL 2 is already in release candidate status.

Hope that 0 A.D. will have an OpenGL ES 3 renderer in the future.

(Fun OpenGL ES 3 fact: you can create an OpenGL ES 3 context on systems with full OpenGL 4.3 or later.)

Edited by Flamadeck

Flamadeck: For me that is no argument. It's like preferring "Hey, we could then have worms that glow in the dark! (OK, they stumble around like randomly walking protozoa... but:) They are shiny!" over "Wow, our worms can now talk and socially interact with each other!". In fact, not even very basic functionality like line of sight is in the game. Using up all available computational resources for graphics seems a very bad idea to me. I admit that support for old graphics cards could be dropped if the computers that have those cards are not capable of running 0 A.D. due to the lack of other resources anyway. However, ruling out computers because they have a weak graphics card but would otherwise be able to run 0 A.D. (just for the sake of graphics) is unacceptable to me.

I would accept arguments like maintenance effort, though. That doesn't include implementing features that only the newer of the supported graphics cards will be able to display, and then (because maintaining compatibility with older graphics cards gets more complicated as a result) dropping support for the older ones.

Edited by FeXoR

@FeXoR

Please read my post a bit more carefully than that.

By aiming for better GPUs we can push more calculations onto the GPU; this allows us to do more interesting non-graphical things with our computational budget on the CPU (and on the GPU too, see instancing).

When I said computation-intensive features I meant all kinds of things, including AI and physics.

The whole idea of setting a minimum here is to be able to do more with the GPU, not necessarily fancier graphics.

Mostly to make sure we don't have to use the CPU for all kinds of things, which would make 0 A.D. unplayable.

Where you say it's okay to drop support, you mention other resources. What kind would those be? We're talking about a computer game here, where computing power and memory are almost the only resources that matter.

You assume that a weak graphics card can still produce nice graphics, and that the cutoff point would only exclude unnecessary fancy features. This is not true! A weak graphics card or CPU, or too little RAM, means you won't be able to run 0 A.D. even on low detail anyway.


  • 7 months later...
  • 4 months later...
  • 5 months later...
  • 2 weeks later...

Another open source game, SuperTuxKart, now requires OpenGL 3.1+. ...

I'd check that further. My understanding of SuperTuxKart developer comments on the phoronix.com forums is that an OpenGL 3.1+ renderer has been added, but that an earlier OpenGL renderer is still present to support older cards.


  • 1 year later...

Just as a hint: even on an i7-640LM, the integrated GPU (which can handle OpenGL 2.1 and unofficially even OpenGL 3) isn't fast enough to enable the GLSL shaders; it's way too slow, even if the graphics quality is set to low. Without them it works fine. So if all shaders etc. are converted to OpenGL 2.1 GLSL, there should be "extra low quality" versions available, also to ease porting to Android, running it on compute sticks, etc.

Regarding SuperTuxKart: yes, the old renderer is still available. The i7-640LM can be forced to claim OpenGL 3, and then it technically runs, but it is practically unplayable because of way too few fps. The old renderer runs fine on the same machine.

Edited by mifritscher

  • 1 year later...

The whole thread is much appreciated by me, a total 3D programming noob more involved in general system builds & optimization.

On the hardware side, it seems to me the GLSL you talked about in this thread requires a GPU supporting OpenGL >= 2.0? And going further, does it matter for playing 0 A.D. whether the GPU has separate pixel and vertex shaders (e.g. Radeon X1950 GT or GeForce 7600 GT) versus the more recent unified shader "cores"?

