
Transparent materials and alpha sorting



Let me introduce this topic by stating that 0 A.D.'s rendering is also fairly slow right now.

A big part of that is trees: since they use transparent materials, they are currently distance-sorted (a usual technique to avoid blending artifacts). However, this makes batching basically impossible.

Now, I've done some testing, and it seems to me that alpha on trees is mostly used as a "1-bit" mask: either fully transparent or fully opaque. With such a technique, distance sorting is formally not necessary, so we could speed up rendering a lot by not having all materials sort by distance, particularly trees. On Gambia River, I basically get it to render twice as fast, from 180 ms down to 100 ms for "Render" in the profiler. It's my MacBook Air which is slow, and I lowered the camera to have lots of trees in view. But still.

So basically I'm saying we should probably experiment with this; we might get a really similar rendering at a much lower cost. By not distance-sorting trees, we could batch them much more efficiently.

(If you want to see for yourself, go to line 482 of ModelRenderer.cpp and add " && 0" to the if statement, then recompile.)
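
To sketch the idea (struct and function names here are illustrative, not the actual pyrogenesis API): keep alpha-tested models in submission order so they can still be batched, and distance-sort only the models that genuinely need back-to-front blending.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical model record; names are illustrative, not engine code.
struct TransparentModel {
    float x, z;       // world position
    bool oneBitAlpha; // material only needs "1-bit" alpha (e.g. tree foliage)
};

// Sort only the models that genuinely need back-to-front blending;
// leave the alpha-tested ones in submission order so batching still works.
void SortForBlending(std::vector<TransparentModel>& models, float camX, float camZ)
{
    // Move alpha-tested models to the front of the list, untouched.
    auto firstBlended = std::partition(models.begin(), models.end(),
        [](const TransparentModel& m) { return m.oneBitAlpha; });

    auto dist2 = [&](const TransparentModel& m) {
        float dx = m.x - camX, dz = m.z - camZ;
        return dx * dx + dz * dz;
    };
    // Back-to-front (farthest first) for the truly blended models.
    std::sort(firstBlended, models.end(),
        [&](const TransparentModel& a, const TransparentModel& b) {
            return dist2(a) > dist2(b);
        });
}
```

The point is that the partition is O(n) and the sort now only touches the (small) set of truly blended models, while the rest can stay grouped by material.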

Link to post
Share on other sites

Not sure what you mean, but I don't think it's related. Basically what I suggest here is making rough approximations when rendering trees because it doesn't really matter what order they're rendered in, despite being transparent.

Though TBH I'm not sure to what extent we actually batch anything, or whether rendering is really optimized. In particular, I'm not sure the profiler's draw-call counts are accurate.

Link to post
Share on other sites

We already have alpha testing (thanks to 11475); we just don't have it enabled by default since it can cause rendering artifacts depending on the model. To try it, set forcealphatest=true in your config file. It was discussed in #0ad-dev about a year ago, including on Gambia River :) http://irclogs.wildfiregames.com/2012-03-05-QuakeNet-%230ad-dev.log, http://irclogs.wildfiregames.com/2012-04-03-QuakeNet-%230ad-dev.log

Link to post
Share on other sites

I had a look at the code... It's actually sort of the same thing, but not really.

Alpha testing means texels are either fully opaque or fully transparent, which is a fair approximation for our models but does give artifacts on their borders. It's faster because OpenGL doesn't have any actual blending to do.

Now, when rendering transparent models such as trees, we sort them by distance to avoid transparency artifacts such as these: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-10-transparency/ (sorry, typing on the phone).
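
The order dependence is easy to show numerically: blending the same two half-transparent fragments in opposite orders yields different final colors, which is exactly the artifact the distance sorting works around. A minimal CPU sketch of standard "over" blending (the function name is mine, not engine code):

```cpp
// CPU sketch of standard "over" alpha blending. dst is the color already
// in the framebuffer, src the incoming fragment's color, srcAlpha its alpha.
float BlendOver(float dst, float src, float srcAlpha)
{
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}
```

For example, drawing a white fragment (1.0) and then a grey one (0.5), both at 50% alpha, over a black background gives 0.5, while the reverse order gives 0.625, hence the need to draw back-to-front.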

However, that's not really useful for us, since we don't have many truly transparent models (which is why alpha testing is not too ugly), except the water, which is a special case anyway. So I think not sorting our models by distance would be a good optimization. Note that this can be done mostly on a case-by-case basis, which is nice. It not only speeds things up by skipping the sort itself, but it also helps a lot with batching, which makes rendering faster (I'm not sure to what extent we actually batch rendering, though; it can only help, but it might be much more noticeable on some systems).

Edit: note that we can combine these techniques. To me, trees on Gambia River look basically the same with alpha testing, and then not sorting them follows logically, and the rendering is faster. This can be done on a case-by-case basis; I think we ought to look into it seriously.

We might want to add an "alpha testing" option in materials instead of alpha blending, which would prevent those models from being distance-sorted. That could seriously speed up maps with lots of trees.

NB: I'm assuming transparent models are sorted by distance; I'm not 100% sure we actually do that. Edit: if trees use model_transparent, then we do.

Link to post
Share on other sites

Hi! I haven't been on these forums for some time, but I think I can provide some additional insight regarding this.

Right now the engine constructs material buckets (for batch rendering) every frame. This is something that should persist with the context and simply be updated when the context changes (i.e. a new map is loaded) and when new materials are submitted.

As far as sorting by distance is concerned, this is something that has been implemented a bit incorrectly. A better solution would be to have an IShaderTechniqueMgr that keeps an associated list of materials and dependent models.

This would allow for far easier rendering:


for (IShaderTechniqueMgr* tech : context->GetTechniques())
{
    renderer->SetTechnique(tech->Technique()); // bind the shader once per technique
    for (IMaterialPass* pass : tech->GetMaterialPasses())
    {
        renderer->SetMaterial(pass->Material()); // bind the material once per pass
        for (IModel* model : pass->GetModelsCulled(frustum))
        {
            renderer->Render(model);
        }
    }
}

Why is this good, exactly, you ask? Well: you wouldn't be wasting time creating vectors and sorting them every frame. To top it all off, the technique manager or the material pass for the alpha-tested objects can update its list of models as needed. I can't begin to describe how much of a performance boost this would give the engine.

Right now, it's true that Pyrogenesis isn't really the best graphics engine out there and would require quite a few changes before it could be used properly. Another solution would be to just use an available, tried-and-tested open-source graphics engine and stop trying to implement everything ourselves (which is obviously a bit too much to handle given the lack of programmers).

Ogre3D has been mentioned many times before and is a very, very good engine. It is supported on the majority of platforms, and its implementations are well debugged and maintained. Furthermore, its API is extremely well documented, and the engine itself has all the tools needed to build any game engine (notably including shader managers, material passes and friends). To top it all off, the Windows implementation can use the DirectxRenderer (which is a virtual device, mind you), making it even faster for Windows nuts. This is just food for thought :)

Edited by RedFox
Link to post
Share on other sites

I guess the disadvantage of using an off-the-shelf engine is that there is an element of bloat/unneeded features and you become locked into an architecture that is designed for the generic case rather than your specific case.

Link to post
Share on other sites

At this point it would be harder to migrate to Ogre3D than to finish Pyrogenesis's renderer, I think.

zoot: I think Ogre is flexible enough so that wouldn't be much of a problem. I'm of the opinion it does suffer slightly from feature bloat though.

Link to post
Share on other sites

I tend to agree. While Ogre is an excellent engine, in the current state the cost of migrating seems a little high. The current renderer can probably be improved enough to be acceptable for release.

It could be a task for Part 2, though, as it seems to me that Ogre could fit pretty nicely into Pyrogenesis, but it would basically require rewriting a ton of stuff.

Link to post
Share on other sites

Yes, I really agree about the massive amount of time needed to migrate to Ogre3D. Fortunately, Pyrogenesis is written very well in that regard: the graphics and game engine are well separated. I also agree that this is something more for Part 2, if anything.

Though I disagree about unnecessary bloat: Ogre3D is a very extensible library, and in that sense it does require some tweaking to get your desired result out of it. It's open source, so the build can be customized to include/exclude features and modules that are considered bloat. Think of it as a customized Ogre distro ^_^.

What I was trying to bring to the discussion, though, was: which of the following is easier?

1) Redesign the current Pyrogenesis engine to bring out the needed performance.

2) Migrate to Ogre3D: a painful task, but it might take less time than redesigning/fixing Pyrogenesis.

The bonus of Ogre3D is that it works on Windows, Linux, OSX, Android... And best of all? It has great performance.

On a side note: Michael has invited me to take on 0 A.D. development full-time. I'm a C/C++ real-time systems programmer, and I've worked on four separate graphics engines in the past (a software rasterizer, a project on Ogre3D, a robotics simulator on OpenGL and a high-end game graphics engine on DirectX). So I would have the required time and skill set to make these changes.

The reason I'm available is that my mandatory service in the Estonian Defense Forces is coming to an end, and Michael made me quite an interesting offer regarding 0 A.D. development ^_^.

Edited by RedFox
Link to post
Share on other sites

Based on my (smallish) experience with Ogre, I'd agree that it's flexible enough to integrate properly without bloating the engine.

One concern about the migration would be the GUI (is it worth recoding from scratch? Using an existing Ogre one? Adapting it?).

Basically, the renderer is pretty much there and mildly efficient. There are four things that have to be worked on: optimizing the rendering (e.g. by having better instancing; I think we're pseudo-instancing non-animated objects currently), working on some sort of LOD (I think it's worth it), fixing the silhouette rendering (waaay too slow; check "Combat Demo Huge" for an example), and the water refraction/reflection, which is not too efficient but is really tied to the other points.

There are a few bugs, nothing too important. Making it efficient enough would require some rewriting, but I believe we could get good enough performance without too much work.

Migrating to Ogre has one huge long-term advantage: no need to worry about maintaining the rendering code and updating it to the latest technologies. It's also more flexible for the engine itself, for potential other games using it. It would probably allow fancier stuff too, so that's a win-win.

So basically, deciding is simple enough: would it take too long to migrate to Ogre for Part 1? If no, I'd say it's an interesting project. If yes, then we're better off without it. Working full-time on it, with experience, I think it's possible to finish the migration before Part 1 (doing it properly, I mean). Perhaps even for the beta. Obviously, this would be made easier by using a separate branch in Git, whenever that day comes.

Link to post
Share on other sites
Posted · Hidden by quantumstate, April 25, 2013 - Off topic

Sorry to bring up another topic, but can the load and save error with the Aegis AI in the game be resolved?

Link to post

> Based on my (smallish) experience with Ogre, I'd agree that it's flexible enough to integrate properly without bloating the engine.

> One concern about the migration would be the GUI (is it worth recoding from scratch? Using an existing Ogre one? Adapting it?).

I have written around 4-5 GUIs from scratch by now, and I've seen a plethora of different design choices. I even have a WIP GUI system for DirectX hanging in a repo somewhere. Though I should note that there is MyGUI for Ogre3D, which is much more functional than the current Pyrogenesis GUI.

> Basically, the renderer is pretty much there and mildly efficient. There are four things that have to be worked on: optimizing the rendering (e.g. by having better instancing; I think we're pseudo-instancing non-animated objects currently), working on some sort of LOD (I think it's worth it), fixing the silhouette rendering (waaay too slow; check "Combat Demo Huge" for an example), and the water refraction/reflection, which is not too efficient but is really tied to the other points.

> There are a few bugs, nothing too important. Making it efficient enough would require some rewriting, but I believe we could get good enough performance without too much work.

That is a huge list of changes to the graphics engine. It basically amounts to an almost complete rewrite.

> Migrating to Ogre has one huge long-term advantage: no need to worry about maintaining the rendering code and updating it to the latest technologies. It's also more flexible for the engine itself, for potential other games using it. It would probably allow fancier stuff too, so that's a win-win.

Exactly my point! It would remove the need to debug all that graphics code and would leave a lot more time for other more pressing matters.

> So basically, deciding is simple enough: would it take too long to migrate to Ogre for Part 1? If no, I'd say it's an interesting project. If yes, then we're better off without it. Working full-time on it, with experience, I think it's possible to finish the migration before Part 1 (doing it properly, I mean). Perhaps even for the beta. Obviously, this would be made easier by using a separate branch in Git, whenever that day comes.

For now, there are so many things in the engine that require the attention of a full-time programmer. I don't think any of these changes would be possible if someone tweaked a bit of code every few days or so. I just worked two days straight on Pyrogenesis, totaling around 20 hours, to change 250 files, remove a mad scientist's crazy experiment from the engine and win a 20% performance improvement for the game. It wouldn't have been possible if I had worked on and off. The same applies to most required changes to the engine.

Edited by RedFox
Link to post
Share on other sites

I think rewriting the Pyrogenesis renderer would still be faster than implementing Ogre through and through. But in the long run, it's not. It really depends on how you feel in terms of time. Working full-time, with your experience, I have little doubt you'll manage it fairly fast.

It is, however, true that working full-time on the engine is much more efficient than on and off.

Link to post
Share on other sites

Actually, I think just using Ogre3D with MyGUI would be faster to implement, since I'm already familiar with Ogre3D ^_^ and not so much with pyrogenesis (graphics module). So in that regard, it's an unfair comparison... ^_^

Link to post
Share on other sites

Basically, the only downside to Ogre is the time it takes to integrate it. 0 A.D. is on GitHub (although that's just a mirror of the SVN repo), so someone could fork it and try. However, I think there are far more pressing matters, both performance- and gameplay-wise, to work on. IMO, definitely wait for Part 2 (Part 1 is already taking a while, and the current renderer really isn't that bad).

How difficult would it be to port myconid's awesome graphics stuff to Ogre, anyway? He did a lot of great work, and I'd hate to throw that away when migrating.

Link to post
Share on other sites

> Basically, the only downside to Ogre is the time it takes to integrate it. 0 A.D. is on GitHub (although that's just a mirror of the SVN repo), so someone could fork it and try.

That would help me out so much, actually. I have a lot of modified code and nowhere to commit it... ^_^

> However, I think there are far more pressing matters, both performance- and gameplay-wise, to work on. IMO, definitely wait for Part 2 (Part 1 is already taking a while, and the current renderer really isn't that bad).

The fact of the matter is that implementing a lot of these graphics-engine changes (to make it run decently) would require a huge effort. The same effort could go into using Ogre3D instead, which already implements everything graphics-related well.

> How difficult would it be to port myconid's awesome graphics stuff to Ogre, anyway? He did a lot of great work, and I'd hate to throw that away when migrating.

Myconid's work was mostly on shader programs. Shaders are 'graphics card programs' that are compiled at runtime and uploaded to the graphics card. A shader is then usually run in a massively parallel, asynchronous way, resulting in super-fast rendering of the data. You throw [vertices, textures] into a [shader] and it spits out an [image] on the screen. :)

So yeah, myconid's shaders will work on Ogre3D.
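
A toy CPU analogy of that data flow (purely illustrative, neither Ogre nor pyrogenesis code): treat the fragment shader as a function applied independently to every pixel, which is what lets the GPU run it massively in parallel.

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// A "fragment shader" as a plain function: one texel in, one pixel out.
// On a GPU this function runs in parallel for every pixel; here we just loop.
using FragmentShader = std::function<uint8_t(uint8_t /*texel*/)>;

std::vector<uint8_t> RunShader(const std::vector<uint8_t>& texture,
                               const FragmentShader& shader)
{
    std::vector<uint8_t> image(texture.size());
    for (size_t i = 0; i < texture.size(); ++i)
        image[i] = shader(texture[i]); // each pixel computed independently
    return image;
}
```

Since the real shader source is plain text compiled by the driver at runtime, the programs themselves are largely engine-agnostic, which is why porting them is mostly plumbing work.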

Link to post
Share on other sites

From what I know of Ogre, switching to it would almost allow us to keep every file as is. It might require rewriting shaders in Cg, but that's close enough to GLSL. We wouldn't really lose much.

I'd agree that it would be more urgent to, say, rewrite the component system: this would probably stall development for a while, but allow it to resume faster later (on the C++ side, anyway).

The renderer is "not quite slow enough", I find. It's an area where absurd optimizations could be done, but generally I think it'd be best to have the final component/simulation architecture down first.

Link to post
Share on other sites

> From what I know of Ogre, switching to it would almost allow us to keep every file as is. It might require rewriting shaders in Cg, but that's close enough to GLSL. We wouldn't really lose much.

At least Cg is 'portable' ^_^ (between DirectX and OpenGL, that is). Furthermore, a lot of cool shader programs are already out there, written for Ogre3D. Want to implement parallax mapping? Sure, grab the parallax-mapping shader from the Ogre wiki and attach it to your material ^_^...

> I'd agree that it would be more urgent to, say, rewrite the component system: this would probably stall development for a while, but allow it to resume faster later (on the C++ side, anyway).

> The renderer is "not quite slow enough", I find. It's an area where absurd optimizations could be done, but generally I think it'd be best to have the final component/simulation architecture down first.

Well, in that regard, it's true that the biggest drawback of the engine right now is the ECS, and there's no 'easy' way of changing it. Right now it would be prudent to discuss where I could focus my time and coding resources. Making this change to the ECS will be a huge change, and it will most certainly break script support for a while.

Link to post
Share on other sites

> The fact of the matter is that implementing a lot of these graphics-engine changes (to make it run decently) would require a huge effort. The same effort could go into using Ogre3D instead, which already implements everything graphics-related well.

Hm. If it's about as much effort to make the pyrogenesis renderer work well and fast as it is to migrate to Ogre, then of course we should definitely go the Ogre way.

> Myconid's work was mostly on shader programs. Shaders are 'graphics card programs' that are compiled at runtime and uploaded to the graphics card. A shader is then usually run in a massively parallel, asynchronous way, resulting in super-fast rendering of the data. You throw [vertices, textures] into a [shader] and it spits out an [image] on the screen. :)

> So yeah, myconid's shaders will work on Ogre3D.

Cool. Thanks for the shader explanation (I'm not a graphics programmer, although I dabble with it occasionally).

> It might require rewriting shaders in Cg, but that's close enough to GLSL. We wouldn't really lose much.

According to their features page, Ogre supports GLSL and ARB shaders, so we wouldn't even have to ditch the older assembly ones. GLSL would probably lock us into the OpenGL backend though, which means we might lose some nice performance enhancements from Direct3D on Windows.

> The renderer is "not quite slow enough", I find. It's an area where absurd optimizations could be done, but generally I think it'd be best to have the final component/simulation architecture down first.

I agree. I haven't really found rendering to be a big bottleneck currently, and I'm on fairly old (2009) hardware. I'm starting to think that in the long term we will almost certainly want Ogre regardless of the cost of migrating, if only so that we don't have to keep updating the graphics code; it'll magically get faster as new techniques are implemented in Ogre.

Link to post
Share on other sites

It's fairly simple, really. If it takes too long to refactor the ECS, then it'll stall development, and while we're in alpha with not all features implemented, that's really undesirable. Unless it can be done fully and then committed in one go some day, it should not be done now. That's true of the renderer too, but to a lesser extent, and it wouldn't prevent adding new things. It's also probably not quite as long a job (it depends; you might well be fast if you know Ogre). The problem with focusing on the renderer first is that work done on the components in the meantime would be "lost", since so much will change with the ECS rewrite, so it might also block work on them, which we do not want.

My vote would be for the ECS, but it needs to be done in one go, with full support for existing functionality, which means it needs Git.

Then the focus should probably be the renderer. Other things can likely be done by other people anyway if the architecture is there (more slowly, but that doesn't really matter).

Link to post
Share on other sites

In that regard, I'll have to ease myself into Git and slowly start poking around the component logic. When I'm certain I have the full picture, I'll make the required changes. It probably means I'll have to implement some parts inefficiently in order to get the code out faster (for example, JavaScript support).

I'm currently available most of the day, so any ideas/recommendations/additional input is always welcome. Right now I'll try to fork 0 A.D. and get my source under version control.

Link to post
Share on other sites

I think that having the basics down is the necessity. If some things require behind-the-scenes changes later, that's less of a problem. This is so that the work done on components after the ECS change will be there for good, such as optimizations or new features.

The renderer is a big change that has the advantage of being very long-term and really good FPS-wise, so that's why I think it's the second-highest priority (particularly as there's really no one else to do it).

Of course, that's only my opinion, and others should weigh in on this.

Link to post
Share on other sites
