
Norvegia

Posts posted by Norvegia

  1. No, it doesn't. We usually have a lot of pathfinding requests per turn. Because of that number, each individual pathfinding request can be calculated without parallelism, while the different requests are executed in parallel. So the algorithm itself doesn't have to be prepared to split across multiple threads, but the interface used to call the algorithm must be thread-aware (i.e. it starts calculating a bunch of paths together). That interface is already prepared for multithreading.

    So individual pathfinding requests aren't dependent on each other? In that case the pathfinder is highly parallel. I thought this wasn't a given, because of collision detection (units ordered to enter the same spot at the same time).
  2. If any of you plan on developing a new pathfinder, I hope you choose an algorithm that can take advantage of thread-level parallelism (TLP). Because of the current push towards lighter, thinner and more energy-efficient devices, many consumer CPUs have taken a step back in instruction-level parallelism (ILP) (tablet CPUs, Intel Atom, AMD Jaguar and Beema). The possibility of TLP in the algorithm is therefore crucial, imo.
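
    To illustrate the point from the two posts above: below is a minimal, purely hypothetical C++ sketch (the PathRequest/Path types and ComputePath() are made up, not the actual 0 A.D. pathfinder API) of a batched, thread-aware interface in which each individual path is still computed serially while independent requests fan out across threads.

    // Illustrative sketch only -- not the real 0 A.D. pathfinder interface.
    // Assumption: path requests are independent of each other, so each one
    // can be computed serially while the batch as a whole runs in parallel.
    #include <functional>
    #include <future>
    #include <utility>
    #include <vector>

    struct PathRequest { int startX, startY, goalX, goalY; };
    struct Path { std::vector<std::pair<int, int>> waypoints; };

    // Stand-in for the per-request, single-threaded algorithm (e.g. A*).
    Path ComputePath(const PathRequest& req)
    {
        Path p;
        p.waypoints.push_back({req.startX, req.startY});
        p.waypoints.push_back({req.goalX, req.goalY});
        return p;
    }

    // Batch interface: one task per request. A real implementation would
    // likely use a fixed-size thread pool to avoid oversubscription.
    std::vector<Path> ComputePathsParallel(const std::vector<PathRequest>& requests)
    {
        std::vector<std::future<Path>> futures;
        futures.reserve(requests.size());
        for (const PathRequest& req : requests)
            futures.push_back(std::async(std::launch::async, ComputePath, std::cref(req)));

        std::vector<Path> results;
        results.reserve(requests.size());
        for (auto& f : futures)
            results.push_back(f.get());
        return results;
    }

    int main()
    {
        std::vector<PathRequest> batch = {{0, 0, 5, 5}, {2, 3, 8, 1}, {4, 4, 0, 9}};
        std::vector<Path> paths = ComputePathsParallel(batch);
        return paths.size() == batch.size() ? 0 : 1;
    }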

  3. ...

    I think trying to attract more programmers who want to work for free on a free project is a more viable approach. The money can be used for that attractive purpose rather than just for paying one person.

    I was under the impression that the money (including mine) was first and foremost going towards paying Jorma (redfox) to increase game performance. It seems to me like this is a pet project for a select few who drove out the person we paid you (the dev team) to hire.

  4. To put that in perspective, the typical commit rate was somewhere around 0-3 commits per day. You can track every 0 A.D. change made on the timeline.

    The development pace has been quite remarkable over the last week, but I fear it will slow down in the coming weeks when school/work starts. It shouldn't be a problem to have a lot of volunteer activity while a freelance programmer works on the short-/long-range pathfinder rewrite or the JS-to-C++ rewrite. I believe a more playable game is the best way to create a larger community. :)

  5. Yes. (Personally I've always been (and still am) in favour of keeping the fixed-function support, since I don't see it being much of a maintenance burden if the engine uses sensible abstractions, and I think compatibility is very valuable.)

    Pretty optimistic to believe that the <3 GHz Pentium 4 or <<2.6 GHz Athlon 64 CPU that goes with those cards will be able to run the game logic. A used AGP/PCI Express card supporting well above OpenGL 2.0 is practically free today.

  6. Of course work has to be done, but it is not impossible. And I'm only talking about ground textures here. Is 0 A.D. closed for suggestions? I was just airing my idea, well aware that the programmers are swamped. And maybe, if the idea catches on, someone will take it upon themselves to implement it. Let's not discuss the probability of it being implemented.

  7. If you're referring to using a newer version of OpenGL, that's a good way to lose a lot of users for dubious performance gains.

    If you're referring to porting the game to C#, that's a good way to spend a ridiculous amount of effort and lose a lot of users for more dubious performance gains.

    I'm referring to the OpenGL version. The performance gains mahdi posted are incredible: more than 3x. If the single-threaded CPU performance requirement is kept very low, systems with Intel Atom and AMD Bobcat CPUs will be able to run this game, and AMD and Intel have sold millions upon millions of those systems over the last few years (the Bobcat APUs have quite a potent GPU). These systems make up a large portion of the market. And with the trend towards slimmer and slimmer laptops and tablets, one thing is sure: single-threaded CPU performance is not going up by a lot, in contrast to GPU performance, which scales well with smaller production nodes. I think compatibility with these systems is more important than compatibility with ancient desktop systems (which are the only systems whose compatibility you would be breaking). Maybe it isn't as black and white as I describe it, but a 3x CPU performance gain is worth looking into. :) In short: I believe the group of current and future users with a fairly modern GPU and a CPU with poor single-threaded performance is much larger than the group who have a fast CPU and an ancient GPU. High CPU requirements are not the way to go.
  8. There is nothing to suggest that this is the case. We can relatively easily "predict if a new api/standard will enhance or decrease performance" by using profilers, and seeing where computation time is actually spent, rather than just guessing. The developer in your example could have done the same.

    Doesn't this require a lot of work? Take moving to C++11: doesn't a large part of the code need to be altered or rewritten before the performance delta can be known?
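
    To make the profiling point above concrete: here is a minimal, hypothetical C++ sketch (a crude scoped timer, not 0 A.D.'s actual profiler; SumOfSquares is just a made-up stand-in for a suspected hotspot) of how one measures where computation time is actually spent before deciding whether a rewrite is worth it.

    // Illustrative sketch only -- a crude scoped timer for manual profiling.
    #include <chrono>
    #include <cstdio>

    class ScopedTimer
    {
    public:
        explicit ScopedTimer(const char* label)
            : m_Label(label), m_Start(std::chrono::steady_clock::now()) {}
        ~ScopedTimer()
        {
            auto end = std::chrono::steady_clock::now();
            double ms = std::chrono::duration<double, std::milli>(end - m_Start).count();
            std::printf("%s: %.3f ms\n", m_Label, ms); // report elapsed time on scope exit
        }
    private:
        const char* m_Label;
        std::chrono::steady_clock::time_point m_Start;
    };

    // Hypothetical workload standing in for a suspected hotspot.
    long long SumOfSquares(int n)
    {
        long long total = 0;
        for (int i = 0; i < n; ++i)
            total += static_cast<long long>(i) * i;
        return total;
    }

    int main()
    {
        long long result = 0;
        {
            ScopedTimer t("SumOfSquares(10000000)"); // time only this block
            result = SumOfSquares(10000000);
        }
        return result > 0 ? 0 : 1;
    }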
