
Leaderboard

Popular Content

Showing content with the highest reputation on 2014-11-15 in all areas

  1. This has actually been discussed before. The result was always that the developers don't want to add another resource, and they have a reason for it. Everybody seems to think of "metal" as "iron", but it is supposed to represent metal in general, including gold. The in-game icon certainly suggests that it represents iron - maybe a different icon could be found, e.g. a pair of metal/gold ingots?
    2 points
  2. Actually, the current corral system is not the finalised plan. Rather, animals that are capable of being herded can be garrisoned in corrals for a useful purpose, such as providing a steady flow of food or, in the case of horses, a small discount. Historically speaking, farming should be the primary source of food.
    1 point
  3. I wouldn't mind a Germanic faction after pasturing is implemented.
    1 point
  4. Stan, please be a little bit more careful about what you say if you don't know for sure. It's very nice of you to want to help and all, but sometimes (as in this case) it means that you end up posting false or potentially misleading information. One or more Germanic groups are very likely to be included in 0 A.D. Empires Besieged, the second part of the game that we want to work on once 0 A.D. Empires Ascendant is finished. The first part covers 500 B.C. - 1 B.C. and the second one 1 A.D. - 500 A.D. As the Germanic peoples had a greater impact on history during the later period, that's when we want to add them. We are not finished with the first part yet, so no hard, definitive plans have been made (nor will be for quite a while), but I would be very surprised if no Germanic groups were included in Empires Besieged.
    1 point
  5. I've made some more performance measurements. I wanted to answer the following questions:

1. What effect does GGC (Generational Garbage Collection) currently have on performance and memory usage?
2. What has changed performance-wise since the first measurements in this thread? There have been many changes in 0 A.D., including Philip's performance improvements and AI changes. There have also been some changes in SpiderMonkey since that measurement and in our SpiderMonkey related code. All of that could have changed the relative performance improvement directly or indirectly.
3. Were the first measurements accurate, and can they be confirmed again? Based on the experience from the Measurement and statistics thread I suspected that parts of the results could be slightly different if measured again. I didn't expect that difference to be greater than 3% at a maximum, but I wanted to confirm that and get some more accurate numbers by increasing the number of measurements and reducing the effects of other processes running on the system.
4. One mistake I made in the first measurement was to measure v24 with C++03 and v31 with C++11. I wanted to know how much of the improvement is related to SpiderMonkey and how much to C++11.

The measurement conditions were again: r15626, 2vs2 AI game on "Median Oasis (4)", 15008 turns. Each measurement was executed 50 times on my notebook by a script (except the v24/C++11 measurement, which was executed 100 times). I've posted the distribution graphs just to give a better idea of how close these measurements are, in addition to the standard deviation value. (A small sketch of how these statistics are computed follows below.)

SpiderMonkey 24 with C++03
Average: 662.04 s
Median: 662 s
Standard deviation: 1.68 s
Distribution graph of the measured values:

SpiderMonkey 24 with C++11
Average: 662.43 s
Median: 662 s
Standard deviation: 2.47 s
Distribution graph of the measured values:

It was quite surprising to me that C++11 apparently didn't improve performance at all. At least it didn't get worse either. I have to say that this probably depends on the compiler a lot, and I was using a relatively old GCC version which might not be able to take full advantage of C++11 yet.

SpiderMonkey 31, no GGC
Average: 661.08 s
Median: 661 s
Standard deviation: 1.44 s
Distribution graph of the measured values:

SpiderMonkey 31, GGC
Average: 659.94 s
Median: 660 s
Standard deviation: 1.48 s
Distribution graph of the measured values:

Memory usage graphs (only from 1 measurement each, comparing v31 and GGC with v24):

We see that the average and median values are lower for v31 compared to v24 and for GGC compared to no GGC. However, the difference is so small that it could also be coincidence. That's disappointing because, even though the difference in the first measurement was small too, I had hoped for a bigger improvement because of GGC. We don't quite know whether the first measurement was coincidence or whether some of the changes made the difference smaller again. The memory graphs look quite promising: with GGC there's quite a big decrease in memory usage. Here it's also important to note that this improvement did not have a negative impact on performance. We could have achieved a similar improvement by increasing the garbage collection frequency, but that would have had a bad impact on performance. The additional minor GCs that run between full GCs don't seem to be a performance problem.
SpiderMonkey 31, GGC, keepJIT code
Average: 572.98 s
Median: 573 s
Standard deviation: 3.72 s
Distribution graph of the measured values (there was probably a background job or something running... the standard deviation is a little bit higher than for the other measurements):

So far we have not seen significant performance improvements. Fortunately, the impact of SpiderMonkey's problem with garbage collection of JIT code could be confirmed: it's approximately a 13.5% improvement in this case (see the calculation sketch below). There's an additional problem with incremental GC, but I didn't measure that because it's quite hard to set up conditions that reflect the performance of a real fix of this bug. Anyway, "keepJIT" means I've set the flag "alwaysPreserveCode" in vm/Runtime.cpp from false to true. It causes SpiderMonkey to keep all JIT-compiled code instead of collecting it from time to time. Just setting that flag would be kind of a memory leak, so it's only valid for testing. However, SpiderMonkey's behaviour with regard to GC of JIT code has been improved in several changes, so there's a good chance that we'll get a proper fix for that performance problem (unfortunately not with v31).
    1 point
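As a side note on the summary values quoted in post 5: the post only says the runs were driven by a script, so the following is just a minimal, self-contained C++ sketch of how an average, median and standard deviation can be derived from repeated run times. The timing values in it are made-up placeholders, not the actual measurements.

    // Minimal sketch (not the author's actual script): how the reported
    // average, median and standard deviation can be computed from repeated
    // run times. The values below are placeholders for illustration only.
    #include <algorithm>
    #include <cmath>
    #include <cstddef>
    #include <cstdio>
    #include <vector>

    int main()
    {
        // Placeholder run times in seconds; the real measurement used 50 runs.
        std::vector<double> runtimes = {662.0, 660.5, 663.1, 661.8, 664.2};

        // Average
        double sum = 0.0;
        for (double t : runtimes)
            sum += t;
        const double average = sum / runtimes.size();

        // Median: middle element of the sorted sample (mean of the two middle
        // elements for an even sample size).
        std::vector<double> sorted = runtimes;
        std::sort(sorted.begin(), sorted.end());
        const std::size_t n = sorted.size();
        const double median = (n % 2 == 1)
            ? sorted[n / 2]
            : 0.5 * (sorted[n / 2 - 1] + sorted[n / 2]);

        // Standard deviation (population form; the post doesn't say which
        // variant was used).
        double sqDiff = 0.0;
        for (double t : runtimes)
            sqDiff += (t - average) * (t - average);
        const double stddev = std::sqrt(sqDiff / n);

        std::printf("Average: %.2f s\nMedian: %.2f s\nStandard deviation: %.2f s\n",
                    average, median, stddev);
        return 0;
    }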
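The quoted ~13.5% improvement for the keepJIT run can also be reproduced from the averages listed in the post. A minimal sketch, assuming the figure is taken relative to the v24/C++03 baseline (relative to the v31 + GGC run it comes out closer to 13.2%):

    // Minimal sketch: relative improvement of the keepJIT run, recomputed
    // from the averages quoted in the post. Which baseline the ~13.5% refers
    // to is my assumption, so both candidates are printed.
    #include <cstdio>

    int main()
    {
        const double v24_cpp03   = 662.04; // s, SpiderMonkey 24, C++03
        const double v31_ggc     = 659.94; // s, SpiderMonkey 31, GGC
        const double v31_keepjit = 572.98; // s, SpiderMonkey 31, GGC, keepJIT

        std::printf("vs. v24/C++03 baseline: %.1f%%\n",
                    100.0 * (v24_cpp03 - v31_keepjit) / v24_cpp03);   // ~13.5%
        std::printf("vs. v31 + GGC:          %.1f%%\n",
                    100.0 * (v31_ggc - v31_keepjit) / v31_ggc);       // ~13.2%
        return 0;
    }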