Ykkrosh

WFG Retired
  • Posts: 4,928
  • Days Won: 6

Everything posted by Ykkrosh

  1. Got a likely explanation/fix in that bug now. Is anyone able to confirm/deny this shadow problem on Mesa drivers other than r300g/swrast/softpipe/llvmpipe?
  2. Mac Alpha

    Thanks for your comments. Should be fixed in the next release.

    Hmm, for development I find it very convenient to run in windowed mode without the mouse being trapped, and I'd expect normal players to almost always run in fullscreen, so I think I prefer things the way they are. Does anyone else have preferences here?

    Alpha 3 has no opponent AI, which does make it pretty useless. Alpha 4 will have the beginnings of AI player scripts, so it should at least build some buildings and send some people to attack, so hopefully that's a step in the right direction.

    I'm not sure what you mean by "resource construction site". Do you mean resource collection (like too many people gathered around a tree), or building construction, or something else? What would you consider ideal? (It could definitely do with improvements, but I don't currently know what the eventual goal should be.)

    It should do that in the current release - shift+right-click queues movement orders for individuals or for groups of units. There are bugs with queuing non-movement orders for groups of units, but that doesn't sound like what you mean. There ought to be a visual indication of the queued path, but we don't have that yet...

    Hmph, Erik beat me to some of these.
  3. In theory, I think the global illumination (specifically ambient occlusion) could be done entirely automatically - we'd just need an algorithm to do the UV unwrapping (surely there must be a not-totally-useless one we could copy) and another to compute the lighting (which seems straightforward), so it would involve a one-time programming effort rather than a repeated art effort. Still more effort than not doing it, though. (NVIDIA has some material on ambient occlusion. In an RTS I think we'd only need it for buildings (not units), and we don't have particularly dynamic lighting (we can pretend there's just a single ambient skylight and ignore the direction), so we wouldn't need any of the fancy real-time stuff and could stick with the simpler static offline approaches.)

    I don't think I see the significance of W mapping - how is it different to any other per-vertex attribute, like RGB(A) vertex colours? (OpenGL shaders let you have at least 16 arbitrary 4-component values per vertex, which you can use as texture coordinates or colours or anything else you want, so I think there's no limit other than in the art tools' ability to make the data comprehensible.)

    As an attempt to be clearer about my more recent thoughts:

    Proposal #6: Split the renderer into two modes: a compatibility mode designed for OpenGL 1.4 capabilities, with no unnecessary or optional features (no shadows, no non-ugly water, no specular maps, no distance fog, etc); and a GLSL shader mode designed for OpenGL 2.0 capabilities (plus FBOs since they're ubiquitous). The shader mode will include some configurable options for performance (optional shadows, lower-quality shadows, lower-quality-but-still-reflective water, etc) so it will still scale to low-end hardware (i.e. integrated Intel graphics). Few users should get the compatibility mode (it looks like pretty much only the R300 Linux drivers, or people with seriously buggy drivers that break in the shader mode), so we won't spend much effort making it look good or optimising performance, but at least it should be playable (with no unfair (dis)advantages). Then we can focus on implementing most features and most configuration options and most optimisations in a consistent shader framework, which I believe will slightly simplify the code compared to the current fixed-function/shader mixture and will greatly simplify the addition of new features.
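
    A rough, hypothetical C++ sketch of how the mode split in proposal #6 might be wired up (the function and enum names are made up, not the engine's actual code):

        // Hypothetical sketch: pick a render path from reported GL capabilities.
        // The GLSL path needs ARB_vertex_shader/ARB_fragment_shader plus FBOs;
        // everything else falls back to the stripped-down GL 1.4 path.
        #include <cstring>
        #include <GL/gl.h>

        enum RenderPath { RENDERPATH_COMPAT_FIXED_FUNCTION, RENDERPATH_SHADER_GLSL };

        static bool HasExtension(const char* name)
        {
            const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
            return exts && std::strstr(exts, name) != NULL;
        }

        RenderPath ChooseRenderPath()
        {
            if (HasExtension("GL_ARB_vertex_shader") &&
                HasExtension("GL_ARB_fragment_shader") &&
                HasExtension("GL_EXT_framebuffer_object"))
                return RENDERPATH_SHADER_GLSL;
            return RENDERPATH_COMPAT_FIXED_FUNCTION;
        }

    The compatibility path would then skip shadows, reflective water, etc., while the GLSL path exposes the configurable quality options.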
  4. I expect we'll really want some kind of auto-update mechanism for the game content (small patches to fix bugs, improve balance, etc), so we could use a similar thing to push updates to the manual, and we could do updates every few weeks without irritating players too much. If we had that, would you still see a benefit in having the manual support continual updates (which I assume means downloading it from the web every time you run the game)? I think the main drawbacks are the performance impact (players would have to wait for the manual pages to download, or at least to check their cached copy is up to date), and the lack of quality control, and the effort needed to keep translations into other languages up to date, and (if anyone can edit the wiki) the likelihood of spam being seen by players. Even without the game using an online manual, it would be possible to use the wiki as the development area for the manual, and then check it and copy it into SVN before each new release (with a tool that converts syntax and copies embedded images etc) - would that be a better editing environment than directly using SVN and a text editor? (and enough better to be worth the effort of implementing a conversion tool (which is probably not terribly high but is non-zero)?)
  5. (What distro is this? I thought most still had 1.2, or had it under a name like "enet-old" or similar.) If you have the library in a non-standard location, I think the only way to make it work is to edit build/premake/extern_libs.lua, around line 115 where it says

        tinsert(package.links, "enet")

    and add a new line like

        tinsert(package.libpaths, libraries_dir.."/opt/enet12/lib")

    then run update-workspaces.sh and make again. I think that should take precedence over /usr/lib and make it compile with the right version. I'm not sure whether it'll still use the old version when running, though - in that case you might have to run it like

        LD_LIBRARY_PATH=/opt/enet12/lib ./pyrogenesis

    to make it look in the right place. (It would be nice if we had a better way to handle this situation, but I don't know what would work.)
  6. The code is here (minus some small changes I've not committed yet), using Python and Django (which I currently think is quite nice). The data is collected and processed on my own server. I think making the raw data public would raise too many privacy concerns, but it's mostly anonymous and not really secret so I'd be happy to share it somehow with WFG members or other particularly interested people.
  7. Yeah, this is SVN users and it'll include dirty data from people doing texture-conversion etc. That probably won't affect the numbers hugely (the conversion isn't counted as part of the 'render' time which is used in this chart) but it may make some difference. Currently I only exclude people running Debug versions, but I can restrict it later to e.g. people playing the alpha 4 release on some particular map, to get more meaningfully comparable numbers.
  8. The game reports profiling measurements now, so I thought it'd be useful to plot them and make sure they vaguely make sense. The current result is like this - it shows just the time spent in the "render" function (i.e. excluding gameplay logic and buffer-swapping and everything else), and each cross is a reported timing (from either 5 or 60 seconds after the start of a match, from any user with the same graphics device, from any map and any screen size and any graphics settings, etc - this is hopelessly imprecise and I should filter the data better once there's more of it). Red lines are medians, blue rectangles are upper/lower quartiles. Ideally, everyone would be at least at 60fps, which is 16msec/frame (i.e. almost the tick mark just above '10^1'). Anything worse than about 20fps (50msec/frame) is probably no fun to play. The data's currently too random to deduce anything, but at least it seems to be putting older/integrated devices on the left and newer ones on the right so it's not entirely random, and hopefully it'll work once there's more data.
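
    For reference, the per-device summary statistics in that chart are just medians and quartiles of the reported frame times. A minimal C++ sketch of that aggregation (the actual processing happens server-side; this naive nearest-rank version is only for illustration):

        // Median and upper/lower quartiles of reported 'render' times (msec).
        #include <algorithm>
        #include <vector>

        struct Quartiles { double lower, median, upper; };

        Quartiles ComputeQuartiles(std::vector<double> times)
        {
            // Assumes times is non-empty; nearest-rank picks are fine for a rough chart.
            std::sort(times.begin(), times.end());
            size_t n = times.size();
            return Quartiles{ times[n / 4], times[n / 2], times[(3 * n) / 4] };
        }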
  9. A little, but we probably won't make full use of them - there are some things which we could easily run on a second core but they'd probably only improve overall performance by 10% or so, and it would be really hard to get major improvements with the current engine design.
  10. 50% means it's using the whole of one core, which is what it's designed to do. Our rendering engine could benefit from lots of optimisations, so performance should improve significantly in future releases.
  11. They can catch goats - it just takes a long time. Goats only run once they're attacked, then stop soon after, so their hitpoints go down (by a tiny bit) every time someone catches up with them and hits them, and eventually they'll be defeated. r9012 just changed the behaviour towards skittish animals like gazelles, which run whenever someone is getting close (i.e. before anyone can get close enough to actually hurt them). (Probably the AI ought to use combat units to hunt anything larger than a chicken, and let the female citizens collect the meat afterwards, but that requires more sophistication than the AI is currently capable of.)
  12. Yeah, we ought to provide some options for turning it on in cinematics and for screenshots, and assume that the cinematographer won't make the horizon at the edge of the map visible since that'll always be potentially ugly. (We can make it rarer for players to see the edges during normal gameplay, by adding large margins of terrain around the usable map area, but we can probably never make it impossible, so I think we'll still need to keep the skybox disabled by default to draw black beyond the edges.)
  13. (For people who can't see the old private thread, it's about doing global illumination like this.) It wouldn't have a major effect on that, but it would help a bit (same as proposal #4). To be specific: if we added that lighting, I imagine we'd keep the current sharing of diffuse textures between buildings, and add a secondary low-res lighting texture with a secondary UV coordinate set per building. (The alternative is for each building to have its own diffuse texture with the lighting baked in, with no sharing between models or between polygons on the same model, which sounds a lot more expensive in memory usage (though I have nothing quantitative).)

    Currently buildings are rendered as something like "(highlightcolour*diffuse*a + playercolour*(1-a)) * shadow + ambient", which needs four steps. As mentioned in some other thread I'd like to do smooth fog-of-war shading on buildings, so it'd be like "(highlightcolour*diffuse*a + playercolour*(1-a)) * shadow * fow + ambient", which is five. If we added an extra layer of lighting, it'd be like "(highlightcolour*diffuse*a + playercolour*(1-a)) * lighting * shadow * fow + ambient", which is six. Specular would add another, normal mapping would add more, etc. (We may never need all those things but we'll probably want at least some.) With traditional non-shader-based multitexturing we can only do four steps at once, so for anything more complex we have to do half the work and then render everything again in a second pass to do the rest of the work, which is a bit slow and pretty awkward and restrictive. With proposal #4/#5, this becomes relatively trivial - you just write out the whole equation and it'll work fine. There'd still be other challenges with global illumination (doing all the secondary UV unwrapping and computing the lighting and helping modders cope with the new features), but it would become easier to efficiently implement the renderer code to handle it.

    Incidentally, I've gone off proposal #5 now: it doesn't help anything except the old R300 drivers (it'd still be incompatible with GeForce 3/4, and everything else supports GLSL) and they wouldn't have shadows (since R300 doesn't have GL_ARB_fragment_program_shadow), so it seems like minimal gain for the effort. Maybe better to keep a stripped-down minimal non-shader-based path (no shadows, no smooth fog-of-war on buildings, ugly water, etc) for maximum compatibility, and prefer a fully GLSL-based path with all the effects - that'd be less effort than trying to keep all the features working in the non-shader-based path, and would allow nicer better-performance graphics on almost all systems. I'm very indecisive, though.
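
    To make that six-step equation concrete, here's a rough sketch of what it might look like as a GLSL fragment shader embedded in a C++ string. This is purely illustrative - the uniform/varying names are made up and it's not the engine's actual shader code:

        // Hypothetical per-pixel combine for buildings:
        // (highlightcolour*diffuse*a + playercolour*(1-a)) * lighting * shadow * fow + ambient
        static const char* const BUILDING_FRAGMENT_SHADER = R"GLSL(
            uniform sampler2D diffuseTex;       // shared diffuse texture; alpha = player-colour mask
            uniform sampler2D lightingTex;      // baked low-res lighting (ambient occlusion) texture
            uniform sampler2DShadow shadowTex;  // shadow map
            uniform vec3 highlightColour;
            uniform vec3 playerColour;
            uniform vec3 ambientColour;
            uniform float fow;                  // smooth fog-of-war factor (0..1)
            varying vec2 uvDiffuse;
            varying vec2 uvLighting;            // secondary UV set for the lighting texture
            varying vec4 shadowCoord;

            void main()
            {
                vec4 diffuse = texture2D(diffuseTex, uvDiffuse);
                float a = diffuse.a;
                vec3 base = highlightColour * diffuse.rgb * a + playerColour * (1.0 - a);
                float shadow = shadow2DProj(shadowTex, shadowCoord).r;
                vec3 lighting = texture2D(lightingTex, uvLighting).rgb;
                gl_FragColor = vec4(base * lighting * shadow * fow + ambientColour, 1.0);
            }
        )GLSL";

    Each extra effect (specular, normal mapping, distance fog) is then just another term in that expression rather than another multitexturing pass.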
  14. The current data (from ~100 users, via SVN and some dev-snapshot Linux packages) shows that GL_ARB_vertex_shader/GL_ARB_fragment_shader are primarily just missing on Intel 945G (as expected) and Mesa R300 (the classic (pre-Gallium) Linux driver for Radeons older than the HD 2000/3000 series (released around 2007)). I wasn't previously aware of that R300 limitation - it sounds like a more serious compatibility problem than the 945G (which is too slow to work anyway), depending on how widespread it is (which we can find out when collecting more data from the next release). On the other hand, 100% of users so far have support for GL_ARB_vertex_program/GL_ARB_fragment_program (the old non-GLSL, assembly-based syntax). That's still more powerful than standard multitexturing, and it's got to be less painful to write and maintain. So:

    Proposal #5: Always require GL_ARB_vertex_program and GL_ARB_fragment_program. We could incrementally rewrite all the existing multitexturing code to be far simpler and cleaner and more flexible. We could rely on more advanced features: multitexturing is limited to 4 'instructions' and one texture per instruction, but ARB_*_program allows lots (even 945G allows 96 instructions and 8 simultaneous textures), so we could do more complex graphical effects (diffuse plus player-colour plus specular plus lighting plus shadows plus fog-of-war plus distance-fog etc) without the complexity and performance cost of multi-pass rendering. (We're pushing the 4-texture limit already, so adding more graphical effects will result in larger slowdowns if we don't switch to programmable shaders than if we do.)

    Compared to GLSL, these extensions are not part of the GL standard (so some future drivers might theoretically drop compatibility, though that's unlikely, and OpenGL ES mobile devices probably don't support them), but they seem almost universally implemented on the desktop. The syntax is much uglier than GLSL, but if that's a problem then for complex shaders we could write in GLSL and use NVIDIA's Cg compiler to convert them to assembly syntax. (We don't want to use Cg at runtime because it's heavy and not open source, but we can run it offline and stick the output in SVN.) I think we'd still want GLSL for more complex effects (since it's a more powerful language), so we'd probably have to add a shader abstraction into the engine which lets us switch easily between ARB_*_program and ARB_*_shader, which I assume wouldn't be much of a problem - the language syntaxes and APIs differ but the concepts are fundamentally the same.
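
    A minimal C++ sketch of what that shader abstraction might look like (hypothetical names, not the engine's actual API - each backend would wrap the relevant GL calls):

        #include <memory>
        #include <string>

        // One interface the renderer talks to; one backend per shader system.
        class IShaderProgram
        {
        public:
            virtual ~IShaderProgram() {}
            virtual void Bind() = 0;
            virtual void SetUniform(const char* name, float x, float y, float z, float w) = 0;
        };

        // Backend for GL_ARB_vertex_program / GL_ARB_fragment_program (assembly syntax).
        class ArbShaderProgram : public IShaderProgram
        {
        public:
            explicit ArbShaderProgram(const std::string& /*path*/) { /* glGenProgramsARB, glProgramStringARB, ... */ }
            void Bind() { /* glBindProgramARB(...) */ }
            void SetUniform(const char*, float, float, float, float) { /* glProgramLocalParameter4fARB(...) */ }
        };

        // Backend for GL_ARB_vertex_shader / GL_ARB_fragment_shader (GLSL).
        class GlslShaderProgram : public IShaderProgram
        {
        public:
            explicit GlslShaderProgram(const std::string& /*path*/) { /* glCreateProgram, glCompileShader, ... */ }
            void Bind() { /* glUseProgram(...) */ }
            void SetUniform(const char*, float, float, float, float) { /* glUniform4f(...) */ }
        };

        std::unique_ptr<IShaderProgram> LoadShader(const std::string& path, bool preferGlsl)
        {
            // Switching between the two extension families is a loading-time decision,
            // invisible to the rest of the renderer.
            if (preferGlsl)
                return std::unique_ptr<IShaderProgram>(new GlslShaderProgram(path));
            return std::unique_ptr<IShaderProgram>(new ArbShaderProgram(path));
        }

    The renderer then just asks for a named shader and sets uniforms, regardless of which backend compiled it.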
  15. You need to install libcurl (probably package libcurl4-openssl-dev on Debian, or something like that).
  16. Either that or wait for the next release (which hopefully will be within the next couple of weeks or so), or maybe click "continue" when it shows that error message, since it probably won't break too seriously. (Actually, you might be able to use it from SVN without compiling it - just install TortoiseSVN and check out the code, then run binaries\system\pyrogenesis.exe. That .exe is sometimes a bit outdated but most days it should work.)
  17. There isn't any debug code that has a significant effect, except for out-of-sync checking in multiplayer games which takes ~20ms and currently runs every 500ms.
  18. See the build instructions if you want to compile it yourself.
  19. Are you using the alpha 3 release? There are some changes in the current SVN development code which may help with this, so the next alpha release should hopefully work a bit better.
  20. Based on this replay log, I've tried doing a bit more profiling and optimisation, to see what gains can be had without any changes to behaviour. (It'd be easy to e.g. run the AI scripts less frequently so they take less time, but I'd prefer to make them faster before making them less responsive.) Running in simulation replay mode (i.e. no graphics) I get the following average times per turn, over the 15212-turn run (on a 2.16GHz Core 2 on 64-bit Linux):

    * 24.8 msec/turn in r8999.
    * 22.4 msec/turn in r9000 (improved long-range pathfinder).
    * 21.0 msec/turn in r9001 (improved obstruction grid computation).
    * 18.2 msec/turn in r9001 plus the SpiderMonkey method-JIT.
    * 16.7 msec/turn in r9001 plus the JIT plus various AI-related optimisations (not committed yet).

    Annoyingly there are no major bottlenecks that can be fixed for massive speedups, just a series of things that each give ~10% improvement. The current time seems to be spent very roughly ~20% in CCmpUnitMotion::Move (which does collision-detection for every movement, and also triggers UnitAI and AIProxy scripts by updating unit positions, and does some other stuff too), 10-20% in CCmpRangeManager::ExecuteQuery (finding enemy units in range), 5-10% in CCmpRangeManager::ExecuteActiveQueries (also finding enemy units in range), ~10% in AI scripts (sometimes rising to around 20% for short sustained periods), and a load of things each under 5%. Pathfinding (both long-range and short-range) averages in the <5% region, but has occasional large spikes.

    Some likely conclusions:

    * Improving the average performance is hard. Replacing some code with a clever super-efficient zero-time algorithm can never save more than about 10% of the simulation cost - the only way to make significant progress is to make lots of incremental small improvements.
    * Multithreading certain bits of code (AI scripts, pathfinding) won't really help average performance on multi-core processors - they'll just save 5-10% at best.
    * On average, in this kind of scenario, things are already fast enough - at the normal rate of 5 turns per second it's using <10% of the CPU for all the simulation code, which is fine.
    * Simulation performance isn't the most important thing - graphics performance and network latency and out-of-sync checks probably matter more for players.
    * Average performance isn't important (except for replay mode, and time-warp mode, etc). What makes players unhappy is jerky framerates caused by worst-case performance spikes, so I should probably focus more on that. The pathfinders and AI scripts are responsible for most spikes, and multithreading would help smooth those out over multiple frames. Better pathfinding algorithms would help with the worst spikes. None of that will be totally straightforward to implement, though.
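
    (To spell out the "<10% of the CPU" figure: at 5 turns per second and ~16.7 msec per turn, the simulation uses about 5 x 16.7 = ~84 msec of CPU time per second of gameplay, i.e. roughly 8% of one core.)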
  21. Performance is probably the biggest problem - doubling the map's width and height means it takes 4x as much RAM and disk space, and certain algorithms that process every tile (AI scripts, pathfinding-related stuff, rendering, etc) will take 4x longer, and you'll probably need 4x as many trees and rocks etc so that'll all go slower too. Designing the engine to work independently of the size of the map (as is necessary in e.g. MMORPGs) would add significant complexity, so we've assumed the maps are small enough to load and process all at once. There are a few strict restrictions because of other assumptions the engine makes, e.g. if it's larger than 724x724 tiles then the pathfinder will complain of integer overflows, and position datatypes will likely run out of bits at 5792x5792 tiles (or probably much earlier), etc. I think currently the Median Oasis map is 256x256 tiles, so you'd have to be quite a lot bigger before hitting those limits.
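
    (As a worked example of that quadratic scaling: Median Oasis at 256x256 is 65,536 tiles, while a map doubled to 512x512 would be 262,144 tiles - four times as many, with roughly four times the memory, processing and object counts to match.)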
  22. Can you reproduce the error? I don't see how it could happen, and it doesn't look AI related. (Might need to ask Jan.)
  23. We should implement terrain decals, so that things like farms and foundations can use textures that follow the terrain. Also we should limit the maximum terrain height variation when you're placing a building, so you can never build it on a steep hill and make it look silly. Also we probably should automatically flatten the terrain underneath non-terrain-decal buildings once you do place them, to smooth out small variations.

    No, that would be cheating. I think we should try to make fair AIs that are as good as possible, before giving up and giving them artificial gameplay advantages.

    I think I fixed the errors, but we should change the default map to something where the AI can do a slightly-less-terrible job. Suggestions?
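
    A minimal C++ sketch of the kind of flattening described in the first paragraph - hypothetical names, assuming a simple per-tile heightmap and an axis-aligned footprint:

        #include <vector>

        // Snap the tiles under a building's footprint to their average height,
        // smoothing out small variations once the building is placed.
        void FlattenUnderFootprint(std::vector<float>& heightmap, int mapW,
                                   int x0, int y0, int x1, int y1)
        {
            float sum = 0.f;
            int count = 0;
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x)
                {
                    sum += heightmap[y * mapW + x];
                    ++count;
                }
            if (count == 0)
                return;
            float avg = sum / count;
            for (int y = y0; y <= y1; ++y)
                for (int x = x0; x <= x1; ++x)
                    heightmap[y * mapW + x] = avg;
        }

    (The real thing would also need to blend the surrounding tiles so there's no sharp step at the footprint edge, but the idea is the same.)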
  24. From what I've read, SupCom didn't support random maps at all - is that right? I think we really want to support that eventually (since it can help a lot with replayability) so we need a purely algorithmic solution, and we can use that solution for hand-made maps too and save the designer some seemingly error-prone work. (Also I think we really want to support less experienced designers, since we like modding, and (unlike commercial games) we can't expect employees to spend a year learning how to make good maps for our engine.)

    I think there are actually two largely-separate things here: pathfinding and terrain analysis (briefly described here). The latter involves identifying bases and islands and shores and forests etc, and finding useful choke-points and ambush sites and locations to launch attacks from, to help the AI scripts make decisions about what to do and where to go. That information isn't used at all for human players - humans and AIs both share the pathfinding algorithms to get from A to B, but only AIs need this extra information in order to decide what "B" should be.

    It sounds like a lot of that SupCom stuff is the terrain analysis, but the map designer is forced to do all the work. So I think we need an algorithm to do this terrain analysis, though once we've got it it might be possible to give map designers optional control so they can mark in extra ambush sites or forward base locations or whatever. That probably depends on the details of the algorithm, and currently I have no idea how it'd work in detail, but it sounds like a good thing to keep in mind.
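
    As one very small example of the kind of terrain analysis meant here - hypothetical code, not the engine's - labelling connected passable regions ("islands") by flood-filling the passability grid, so an AI script could ask whether two points are reachable without transports:

        #include <cstdint>
        #include <queue>
        #include <vector>

        // Label 4-connected passable regions; 0 = impassable/unlabelled.
        std::vector<uint16_t> LabelRegions(const std::vector<bool>& passable, int w, int h)
        {
            std::vector<uint16_t> region(w * h, 0);
            uint16_t next = 1;
            for (int start = 0; start < w * h; ++start)
            {
                if (!passable[start] || region[start] != 0)
                    continue;
                std::queue<int> open;     // breadth-first flood fill from this tile
                open.push(start);
                region[start] = next;
                while (!open.empty())
                {
                    int i = open.front(); open.pop();
                    int x = i % w, y = i / w;
                    const int dx[4] = { 1, -1, 0, 0 };
                    const int dy[4] = { 0, 0, 1, -1 };
                    for (int d = 0; d < 4; ++d)
                    {
                        int nx = x + dx[d], ny = y + dy[d];
                        if (nx < 0 || ny < 0 || nx >= w || ny >= h)
                            continue;
                        int j = ny * w + nx;
                        if (passable[j] && region[j] == 0)
                        {
                            region[j] = next;
                            open.push(j);
                        }
                    }
                }
                ++next;
            }
            return region;
        }

    Choke-points, shorelines, forests and so on would need further passes over grids like this, but they'd build on the same sort of per-tile labelling.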