Everything posted by Ykkrosh

  1. I experimented a bit with using the terrain blending system for rendering fog-of-war, which looks like this (though it's a hacky prototype so I won't commit the code). It's very reminiscent of AoK's fog, and more stylised/cartoonish than the AoM/AoE3 wide blur. (We could use more or less blurry blend textures easily, but it'll still be strongly tile-based.) Is this a good artistic approach, or a terrible one?
  2. Committed some code now. See e.g. old system, particularly the bit on the right which mixes high-priority and low-priority white tiles and looks a mess, vs the new system which handles that much more smoothly - I believe it should always behave reliably now (though please point out any counterexamples). The remaining ugliness seems to be down to the blend textures, so I made some new ones (based on some old ones by CheeZy - I took the corner and edge blends, fixed the edge one to tile better, then rotated and combined them to produce all the other combinations, then tweaked some of the combinations by hand so they're better at diagonal lines).
  3. Yes: Someone on IRC (skl_) reported the same error about six weeks ago, on Vista, but I didn't get any more information than that.
  4. One thing we have planned is that if you construct a building on uneven terrain, the terrain should get automatically flattened (otherwise the buildings look like they're partly floating, which is silly). But more extensive terrain alteration sounds more complex to do well (we'd probably have to deal with changing terrain textures, and handle units that drown themselves, etc).
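A minimal sketch of the flattening idea, assuming a simple per-vertex heightmap and an axis-aligned footprint (all names here are made up for illustration, not engine code):

```javascript
// Hypothetical sketch: flatten the heightmap vertexes under a building's
// footprint to their average height, so the model sits flush with the terrain.
function flattenUnderFootprint(heightmap, i0, j0, i1, j1) {
    // heightmap is a 2D array of vertex heights; (i0,j0)-(i1,j1) is the
    // inclusive range of vertexes covered by the building's footprint.
    let sum = 0, count = 0;
    for (let j = j0; j <= j1; ++j)
        for (let i = i0; i <= i1; ++i) {
            sum += heightmap[j][i];
            ++count;
        }
    const avg = sum / count;
    for (let j = j0; j <= j1; ++j)
        for (let i = i0; i <= i1; ++i)
            heightmap[j][i] = avg;
}
```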
  5. binaries/data/mods/public/simulation/components/Formation.js computes the shape of the formation, and tells the individual units to all follow it (via the FormationWalk order).
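For illustration, a box-formation offset calculation might look something like this (hypothetical names; the real Formation.js supports more shapes and rotates the offsets by the controller's facing):

```javascript
// Illustrative sketch (not the actual Formation.js logic): compute an offset
// for each unit slot in a simple box formation, relative to the formation
// controller's position.
function computeBoxOffsets(numUnits, spacing) {
    const cols = Math.ceil(Math.sqrt(numUnits));
    const rows = Math.ceil(numUnits / cols);
    const offsets = [];
    for (let n = 0; n < numUnits; ++n) {
        const col = n % cols;
        const row = Math.floor(n / cols);
        offsets.push({
            x: (col - (cols - 1) / 2) * spacing,
            z: (row - (rows - 1) / 2) * spacing,
        });
    }
    return offsets;
}
// Each unit would then get a FormationWalk-style order to move to
// (controller position + rotated offset for its slot).
```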
  6. Of the (4n)^2 potential line segments, any that intersect an edge are not valid (since they'll involve moving through another unit/building), so the visibility tests exclude them from the visibility graph that the A* searches. I think the behaviour of units moving into formations is primarily a gameplay design issue - the first step is to work out what players would want their units to do, and then implement it and optimise it (or throw the design out if it's impossible to implement efficiently enough), rather than starting with performance concerns. (The currently-implemented design is definitely not great for players, so I agree it should be improved, but I don't have any particular ideas of how.) If the avoid-moving-units flag is set then paths will consider moving units to be obstructions at their current positions (so the path will go around them, though it'll probably be invalid once they've actually moved); otherwise it will ignore them entirely, and only path around stationary units (and buildings and terrain). It'd probably be good to analyse what scenarios the current design fails badly at (e.g. two large groups of units moving through each other? two single-file columns moving through each other? one group moving past a wandering pig? large melee battles?) and see what could improve those - I don't really know yet what the problems are that need solving (though I'm sure there are some). The tile-based pathfinder is only an approximation, since it ignores units entirely, and since buildings aren't aligned to tiles and we don't want to be too conservative in blocking all the tiles around their edges (since that'd confuse units trying to walk through cities), and buildings might have been constructed (or gates closed) since the path was computed. It's not a strict over-approximation or under-approximation of passability, so the vertex pathfinder is responsible for dealing with any discrepancies.
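To make the edge-exclusion step concrete, here's a rough sketch of the kind of test involved (hypothetical helpers, not the engine's implementation): a candidate visibility-graph arc between two vertexes is kept only if it crosses none of the obstruction edges:

```javascript
// 2D cross product of (a-o) and (b-o); sign gives the orientation of b
// relative to the line o->a.
function cross(o, a, b) {
    return (a.x - o.x) * (b.z - o.z) - (a.z - o.z) * (b.x - o.x);
}

// Proper segment intersection test via orientation signs (ignores the
// collinear/touching special cases for brevity).
function segmentsIntersect(p1, p2, q1, q2) {
    const d1 = cross(q1, q2, p1);
    const d2 = cross(q1, q2, p2);
    const d3 = cross(p1, p2, q1);
    const d4 = cross(p1, p2, q2);
    return ((d1 > 0) !== (d2 > 0)) && ((d3 > 0) !== (d4 > 0));
}

// A candidate arc a->b is only valid if it crosses no obstruction edge.
function isVisible(a, b, edges) {
    return edges.every(e => !segmentsIntersect(a, b, e.p0, e.p1));
}
```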
  7. The moving unit is treated as a point. The geometry used by the pathfinder is all the other units' squares, expanded outwards by the radius (/half the square width) of the moving unit, so a zero-width path through that expanded geometry corresponds to a path where the non-zero-width units won't collide. The (4n)^2 is because the shortest path through the geometry will be a series of straight line segments between vertexes of squares (or between them and the special start/end points, so actually it's more like 4n+2 vertexes) - if it wasn't wrapped tightly around vertexes then it wouldn't be the shortest path - and there's 4n vertexes so (4n)^2 potential line segments that could be part of the path. In theory it's fine to skip the vertex on the far corner of a square; and you can also skip the vertex on the near corner, because the shortest path going around the square will always be directly via one of the two silhouette corners. But I think I tried implementing it (as part of the quadrant code that's still in there) and it didn't work too well in practice - it's common for squares to be overlapping or for the path to start or end inside a square (partly due to numerical imprecision), and if you exclude some vertexes then sometimes you get stupid paths (e.g. a unit walks to a tree a dozen tiles away then walks straight back), and I couldn't find any way to make it work acceptably. (The current less-powerful quadrant code still has that problem a little, and it's pretty annoying.)
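A small sketch of that expansion for the axis-aligned case (made-up names, just to illustrate why the point-vs-expanded-geometry trick works):

```javascript
// Hypothetical sketch: grow an obstruction square by the moving unit's
// half-width (its clearance), and return the four vertexes the vertex
// pathfinder would consider.
function expandedVertexes(square, clearance) {
    const r = square.halfWidth + clearance;
    return [
        { x: square.x - r, z: square.z - r },
        { x: square.x + r, z: square.z - r },
        { x: square.x + r, z: square.z + r },
        { x: square.x - r, z: square.z + r },
    ];
}
// A zero-width path threaded between these expanded squares corresponds to a
// collision-free path for the full-width unit.
```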
  8. If the game was installed as an admin (e.g. to C:\Program Files) then non-admin users wouldn't be able to write there, so for consistency we always assume the installation directory is read-only. The forum index page says "We have 8,345 registered members", though a lot of them might be spammers.
  9. Axis-aligned just means the shape is lined up with the world's horizontal axes (X and Z). Units are treated as axis-aligned squares since that's a good enough approximation, but buildings can be arbitrarily rotated. The vertex pathfinding uses A*, but it's running A* over a visibility graph (not over a 2D tile grid), where a visibility graph has an arc between every pair of geometry vertexes (corners of squares) that doesn't cross a geometry edge. The visibility graph generation is mainly what's slow. Every unit is a square so it has 4 vertexes and 4 edges, so N units means (4*N)^2 vertex-pairs to consider, and each vertex-pair is tested against the 4*N edges to ensure unobstructed visibility, so that's (4*N)^3 vertex-pair-edge tests, so it scales poorly as N increases. (There's lots of optimisations so it's usually not that bad in practice (e.g. the graph is generated on-demand inside the A* loop instead of being pre-generated), but the asymptotic complexity will still be O(N^3). The worst case is when the goal is unreachable (e.g. in the middle of a building) because then it'll search the entire visibility graph before giving up and realising there's no way, and that's quite common when telling a formation to walk alongside a building.) I'm not sure what more detail there is for the formation system. It's just an invisible 'controller' entity moving around like a normal unit, then every unit in the formation pathfinds towards that controller's position plus an offset (depending on their slot in the formation). If they can reach it in a straight line then they do that cheaply, otherwise they run the pathfinding algorithm to catch up with the formation and then look again. Units in the same formation won't block each other's movement, so they can walk through each other and get into position easily. There isn't really any collision avoidance. Usually the pathfinder ignores non-stationary units (assuming they'll probably move before we reach them), and if UnitMotion detects the unit will collide with something while executing the next step in the path (via cmpPathfinder->CheckMovement) then it will stop moving and restart the pathfinder with the 'avoid moving units' flag so it's more likely to find a valid path. I've never really understood this article - it uses a lot of words and diagrams to say how to efficiently predict positions (which seems fairly obvious) and then says almost nothing about how to use the predictions. I guess the idea is that if two units are travelling perpendicularly then instead of everyone stopping and recomputing paths, you can just stop one a few turns earlier and tell it to stay out of the way while the other can carry on its original path, or something like that. That doesn't seem to be what AoM/AoE3 do (I think they're largely indistinguishable from what we currently implement) but I haven't played AoK to see if that's doing something like this.
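A rough sketch of that collision-check-and-repath behaviour (hypothetical names and structure, not the actual UnitMotion code):

```javascript
// Hypothetical sketch of executing the next step of an already-computed path.
function advanceAlongPath(unit, path, pathfinder) {
    const next = path.waypoints[0];
    if (pathfinder.checkMovement(unit.pos, next, unit.clearance)) {
        unit.moveTowards(next);          // next step is clear: keep going
    } else {
        unit.stop();                     // would collide with something
        // Recompute, this time treating moving units as obstructions at
        // their current positions (the 'avoid moving units' flag).
        pathfinder.requestPath(unit, path.goal, { avoidMovingUnits: true });
    }
}
```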
  10. Green ellipses are for components implemented in C++, black ellipses for components implemented in JavaScript. (source/tools/cmpgraph/ is what generates this.)
  11. I think the user profile is the only place the user is guaranteed to be able to write files (especially on Vista+ where users aren't admins), and Application Data is the place intended for applications to store their data. (We ought to be storing things like screenshots in different locations where they're easier to find, though.) Counting developers is hard - there's something like 30 people over the years who've written code that's in the game but some wrote a thousand times less code than others and there's only been a few people active at a time, so it depends on what you want to count, and I have no idea how many people have been involved with art and other activities. Counting users is also tricky but the download stats report about 60K downloads, plus there's downloads before we started using SourceForge and also SVN and an unmeasurable-but-quite-large number of Linux package downloads. So it's probably something on the order of 100K downloads.
  12. Quick prototype is here (should at least work in Firefox 3.6 and Opera 11). The first image is the input data (each tile has a texture (colour) plus priority). The second is what the current engine code implements, with obvious blending bugs. (The tiny coloured squares in a tile represent the neighbours that are blended into this tile). The third is my new approach to fix those bugs. The smaller images after it are the blending layers that will be drawn on top of each other. (In the current engine there's one blending layer per texture; in the new one there might be more.) (The code in this prototype is disgusting and inefficient - it should be cleaner when done properly in C++.) Hmm, I'm becoming more doubtful there is one. The problem is: Tile 1 blends an R texture on top of G (on top of the base texture B), so say G1 < R1. Tile 2 blends G over R over G, so G2 < R2 < G2'. Tile 3 has R3 < G3. We want a total order that respects those per-tile orders, so it could be: [G1, R1, G2, R2, G2', R3, G3] or [G1, G2, R1, R2, R3, G2', G3] or [R3, G1, G2, G3, R1, R2, G2'] etc. We want to minimise the number of times we switch texture when rendering in this order, so the first is very bad (we switch G->R->G->R->G->R->G) and the second is best (we switch G->R->G) and third is a bit worse (R->G->R->G). It's easy to make an algorithm produce either the second or third orders: Pick a blend off the bottom of a random per-tile stack (say G1), then continue picking off blends of the same texture from any stack (in this case G2), until we run out of blends of that texture, then pick another random stack (say R1) and continue until finished. But if you make an unlucky random choice (say R3 first) you'll get sub-optimal behaviour. (The demo page demonstrates this - the 3rd layer could be merged into the 5th.) I guess there are some heuristics that would improve matters (prefer to pick from the largest tile-stack, perhaps?) but I don't see a perfect solution. But hopefully perfection doesn't really matter in practice.
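A sketch of that greedy ordering, assuming each tile's blends are given as a bottom-first stack (made-up data layout; the largest-stack heuristic is just the one suggested above and isn't guaranteed optimal):

```javascript
// Hypothetical sketch: stacks is e.g. [['G','R'], ['G','R','G'], ['R','G']],
// each listed bottom-first.  We repeatedly pop blends of the current texture
// off the bottom of any stack, and only switch texture when forced to,
// preferring the largest remaining stack when choosing the next texture.
function orderBlends(stacks) {
    const order = [];
    let current = null;
    while (stacks.some(s => s.length > 0)) {
        let progressed = false;
        // Drain everything of the current texture that's available at the
        // bottom of some stack.
        for (const s of stacks) {
            while (s.length > 0 && s[0] === current) {
                order.push(s.shift());
                progressed = true;
            }
        }
        if (!progressed) {
            // Forced to switch texture: take the bottom of the largest
            // remaining stack (a heuristic; an unlucky pick can still cost
            // extra texture switches).
            const s = stacks.filter(s => s.length > 0)
                            .sort((a, b) => b.length - a.length)[0];
            current = s[0];
        }
    }
    return order;
}
```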
  13. Hmm, I had a look at this again (inspired by AoK terrain discussions elsewhere), and tried doing a quick prototype in JS. I found something that seems to fix the terrain-blending glitches - I'm not sure it's perfect but I just want to write it down before I forget: In the current implementation, for each tile, we look at the 8 surrounding tiles to find ones that are a different texture and a higher priority (i.e. that will be blended into this tile). For each distinct texture used by those higher-priority neighbours, we find all the neighbours using that texture and assume they're all part of the same splat and pick the appropriate blend shape. That is, we treat tiles as equivalent (for the purposes of splatting) if they share a texture, regardless of priority. What works better: For each tile, consider the texture+priority of itself plus the 8 neighbours. Two tiles (t1, p1) and (t2, p2) are equivalent if t1 = t2 and there is no other tile (t3, p3) such that t3 != t1 and (p1, t1) <= (p3, t3) <= (p2, t2) (using a lexicographic order). That means splatting won't violate priorities - two tiles won't be in the same splat if there's an adjacent differently-textured tile between them in the priority order. Then for each equivalence class of higher priority than the current tile, we find all the neighbours in that class and pick the blend shape as before. (Then we need to reconstruct large splats from all these individual tile blends (for polygon batching) without violating any orderings, but that doesn't sound too tricky - we have a partial order over tile-blends so we just do a topological sort to get a total order and try to arrange it so tile-blends with the same texture will be adjacent (is there an optimal algorithm for that?), and batch adjacent same-texture blends.) Incidentally, our blend textures are a bit broken, e.g. blendtwoedges.dds and blendlshapecorner.dds don't line up properly with each other, so it'll never look right until we fix those. Also incidentally, I guess it'd be nice to support multiple blend texture sets, e.g. roads will probably want a much sharper transition than sand. That's easy if the blend texture is solely dependent on the higher-priority texture being blended (not based on the pair of textures involved in the blending - sand over road will be the same as sand over grass etc) so we should probably do that at some point.
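To pin down the equivalence test, a small sketch (hypothetical representation of tiles as {t, p} texture/priority pairs, using the lexicographic (priority, texture) order described above):

```javascript
// Lexicographic comparison on (priority, texture).
function lessEq(a, b) {
    return a.p < b.p || (a.p === b.p && a.t <= b.t);
}

// Two tiles are equivalent (for splatting) if they share a texture and no
// differently-textured tile from the 3x3 neighbourhood sits between them in
// the (priority, texture) order.  'tiles' is the current tile plus its 8
// neighbours.
function equivalent(a, b, tiles) {
    if (a.t !== b.t)
        return false;
    const [lo, hi] = lessEq(a, b) ? [a, b] : [b, a];
    return !tiles.some(c => c.t !== a.t && lessEq(lo, c) && lessEq(c, hi));
}
```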
  14. Rhino is written in Java, so that's not too useful for us since we're not going to port the whole game from C++ to Java. But we don't need continuations anyway - that's about the current state of execution (e.g. the function call stack and local variables), so you could serialise a program while it's half-way through running a function and then restore it and continue where you left off. We'll never need to serialise the AI while it's still running (since it'll stop running at the end of each turn), so it's a simpler problem (but still not a simple problem because of closures). (Incidentally, I mentioned SpiderMonkey's XDR earlier, but I've been told it can't actually serialise arbitrary data - it apparently just serialises compiled scripts, so it's used as an optimisation for parsing and can't really do any more than that.)
  15. Yes - see source/tools/atlas/AtlasUI/ for most of the code. (It probably won't be easy to edit if you don't have much prior experience with C++ and wxWidgets, though.)
  16. On a vaguely relevant note: http://arstechnica.com/gaming/news/2011/01...competition.ars is interesting as a non-technical discussion of RTS AI design. (But it does seem kind of cheating to focus on a single strategy of using flying mutalisk units (which don't get stuck on obstacles and which benefit from micromanagement speed that humans can't match), and not a good approach in situations like ours where the goal of AI is to help human players have fun.)
  17. Yes, as mentioned here. Run the game normally, find its commands.txt, then pass it to -replay. Does that do what you want? (The replay mode will also print a hash of the complete game state (every 100 turns and at the end) which is useful for verifying that your optimisations didn't change the behaviour of the algorithm.)
  18. There aren't any diagrams like that. (There's this for the simulation components (blue lines for components directly calling methods on others; red lines for components broadcasting messages that are indirectly received by others) but I'm not sure how much that really helps.) Some debug output (via debug_printf) goes to stdout (on Linux/OS X) or to the Windows OutputDebugString system (shown by the VS debugger, or by DebugView). Other logging (via LOG macros from ps/CLogger.h) goes to ~/.config/0ad/logs/mainlog.html or %appdata%\0ad\logs\mainlog.html. (But be careful about the VS debugger: it seems to make OutputDebugString extremely slow, which will mess up any profiling. I don't think DebugView has that problem.)
  19. I think it's close to the AoM approach now. simulation2/components/CCmpUnitMotion.cpp drives the pathfinder. If it can move in a short straight line to the target then it doesn't use the pathfinder at all. Otherwise it uses CCmpPathfinder_Tile.cpp first, which does a simple A* search over a tile representation of the map (based on terrain movement costs and water and buildings and other large obstructions) and returns a list of waypoints between adjacent tiles. Then UnitMotion repeatedly uses CCmpPathfinder_Vertex.cpp to find a precise shortest path to a nearby waypoint tile, based on the exact geometric shapes of buildings (rotated squares) and units (axis-aligned squares) in a small region around the source, so that movement isn't constrained to tiles. (That algorithm is something like O(N^4) in the number of obstacles which is why it has to be restricted to a short range; AoM seems to do the same, with a similar range to us.) (You can press shift+D in the game and toggle various checkboxes to enable visualisation of the pathfinder state.) Paths are all computed asynchronously - UnitMotion just registers a request for a path, then CCmpPathfinder::FinishAsyncRequests executes them all. (Eventually this should ideally be in a background thread but it doesn't seem worth the complexity yet - better to improve the serial performance before adding threads and making it impossible to measure.) There's no caching of paths (except to the extent that each unit's UnitMotion component remembers the tile/vertex paths it's currently moving along, so it doesn't recompute every turn). The Tile pathfinder usually seems fast enough anyway (it's not called very frequently, and it doesn't do anything algorithmically clever, but it's implemented to avoid e.g. dynamic memory allocation so it has low overhead). The Vertex pathfinder is often slow but I haven't seen a feasible way to do caching - little can be cached across simulation turns because the whole world might change in that time, and little can easily be shared between units in a single turn because they've all got different source and target positions and all have different sets of obstacles within their search range.
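A condensed sketch of that decision flow (hypothetical names; the real code splits this across UnitMotion and the pathfinder components, and processes the requests asynchronously in a batch rather than inline like this):

```javascript
// Hypothetical sketch: straight line if possible, otherwise a coarse tile
// path that gets refined piecewise by the short-range vertex pathfinder.
function planMovement(unit, goal, pathfinder) {
    if (pathfinder.checkMovement(unit.pos, goal, unit.clearance))
        return [goal];                                  // short straight line, no pathfinder

    const waypoints = pathfinder.computeTilePath(unit.pos, goal);  // coarse A* over tiles
    const nearTarget = waypoints[0];                    // a waypoint within short range
    // Precise path around the exact obstruction shapes near the unit:
    return pathfinder.computeShortPath(unit.pos, nearTarget, unit.clearance);
}
// UnitMotion repeats the computeShortPath step for each successive waypoint
// as the unit advances along the tile path.
```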
  20. The in-game profiler (press F11 to toggle, then press numbers to drill down) is in ps/Profile.cpp and ProfileViewer.cpp, originally based on "Real-Time Hierarchical Profiling" from GPG3. For much-lower-overhead in-game measurement there's also some macros in lib/timer.h - look for its comments on TIMER and TIMER_ACCRUE, which measure a block and dump the timings into Visual Studio's debug output window (on Windows) or stdout (on Linux/OS X). The timer implementation depends on OS - see lib/timer.cpp, which usually uses clock_gettime on non-Windows platforms and uses lib/sysdep/os/win/whrt/ on Windows. whrt picks one of many timers depending on what's considered accurate and safe. (docs/timing_pitfalls.pdf documents the rationale.) For more detailed profiling I usually use Valgrind's callgrind tool on Linux with KCachegrind, since it's nice and precise (it counts every instruction) and consistent, though not entirely representative of real low-level performance.
  21. It's probably a bit early to worry about the details of AI personalities now - need to focus on getting a single one basically working first, which'll take long enough. But the engine supports multiple independent AI scripts and lets players select them, and it should also be possible to tweak parameters within a single AI script (e.g. to make it more aggressive or more defensive, if the script was designed to support that), so it'd be good to add some variety like that eventually. Currently I'm working on making it not incredibly slow - the previously-committed code can take ~100ms to transfer entity data to the AI scripts every turn, which isn't good, so I'm improving that by only transferring the properties that have changed since the last turn, and adding more code on the AI side to decompress it back into the full entity state, and then exposing the state through some wrapper classes so you can say "this.entities.filter(function(ent) { return ent.isOwn; }).destroy();" etc.
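To illustrate the delta approach, a rough sketch of what the script-side decompression might look like (hypothetical data format, not the actual AI interface):

```javascript
// Hypothetical sketch: the engine sends only the properties that changed
// since last turn, and the AI script merges them into its cached full state.
function applyEntityDeltas(cachedEntities, deltas) {
    for (const d of deltas) {
        if (d.destroyed) {
            delete cachedEntities[d.id];               // entity no longer exists
            continue;
        }
        const ent = cachedEntities[d.id] || (cachedEntities[d.id] = { id: d.id });
        for (const prop in d.changed)
            ent[prop] = d.changed[prop];               // overwrite just the changed properties
    }
    return cachedEntities;
}
```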
  22. That's an interesting idea, though that map's probably not great as a realistic workload - it hits unusual scalability problems due to the units being so tightly spaced (e.g. the engine tells each unit about each other unit that's within attack range, and in this case every unit is within range of several hundred other units, so it spends a lot of time just processing lists of unit IDs), which will rarely come up in normal games. Ideally those problems should get fixed anyway, but they're less important for us than problems that always occur. And a large part of the trick of profiling is making sure you're profiling the right thing, not something that's easy to measure but unrelated to real-world performance, so it's probably better from an educational assignment perspective to stick with less artificial maps.
  23. I think the most useful thing for us would be improvements to known bottlenecks in the simulation code. From a rough profile graph (generated via source/tools/replayprofile/) the biggest spikes are usually caused by "ComputeShortPath" (which includes "A*"), in source/simulation2/components/CCmpPathfinder_Vertex.cpp. There's some other bits like "LosUpdateHelper" (CCmpRangeManager.cpp) and "Move" (CCmpUnitMotion.cpp, which includes LosUpdateHelper but also spends time in things like cmpPathfinder->CheckMovement, I think) which take up a noticeable portion of runtime. I've done a bit of work optimising those things, so hopefully the baseline performance shouldn't be terrible, but there should be plenty of room for improvement (both algorithmic and lower-level) with a bit more work. It's fairly straightforward to get repeatable real-world-ish performance measurements - play the game for a while (preferably on a complex map with lots of units in close proximity, since we want to optimise the worst-case situation), then look in the logs directory for sim_log/$PID/commands.txt, then run the game with the argument -replay=path/to/commands.txt and it'll run the simulation code (via ps/Replay.cpp) without any graphics or timer delays or randomness, so you can attach a profiler and see what's happening. (Incidentally, the current SVN version has severe performance problems because of unfinished AI work - hopefully that'll be fixed by February, but if not you might want to stick with the alpha 3 release of the code since that should have fewer silly issues.)
  24. I think the danger is the user interface challenge caused by using abstract indicators of progress (like experience points (especially when based on complex equations) that accumulate to unlock new features or ranks or badges), so players won't always understand what's going on - but we could avoid those problems entirely by not using those abstract concepts at all, and instead using something like a linear campaign structure or non-linear conquer-the-world mode (or some innovative new game mode) where there's a simple obvious relationship between winning a game and making progress towards a higher goal and unlocking access to new things. I'm still not sure the experience concept has significant benefits to balance the difficulties it introduces. It was okay for the first half an hour, since it was something new. But for the next half a dozen hours it was stupid and pointless and tedious, but the frequent tiny chance of a worthless reward compelled me to continue instead of doing something fun. People aren't rational, and they get addicted to things they know are bad for them, and game designers know enough psychology to trigger that behaviour. It's the designer's responsibility to use their power only for good, by enhancing things players want to do anyway rather than exploiting players to waste time on something unfulfilling. We're the game designers - saying what's fun is a critical part of our job. We design the gameplay mechanics based on a prediction of what players will find fun or not, and everything else we do is to encourage them to have the fun experience we designed: graphics, UI design, marketing, user manuals, psychological tricks like reward systems, etc, all directing the players towards enjoying the gameplay we built. Some people will naturally enjoy endless repetitive grinding, but as game designers we can decide that most of our target players won't. (Ideally we'd base that decision on research and testing, rather than just personal opinions.) We needn't do anything to discourage the first group from doing what they want, but if we used our tools to direct players towards grinding (e.g. by actively rewarding it in any way) then we'd also be directing all the other players towards the same thing (since our tools don't have the subtlety to distinguish these groups), so we'd be making our typical players have less fun. So I think we should be careful to avoid that.
  25. Committed some code, though it still needs a lot of work (splitting into multiple contexts, improving performance, serializing more flexibly, etc) which I'll carry on with. To test it, start a game and click the "c" button beside an unassigned player's name in the game setup screen, then select an AI from there. (The UI needs to be redesigned to be usable, once someone has a good idea for it.) Or use the command-line autostart mode, like "./pyrogenesis -quickstart -autostart=Oasis -autostart-ai=2:scaredybot" (where -autostart-ai specifies the player number and the AI to use for it (and if you say player 1 then you'll share manual control with the AI), and you can repeat -autostart-ai multiple times).