
Sylvain Gadrat

Community Members · 12 posts

  1. The current heuristic function already does a pretty decent job by avoiding square roots. Optimizing only the heuristic function has its limits: while it may be possible to improve it a little, cutting the current pathfinder's runtime by a factor of 16, enough to switch the current tiles to 4x4 smaller ones, seems unrealistic if we only change this function. And beware, sacrificing heuristic quality for speed can lead to far worse paths than ignoring terrain cost.

Note that this is just for calculating paths: camels will still be faster on sand than on grass. Simply, if a camel has to choose between a short path over a grass field and a long one over sand, it will always choose the shortest, even if the sandy one would be faster. Since maps are often made of large fields of grass/sand/water/... and, for long travels, we usually group different units in formation, I think such a change would go unnoticed.

The coarse-grid idea seems really imprecise, especially when dealing with impassable tiles. If there is a small gate in a wall, the big tile may be tagged as impassable, so the coarse path will go around the entire wall, and refinements will only calculate a more accurate path around the wall. To avoid that, the algorithm would be forced to check from which entrance of each big tile we can reach which exit... which sounds a lot like the HPA* presented by Philip.

Enough quoting, here are my own thoughts: having different terrain types and associated costs does not seem like a big deal when implementing HPA* (I don't have the time to explore HPA*, so I haven't tried to implement anything else after areas). What makes it difficult is that the cost calculation changes from one unit type to another. So instead of storing waypoints and one edge per waypoint pair, we have to store one edge per waypoint pair per unit type, making it hard to precompute sub-paths and forcing advanced tricks into a first-shot implementation.
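The "avoiding square roots" point can be illustrated with an octile-distance heuristic, a common square-root-free estimate for 8-connected grids (a generic sketch, not 0 A.D.'s actual heuristic code):

```python
def octile_heuristic(x0, y0, x1, y1, straight_cost=10, diagonal_cost=14):
    """Admissible distance estimate on an 8-connected grid, no sqrt needed.

    Costs are scaled integers (10 ~ 1.0, 14 ~ sqrt(2)) so the whole
    computation stays in cheap integer arithmetic.
    """
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    # Move diagonally min(dx, dy) steps, then straight for the remainder.
    return diagonal_cost * min(dx, dy) + straight_cost * abs(dx - dy)
```

For example, from (0, 0) to (3, 4) this gives 3 diagonal steps plus 1 straight step, 14*3 + 10*1 = 52, without ever computing a Euclidean length.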
One big feature that could be useful in the pathfinder is to correctly handle unit sizes when pathing. It could really help with formation pathing.
  2. Wooh! Impressive tool! Great work! Just as a reminder, please keep the "EngineProfiling" wiki page up to date when this is integrated; it is an awesome wiki page. While we're talking about optimization: in the first post you mention optimizing the pathfinder. Do you think your changes will impact the long pathfinder? I am still musing about implementing HPA*, though I terribly lack time. Note that this is just a question, not a request; your work is awesome and I lack time.
  3. Just throwing out an idea: a big clue that the path may need to be recomputed is whether the pathfinder modified its abstract maps. It can be checked by testing whether "CCmpPathfinder.m_Grid->m_DirtyID" changed since the last time you looked. Note that it is not perfect:
- It is not triggered; you need to check periodically.
- m_DirtyID is updated only when the pathfinder is asked to compute a path, so it can be unchanged while the line is invalid (until something asks for a path) (*)
- m_DirtyID can change due to a modification of the map that does not influence your path (*)

(*) Note: there is "CCmpObstructionManager.m_DirtyID", which is the same value except it is modified any time something changes on the map. But it is private, without an accessor.

And finally: I like the solution of recomputing only when the building is (re-)selected. It is simple and will work very well in practice. Except, maybe, if you like placing rally points in the middle of enemy territory and staring at the line for minutes. My humble opinion: all that stuff seems very hacky. KISS: do it on building selection.
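The dirty-ID polling idea can be sketched like this (hypothetical names such as `grid_dirty_id` and `compute_path` stand in for the real C++ members quoted above; this is not engine code):

```python
class RallyPointLine:
    """Caches a path, recomputing only when the map generation counter moves."""

    def __init__(self, pathfinder):
        self.pathfinder = pathfinder
        self.seen_dirty_id = None  # last generation we computed against
        self.path = None

    def update(self, start, goal):
        # Poll the pathfinder's grid generation counter; if it is unchanged,
        # the cached path is still (probably) valid and we skip the recompute.
        current = self.pathfinder.grid_dirty_id
        if self.path is None or current != self.seen_dirty_id:
            self.path = self.pathfinder.compute_path(start, goal)
            self.seen_dirty_id = current
        return self.path
```

This reproduces the caveats from the list above: `update` has to be called periodically (nothing triggers it), and any map change bumps the counter even when the cached line is unaffected.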
  4. That's because all tiles on the border of the area could be good exit points. Considering a unit travelling on sand and grass at the same cost, here is a picture with two acceptable paths (brown: common; blue: to the last blue dot; red: to the last red dot). Both paths travel from the same area to the same one, and go through the same areas, but exit the big sand area at different points. Maybe limiting transitions between areas to a single point, generating a high-level graph and running some kind of hierarchical A* could be a good enough approximation.
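The "high-level graph over transition points" idea amounts to a tiny graph search once the transitions are extracted (a generic sketch; how areas and transition points are built is assumed to happen elsewhere):

```python
import heapq

def abstract_search(edges, start, goal):
    """Dijkstra over a graph whose nodes are area-transition points.

    edges: dict mapping a node id to a list of (neighbour, cost) pairs.
    Returns the total cost from start to goal, or None if unreachable.
    """
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, cost in edges.get(node, ()):
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(heap, (nd, nxt))
    return None
```

With one transition point per area border, this abstract graph is tiny compared to the tile grid, at the price of the approximation discussed above (the chosen exit point may not be on the truly optimal path).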
  5. Thank you for the wiki page Philip, it is exactly the information I needed. So, I played a little more with the area optimization. While I still have to tune performance to be sure, I am beginning to have serious doubts about the viability of the solution. Let me explain why.

First, I have to correct the second graph of my first post, the one showing the tiles considered by the pathfinder. It presented the tiles that could be used by the pathfinder but, in the example, only the ones used to compute the brown path impact performance. Here's the graph with only the tiles really used.

The first goal of areas was to reduce the number of tiles considered by the pathfinder. Here we can see that, in the example, the current implementation uses 40 tiles versus 39 with areas. That's not really a big deal; actually, it is not an improvement at all.

With tile-based logic, A* processes only direct neighbours of the current tile, so the distance between the current tile and the next one is one tile: simple and easy. With area-based logic, A* can process arbitrary tiles, so we have to compute the distance between the current tile and the next one, and computing that distance is not trivial.

Briefly, it seems that areas do not greatly reduce the complexity of the pathfinding problem, but make each node harder to compute. In the next days I will:
1. Finish tweaking areas and try to find a magical trick that solves the problems outlined above.
2. Begin to think about implementing HAA*, which seems interesting.
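The per-node cost difference described above can be made concrete: on the grid the step cost to a neighbour is a constant, while with areas each successor is an arbitrary tile and needs a real distance computation (a sketch under those assumptions, not engine code):

```python
import math

def step_cost_grid(cost_class):
    # 4-connected grid: the successor is always exactly one tile away,
    # so the step cost is just the terrain cost class.
    return 1 * cost_class

def step_cost_area(x0, y0, x1, y1, cost_class):
    # Area-based successors can be anywhere on the area border, so each
    # expansion pays for a Euclidean distance (a sqrt per node).
    return math.hypot(x1 - x0, y1 - y0) * cost_class
```

So even when areas reduce the node count from 40 to 39, each of those 39 expansions is strictly more expensive than a grid expansion, which is exactly the problem.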
  6. Successfully built, ran and played at revision r9956 with boost 1.47. Thank you! Oops, sorry! Happy to see that you fixed it anyway.
  7. Attached to this post: the big output of "thread apply all bt full". I suppose the only interesting thread is the main thread (Thread 1), as pyrogenesis gets stuck in it. I compiled with "make CONFIG=Debug" but some values are still shown as "<optimized out>". And a clue that does not appear in the attached file:
* 82% of the time is spent in "CParticleVarUniform::Compute(...)" itself
* 15% of the time is spent in "boost::random::mersenne_twister_engine<...>::twist()" called by "Compute()"
* 3% in the other components of my system
gdb_output.txt
  8. Hello. I recently upgraded my system from boost 1.46 to boost 1.47. Thanks to SVN revisions r9905 and r9907 it compiles well, but gets stuck when starting a game. Downgrading to boost 1.46 fixes the problem; the compatibility patches don't hurt anything. I use ArchLinux's packaged boost version. The code seems to get stuck in the particle engine. I have no more information for now, but if needed I can later reproduce it and give logs, the exact stuck location, core files, ... Note: it is not impossible that the packaged boost is buggy; if so, sorry for the dummy bug report.

System:
* ArchLinux
* Nvidia's drivers

Steps to reproduce:
* Install all required dependencies for 0 A.D.
* "# pacman -Suy" (full update of the system, including boost 1.47)
* Build 0 A.D. from svn
* Start pyrogenesis
* Single player -> start game

Expected result:
* The game starts

Current result:
* The game shows the initial screen untextured, plus the cursor, and gets stuck
  9. OK, I succeeded in implementing a functional version; now I would like to profile it. Here are some questions about profiling. I saw a macro named "PROFILE( xyz )" which seems perfect, except that the results are not printed. Is there some documentation I missed that explains how to have such stats printed? Is there a way to start 0 A.D. without rendering? I suspect rendering disturbs callgrind and sysprof, and since I focus on pathfinding, I don't really need it. What are your habits for profiling? Your favourite tools?
  10. Hmm, it could be possible by creating special areas containing a grid wherever uniform areas are inefficient. That could improve the worst-case scenario, but it would be more complex to implement, so let's wait and see whether such a mix is needed before thinking of implementing it.

Talking about five tiles per area, I meant an average value obtained by dividing <number of tiles> by <number of areas>. Sorry if that was not clear.

Rectangles should work very well, thanks for pointing that out. My idea was that an area is useful only if it has a maximum of "inner tiles" (the ones skipped by A*), so avoiding rectangles with a height or width of 2 tiles was a priority. Writing (and re-reading) this post, it appears clear that one rectangle is better than a lot of little squares, both to save some memory and to be able to take diagonal paths.

Re-computing the entire map's areas is a little slower than computing the grid version, but nothing worrying in theory; although my current implementation lags on it, I am confident that with some optimization it will be good enough. If it turns out to be a real performance issue, when building or destroying something on the map we could just re-compute the impacted areas. It would not give a perfect result, but not far from very good in a really short time.

Yeah, thank you! Some good reading for this weekend. My primary concern is CPU time, but seeing that it can take five times more memory than the current implementation scares me a little. Good catch on minimizing cache misses; I'd never thought about it, I'll try to remember it and apply it cleverly.

Oh! Very interesting! Like I said, I have little to no experience, just knowing some concepts (notably some of the ones exposed in free entries of aigamedev), and I had never heard about HAA* before. It sounds like I will try to implement that just after being happy with (or definitely trashing) the implementation of uniform areas.
HAA* is more like a totally new algorithm, whereas uniform areas are just an optimization of the existing one; it may be too hard for my first experience with 0 A.D.'s internals. Let it be the second experience. If I understand well, in 0 A.D. all units are at most 1 tile wide (please correct me if I misunderstood), so the clearance stuff is not needed. But I saw that you talked about reducing the tile size. Such a change, without clearance, could turn the pathfinder's behaviour from always pessimistic to often optimistic. And an optimistic long pathfinder could lead to units being stuck by a little group of trees just large enough to make the short pathfinder give up, no? Or worse, trying to shortcut through an impassable forest instead of moving around it. I am sure I missed something; otherwise clearance is really important.
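The clearance annotation from HAA* can be sketched as a single precomputation pass: each tile stores the side of the largest fully passable square anchored at it, so a unit that is W tiles wide only expands tiles whose clearance is at least W (a generic sketch of the technique, not 0 A.D. code):

```python
def compute_clearance(passable):
    """passable: 2D list of bools (row-major). Returns a same-shape grid where
    clearance[y][x] is the side of the largest all-passable square whose
    top-left corner is (x, y); 0 for impassable tiles.
    """
    h, w = len(passable), len(passable[0])
    clearance = [[0] * w for _ in range(h)]
    # Dynamic programming from the bottom-right corner: a square of side k
    # fits here iff squares of side k-1 fit at the three neighbours below/right.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if not passable[y][x]:
                continue
            if x == w - 1 or y == h - 1:
                clearance[y][x] = 1
            else:
                clearance[y][x] = 1 + min(clearance[y + 1][x],
                                          clearance[y][x + 1],
                                          clearance[y + 1][x + 1])
    return clearance
```

With this annotation, a "pessimistic" small-tile grid stays safe: the big unit simply never enters a corridor narrower than itself, which is exactly the stuck-in-a-grove scenario described above.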
  11. Hello. I am a programmer and I recently discovered that playing with 0 A.D.'s pathfinding logic in my spare time is incredibly fun. While experimenting I thought of a way to slightly improve the pathfinder's CPU usage and memory consumption, so I'll use this topic to outline the idea, give feedback about progress and get some ideas from you.

Disclaimer: I am not an expert (actually, not experienced at all) regarding 0 A.D.'s internals, pathfinding theory or video game programming in general; I do it to play dolls with units in my spare time. I don't have a lot of spare time, and 0 A.D. needs more impactful optimizations than this one; the long pathfinder is not a bottleneck. So I can't promise to finish, polish or maintain any code related to this idea.

For a global view of the pathfinding logic, you can read this post. I will only work on the long pathfinder here. Currently the long pathfinder is tile-based: it splits the map into a grid of little squares (called tiles). Each tile holds passability information ("this tile is passable by infantry", "this tile is not passable by boats", ...) and a cost class ("it is sand", "it is grass", ...). With such a grid, when the pathfinder is asked to compute a path from point A to point B, it can run A* on the grid, considering each tile.

The optimization idea is to group identical tiles (same passability and cost class) into bigger squares named areas. The first optimization with areas is memory usage. With the grid we hold passability and cost information (2 bytes) for each tile. With areas, we need that information only once per area, but we also need to store the position and size (in my current implementation, 8 bytes) of each area, for a total of 10 bytes per area. Hoping there is an average of at least 5 tiles per area, it is an improvement. This "optimization" needs to be confirmed by experimentation, since in the worst case (every area containing a single tile) it takes five times more memory.
Memory consumption in the diagram's example:
* 10x10 map = 100 tiles, 11 areas
* grid = 100 tiles * 2 bytes = 200 bytes
* areas = 11 areas * 10 bytes = 110 bytes

Memory consumption in the worst case (10x10 map):
* 10x10 map = 100 tiles
* worst case = 1 area per tile = 100 areas
* grid = 100 tiles * 2 bytes = 200 bytes
* areas = 100 areas * 10 bytes = 1000 bytes

Note that the areas' memory consumption is not directly bound to the map's size, but to the design of the map. A giant map of flat sand can be discretized in 10 bytes but, unlike with the grid, we never know how much memory will be used before processing the map.

The most interesting optimization is CPU usage and the resulting path smoothing. They go together because, while using less CPU, we will also prefer straight lines to stairs (removing a FIXME from the code). The basic idea is that once a unit is in a passable area, it can go anywhere in this area without getting stuck. Moreover, an area is homogeneous in movement cost. With both of these hypotheses, moving inside an area is really trivial: we just follow a straight line from origin to destination, and the cost of the move is the length of the line multiplied by the cost of moving in the area. So we can limit A* to considering only the tiles needed to go from one area to another. Of course this is interesting only if areas are bigger than 2x2 tiles, since only inner tiles are skipped by A*. So, the bigger the areas, the less memory and CPU are needed to compute a path.

To conclude this huge post: this solution is not a big deal; in fact it can be a real disaster in terms of memory usage. It is a poor attempt to mix ideas from navigation meshes with the current tile-based implementation. The good side is that it does not require heavy modifications to the pathfinder, just changing the way data is held and adding a little smartness to get the neighbours of a tile. Talking about navigation meshes, could anyone point me to a good article? I only found some explaining that we can do everything we want at no CPU/RAM cost, but nothing explaining how to do it.
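The memory arithmetic from the example can be checked mechanically (using the figures stated above: 2 bytes per tile for the grid, 10 bytes per area in this implementation):

```python
TILE_BYTES = 2   # passability + cost class, stored per tile in the grid
AREA_BYTES = 10  # passability + cost class once, plus position and size

def grid_memory(width, height):
    """Grid storage: fixed, known in advance from the map dimensions."""
    return width * height * TILE_BYTES

def area_memory(num_areas):
    """Area storage: depends on map design, unknown until the map is processed."""
    return num_areas * AREA_BYTES
```

On the 10x10 example, 11 areas (110 bytes) beat the grid (200 bytes), while the one-area-per-tile worst case costs 1000 bytes, the five-fold blowup the post warns about.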
  12. Hello. I am a programmer and I spend a little of my spare time playing with 0 A.D.'s pathfinding logic. I am currently working on determining areas of homogeneous terrain tiles to speed up the long pathfinder. Such areas allow A* to skip most tiles and only look at the ones necessary to go from one area to another. For the moment my code is not mature at all. Read: hacky, buggy, doesn't respect the coding style, and actually degrades performance. But if you are interested in such optimization/research, I can easily create a topic to give feedback about progress. If you don't especially care, I'll open a topic only when (and if) it really improves performance and gets polished. So, for the current topic, I'll just say that I am working on some area mapping, but it is neither strategic areas nor currently usable at all. Finally, to solve your AI problems: isn't it sufficient to add (if it doesn't already exist) a pathfinder API to get the long-path cost of traveling from a point A to a point B with a unit class C? Given A, B and C as parameters and the long-path cost as the return value. If so it seems not so hard to provide (but I may be mistaken; I'm no expert).
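The suggested API could look something like this (entirely hypothetical signature; `compute_path` and `segment_cost` are assumed helpers, and nothing like this is confirmed to exist in the engine):

```python
def get_long_path_cost(pathfinder, start, goal, unit_class):
    """Return the long-pathfinder cost from start to goal for a unit class,
    or None when no path exists. Thin wrapper over an assumed compute_path
    that returns a waypoint list."""
    path = pathfinder.compute_path(start, goal, unit_class)
    if path is None:
        return None
    # Sum per-segment costs along the returned waypoint list.
    return sum(pathfinder.segment_cost(a, b, unit_class)
               for a, b in zip(path, path[1:]))
```

The AI would then call this to compare candidate destinations by travel cost without ever executing the moves.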