sarcoma last won the day on March 20 2016

sarcoma had the most liked content!

Community Reputation

8 Neutral

  1. OpenAI

    Hi. No, I have not tried OpenAI. I browsed the pages, and it seems to me that in order to apply this to 0AD, they would need permission from WFG to build an environment for the game in Universe. They use the pixels from the game window as the state; I see a problem here, since that window would show only partial information about the game state. They then apply RL and output keyboard or mouse commands. Actions in the game seem a little more complicated: sending workers to build something somewhere, possibly out of sight, for example. It doesn't look as simple as games whose controls are just arrow keys and the like. In any case, it would be good if WFG asked these people to take a look at whether it is feasible: https://universe.openai.com/consent If OpenAI can do it, then we can train it, and hopefully a great bot comes out. I think Q-learning can really produce a decent bot. It could be programmed directly in the AI simulation, but it would be really nice if the game could be treated as a black box. The algorithm needs the state, broken into all the possible perceptions, and all the possible actions the agent can take. I know that's a lot to ask, so a light version of the game would be nice.
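The "black box" treatment suggested above boils down to a plain state/action/reward loop: the learner only sees the state and the reward, never the game internals. A minimal sketch, where `ToyEnv` is a made-up stand-in for a "light version of the game" (it is not anything from 0AD or OpenAI Universe):

```python
class ToyEnv:
    """Hypothetical 'light game': a 5-state chain world.
    Action 1 moves right, action 0 moves left.
    Reaching the last state gives reward 1 and ends the episode."""
    N_STATES = 5
    ACTIONS = [0, 1]

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        delta = 1 if action == 1 else -1
        self.state = max(0, min(self.N_STATES - 1, self.state + delta))
        done = self.state == self.N_STATES - 1
        reward = 1.0 if done else 0.0
        return self.state, reward, done

def run_episode(env, policy, max_steps=100):
    """The agent is a black box: it maps state -> action and sees only rewards."""
    state = env.reset()
    total, done, steps = 0.0, False, 0
    while not done and steps < max_steps:
        action = policy(state)
        state, reward, done = env.step(action)
        total += reward
        steps += 1
    return total

env = ToyEnv()
print(run_episode(env, lambda s: 1))  # always move right: reaches the goal, 1.0
```

Whatever interface WFG exposed would only need these two calls (`reset` and `step`); everything else stays inside the learner.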
  2. OpenAI

    I tried to apply reinforcement learning (Q-learning specifically) to Petra but failed miserably. It has been applied to several RTS bots, apparently with success: Civilization IV (https://pdfs.semanticscholar.org/9f9c/0f114b0c4d9b13ec048507a178fe9b3da4ae.pdf), Wargus (http://www.aaai.org/ocs/index.php/AIIDE/AIIDE12/paper/viewFile/5515/5734), BattleCity (https://arxiv.org/pdf/1602.04936). @jonbaer: Is it necessary to consider each manager as an agent? The headquarters module makes quite a lot of decisions, I think, and it receives the gameState as a parameter. Multi-agent RL might be more complicated, and maybe the whole bot could be considered as a single agent. With the percepts of the environment in gameState, you only need a (long) list of all the possible actions and let the algorithm experiment. Then the bot would need to apply the chosen action; queueing the chosen action might be necessary, I don't know. A neural network might be necessary if you don't want the bot to start from scratch each time, but with OpenAI all of that is taken care of, I guess. I hope you can give this a try. Reinforcement learning is the closest thing to human learning (it might be computationally intensive, though), and it can even be applied to other areas like pathfinding and game balancing.
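The tabular Q-learning mentioned here is small enough to sketch in full. The state and action names below are illustrative placeholders, not actual Petra or gameState identifiers:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1          # learning rate, discount, exploration
ACTIONS = ["gather_wood", "train_soldier", "build_barracks"]  # hypothetical action list

Q = defaultdict(float)  # Q[(state, action)] -> estimated value, defaults to 0.0

def choose_action(state, rng=random):
    """Epsilon-greedy: explore occasionally, otherwise take the best-known action."""
    if rng.random() < EPSILON:
        return rng.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One tabular Q-learning step:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))"""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# One hypothetical step: training a soldier paid off (reward 1.0).
update("early_game", "train_soldier", reward=1.0, next_state="mid_game")
print(Q[("early_game", "train_soldier")])  # 0.5
```

The hard part for a full RTS is not this update rule but defining the state abstraction, the action list, and the reward signal (resources gathered, units lost, win/loss).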
  3. Modifying AI

    Thank you, fatherbushido. I can use this to search for the XML tags and get the stats.
  4. Modifying AI

    Hi, guys. I will have time to work on this in two weeks, when I'm off work. I need to model the agents: units, structures, resources, etc. Where in the code can I get the numbers for each (HP, speed, attack, armor, etc.)? I couldn't find that info in http://trac.wildfiregames.com/browser/ps/trunk/binaries/data/mods/public/simulation/templates Thanks
  5. Host exits from Rated 1v1 game

    Some weak players have reached 1500+ rating by doing this.
  6. Modifying AI

    Thank you, fatherbushido. That really helps me understand the code.
  7. Modifying AI

    Thank you very much, Stanislas. I hope I can make something of value.
  8. Modifying AI

    Hi, Stanislas. I had a look at http://trac.wildfiregames.com/browser/ps/trunk/binaries/data/mods/public/simulation/ai I was wondering whether it would be possible to create an inference engine and use Q-learning to get the AI to make better decisions, but I would need a way to get stimuli from the ongoing game. Or, if commands.txt is parsable, I could extract the actions people normally take and use that to create blind build orders. I found http://yieldprolog.sourceforge.net/ hoping to translate Prolog into JS. Thanks
  9. Hi, community. How do I go about understanding the AI code? I was hoping I could implement some logic and prediction for the AI without delving too deeply into the GUI code. It would be nice if I could somehow get the information learned as the game progresses and inject commands in response, like a black box; then I might be able to focus on the logic. Thanks
  10. Never mind, I found it: https://wildfiregames.com/forum/index.php?/discover/unread/ It's an icon now. Sorry. If you can delete this silly post, please do.
  11. I can no longer load the latest posts in the forum like before.
  12. Greetings

    I think someone suggested adding them as mercenaries for Persia or Gaul, I don't remember which.
  13. Balancing

    Lately I've been spamming Spartan second-age demi-champs (short swords) to rush the enemy before age 3. Build order: get to age 2 ASAP, preferably with 2 barracks and healthy farming, and mass-produce these guys to chop wood and mine gold. No need to even reach phase 3, just a blacksmith to improve attack and armor. As soon as you reach 50 to 100, send them to destroy people and buildings. Pretty unfair against most. Can be countered by enough slingers or Seleucid horse archers.
  14. I have played against some defensive players who get around the no-walls rule by building layers of houses in front of a perimeter of garrisoned fortresses and towers, with horsemen ready to attack rams. Very distasteful. One strategy against turrets (though not against turret spam) is to have archers destroy a turret but not the connecting parts. You need lots of archers, with healers behind them.
  15. Trading forget which resource you are selling

    Those icons in the market are for selling 100 of the selected resource and buying the desired resource. The resource the traders earn is selected from the coins icon, either in the market or in the top panel. The default is roughly 15% wood, 15% food, 35% stone, 35% metal, and you can change that as desired. That is why you see a trader earning one resource on one trip and a different resource on another, according to the percentages and the distance.
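The split described above can be checked with quick arithmetic. The 15/15/35/35 percentages are the defaults mentioned in the post; the per-trip gain of 20 is purely illustrative, since the real value depends on route distance:

```python
# Default trade split described above: 15% wood, 15% food, 35% stone, 35% metal.
split = {"wood": 0.15, "food": 0.15, "stone": 0.35, "metal": 0.35}
trip_gain = 20  # hypothetical per-trip income; in-game it scales with route distance

# Sanity check: the percentages must total 100%.
assert abs(sum(split.values()) - 1.0) < 1e-9

# Expected long-run income per resource over many trips:
expected = {res: round(trip_gain * pct, 6) for res, pct in split.items()}
print(expected)  # {'wood': 3.0, 'food': 3.0, 'stone': 7.0, 'metal': 7.0}
```

So over many trips a trader's income averages out to the configured percentages, even though each individual trip pays out a single resource.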