Leaderboard

Popular Content

Showing content with the highest reputation on 2020-03-03 in all areas

  1. Hello everyone, I have been interested in making it possible to explore applications of machine learning in 0 AD (as some of you may have gathered from https://trac.wildfiregames.com/ticket/5548). I realized that I haven't explained my interest and motivation very thoroughly, so I figured I would do so here and see what everyone thinks!

tl;dr - At a high level, I think that adding an OpenAI gym-like interface* could be a cool addition to 0 AD that would benefit both 0 AD (technically and in terms of publicity) and the machine learning and AI research community. I go into the specifics below and discuss other potential avenues for integrating/leveraging machine learning:

Potential Machine Learning Problems/Applications

1. Intelligent unit control (micromanagement)
I have an example where an AI learns to kite with cavalry archers when fighting infantry at https://github.com/brollb/simple-0ad-example. This is probably one of the easiest problems to explore, as it can be tackled progressively, starting with small, clearly defined scenarios using the functionality added in the aforementioned ticket. That said, the standard machine learning challenges still apply, such as ensuring that the AI has been trained on sufficiently diverse scenarios so that it doesn't encounter something new and behave incorrectly. As for potential impact on the game, automatic micromanagement could be interesting either as a component in an otherwise scripted AI such as Petra or as a way to make units more intelligent as they gain experience. That is, I could imagine that as units gain more experience, they could automatically start showing improved tactical behavior, such as kiting.

2. Enemy AI Trained Entirely with Reinforcement Learning
This is actually very difficult, although it has recently been done in StarCraft 2 (https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii). Although I think this could be fun for people to try, I wouldn't have high expectations on this front for a while, because it is a very hard problem for ML to solve - especially given the large number of different civilizations, maps, resource types, etc.

3. Enemy AI with Scripting and Learned Components
This is a generic version of what I mentioned under "intelligent unit control". Essentially, there are a lot of opportunities to incorporate learned components into an otherwise scripted AI. From a technical perspective, this makes the machine learning problem much more tractable while still enabling more intelligent behavior from the built-in AI. There are many examples of intelligent components that could be incorporated: for instance, predicting the outcome of a battle (to determine whether to retreat) or imitating various high-level human strategies (such as predicting what a human might target for an attack).

4. Quantitative Game Balancing
This is a very interesting problem, and I find 0 AD to be a particularly unique opportunity for exploring it. Essentially, the idea is that there are many parameters in a game (such as attack damage for each unit) which are quite difficult to tune without making the game imbalanced and one of the civilizations/strategies OP. (I don't think I need an example for this community, but I enjoyed watching https://www.gdcvault.com/play/1012211/Design-in-Detail-Changing-the.) This problem is nontrivial, since detecting overpowered strategies really requires an understanding of the ways various aspects of the game can be exploited. Although it is a nontrivial problem, I find it an exciting opportunity for 0 AD to gain publicity and for researchers to have a sandbox in which they can explore this research question in an actual game (rather than a trivial toy environment). That is, many of the other environments used in reinforcement learning research are either open source toy environments (e.g., CartPole) or proprietary games which cannot be modified (e.g., StarCraft 2). There has been a bit of related research on detecting imbalance in complex games like StarCraft 2, as well as on balancing simpler games, but since proprietary games will not expose the parameters used for their units (and other aspects of the game), automatic game balancing approaches are limited. Being an open source game that people actually play, 0 AD provides a really exciting opportunity for research in this direction, as the parameters of the game are not proprietary and could be modified programmatically, enabling researchers to explore this rather complex problem. For the 0 AD community, enabling researchers to conduct this type of research in the game itself should make it much easier to incorporate any results of that research, making 0 AD more fun and an even better game!

5. Imitation Learning
Training the AI to imitate humans is worth mentioning, although the impact on the game would likely come through one of the aforementioned applications. Imitation learning, unlike reinforcement learning, trains the AI using expert demonstrations of gameplay. It is often used as a way of initializing the AI to something reasonable before training it further with reinforcement learning (i.e., training the AI using a reward rather than examples). Imitation learning can arguably be more valuable for game development, given that it can more directly instill various human-like behaviors (hopefully making the gameplay more engaging and interesting) rather than simply trying to maximize some reward or score in the game.

6. Techniques to Train and Understand AI Agents
This is more of a general research direction that I find interesting (and is similar to research I have done in the past). Essentially, this is exploring the means by which a game developer can use the various methods of instilling behavior into an AI (programming, reinforcement learning, imitation learning) to create the desired behavior (and game experience). This is a bit of both a human-computer interaction (HCI) and a machine learning question (also related to machine teaching). To give a more concrete example, this would include exploring the behavior of a trained RL agent in the game, correcting these behaviors, and perhaps trying to detect potentially incorrect behaviors to raise to the user automatically. 0 AD is well suited for this type of research for the same reasons it is well suited for exploring game balance - most games used in research are either proprietary or not something people would actually play.

7. Optimizing Existing Game Parameters (Relatively Easy)
There are some existing machine learning tricks that could be used to make other sorts of improvements to the game rather than explore research questions. A while back, I was playing around with CMA-ES (a machine learning technique that optimizes a set of parameters given a "fitness function") to improve some of the magic numbers used within Petra, such as "popPhase2" and "armyMergeSize". Essentially, this made it possible to find values for these parameters which would improve the AI's ability to win when playing against the standard Petra agent (on the hardest difficulty). Although I don't find this as interesting as the other areas, it is a useful tool that could be applied to other aspects of the game.

Overall, I think it would be really exciting to explore some of these research questions in 0 AD, as I think it could be beneficial to researchers and would also make it easier to incorporate the results of that research into 0 AD (making it an even better game!). Of course, this is only true if the functionality that needs to be added to 0 AD is easy to maintain and doesn't add overhead that takes away from the development of core game features and functionality. I am also hopeful that incorporating some of these machine learning capabilities could benefit the community and raise awareness of 0 AD!

As far as technical requirements go, I made an RPC interface for controlling the AI from Python (because the majority of machine learning tools are in Python). This makes it possible to explore 1, 2, and 3, and provides necessary functionality for 4, 5, and 6. As mentioned above, I have an example of #1 on GitHub, and I think this could make for really interesting undergraduate projects (as well as potentially interesting integrations into the game). However, I think 0 AD is a particularly unique opportunity for exploring 4 and 6. Game balancing (#4) still requires the ability to programmatically edit the unit parameters, which I have explored a little bit but haven't added to the game. If this is something that others find interesting (and wouldn't mind me asking a few questions), I would be open to adding this as well.

Anyway, I find these machine learning problems and applications quite exciting both for 0 AD and for AI/ML research, but I want to know what the rest of the community thinks! Let me know what you think or if you have any questions/comments!

* I say *OpenAI gym-like* because a gym environment requires an observation space (a numerical representation of the world for the AI), an action space (a numerical representation of the actions the AI can perform), and a reward function to be defined. It isn't clear what the most appropriate choices for these would be (and they could vary based on the specific scenario), so I would prefer making more of a "meta-gym": basically an OpenAI gym that requires the user to specify these values.
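To make the footnote's "meta-gym" idea a bit more concrete, here is a minimal sketch of what such a wrapper could look like on top of the standard gym API. The ZeroADEnv name, the game object (a stand-in for the Python RPC client), its reset/step methods, and the "done" field are hypothetical placeholders for illustration, not the actual interface from the ticket or the example repo:

    import gym

    class ZeroADEnv(gym.Env):
        """Hypothetical 'meta-gym' wrapper: the user supplies the observation space,
        action space, observation builder, action mapping, and reward function for
        their scenario; the environment only handles stepping the game."""

        def __init__(self, game, observation_space, action_space,
                     build_observation, apply_action, reward_fn):
            self.game = game                      # assumed RPC client for the running game
            self.observation_space = observation_space
            self.action_space = action_space
            self.build_observation = build_observation
            self.apply_action = apply_action
            self.reward_fn = reward_fn

        def reset(self):
            state = self.game.reset()             # assumed method on the RPC client
            return self.build_observation(state)

        def step(self, action):
            commands = self.apply_action(action)  # map the numeric action to game commands
            state = self.game.step(commands)      # assumed method on the RPC client
            obs = self.build_observation(state)
            reward = self.reward_fn(state)
            done = state.get("done", False)       # assumed field in the returned state
            return obs, reward, done, {}

A scenario-specific environment (like the kiting example) would then only need to supply its own spaces, observation builder, action mapping, and reward function.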
    3 points
  2. The basic premise is that you use the tankiest unit in terms of cost vs. tankiness: the Mauryan worker elephant, which has 300 health, 5 hack armour (that's already insane) and 8 pierce armour, all for only 150 food. So @#$%ing OP. Basically you can't do this in low numbers: if the enemy can micro to target your ranged units before your meatshields, it won't work, since you need auto-attack to be targeting your elephants. Probably best to use with something like 30 archers minimum plus some elephants; cavalry can be used too, of course. I'm not sure how you are supposed to mass them - probably stockpile a lot of food and bring them in in one go, maybe 10 at once or something - although maybe they're only good as support and not as the sole meatshield. Or, similar to Skiritai, just use them for building, but you don't need too many of them as they are so OP.

Let's compare them to pikemen, using a resource cost base of 100. Pikemen "hitpoints" (total damage absorption potential) is 100*100/35 = 285.7; for the elephant it's 300*100/43/1.5 = 465, so roughly 65% more hitpoints. Of course they are much slower and lack the attack, but the more extreme the difference in absorption capacity between unit types, the better the results. So, say, get 30 archers and then mass something like 7 elephants in one batch - you should really overdo food - and go in. Problems could be the enemy selecting 6 ranged units at a time and shift-clicking a row of your archers, then selecting 6 other units and repeating, or sending melee directly into your archers. But those are things you can have problems with in any build, although the fewer the meatshields (easier to reach the ranged infantry since nothing physically blocks the way), the slower they are, and the less they attack or pin enemy units (so they can be ignored and walked through), the more problems there are.

If anyone tries this, tell me how it goes. I think it's a pretty stupid idea, but maybe with elephants just being support meatshields alongside some spearmen, archers and cavalry this can work somehow. If you wanna see more OP stuff like this, it'll only happen if I'm happy, and one of the ways to make me happy is to give me some money. To clarify, I do not request payment or a trade. Here's my Monero: 471oEfj69SvUp19dV7N7vAJu4eaR8yd8267ubduq7dcQYPygQoq2dm5Us5eLbQrsjiWH2RhbsrQKHaQaG1L4QoNGQA7KRw7
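For readers puzzling over that arithmetic, here is a rough Python sketch of the "effective hitpoints per 100 resources" calculation. The 35 and 43 are the percentages of incoming damage each unit presumably takes after armour, and the 1.5 is the elephant's 150 food cost normalized to the 100-resource base; the function name and framing are mine, not the poster's:

    # Effective hitpoints per 100 resources, following the numbers in the post.
    # damage_taken_pct: percentage of incoming damage the unit takes after armour
    # (the poster's 35 for pikemen and 43 for the worker elephant).
    def effective_hp_per_100_res(hitpoints, damage_taken_pct, cost):
        return hitpoints * 100 / damage_taken_pct / (cost / 100)

    print(effective_hp_per_100_res(100, 35, 100))  # pikeman         -> ~285.7
    print(effective_hp_per_100_res(300, 43, 150))  # worker elephant -> ~465.1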
    1 point
  3. @jonbaer Nice work with that organization - it looks like it has forked a lot of relevant repos! TensorForce and OpenAI Baselines would definitely be directly applicable for trying some of the SOTA reinforcement learning algorithms in 0 AD. The size of the state representation certainly comes up when exploring things like imitation learning. I actually have another fork of 0 AD that I use simply to log the game state, and the replays can get decently large (between 1 and 10 MB, IIRC). Of course, if you convert them into some specific state representation, they will likely be much more compact (depending on the representation).

As for a 0 AD gym, there are actually two in the link I posted above (https://github.com/brollb/simple-0ad-example). In that repo, I created two different 0 AD gym environments for learning to kite. The first uses a very simple state space: the game state is represented simply as the distance to the nearest enemy units. As a result, the RL agent is able to learn to play it fairly quickly (although this representation would be insufficient if the enemy units used more sophisticated tactics than simply deathballing). The second gym uses a richer state representation: a simplified minimap centered on the player's units. This requires the RL agent to essentially "learn" to compute the distance, since it isn't already preprocessed. Although this is harder to learn, the representation could actually capture concepts like the enemy flanking the player, the edge of the map (or other impassable regions), etc. This type of state representation also makes it possible to have a more fine-grained action space for the agent. (In the example, the RL agent can pick between 2 actions: attack the nearest enemy unit or retreat. With a minimap, the action space could actually include directional movement.)

That said, I am not convinced that there is a single state/action space representation for 0 AD, given the customizability of maps, players, civs, goals, etc., and the trade-offs between learnability and representational power. Since I don't think such a representation exists, I prefer providing a generic Python API for playing the game, from which OpenAI gym environments can easily be created by specifying the desired state/action space for the given scenario.
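For anyone wanting to build something similar, here is a rough sketch of how the two state representations and the two-action space described above might be declared with gym.spaces; the exact shapes, bounds, and minimap resolution are illustrative guesses, not the values actually used in the simple-0ad-example repo:

    import numpy as np
    from gym import spaces

    # 1) Simple state: distance to the nearest enemy units (a single scalar here).
    simple_observation_space = spaces.Box(low=0.0, high=np.inf, shape=(1,), dtype=np.float32)

    # 2) Richer state: a simplified minimap centered on the player's units,
    #    e.g. a 32x32 grid with separate channels for own and enemy units.
    minimap_observation_space = spaces.Box(low=0.0, high=1.0, shape=(32, 32, 2), dtype=np.float32)

    # The example's action space: 0 = attack the nearest enemy unit, 1 = retreat.
    # A minimap-based agent could instead use a larger Discrete space that also
    # covers directional movement.
    simple_action_space = spaces.Discrete(2)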
    1 point
  4. We planned this for a few structures, like a natural garden or a sacred grove. And of course mods.
    1 point
  5. Add what Angen said to e.g. binaries\data\mods\public\simulation\templates\template_structure.xml so all structures autobuild, or to some specific structure's template if you only want it for that one.
    1 point
  6. That is for modding only; it is not used in the game. If you want to use it, you need to edit some structure template and add:

    <AutoBuild>
      <Rate>1</Rate>
    </AutoBuild>
    1 point
  7. For $100,000, I offer to help explain to your Instagram followers why you are getting repeatedly banned. For $10,000, I offer to set up your own lobby server no one can ban you from. For $1,000, I offer to send you my replay files, so that you know what you are missing. For $100, I offer to play one 1-vs.-1 game with you to help you deal with your addiction (IP hosting fee subject to negotiation). Contact me for further details if interested.
    1 point
  8. unban JC pls, we need more entertainment in the lobby
    1 point
  9. What I enjoy about these rants is that Stockfish gets randomly injected.
    1 point