
Leaderboard

Popular Content

Showing content with the highest reputation on 2020-02-15 in all areas

  1. Hello everyone, I have been interested in making it possible to explore applications of machine learning in 0 AD (as some of you may have gathered from https://trac.wildfiregames.com/ticket/5548 ). I realized that I haven't really explained my interest and motivation very thoroughly, so I figured I would do so here and see what everyone thinks!

tl;dr - At a high level, I think that adding an OpenAI gym-like interface* could be a cool addition to 0 AD that would benefit both 0 AD (technically and in terms of publicity) and the machine learning and AI research community. Below I go into the specifics and discuss other potential avenues for integrating/leveraging machine learning.

Potential Machine Learning Problems/Applications

1. Intelligent unit control (micromanagement)

I have an example where an AI learns to kite with cavalry archers when fighting infantry at https://github.com/brollb/simple-0ad-example. This is probably one of the easiest problems to explore, as it can be tackled progressively, starting with small, clearly defined scenarios using the functionality added in the aforementioned ticket. That said, the standard machine learning challenges still apply, such as ensuring that the AI is trained on sufficiently diverse scenarios so that it doesn't encounter something new and behave incorrectly. As for potential impact on the game, automatic micromanagement could be interesting either as a component in an otherwise scripted AI such as Petra or as a way to make units more intelligent as they gain experience. That is, I could imagine that as units gain experience, they could automatically acquire improved tactical behavior, such as kiting. A rough sketch of what such a scenario could look like behind a gym-style interface follows below.
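To make the "meta-gym" idea (see the footnote) concrete, here is a minimal sketch of a kiting environment. ZeroADClient and every call on it are hypothetical stand-ins for the RPC interface described later in this post, not a real 0 AD API, and the observation, action, and reward choices are just one possibility:

```python
# Rough sketch of a gym-style wrapper for the kiting micro-scenario.
# NOTE: ZeroADClient and all of its methods are hypothetical placeholders
# for the RPC interface into the engine, not a real 0 AD API.
import numpy as np
import gym
from gym import spaces

class KitingEnv(gym.Env):
    """Cavalry archers vs. infantry; the user supplies the observation,
    action, and reward definitions (the "meta-gym" idea)."""

    def __init__(self, client):
        self.client = client  # hypothetical RPC client into the engine
        # Observation: [own_hp, enemy_hp, distance] - purely illustrative.
        self.observation_space = spaces.Box(
            low=0.0, high=np.inf, shape=(3,), dtype=np.float32)
        # Actions: 0 = attack the nearest enemy, 1 = retreat away from it.
        self.action_space = spaces.Discrete(2)

    def reset(self):
        state = self.client.load_scenario("cavalry_vs_infantry")
        return self._observe(state)

    def step(self, action):
        state = self.client.apply("attack" if action == 0 else "retreat")
        reward = state["enemy_losses"] - state["own_losses"]  # example reward
        done = state["own_hp"] <= 0 or state["enemy_hp"] <= 0
        return self._observe(state), reward, done, {}

    def _observe(self, state):
        return np.array([state["own_hp"], state["enemy_hp"],
                         state["distance"]], dtype=np.float32)
```

With something like this, any off-the-shelf RL library could be pointed at the scenario; the hard part is choosing observation/action/reward definitions appropriate to each scenario, which is exactly why I'd leave them to the user.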
2. Enemy AI Trained Entirely with Reinforcement Learning

This is actually very difficult, although it has recently been done in StarCraft 2 (https://deepmind.com/blog/article/alphastar-mastering-real-time-strategy-game-starcraft-ii). Although I think this could be fun for people to try, I wouldn't have high expectations on this front for a while, because it is a very hard problem for ML to solve - especially given the large number of different civilizations, maps, resource types, etc.

3. Enemy AI with Scripting and Learned Components

This is a generic version of what I mentioned under "intelligent unit control". Essentially, there are many opportunities to incorporate learned components into an otherwise scripted AI. From a technical perspective, this makes the machine learning problem much more tractable while still enabling more intelligent behavior from the built-in AI. Many different intelligent components could be incorporated: for example, a component could try to predict the outcome of a battle (to decide whether to retreat) or imitate various high-level human strategies (such as predicting what a human might target for an attack).

4. Quantitative Game Balancing

This is a very interesting problem, and I find 0 AD to be a particularly unique opportunity for exploring it. Essentially, a game has many parameters (such as each unit's attack damage) which are quite difficult to tune without making the game imbalanced and one of the civilizations/strategies overpowered. (I don't think I need an example for this community, but I enjoyed watching https://www.gdcvault.com/play/1012211/Design-in-Detail-Changing-the.) This problem is nontrivial, since detecting overpowered strategies really requires an understanding of the ways various aspects of the game can be exploited. Still, I find it an exciting opportunity for 0 AD to gain publicity and for researchers to have a sandbox in which they can explore this research question in an actual game (rather than a trivial, toy environment). Many of the environments used in reinforcement learning research are either open source toy environments (e.g., CartPole) or proprietary games which cannot be modified (e.g., StarCraft 2). There has been a bit of related research on detecting imbalance in complex games like StarCraft 2, as well as on balancing simpler games, but since proprietary games will not expose the parameters used for their units (and other aspects of the game), automatic game balancing approaches remain limited. Being an open source game that people actually play, 0 AD provides a really exciting opportunity for research in this direction: the parameters of the game are not proprietary and could be modified programmatically, enabling researchers to explore this rather complex problem. For the 0 AD community, enabling researchers to conduct this type of research in the game itself should make it much easier to incorporate any results back into the game, making 0 AD more fun and an even better game! (A sketch of what programmatic parameter editing might look like follows below.)
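As a rough illustration of the kind of programmatic parameter editing this would require, here is a sketch that scales a damage value in a unit's XML entity template. The file path and element name are assumptions about the template layout for illustration, not a confirmed schema or API:

```python
# Sketch: scaling a damage value in a 0 AD XML entity template for
# balance experiments. The path and element name used below are
# assumptions about the template layout, not a confirmed schema.
import xml.etree.ElementTree as ET

def scale_damage(template_path, element="Pierce", factor=0.9):
    """Multiply the value of every `element` node in the template by `factor`."""
    tree = ET.parse(template_path)
    for node in tree.getroot().iter(element):
        node.text = str(round(float(node.text) * factor, 2))
    tree.write(template_path)

# Hypothetical usage: nerf an archer template by 10%, then re-run
# automated matches to measure the effect on win rates.
# scale_damage("simulation/templates/units/example_archer.xml", "Pierce", 0.9)
```

An automatic balancing loop would then just alternate between edits like this and batches of simulated games that estimate win rates for the affected strategies.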
5. Imitation Learning

Training the AI to imitate humans is worth mentioning, although the impact on the game would likely come through one of the aforementioned applications. Imitation learning, unlike reinforcement learning, trains the AI using expert demonstrations of gameplay. It is often used to initialize the AI to something reasonable before training it further with reinforcement learning (i.e., training the AI using a reward rather than examples). Imitation learning can arguably be more valuable for game development, given that it can more directly instill various human-like behaviors (hopefully making the gameplay more engaging and interesting) rather than simply maximizing some reward or score in the game.

6. Techniques to Train and Understand AI Agents

This is more of a general research direction that I find interesting (and is similar to research I have done in the past). Essentially, it explores the means by which a game developer can use the various methods of instilling behavior into an AI (programming, reinforcement learning, imitation learning) to create the desired behavior (and game experience). This is both a human-computer interaction (HCI) and a machine learning question (also related to machine teaching). To give a more concrete example, this would include exploring the behavior of a trained RL agent in the game, correcting these behaviors, and perhaps automatically detecting potentially incorrect behaviors to raise to the user. 0 AD is well suited for this type of research for the same reasons that it is well suited for exploring game balance - most games used in research are either proprietary or not something people would actually play.

7. Optimizing Existing Game Parameters (Relatively Easy)

There are also some existing machine learning tricks that could be used to make other sorts of improvements to the game rather than explore research questions. A while back, I was playing around with CMA-ES (a machine learning technique that optimizes a set of parameters given a "fitness function") to improve some of the magic numbers used within Petra, such as "popPhase2" and "armyMergeSize". Essentially, this made it possible to find values for these parameters which improved the AI's ability to win against the standard Petra agent (on the hardest difficulty); a sketch of this setup is at the end of this post. Although I don't find this as interesting as the other areas, it is a useful tool that could be applied to other aspects of the game.

Overall, I think it would be really exciting to explore some of these research questions in 0 AD: it would be beneficial to researchers and would also make it easier to incorporate the results of that research into 0 AD (making it an even better game!). Of course, this is only true if the functionality that would need to be added to 0 AD is easy to maintain and doesn't add overhead that takes away from the development of core game features and functionality. I am also hopeful that incorporating some of these machine learning capabilities could benefit the community and raise awareness of 0 AD!

As for technical requirements, I made an RPC interface for controlling the AI from Python (because the majority of machine learning tools are in Python). This makes it possible to explore 1, 2, and 3, and it provides necessary functionality for 4, 5, and 6. As mentioned above, I have an example of #1 on GitHub, and I think it could make for really interesting undergraduate projects (as well as potentially interesting integrations into the game). However, I think 0 AD is a particularly unique opportunity for exploring 4 and 6. Game balancing (#4) still requires the ability to programmatically edit the unit parameters, which I have explored a little but haven't added to the game. If this is something that others find interesting (and wouldn't mind me asking a few questions), I would be open to adding this as well.

Anyway, I find these machine learning problems and applications quite exciting both for 0 AD and for AI/ML research, but I want to know what the rest of the community thinks! Let me know what you think or if you have any questions/comments!

* I say *OpenAI gym-like* because a gym environment requires an observation space (a numerical representation of the world for the AI), an action space (a numerical representation of the actions the AI can perform), and a reward function to be defined. It isn't clear what the most appropriate choices for these would be (and they could vary with the specific scenario), so I would prefer making more of a "meta-gym": basically an OpenAI gym that needs the user to specify these values.
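For the curious, here is a minimal sketch of the CMA-ES setup from #7, using the open-source cma package. play_match is a stub standing in for actually launching a headless match of the tweaked Petra against the default one, and the starting values are illustrative rather than Petra's real defaults:

```python
# Sketch of tuning two Petra parameters with CMA-ES (pip install cma).
# play_match is a stub standing in for running a headless 0 AD match;
# the starting values below are illustrative, not Petra's actual defaults.
import random
import cma

def play_match(pop_phase2, army_merge_size):
    # Placeholder: would launch a game with the overridden parameters
    # and return 1.0 for a win, 0.0 for a loss.
    return float(random.random() < 0.5)

def fitness(params):
    pop_phase2, army_merge_size = params
    # CMA-ES minimizes, so return the negated win rate over a few games.
    wins = sum(play_match(pop_phase2, army_merge_size) for _ in range(5))
    return -wins / 5.0

es = cma.CMAEvolutionStrategy([100.0, 15.0], 10.0)  # (initial guess, step size)
while not es.stop():
    candidates = es.ask()  # sample candidate parameter vectors
    es.tell(candidates, [fitness(c) for c in candidates])
print("best parameters found:", es.result.xbest)
```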
    4 points
  2. People haven't seen me in the lobby for ages now. I receive many requests from my 0ad followers on Instagram (about 12,000 people) asking when I'll be back in 0AD. My forum inbox is full. This has to come to an end. I can't keep announcing my return only to be banned from the lobby each time I log back in after 2 months. They will lose patience. Also, I've heard that Stockfish is getting too far up the rankings. This is insane. How much should I pay Wildfire Games for a premium pass to get rid of random bans? Monero, bitcoin, or just PayPal. Choose and tell me how much you want. kd,
    2 points
  3. PeerTube is a decentralized video hosting platform. You can upload videos to an existing server (instance) and present your videos to the world. In addition, you can set up your own server and connect it to other servers (instances) if you want. PeerTube is federated, meaning a network of different instances connects into one large platform. PeerTube does not show advertisements or track you around the internet. Try it out. Upload your latest 0 A.D. clips and show them to the community.
    1 point
  4. I think the game should have a Deathmatch mode. I'm just not sure if it should be reminiscent of the Age series Deathmatch or something "unique." Honestly, I think it should be mostly based on the Age series model with a couple of unique flairs. So, once you choose Deathmatch, you should then be able to choose which Phase everyone starts at. The most popular would probably be "Post-City Phase" with everything auto-researched already. The only difference would be that paired techs would not be auto-researched; the player would have to choose which one they want. Since it's Deathmatch, they'd cost no resources or research time - the player just has to choose. Resource-wise, every player would start out with 40,000 of every resource. The name of the game is building up and expanding as fast as possible while pumping out tons of troops for massive battles early on.
    1 point
  5. I like your idea, but to put it a different way: massive-warfare scenarios like CBA and Risk seem really interesting.
    1 point
  6. @Andrettin SpiderMonkey is used by the Pyrogenesis engine to interpret the JavaScript code we use for map generation, the GUI, the simulation, and the AI. Usually this would be done in Lua, but the founders chose JS instead.
    1 point
  7. Sorry, I don't quite understand. Indeed, not inappropriate for a game set in classical times.
    1 point
  8. I want to add some interesting new information about the Emishi:
    1 point
  9. The Wildfire Games forums mostly use a sans-serif font, and I find this thread readable. The issue is HOW you use the fonts. And Lion has been clear about the general design rules... which obviously have exceptions. As for the choice of font: yes, Libertine seems to have better serif attributes than Biolinum. I prefer that type of font for 0 A.D.... especially in the small chat boxes.
    1 point
  10. Here you go: https://0ad.mod.io/linux-libertine
    1 point
  11. Hello, regardless of the debate about serif fonts, it would be nice to have that as a mod for people who agree with Nescio. Can you please follow these instructions? Thank you
    1 point
  12. PeerTube is a bit different in the sense that it's FLOSS software, so it's allowed. Not here, though; I moved the thread. We should indeed try to mirror videos on that platform with @Sundiata and/or @BrynnOfCastlegate
    1 point
  13. (Edited the title cash-->crash + put the crash report inside a spoiler.)
    1 point
  14. A certain level of randomness can be kept. Ideally, only the highest-level aspects relevant to balance would be mirrored: extra resources anywhere within a certain radius range (min range, max range, max degree spread) of the CC, but not overlapping with the same resource type or differing in amount from the opponents', and in locations that make sense, e.g. metal and stone reachable only via expansion or in Phase 2, etc. Wood can't have too much variation (clumps that are too thin or too fat) and/or the total amount must be equal, but the spawn positions are free to be defined by RNGesus, etc.
    1 point