fabio Posted December 7, 2016

https://openai.com/blog/universe/
https://universe.openai.com/

Quote:
"We're releasing Universe, a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications."

They are also searching for games to use for training it:

Quote:
"Grant us permission to use your game, program, website, or app. If your program would yield good training tasks for an AI then we'd love your permission to package it in Universe. Good candidates have an on-screen number (such as a game score) which can be parsed as a reward, or well-defined objectives, either natively or definable by the user."
wraitii Posted December 7, 2016

I'm not sure what the procedure would be, but it would be interesting to give them 0 A.D. I remain unconvinced that machine learning can efficiently learn to play an RTS with current technology, but who knows.
Stan` Posted December 7, 2016

Indeed I'd say it's a good idea. Might reveal some bugs in the process too.
fabio Posted December 7, 2016

39 minutes ago, wraitii said:
"I'm not sure what the procedure would be, but it would be interesting to give them 0 A.D. I remain unconvinced that machine learning can efficiently learn to play an RTS with current technology, but who knows."

I thought the same, but I also noticed they have at least one other RTS (Red Alert 2); I don't know if there are other RTS games in their list.
Stan` Posted December 7, 2016

Well, that could end up beneficial too.
FeXoR Posted December 7, 2016

If the procedure is compatible with our licenses I'm for it. I'm not entirely sure what is meant by "package it in Universe", and I wonder why that is needed in the first place. A download link to a static, fixed version of 0 A.D. (the one the AI learned with) and a README explaining where to put 0 A.D. so that "Universe" can link to it should do.
jonbaer Posted December 26, 2016 (edited)

Curious if there has been any headway on this, as I started looking at Gym and Universe myself. What I was originally thinking was that there could be a small "0 A.D. - Lite" with only 2 civs and 4 basic actors, which could be launched as a mod with a pyrogenesis flag. One idea for anyone doing this: Petra could probably be stubbed out (with all the "Managers" acting as agents), and step() in OpenAI would reflect what happens in checkEvents = function(gameState, events) on those managers. This is just from glancing over the Universe and Gym setups.

Edited December 26, 2016 by jonbaer
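To make the step()/checkEvents mapping concrete, here is a minimal sketch of what such a Gym-style wrapper could look like. Everything here is hypothetical: the ZeroADLiteEnv name, the action list and the observation format are invented for illustration, and no such bridge exists in pyrogenesis.

```python
import random

class ZeroADLiteEnv:
    """Hypothetical Gym-style wrapper for a stripped-down "0 A.D. - Lite".

    The idea: each call to step() would mirror one round of Petra's
    checkEvents(gameState, events) across the stubbed-out managers.
    """

    # Toy action set standing in for manager-level decisions.
    ACTIONS = ["idle", "gather", "build", "train", "attack"]

    def __init__(self, seed=0):
        self._rng = random.Random(seed)
        self._tick = 0
        self._score = 0

    def reset(self):
        """Start a new episode and return the initial observation."""
        self._tick = 0
        self._score = 0
        return self._observe()

    def step(self, action):
        """Apply one action; return (observation, reward, done, info)."""
        assert 0 <= action < len(self.ACTIONS)
        self._tick += 1
        # In a real bridge the reward would be parsed from the on-screen
        # score, as Universe suggests; here it is a random stand-in.
        gained = self._rng.randint(0, 3) if action != 0 else 0
        self._score += gained
        done = self._tick >= 100
        return self._observe(), float(gained), done, {}

    def _observe(self):
        # A real observation would encode the minimap, resources, etc.
        return {"tick": self._tick, "score": self._score}
```

The engine-side half (forwarding the chosen action into the simulation and reading the game state back out) is the part that would actually need WFG's involvement.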
sarcoma Posted December 27, 2016

I tried to apply reinforcement learning (Q-learning specifically) to Petra but failed miserably. It has been applied to many RTS bots and seems to be successful:

Civilization IV: https://pdfs.semanticscholar.org/9f9c/0f114b0c4d9b13ec048507a178fe9b3da4ae.pdf
Wargus: http://www.aaai.org/ocs/index.php/AIIDE/AIIDE12/paper/viewFile/5515/5734
BattleCity: https://arxiv.org/pdf/1602.04936

@jonbaer: Is it necessary to consider each manager as an agent? The headquarters module makes quite a lot of decisions, I think, and it receives the gameState as a parameter. Multi-agent RL might be more complicated, and maybe the whole bot could be considered the only agent. With those percepts of the environment in gameState, you only need a (long) list of all the possible actions and let the algorithm experiment. Then the bot would need to apply the chosen action. Queueing the chosen action might be necessary, I don't know. A neural network could be necessary if you don't want the bot to start from scratch each time, but with OpenAI all of that is taken care of, I guess. I hope you can give this a try. Reinforcement learning is the closest to human learning (it might be computationally intensive though) and it can even be applied to other areas like pathfinding and game balancing.
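For readers unfamiliar with the algorithm mentioned here, tabular Q-learning fits in a few lines. This is a generic sketch, not code tied to Petra: the env object and its n_actions attribute are assumed interfaces, and a real 0 A.D. state/action space would be far too large for a plain table.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=2000, alpha=0.1, gamma=0.95, epsilon=0.1, seed=0):
    """Minimal tabular Q-learning loop.

    `env` is assumed to expose reset() -> state and
    step(action) -> (next_state, reward, done, info), plus a small
    discrete action count in env.n_actions.
    """
    rng = random.Random(seed)
    q = defaultdict(lambda: [0.0] * env.n_actions)
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                action = rng.randrange(env.n_actions)
            else:
                action = max(range(env.n_actions), key=lambda a: q[state][a])
            next_state, reward, done, _ = env.step(action)
            # Q(s,a) += alpha * (target - Q(s,a)), bootstrapping from the
            # best next action unless the episode has ended.
            target = reward if done else reward + gamma * max(q[next_state])
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q
```

The "long list of all the possible actions" sarcoma describes is exactly the env.n_actions range here; the hard part for 0 A.D. is defining that list and the state encoding, not the update rule itself.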
jonbaer Posted December 28, 2016

I think when we look at this area, it's more that we (generally) want to model the AI on how we would play the game in certain scenarios, depending on what the objective is. For example, a Wonder victory is really all about the market (for the most part), Conquest about the military, Regicide about defense. Not saying it is all about that, but those would be the priorities for sure. I am going to read through the papers posted. I think OpenAI takes care of long-term analysis, but to be honest I think it would be a computationally expensive process just to determine the best strategy on random map layouts, chokepoints, ports, markets, etc. I think I am going to give Universe a try, but it would need to be with the same civ setup, a small static map (Continent) and limited actors, at least just to get the API and step() working. I think I have a general idea of how this would work and will post back a repo; I am hoping I can at least get the VNC Ubuntu/Docker setup running to start :-) I am assuming fastest time to conquer would be the benchmark, plus the rate of change; I think this is how the other games are set up. Have you tried the OpenAI demos?
sarcoma Posted December 31, 2016

Hi. No, I have not tried OpenAI. I browsed the pages, and it seems to me that in order to apply this to 0 A.D. they need permission from WFG to build an environment for the game in Universe. They use the pixels from the game window as state. I see a problem here, since that window would show only partial information of the game state. Then they apply RL and output keyboard or mouse commands. Actions in the game seem a little more complicated: send workers to build something somewhere, maybe out of sight, for example. It doesn't look as simple as some games (arrow up, etc.). In any case it would be good if WFG decided to ask these people to take a look at whether it is feasible: https://universe.openai.com/consent If OpenAI can do it, then we can train it, and hopefully a great bot comes out. I think Q-learning can really produce a decent bot. It could be programmed directly in the AI simulation, but it would be really nice if it could be treated as a black box. The algorithm needs the state, broken into all the possible perceptions, and all the possible actions the agent can take. I know that's a lot to ask, so a light version of the game would be nice.
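To illustrate why RTS actions are harder than "arrow up", a single command can be sketched as a parameterized compound action. This encoding is purely illustrative and is not part of 0 A.D.'s API:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class CompoundAction:
    """One RTS "action" bundles a verb with its parameters."""
    verb: str                  # e.g. "build", "gather", "attack"
    target: str                # e.g. "house", "tree", "enemy_unit_12"
    position: Tuple[int, int]  # map coordinates, possibly off-screen

# The agent's action space is the product of verbs, targets and map
# positions, which dwarfs the handful of key presses in Atari games.
action = CompoundAction(verb="build", target="house", position=(120, 64))
```

An RL agent then has to pick the verb and its parameters jointly, which is a big part of why pixel-and-keyboard platforms struggle with RTS games.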
Stan` Posted December 31, 2016

Has @mimo seen this thread?
mimo Posted December 31, 2016

Yes, I've seen it, but I'm far from convinced such a tool can improve the AI without an internal understanding of the topology, available resources, obstructions and so on. But I'd be happy to be wrong.
fabio Posted January 1, 2017

The main, and possibly only, benefit of this to 0 A.D. is the increased publicity for us; I don't expect other benefits. The good thing is that registering won't cost us anything, and given 0 A.D.'s license they could likely do it anyway. And I am just curious what the outcome could be. I am very sceptical about it, but their team is very motivated and already has at least one RTS game they plan to try.
jonbaer Posted January 2, 2017

I think I am going to wait and see how the Civilization and StarCraft universes work out:
https://universe.openai.com/envs/world.CivilizationV-v0
https://universe.openai.com/envs/world.StarCraft2-v0

I can see how the basic Atari/Flash games are set up, and I almost feel that nearly all of the graphics in the UI would not really be of much use, with the exception of a detailed (and larger) minimap: locate resources, locate enemies, maximize population, etc. Some gyms of interest are the board-game and toy-text ones: https://gym.openai.com/envs#board_game and https://gym.openai.com/envs#toy_text A lot of the Hannibal utilities are extremely useful in those types of situations as well. I am sure there are other research papers on 4X/RTS whose algorithms could be applied; I just have to read over the papers more. It is certainly an interesting challenge.
fabio Posted August 12, 2017

It looks like OpenAI beat humans in Dota 2:
https://openai.com/the-international/
https://blog.openai.com/dota-2/
Stan` Posted August 12, 2017

Any hope of them porting it to our engine, @fabio?
fabio Posted August 12, 2017

It looks like their AI is self-training, so it should adapt to any game, although it requires a lot of training. The link for proposing games is still open: https://docs.google.com/forms/d/e/1FAIpQLSc87de2t5qEB0DzqW-d0Ps3oV09S9IGrHnW51VomYa4PQSE7A/viewform
Stan` Posted August 12, 2017

But did we do it, @fabio?
fabio Posted August 12, 2017

I don't think so.
Stan` Posted August 12, 2017

Shall we?
jonbaer Posted August 24, 2018

Any updates on this topic? Anyone else watching OpenAI Five vs. humans?
coworotel Posted August 24, 2018 (edited)

On 12/31/2016 at 7:25 AM, sarcoma said:
"They use the pixels from the game window as state. I see a problem here since that window would show only partial information of the game state. Then the apply RL and output keyboard or mouse commands. Actions in the game seem a little more complicated: Send workers to build something somewhere, maybe out of sight, for example. It doesn't look as simple as some games: Arrow up, etc."

I think partial information of the game state is an advantage, in fact. That is the information human players have: you have the info you see on the screen. We also use our memory to remember things we previously saw on the screen. I don't know if that is considered in their algorithm, but it should be.

Edited August 24, 2018 by coworotel
coworotel Posted August 24, 2018

On 12/27/2016 at 8:23 AM, sarcoma said:
"I tried to apply reinforcement learning (Q-learning specifically) to Petra but failed miserably."

I took a look at Petra and I don't think it would be easy to apply ML to it. It would be easier to redesign a simpler AI with input info, a decision module (where the ML acts) and an output. Petra was not designed with ML in mind, so all the game logic is already embedded there. Anyway, it's very nice to hear that somebody is trying to apply ML to 0 A.D.; it is certainly the best way to go.
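The input/decision/output split described here can be sketched in a few lines. All names below (SimpleBot, DecisionModule, the feature keys) are made up for illustration; nothing like this exists in the 0 A.D. codebase.

```python
from abc import ABC, abstractmethod

class DecisionModule(ABC):
    """Pluggable decision core: a Q-table, a neural net, or a script."""
    @abstractmethod
    def decide(self, features):
        ...

class ScriptedPolicy(DecisionModule):
    def decide(self, features):
        # Trivial hand-written rule standing in for a learned policy.
        return "gather" if features["workers_idle"] > 0 else "wait"

class SimpleBot:
    """Hypothetical redesigned bot: input -> decision module -> output."""
    def __init__(self, decision):
        self.decision = decision

    def extract_features(self, game_state):
        # Input stage: condense the raw game state into features.
        return {"workers_idle": game_state.get("idle_workers", 0)}

    def on_update(self, game_state):
        # Output stage: turn the chosen action into an engine command.
        action = self.decision.decide(self.extract_features(game_state))
        return {"cmd": action}
```

The point of the split is that ScriptedPolicy could be swapped for a learned DecisionModule without touching the input or output stages, which is exactly what Petra's embedded game logic makes difficult.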
sarcoma Posted August 24, 2018

I gave up long ago. Even if you want to start from scratch, you still inherit a lot from baseAI.
techblogger911 Posted August 24, 2018

Woah, I did think about this, but it was a long shot.