OpenAI



https://openai.com/blog/universe/

https://universe.openai.com/

Quote

We're releasing Universe, a software platform for measuring and training an AI's general intelligence across the world's supply of games, websites and other applications.

They are also searching for games to use to train it:

Quote

Grant us permission to use your game, program, website, or app.

If your program would yield good training tasks for an AI then we'd love your permission to package it in Universe. Good candidates have an on-screen number (such as a game score) which can be parsed as a reward, or well-defined objectives, either natively or definable by the user.


39 minutes ago, wraitii said:

I'm not sure what the procedure would be, but it would be interesting to give them 0 A.D. I remain unconvinced that machine learning can efficiently learn to play an RTS with current technology, but who knows.

I thought the same, but I also noticed they have at least one other RTS (Red Alert 2); I don't know if there are other RTS games on their list.


If the procedure is compatible with our licenses, I'm for it.

I'm not entirely sure what is meant by "package it in Universe" and I wonder why that is needed in the first place.

A download link to a static, fixed version of 0 A.D. (the one the AI learned with) and a README explaining where to put 0 A.D. so that "Universe" can link to it should do.


  • 3 weeks later...

Curious if there has been any headway on this, as I started looking at Gym and Universe myself.

What I was originally thinking was that there could be a small "0 A.D. Lite" with only 2 civs and 4 basic actors, which could be launched as a mod via a pyrogenesis flag.

One idea for anyone doing this is that Petra could probably be stubbed out (with all the "Managers" acting as agents), so that step() in OpenAI reflects what happens in checkEvents = function(gameState, events) on those managers; a rough sketch of that mapping is below. This is just from glancing over the Universe and Gym setups.
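Very rough sketch of what I mean, as a Gym-style interface in Python. Everything here (EngineStub, ManagerStub, ZeroADLiteEnv, score-as-reward) is a made-up name for illustration, not anything that exists yet:

```python
class EngineStub:
    """Stands in for a running pyrogenesis instance."""
    def poll(self):
        # A real wrapper would return the live gameState plus pending events.
        return {"score": 0, "game_over": False}, []


class ManagerStub:
    """Replaces one of Petra's managers; the agent drives it."""
    def check_events(self, game_state, events):
        pass  # the agent's decision would be applied here


class ZeroADLiteEnv:
    """Gym-style interface: step() mirrors checkEvents(gameState, events)."""
    def __init__(self, managers):
        self.engine = EngineStub()
        self.managers = managers
        self.last_score = 0

    def reset(self):
        game_state, _ = self.engine.poll()
        self.last_score = game_state["score"]
        return game_state

    def step(self, action):
        # Forward the latest events to the chosen manager stub, as
        # checkEvents would, then return (observation, reward, done, info).
        game_state, events = self.engine.poll()
        self.managers[action].check_events(game_state, events)
        reward = game_state["score"] - self.last_score  # on-screen score as reward
        self.last_score = game_state["score"]
        return game_state, reward, game_state["game_over"], {}
```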


I tried to apply reinforcement learning (Q-learning specifically) to Petra but failed miserably.

It has been applied to several RTS bots, apparently with success:

Civilization IV: https://pdfs.semanticscholar.org/9f9c/0f114b0c4d9b13ec048507a178fe9b3da4ae.pdf

Wargus: http://www.aaai.org/ocs/index.php/AIIDE/AIIDE12/paper/viewFile/5515/5734

BattleCity: https://arxiv.org/pdf/1602.04936


@jonbaer: Is it necessary to consider each manager as an agent? The headquarters module makes quite a lot of decisions, I think, and it receives the gameState as a parameter. Multi-agent RL might be more complicated, and maybe the whole bot could be considered the only agent. With the percepts of the environment in gameState, you only need a (long) list of all the possible actions, and you let the algorithm experiment. Then the bot would need to apply the chosen action; queueing it might be necessary, I don't know. A single-agent sketch is below.
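Here is a minimal tabular Q-learning sketch of that single-agent idea. The action names are placeholders, not real Petra calls; the update is the classic Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a)):

```python
import random
from collections import defaultdict

ACTIONS = ["train_worker", "build_house", "gather_food", "attack"]  # the (long) action list
Q = defaultdict(float)                  # Q[(state, action)] -> value
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def choose(state):
    if random.random() < epsilon:
        return random.choice(ACTIONS)                     # explore
    return max(ACTIONS, key=lambda a: Q[(state, a)])      # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```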

A neural network could be necessary if you don't want the bot to start from scratch each time, but I guess OpenAI takes care of all of that.

I hope you can give this a try. Reinforcement learning is the closest thing to human learning (though it might be computationally intensive) and it can even be applied to other areas like pathfinding and game balancing.


I think when we look at this area, it's more that we (generally) want to model the AI on how we would play the game in certain scenarios, depending on what the objective is. For example, a Wonder victory is really all about the market (for the most part), Conquest about the military, Regicide about defense. I'm not saying it's all about that, but those would be the priorities for sure. I am going to read through the papers posted. I think OpenAI takes care of the long-term analysis, but to be honest, determining the best strategy on random map layouts (chokepoints, ports, markets, etc.) would just be a computationally expensive process.

I think I am going to give Universe a try, but it would need to be with the same civ setup, a small static map (Continent), and limited actors, at least for getting the API and step() working. I think I have a general idea of how this would work and will post back a repo; I am hoping I can at least get the VNC Ubuntu/Docker setup running to start :-)

I am assuming the benchmark would be fastest time to conquer, plus the rate of change; I think this is how the other games are set up.
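For anyone following along, the basic client loop from the Universe examples looks roughly like this (DuskDrive is one of their real Flash envs; a 0 A.D. env id is hypothetical until WFG consents and it gets packaged):

```python
import gym
import universe  # importing this registers the Universe environments

env = gym.make('flashgames.DuskDrive-v0')  # a 0 A.D. id would go here one day
env.configure(remotes=1)  # spins up one local Docker container, driven over VNC
observation_n = env.reset()

while True:
    # One action list per remote; actions are raw VNC events.
    action_n = [[('KeyEvent', 'ArrowUp', True)] for _ in observation_n]
    observation_n, reward_n, done_n, info = env.step(action_n)
    env.render()
```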

Have you tried the OpenAI demos?


Hi

No, I have not tried OpenAI.

I browsed the pages and it seems to me that in order to apply this to 0 A.D. they need permission from WFG to build an environment for the game in Universe.

They use the pixels from the game window as state. I see a problem here, since that window would show only partial information about the game state.

Then they apply RL and output keyboard or mouse commands. Actions in the game seem a little more complicated: send workers to build something somewhere, maybe out of sight, for example. It doesn't look as simple as in some games (arrow up, etc.).
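If I understand their VNC action space, even a simple "send those workers over there" ends up as raw pointer events, something like the sketch below. The tuple shape ('PointerEvent', x, y, buttonmask) and the button masks (1 = left, 4 = right) are my assumption from the VNC protocol; I have not checked Universe's driver, so treat this as illustrative only:

```python
select_workers = [
    ('PointerEvent', 120, 300, 1),  # left-press on a unit
    ('PointerEvent', 120, 300, 0),  # release
]
order_to_build_site = [
    ('PointerEvent', 640, 420, 4),  # right-press on the target location
    ('PointerEvent', 640, 420, 0),  # release
]
action = select_workers + order_to_build_site  # one step's event list
# If the target is out of sight, the agent first has to scroll the camera
# there, which is exactly the complication mentioned above.
```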

In any case, it would be good if WFG asked these people to take a look at whether it is feasible.

https://universe.openai.com/consent

If OpenAI can do it then we can train it and hopefully a great bot comes out.

 

I think Q-learning can really produce a decent bot. It could be programmed directly in the AI simulation, but it would be really nice if it could be treated as a black box. The algorithm needs the state, broken into all the possible perceptions, and all the possible actions the agent can take. I know that's a lot to ask, so a light version of the game would be nice.


The main, and possibly only, benefit of this to 0 A.D. is the increased publicity for us; I don't expect other benefits. The good thing is that registering won't cost us anything, and given 0 A.D.'s license they could likely do it anyway.

And I am just curious what the outcome could be. I am very sceptical about it, but their team is very motivated and already has at least one RTS game they plan to try. :)


I think I am going to wait and see how the Civilization V and StarCraft II universes work out...

https://universe.openai.com/envs/world.CivilizationV-v0

https://universe.openai.com/envs/world.StarCraft2-v0

I can see how the basic Atari / Flash games are set up, and I almost feel that nearly all of the graphics the UI shows would not really be of much use, with the exception of a detailed (and larger) minimap: locate resources, locate enemies, maximize population, etc. Some gyms of interest are the toy-text ones, e.g. https://gym.openai.com/envs#board_game and https://gym.openai.com/envs#toy_text; a minimal example follows below.
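Those small envs are handy for getting the reset()/step() loop right before trying anything 0 A.D.-sized, e.g. the toy-text FrozenLake with a random policy:

```python
import gym

env = gym.make('FrozenLake-v0')  # tiny toy-text gridworld from the list above
obs = env.reset()
done, total = False, 0.0
while not done:
    # Random policy, just to exercise the interface.
    obs, reward, done, info = env.step(env.action_space.sample())
    total += reward
print('episode return:', total)
```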

A lot of the Hannibal utilities are extremely useful in those types of situations as well. I am sure there are other research papers on 4X / RTS whose algorithms could be applied; I just have to read over the papers more. It is certainly an interesting challenge.


  • 7 months later...
  • 1 year later...
On 12/31/2016 at 7:25 AM, sarcoma said:

They use the pixels from the game window as state. I see a problem here, since that window would show only partial information about the game state.

Then they apply RL and output keyboard or mouse commands. Actions in the game seem a little more complicated: send workers to build something somewhere, maybe out of sight, for example. It doesn't look as simple as in some games (arrow up, etc.).

I think partial information about the game state is actually an advantage. That is the information human players have: you have the info you see on the screen. We also use our memory to remember things we previously saw on the screen... I don't know if that is considered in their algorithm, but it should be.

 


On 12/27/2016 at 8:23 AM, sarcoma said:

I tried to apply reinforcement learning (Q-learning specifically) to Petra but failed miserably.

I took a look at Petra and I don't think it would be easy to apply ML to it. It would be easier to redesign a simpler AI with input info, a decision module (where the ML acts), and an output. Petra was not designed with ML in mind, so all the game logic is already embedded in it.
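Something like this skeleton is what I have in mind; all the names are hypothetical, and the decision module is the only piece the ML would touch:

```python
class Percepts:
    """Input: boil the raw gameState down to a fixed state tuple."""
    def extract(self, game_state):
        return (game_state["food"], game_state["pop"], game_state["enemy_seen"])

class DecisionModule:
    """Decision: the only part the ML acts on (Q-table, network, ...)."""
    def decide(self, state):
        return "gather_food"  # placeholder policy

class Output:
    """Output: turn the abstract command into queued engine actions."""
    def execute(self, command, api):
        api.queue(command)

def tick(game_state, api,
         percepts=Percepts(), brain=DecisionModule(), out=Output()):
    # One AI turn: perceive, decide, act.
    out.execute(brain.decide(percepts.extract(game_state)), api)
```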

Anyway, very nice to hear that somebody is trying to apply ML to 0 A.D., it is certainly the best way to go.

