sarcoma
Everything posted by sarcoma
-
It probably has mods you don't have. Uncheck the "only compatible replays" option to see them. You can edit the file and remove mods other than public; the mod list is in the first lines.
-
I gave up long ago. Even if you want to start from scratch, you still inherit a lot from baseAI.
-
Losing your first game deducts too many points
sarcoma replied to RolandSC2's topic in Help & Feedback
This free online/proprietary system promises wonders and is based on Glicko, just like TrueSkill.
-
Losing your first game deducts too many points
sarcoma replied to RolandSC2's topic in Help & Feedback
I think scythe's implementation departs too far from true Elo: https://blog.mackie.io/the-elo-algorithm

Rn = Ro + K * (S - E)
n = 400
x = Ra - Rb
s = n / ln(10)
exponent = -(x / s)
E = 1 / (1 + e^exponent)
K = (2 * n) / 20

vs.

player_volatility = (min(games_played, volatility_constant) / volatility_constant + 0.25) / 1.25
rating_k_factor = 50.0 * (min(rating, elo_k_factor_constant_rating) / elo_k_factor_constant_rating + 1.0) / 2.0
volatility = rating_k_factor * player_volatility
difference = opponent_rating - rating
return round(max(0, (difference + result * elo_sure_win_difference) / volatility - anti_inflation))

And maybe other systems would be more appropriate, like Glicko or the ones that also work for teams.
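Roughly, the true formula in Python (a minimal sketch of the textbook update quoted above, not the lobby's actual code; the names are mine):

import math

def elo_update(r_old, r_opp, score, n=400):
    # Rn = Ro + K * (S - E); score S is 1 for a win, 0.5 for a draw, 0 for a loss
    s = n / math.log(10)                                  # s = n / ln(10)
    expected = 1 / (1 + math.exp(-(r_old - r_opp) / s))   # E = 1 / (1 + e^exponent)
    k = (2 * n) / 20                                      # K = 40 when n = 400
    return r_old + k * (score - expected)

elo_update(1200, 1400, 1) gives about 1230, so an upset win pays well.
-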
Seeing this in the "gnome store" makes me wonder if the 0AD image could be more descriptive of what the game is about
-
Use mogrify for batch jobs if it can handle the format.
-
Besides having mods on, are these the right replays? They're from July 23
-
You need the autociv mod for these 4 replays; uncheck "compatible replays" to see them.
-
Then you don't understand the mechanics: each round, stronger players are added to the losers' bracket.
-
You'll have to wait for the next tourney; this one has already started. If it is a success, more can be organized, no?
-
The original poster should add a screenshot of the summary or some description to these replays. https://trac.wildfiregames.com/wiki/GameDataPaths
-
You need to locate the directory where replays are saved (https://trac.wildfiregames.com/wiki/GameDataPaths), then put a directory there with the commands.txt and, optionally, the metadata.json (the summary) inside. The game will appear somewhere in the replay menu.
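For example, something like this would do it (the path here is the usual Linux location for A23, just an example; check the wiki page above for your OS and version):

import shutil
from pathlib import Path

# Usual Linux location for A23; Windows/macOS paths differ (see GameDataPaths)
replays = Path.home() / ".local/share/0ad/replays/0.0.23"
imported = replays / "imported-game"       # any folder name works
imported.mkdir(parents=True, exist_ok=True)
shutil.copy("commands.txt", imported)      # required
# shutil.copy("metadata.json", imported)   # optional, enables the summary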
-
fgod-mod (for 0 A. D. A23) fully compatible with 0 A. D. players
sarcoma replied to ffffffff's topic in Game Modification
The stats feature is very useful, thanks. Seems like faction02 has 1 real (6.20, old) and 1 fake fgod (0.23).
-
Hi. No, I have not tried OpenAI. I browsed the pages, and it seems to me that in order to apply this to 0AD they would need permission from WFG to build an environment for the game in Universe. They use the pixels from the game window as state; I see a problem here, since that window would show only partial information about the game state. Then they apply RL and output keyboard or mouse commands. Actions in the game seem a little more complicated: send workers to build something somewhere, maybe out of sight, for example. It doesn't look as simple as some games (arrow up, etc.). In any case, it would be good if WFG decided to ask these people to take a look at whether it is feasible. https://universe.openai.com/consent

If OpenAI can do it, then we can train it and hopefully a great bot comes out. I think Q-learning can really produce a decent bot. It could be programmed directly in the AI simulation, but it would be really nice if it could be treated as a black box. The algorithm needs the state, broken into all the possible perceptions, and all the possible actions the agent can take. I know that's a lot to ask, so a light version of the game would be nice.
-
I tried to apply reinforcement learning (Q-learning specifically) to Petra but failed miserably. It has been applied to many RTS bots and seems to be successful:
Civilization IV: https://pdfs.semanticscholar.org/9f9c/0f114b0c4d9b13ec048507a178fe9b3da4ae.pdf
Wargus: http://www.aaai.org/ocs/index.php/AIIDE/AIIDE12/paper/viewFile/5515/5734
BattleCity: https://arxiv.org/pdf/1602.04936

@jonbaer: Is it necessary to consider each manager as an agent? The headquarters module makes quite a lot of decisions, I think, and it receives the gameState as a parameter. Multi-agent RL might be more complicated, and maybe the whole bot could be considered the only agent. With those percepts of the environment in gameState, you only need a (long) list of all the possible actions and let the algorithm experiment (see the sketch below). Then the bot would need to apply the chosen action; queueing it might be necessary, I don't know. A neural network could be necessary if you don't want the bot to start from scratch each time, but with OpenAI all of that is taken care of, I guess. I hope you can give this a try. Reinforcement learning is the closest to human learning (it might be computationally intensive, though), and it can even be applied to other areas like pathfinding and game balancing.
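What I mean by letting the algorithm experiment, as a minimal tabular Q-learning sketch in Python (hypothetical: the state encoding and action list would have to come from gameState; none of this is Petra code):

import random
from collections import defaultdict

class QAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)        # (state, action) -> estimated value
        self.actions = actions             # the (long) list of possible actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:             # explore sometimes
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        self.q[(state, action)] += self.alpha * (reward + self.gamma * best_next - self.q[(state, action)])

Each AI turn you would encode gameState into a hashable state, queue the action returned by choose(), and feed a reward (score difference, for example) back into learn().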
-
Very cool. Also I like the theme with guitar and flute.
-
Thank you, fatherbushido. I can use this to search for the xml tags and get the stats.
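Something along these lines should work for pulling stats out of the templates (tag names like Health/Max and UnitMotion/WalkSpeed are what I expect to find there; double-check against the actual files):

import xml.etree.ElementTree as ET
from pathlib import Path

# Point this at a checkout; templates inherit via parent="...", so many
# values only appear in the base templates.
templates = Path("binaries/data/mods/public/simulation/templates")
for xml_file in templates.rglob("*.xml"):
    root = ET.parse(xml_file).getroot()
    health = root.findtext("Health/Max")
    speed = root.findtext("UnitMotion/WalkSpeed")
    if health or speed:
        print(xml_file.name, "hp:", health, "speed:", speed)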
-
Hi, guys. I will have time to work on this in 2 weeks, when I'm off work. I need to model the agents: units, structures, resources, etc. Where in the code can I get the numbers for each? HP, speed, attack, armor, etc. I couldn't find that info in http://trac.wildfiregames.com/browser/ps/trunk/binaries/data/mods/public/simulation/templates Thanks
-
Some weak players have 1500+ by doing this
-
Thank you, fatherbushido. That really helps me understand the code.
-
Thank you very much, Stanislas. I hope I can make something of value.
-
Hi, Stanislas. I had a look at http://trac.wildfiregames.com/browser/ps/trunk/binaries/data/mods/public/simulation/ai I was wondering if it were possible to create an inference engine and use Q-learning to command the AI to make better decisions, but I would need a way to get stimuli from the ongoing game. Or maybe, if the commands.txt is parsable, I could extract the actions people normally do and use that to create blind build orders. I found http://yieldprolog.sourceforge.net/ hoping to translate prolog into js. Thanks
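For the commands.txt idea, a rough sketch of what I have in mind, in Python rather than prolog (it assumes lines of the form "turn N ..." and "cmd <player> <json>"; verify against a real replay file first):

import json

def extract_commands(path):
    # Collect (turn, player, command) tuples from a replay's commands.txt
    turn, actions = 0, []
    with open(path) as f:
        for line in f:
            parts = line.split(None, 2)
            if not parts:
                continue
            if parts[0] == "turn" and len(parts) > 1:
                turn = int(parts[1])
            elif parts[0] == "cmd" and len(parts) == 3:
                actions.append((turn, int(parts[1]), json.loads(parts[2])))
    return actions

Grouping those tuples by turn would give the raw material for blind build orders.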
-
Hi, community. How do I go about understanding the AI code? I was hoping I could implement some logic and prediction for the AI without delving too deep into the GUI code. It would be nice if somehow I could get the information learned as the game progresses and inject commands in response to that, like from a black box; then I might be able to focus on the logic. Thanks
-
Never mind, I found it: https://wildfiregames.com/forum/index.php?/discover/unread/ It's an icon now. Sorry. If you can delete this silly post, please do.
-
I can no longer load the latest posts in the forum like before.