jonbaer last won the day on October 15 2016

Profile Information

  • Location
    Brooklyn, NY
  • Interests
    Machine learning, AI, 0ad


jonbaer's Achievements


Sesquiplicarius (3/14)



  1. Trying to fix this myself: there seems to be a case, if you play vs. multiple AIs, where the diplomacy stance (ally/neutral/enemy) only changes once, from a preset condition, and then stays there. It would be nice to make this smarter and more challenging somehow. Are there any docs/posts that discuss how this is supposed to work? At the moment I am playing 1 vs. 5 AIs where I make decisions about which ally makes economic sense, but it would be great if there were a decision maker on the other side of the game too. Hope this makes sense. https://github.com/0ad/0ad/blob/master/binaries/data/mods/public/simulation/ai/petra/diplomacyManager.js
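To make the idea concrete, here is a toy sketch (in Python, not the actual Petra JS; every name and number below is made up) of what a periodically re-evaluated diplomacy stance could look like — score each other player by economic complementarity and hostility, instead of fixing the stance once at game start:

```python
# Toy sketch, NOT Petra code: re-score each player as a potential ally
# every so often, based on whether they hold resources we lack and on
# how often they have attacked us. All field names are hypothetical.

def ally_score(me, other):
    """Higher when 'other' has resources we lack and is not hostile."""
    shortage = {r: max(0, need - me["stock"].get(r, 0))
                for r, need in me["needs"].items()}
    surplus = sum(min(shortage[r], other["stock"].get(r, 0))
                  for r in shortage)
    return surplus - 10 * other["attacks_on_me"]

def pick_stance(me, other, ally_threshold=50, enemy_threshold=-20):
    s = ally_score(me, other)
    if s >= ally_threshold:
        return "ally"
    if s <= enemy_threshold:
        return "enemy"
    return "neutral"
```

Re-running `pick_stance` on a timer (or on events like an attack) is the whole point — the stance would then drift with the game instead of staying at its preset value.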
  2. I have only used it + compiled it on OSX and Linux. When you say "it", do you mean it won't compile, or that the Python client won't work/do anything on Windows? Is there an error trace somewhere? This is tough because obviously the game itself is (without doubt) rendered beautifully on Windows/GPUs/gaming rigs, but I think this bit (RL/ML/AI) is really being done more on Linux/in parallel. I come in peace and hope the two camps can work together :-). I guess leaving this as a compile-time option is the only way forward. I was hoping it could somehow work through the mod lobby too.
  3. I will try. I think the major issue here is probably versioning the protobuf builds at some point, and I don't know how that works. Maybe zero_ad becomes a pip-installed library with a version in line with the Subversion revision or the Alpha 23 version, etc. Everything else (beyond what is inside main.cpp) can just be built on top by default. In other words, just let Python have the ability to talk to pyrogenesis and push everything else to another repo. There should be (or probably already is) some way to detect a client/server mismatch. I don't think @irishninja needs the main build to include the clients; I could be wrong, or this has probably been discussed, but I haven't seen it yet.
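A rough sketch of the kind of client/server version handshake I mean — `zero_ad`, `PROTOCOL_VERSION`, and the version scheme are all assumptions here, not the real API:

```python
# Hedged sketch: the pip package pins a protocol version, and the client
# refuses to talk to a pyrogenesis build that reports a different one.
# The names and the "major.minor must match" policy are made up.

PROTOCOL_VERSION = "0.0.23"  # e.g. could track the Alpha 23 engine release

class VersionMismatch(Exception):
    pass

def check_handshake(server_version, client_version=PROTOCOL_VERSION):
    # Compare only major.minor; allow patch-level drift.
    if server_version.split(".")[:2] != client_version.split(".")[:2]:
        raise VersionMismatch(
            f"client speaks {client_version}, server speaks {server_version}")
    return True
```

The client would call this once on connect, so a mismatched pip install fails loudly instead of silently misbehaving mid-game.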
  4. I was able to fix my errors with pip3 install --upgrade tensorboardX, and playing around with the zero_ad Python client has been fun. One thing I'd like to figure out (this might already exist, or I just don't know whether it can be accomplished) would be for the Python client to inject a way to overwrite some of the JS prototypes. For example, in Petra the tradeManager links back to HQ for something like this: m.HQ.prototype.findMarketLocation = function(gameState, template). To me this is a decision-making ability RL would be pretty well suited for (I could be wrong), and having ways to optimize your market placement in game (say, in a Wonder game mode) on a random map would be a great RL accomplishment ... especially since you would get bonus points for working around enemies + allies. Sorry, I have always been fascinated w/ that function of the game; kudos to whoever wrote it. There are occasions where this AI makes some serious mistakes, not identifying chokepoints/narrow water routes, etc., but to me @ least it's an important part of the game where pre-simulating, or making that part of the AI smarter, would be key. Also, will D2199 make it into master? I can't seem to locate where that decision was(n't) made ... thanks.
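For illustration only, here is a toy Python version of the kind of placement decision that function makes. This is not Petra's actual logic — the candidate/obstacle model is made up, and the real heuristic scores far more than distance — but it shows why the choice is a natural RL target: trade gain in 0 A.D. grows with the distance between markets.

```python
# Toy illustration, NOT Petra's findMarketLocation: pick the reachable
# candidate tile farthest from the existing market, since trade gain
# scales with inter-market distance. Tiles are (x, y) pairs.
import math

def find_market_location(candidates, existing_market, blocked):
    """Return the unblocked candidate farthest from existing_market."""
    best, best_gain = None, -1.0
    for tile in candidates:
        if tile in blocked:
            continue
        gain = math.dist(tile, existing_market)
        if gain > best_gain:
            best, best_gain = tile, gain
    return best
```

An RL policy would replace the `gain` heuristic, which is exactly where it could learn to account for chokepoints and hostile territory that the hand-written scorer misses.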
  5. There are still some minor issues, but I got it running. I had to install rllib directly (pip3 install ray[rllib]) + obviously forgot to install the map(s) the first time around :-\ It looks like I may not be running w/ the correct version of TF inside of Ray though, since I get this from the logger:

     AttributeError: 'SummaryWriter' object has no attribute 'flush'
     ...
     Traceback (most recent call last):
       File "/usr/local/lib/python3.7/site-packages/ray/tune/trial_runner.py", line 496, in _process_trial
         result, terminate=(decision == TrialScheduler.STOP))
       File "/usr/local/lib/python3.7/site-packages/ray/tune/trial.py", line 434, in update_last_result
         self.result_logger.on_result(self.last_result)
       File "/usr/local/lib/python3.7/site-packages/ray/tune/logger.py", line 295, in on_result
         _logger.on_result(result)
       File "/usr/local/lib/python3.7/site-packages/ray/tune/logger.py", line 214, in on_result
         full_attr, value, global_step=step)
       File "/usr/local/lib/python3.7/site-packages/tensorboardX/writer.py", line 395, in add_histogram
         self.file_writer.add_summary(histogram(tag, values, bins), global_step, walltime)
       File "/usr/local/lib/python3.7/site-packages/tensorboardX/summary.py", line 142, in histogram
         hist = make_histogram(values.astype(float), bins)
     AttributeError: 'NoneType' object has no attribute 'astype'

     Is there something inside ~/ray_results that would confirm a successful run? (I have not used it much yet, but will read over the docs this week.)
  6. Thank you, I was able to build this fork and have it running now. I am currently on OSX w/ no GPU @ the moment (usually anything I require a GPU for, I use Colab or Gradient). I wasn't able to run PPO_CavalryVsInfantry because of what looked like Ray problems, but I will figure it out.
  7. Hmm ... I still get: Premake args: --with-rlinterface --atlas / Error: invalid option 'with-rlinterface' / ERROR: Premake failed. I don't see it in premake5.lua @ all ... https://github.com/0ad/0ad/blob/master/build/premake/premake5.lua There is only a master branch there, right? Edit: Sorry, just to be clear: should I apply the D2199 diff if I want that option? I meant that I just did not see it anywhere in my latest git pull.
  8. I think I really missed it, but what is the status of https://code.wildfiregames.com/D2199 ? I don't seem to be able to locate newoption { trigger = "with-rlinterface", description = "Enable RPC interface for reinforcement learning" } in the git copy I am building from.
  9. I actually started a small repo area a while back (after I found the Hannibal 0AD bot): https://github.com/0ad4ai ... but I think the problem he had was that releases were moving so quickly it was hard to nail down a solid externalized interface. I think the minigames are a great place to start, though I tend to break my ideas up too narrowly (market production, for example); many of these obviously relate to techniques already used in StarCraft play, like resource management. I did not bail on these ideas, but I found a "simpler" version easier to work w/, so I have a smaller built copy; even then, I moved on to MicroRTS for quicker implementation of ideas: https://github.com/santiontanon/microrts ... There is an OpenAI gym for it, but likewise the issue always seems to be which format to write state out in (binary vs. JSON), especially when your state is quite large. Either way, I would love to see an 0AD-gym be available @ some point. It's just hard to say whether it justifies pulling down and using the entire game, or just, say, 1 map + 2/3 civs, etc.
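To show what I mean by an 0AD-gym, here is a minimal shell in the standard gym style (reset/step). The `ZeroAdClient` below is a stub stand-in so the sketch is self-contained; the real RPC client's API will certainly differ, and the reward is a placeholder.

```python
# Sketch of an "0AD-gym" shell. ZeroAdClient is a STUB standing in for a
# real zero_ad-style RPC client; its methods and return shapes are made up.

class ZeroAdClient:
    def reset_scenario(self, name):
        return {"units": [], "resources": 300}
    def step(self, actions):
        return {"units": [], "resources": 300}, False

class ZeroAdEnv:
    """Minimal gym-style env: reset() -> obs, step(a) -> (obs, r, done, info)."""
    def __init__(self, scenario="CavalryVsInfantry"):
        self.client = ZeroAdClient()
        self.scenario = scenario

    def reset(self):
        return self.client.reset_scenario(self.scenario)

    def step(self, action):
        obs, done = self.client.step([action])
        reward = obs["resources"] / 1000.0  # placeholder reward signal
        return obs, reward, done, {}
```

The point of the wrapper is that everything above this line (RLlib, stable-baselines, etc.) only ever sees the gym interface, so the "1 map + 2/3 civs" question becomes a scenario-selection detail rather than a packaging one.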
  10. Would there happen to be anyone here who could put together / design this map layout? (It is basically the Arctic Circle w/ the Northern Sea Route open.) https://en.wikipedia.org/wiki/Northern_Sea_Route
  11. Wasn't there already someone doing nightly builds from the git repo, or is that just my imagination?
  12. Thanks for pointing out the Economy Simulation mod; I think I will look @ it ... I didn't mean for population aging to be so complex, just something simpler. I feel like there are moments where I don't want to have to kill off idle types of units (fishers, etc.) and would rather have some mechanic where they could either convert on their own or just die off when not needed, but in a more realistic way that is geared toward the overall economics of the gameplay.
  13. Has there ever been any discussion of maybe adding, @ some point, a new dynamic for population aging? I'm thinking something like a basic timer, but one whose time could be x-factored by a medicine technology tree somewhere; also adding some demographic-type charting; and maybe x-factored attack dynamics, where a young soldier's attack = x1, middle age (prime) is x5, and old age is back to x1. Thoughts? [0] - https://en.wikipedia.org/wiki/Population_ageing
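A worked sketch of the multiplier idea, with all thresholds and numbers purely illustrative (the medicine "x-factor" here just stretches the life stages):

```python
# Illustrative only: x1 attack when young, x5 in prime, x1 in old age,
# with the age thresholds scaled by a hypothetical medicine-tech level.

def attack_multiplier(age, medicine_level=0):
    stretch = 1.0 + 0.1 * medicine_level  # each tech level stretches stages 10%
    young_until = 20 * stretch
    prime_until = 45 * stretch
    if age < young_until:
        return 1
    if age < prime_until:
        return 5
    return 1
```

So a unit @ age 30 hits at x5, while researching one medicine level pushes the young/prime boundary from 20 to 22 — the same mechanism could drive the demographic charts.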
  14. The way I see it, nomad mode is a bit flawed and yet excellent at the same time. Random maps throwing out random resources will immediately doom certain civs right off the bat, depending on geography. Your best strategy is to build a dock and hope for the best in gathering up enough to build a Civic Centre. Part of it is a short mini-game within the game. So to answer your question: build a dock and immediately gather wood/stone/metal (with placing your dock equidistant from all 3 being part of your fate/doom/win).