
Leaderboard

Popular Content

Showing content with the highest reputation on 2020-03-04 in all areas

  1. Because they are scared sh*tless to lose points, as their rank isn't the same as their skill.
    3 points
  2. I'm editing a map, Numancia, as a first attempt, and I need to know certain things about map creation... How could I make Gaia attack every x interval, and how can I keep the map's starting positions from being set automatically in the editor? The Atlas manual is very outdated and doesn't answer anything I need. How do I put units inside Gaia structures? The map doesn't work at the moment, but it's quite a beautiful creation by @Lankusnav. Thanks for the help. Currently it has too many failures to finish putting things on the map, so some parts have been left out.....
    2 points
  3. You get really creative when it comes to finding weird stats to be good at while neglecting the bigger picture. You get the best CC/PV but you're banned indefinitely. It's like when you play for the best K/D and let all your allies die. Anyway, the screenshot has already been taken, so you can now stop pressing F5 on your profile page to boost the kills... I mean profile views.
    2 points
  4. It will probably be extended by one month for each of your messages
    2 points
  5. I think that many people will want to know
    1 point
  6. The basic premise: you use the most tanky unit in terms of cost/tankiness, the Mauryan worker elephant, which has 300 health, 5 hack (that's already insane), and 8 pierce, all for only 150 food. So @#$%ing OP. Basically you can't do this in low numbers: if the enemy can micro to target your ranged units before your meatshields, it won't work; you need auto-attack to be targeting your elephants. Probably best to use with something like 30 archers minimum plus some elephants; cavalry can be used too, of course. I don't know how you're supposed to mass them; probably stockpile a lot of food and bring them in in one go, maybe 10 at once, although maybe they're only good as support and not as the sole meatshield. Or, similar to Skiritai, just use them for body-blocking, but you don't need too many of them since they're so OP.
Let's compare them to pikemen, using a resource cost base of 100. A pikeman's "hitpoints" (total damage-absorption potential) is 100*100/35 = 285.7; for the elephant it's 300*100/43/1.5 = 465, so roughly 65% more hitpoints. Of course they are much slower and lack the attack, but the more extreme the difference in capacity between unit types, the better the results.
Say you get 30 archers and then mass about 7 elephants in one batch (so you should really overdo food), and go in. Possible problems: the enemy selecting ~6 ranged units at a time and shift-clicking a row of your archers, then selecting 6 other units and repeating; or sending melee directly into your archers. But those are problems you can have with any other build. Still, the fewer and slower the meatshields (making it easier to reach the ranged infantry, with nothing physically blocking the way), and the more they can be ignored and walked through because they don't attack or pin enemy units, the more problems there are. If anyone tries this, tell me how it goes. I think it's a pretty stupid idea, but who knows; maybe with elephants just being support meatshields alongside some spears, archers, and cavalry, this could work somehow.
If you want to see more OP stuff like this, it'll only happen if I'm happy. One way to make me happy is to give me some money. To clarify, I am not requesting payment or a trade. Here's my Monero address: 471oEfj69SvUp19dV7N7vAJu4eaR8yd8267ubduq7dcQYPygQoq2dm5Us5eLbQrsjiWH2RhbsrQKHaQaG1L4QoNGQA7KRw7
    1 point
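The "hitpoints per resource" arithmetic in the post above can be sketched as a tiny script. The formula, the 1.5 discount factor for the elephant, and all the numbers come straight from the post, not from the game's actual simulation code:

```python
def effective_hp_per_100_res(health, cost, discount=1.0):
    """Damage a unit can absorb per 100 resources, as estimated in the post.

    The post divides the elephant's figure by an extra 1.5, presumably to
    discount its slow speed and lack of attack; that factor is kept as-is.
    """
    return health * 100 / cost / discount

# Pikeman: 100 HP at a cost base of 35 -> ~285.7 "hitpoints"
pikeman = effective_hp_per_100_res(100, 35)

# Mauryan worker elephant: 300 HP at a cost base of 43, 1.5 discount -> ~465
elephant = effective_hp_per_100_res(300, 43, discount=1.5)

print(round(pikeman, 1))   # 285.7
print(round(elephant, 1))  # 465.1
```

This reproduces the post's ~63-65% advantage for the elephant (465 / 285.7 ≈ 1.63).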
  7. Someone should make a quick n dirty little Command & Conquer mod to showcase this. In fact, it would be useful to eventually release a bunch of little official mods showcasing these types of features.
    1 point
  8. Thank you, I was able to build this fork and have it running now. I am currently on OSX with no GPU at the moment (usually anything I require a GPU for, I use Colab or Gradient)... I wasn't able to run PPO_CavalryVsInfantry because of what looked like Ray problems, but I'll figure it out.
    1 point
  9. Just dance; it's the same idea, except you don't waste resources making elephants. Both exploit nearest-enemy targeting; the difference is that the other one exploits the projectile dynamics as well.
    1 point
  10. I have also pushed the changes of D2199 to a branch on my fork of 0 AD if you prefer: https://github.com/brollb/0ad/tree/arcpatch-D2199
    1 point
  11. I don't get the problem with the infinite queue. I can't say I'm totally correct here, but the amount of resources that could be collected from a site in Rise of Nations were virtually infinite, which is a reasonable basis for the feature. It's a simple quality of life feature that is optional; the same can be said for things such as rally points.
    1 point
  12. @jonbaer Nice work with that organization - it looks like it has forked a lot of relevant repos! TensorForce and OpenAI Baselines would definitely be directly applicable to trying some of the SOTA reinforcement learning algorithms in 0 AD. The size of the state representation certainly comes up when exploring things like imitation learning. I actually have another fork of 0 AD that I use simply to log the game state, and the replays can get decently large (between 1 and 10 MB, IIRC). Of course, if you convert them into some specific state representation, they will likely be much more compact (depending on the representation).
As far as a 0 AD gym goes, there are actually two in the link I posted above (https://github.com/brollb/simple-0ad-example). In that repo, I created two different 0 AD gym environments for learning to kite. The first uses a very simple state space: the game state is represented simply as the distance to the nearest enemy units. As a result, the RL agent is able to learn to play it fairly quickly (although this representation would be insufficient if the enemy units used more sophisticated tactics than simply deathballing). The second gym uses a richer state representation: a simplified minimap centered on the player's units. This requires the RL agent to essentially "learn" to compute the distance, as it isn't already preprocessed. Although this is harder to learn, the representation could actually capture concepts like the enemy flanking the player, the edge of the map (or other impassable regions), etc. This type of state representation also makes a more fine-grained action space possible for the agent. (In the example, the RL agent can pick between 2 actions: attack the nearest enemy unit or retreat. With a minimap, the action space could actually include directional movement.)
That said, I am not convinced that there is a single state/action space representation for 0 AD, given the customizability of maps, players, civs, goals, etc., and the trade-offs between learnability and representational power. Since I don't think such a representation exists, I prefer providing a generic Python API for playing the game, from which OpenAI gym environments can easily be created by specifying the desired state/action space for a given scenario.
    1 point
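The first environment described above (state = distance to the nearest enemy, actions = attack-nearest or retreat) can be sketched in the Gym `reset`/`step` style. The real environments in simple-0ad-example talk to a running 0 AD instance; here a made-up 1-D chase model (all constants invented) stands in so the sketch is self-contained:

```python
class KiteEnv:
    """Toy kiting environment following the Gym reset/step convention.

    State: distance to the nearest enemy unit (a 1-element list).
    Actions: 0 = attack nearest enemy, 1 = retreat.
    The dynamics below are a stand-in, not 0 AD's simulation.
    """
    SAFE = 8.0           # hypothetical range at which attacking is "free"
    ENEMY_SPEED = 1.0    # enemy closes the gap every step
    RETREAT_SPEED = 1.5  # player outruns the enemy while retreating

    def reset(self):
        self.distance = 20.0
        self.steps = 0
        return [self.distance]

    def step(self, action):
        self.steps += 1
        if action == 0:  # attack: stand and shoot; rewarded only if safe
            reward = 1.0 if self.distance > self.SAFE else -1.0
            self.distance -= self.ENEMY_SPEED
        else:            # retreat: no damage dealt, but the gap opens
            reward = 0.0
            self.distance += self.RETREAT_SPEED - self.ENEMY_SPEED
        done = self.distance <= 0 or self.steps >= 100
        return [self.distance], reward, done, {}

# A hand-written kiting policy: shoot while out of reach, back off otherwise.
env = KiteEnv()
state, total, done = env.reset(), 0.0, False
while not done:
    action = 0 if state[0] > KiteEnv.SAFE else 1
    state, reward, done, _ = env.step(action)
    total += reward
```

Because the state is already "the distance", the good policy is a one-line threshold; with the minimap representation described in the post, the agent would have to learn that threshold from raw spatial input instead.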