
Proposal: AI Tournaments



However, I'm not sure it is simple to build a "really good city planner". When I started my economy project, I found it particularly hard to define what "really good" means, especially in edge cases. The same goes for military defense: when I look at a map, I can tell immediately how I'll set up my defense, but I cannot tell how I made up my mind.

Yeah, I have the same problem, which is actually very interesting (I have given a lot of thought to "how I decide how to allocate resources"). Edge cases are everywhere with AIs.

Anyway, I'm not really saying it's simple, just that if you allow yourself enough time to run the computation, it's possible to eliminate quite a few issues.


sanderd17:

The problem is with agentx's proposal: I think he proposes a tournament where only AI players take part. The double-computation check will probably not catch that cheat, since the same AI code is executed on every machine.

Hephaestion:

I got curious about my own statement and created a simple demo bot to show the "code injection attack" (attached to the post; if forum mods remove it for being offensive, I have no problem with that). It was actually very easy to build once you know that all players reside in the same JS runtime. RetroBot does not command its own units in any way, but it injects a trojan into the enemy Aegis player. Currently the AEGIS name is hardcoded, but I'm sure this can be 'fixed'.

A demonstration of the bot can be obtained by installing the .zip and running

pyrogenesis.exe -quickstart -autostart="Oasis 10" -autostart-ai=1:retrobot -autostart-ai=2:aegis

(I succeeded with SVN rev. 14760.) After player 1 has sent its chat message "Injected patch into AEGIS ai", nothing happens on its side anymore, but player 2 (Aegis) starts to randomly kill its own units while the Aegis start-up sequence runs 'normally'.

It would look better on the Sicilia map, I presume, but unfortunately that map is NLA.
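The attack class described here is easy to sketch in the abstract. The names below are hypothetical stand-ins, not the real RetroBot or Aegis code; the point is only that two bots sharing one JS runtime can reach and rewrite each other's objects:

```javascript
// Abstract sketch of a shared-runtime injection. Hypothetical names throughout.
var sharedRuntime = {}; // stands in for the global scope all AI players share

// The victim bot registers its update logic.
sharedRuntime.aegis = {
    onUpdate: function () { return "gather"; }
};

// The attacker bot, living in the same runtime, never commands its own units;
// it simply replaces the victim's logic with a trojan.
sharedRuntime.retrobot = {
    inject: function () {
        sharedRuntime.aegis.onUpdate = function () { return "killOwnUnits"; };
        return "Injected patch into AEGIS ai";
    }
};
```

After `inject()` runs, every later call to the victim's `onUpdate()` executes the attacker's code, which matches the behaviour described above.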

agentx:

I'm afraid a 'real' AI tournament (possibly with prizes to win?) would require manual review of each participant to find such dirty tricks.

Edit: Attachment got lost during preview.

retrobot.zip

Edited by Teiresias

I'm afraid a 'real' AI tournament (possibly with prizes to win?) would require manual review of each participant to find such dirty tricks.

That's sad, but I like your trojan bot; he's a winner. :) On the other hand, if "inclusion in the next alpha" is an acceptable prize at all, manual review will probably happen anyway. Perhaps, depending on how dynamically a bot is coded, Object.freeze() might be an option. Would you like to give it a try?
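A minimal sketch of the Object.freeze() idea, assuming hypothetical names (this is not the real Aegis API):

```javascript
// Sketch: hardening a bot's objects against the injection attack above.
"use strict";

var aegis = {
    name: "AEGIS",
    issueOrder: function (unit, order) { return unit + " -> " + order; }
};

// Freeze the object: its properties can no longer be replaced or removed.
Object.freeze(aegis);

var injectionSucceeded = true;
try {
    // a hostile bot in the same runtime tries to swap in a trojan
    aegis.issueOrder = function () { return "killOwnUnits"; };
} catch (e) {
    injectionSucceeded = false; // strict mode: writing to a frozen object throws
}
```

Note that Object.freeze() is shallow, so nested objects would need freezing too, and a bot that rewrites its own structures at runtime may not tolerate being frozen at all; that is the "depending on how dynamic a bot is coded" caveat.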


I highly doubt this is feasible at the current development state, but I've always wanted to see an RTS AI that had the same interface for every difficulty, with a different implementation per difficulty.

In pseudocode, the AI would look like this for every game difficulty:

assessResources();
planDefences();
if (animalsAround)
    huntForFood();
else
    buildFarms();
(...)

And the implementation would be as follows for each difficulty:

EASY:

void huntForFood(){
    moveMouse(womanClosestToAnimal, (pixelsPerSecond)30); // this mouse would be virtual and invisible to the player, of course
    mouse.click();
    moveMouse(animalClosestToWoman, (pixelsPerSecond)30);
    mouse.rightClick();
    (...)
}

MEDIUM:

void huntForFood(){
    moveMouse(womanClosestToAnimal, (pixelsPerSecond)60);
    mouse.click();
    moveMouse(animalClosestToWoman, (pixelsPerSecond)60);
    mouse.rightClick();
    (...)
}

HARD:

void huntForFood(){
    for each woman in WomenCloseToAnimal
        woman.issueOrder(animal);
}



If this is combined with a genetic AI algorithm (especially one that reacts to elapsed time and plans its city based on a random number), a human player wouldn't notice that it's the same algorithm at each difficulty. It would simply appear better.
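The "same interface, different implementation" idea above is essentially the strategy pattern. A minimal JavaScript sketch, with all names hypothetical and orders reduced to strings:

```javascript
// Sketch: one AI interface, one implementation table per difficulty.
function makeAI(difficulty) {
    var impl = {
        easy: {
            huntForFood: function (gameState) {
                // slow reaction: only one woman is tasked per tick
                return ["order:" + gameState.women[0] + " hunt " + gameState.animal];
            }
        },
        hard: {
            huntForFood: function (gameState) {
                // every nearby woman is tasked at once
                return gameState.women.map(function (w) {
                    return "order:" + w + " hunt " + gameState.animal;
                });
            }
        }
    }[difficulty];

    return {
        // the interface is identical regardless of difficulty
        tick: function (gameState) {
            return gameState.animalsAround ? impl.huntForFood(gameState)
                                           : ["order:buildFarm"];
        }
    };
}
```

The caller only ever sees `tick()`; from the outside the hard AI is not a different algorithm, it just reacts faster and wider.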


Is such an AI already in existence?

Edited by blargin3

This could be done, I think, but I'm not sure it's useful currently because we have no idea what the best interface structure is...

There is an API, but it still won't open the way for innovative or different algorithms like genetic ones. Then again, I could be wrong, because extra variables could be defined despite implementing the interface, even global ones.

It's not a bad idea!

You can of course simplify and keep out the mouse movements.

e.g. here:

void huntForFood(){
    womanClosestTo((Entity)animal).moveTo(animal.coordinates, (pixelsPerSecond)30, ENTITY_MOVEMENT_RUN);
}

womanClosestTo will be a function and has to iterate over most of the entities anyway, so why not give the entity the order to move with speed 30 /*px/s*/ immediately. ENTITY_MOVEMENT_RUN could be a global constant or an enumeration value defined as (PixelsPerSecond)30, so that speed adaptions could easily be made later on.

Edited by Hephaestion

Actually, AIs don't really have a mouse right now, so it'd be more like "FemaleCloseToAnimal.GatherAnimal(animal)".

The "same interface but different implementation" approach is sort of what I'm trying to do with the AI difficulty right now (which kind of requires the AI to be somewhat complete first, so the implementations aren't that different yet), but it's not nearly that trivial to make an AI dumber.


Same thoughts...

Couldn't we define a RECURSION_LIMIT? Whenever the AI searches iteratively for something (like good places to build a field, or a good dropsite), it is simply stopped earlier if the difficulty is set lower. Then not all resources are examined, but the AI still has to choose the best of those it did examine. There might have been a better choice, though, had it not been interrupted by the RECURSION_LIMIT.

The same goes for finding good attack routes: the AI has limited time to find a good one.

Of course, for this to work properly, the order of the entities we iterate over has to be random!
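A sketch of this idea, assuming the bot's searches go through a helper like the hypothetical one below. Note the shuffle before the cutoff, which matches the requirement that the iteration order be random:

```javascript
// Sketch: difficulty-limited search. Easy AIs examine fewer candidates,
// so they pick the best of a smaller random sample. Hypothetical names.
function bestCandidate(candidates, score, limit, rng) {
    var pool = candidates.slice();
    // Fisher-Yates shuffle using the supplied rng, so the early
    // cutoff does not bias the result toward the original list order
    for (var i = pool.length - 1; i > 0; i--) {
        var j = Math.floor(rng() * (i + 1));
        var t = pool[i]; pool[i] = pool[j]; pool[j] = t;
    }
    var best = null, bestScore = -Infinity;
    var n = Math.min(limit, pool.length); // the RECURSION_LIMIT idea
    for (var k = 0; k < n; k++) {
        var s = score(pool[k]);
        if (s > bestScore) { bestScore = s; best = pool[k]; }
    }
    return best;
}
```

With `limit` set to the pool size the search is exhaustive (hard difficulty); a smaller limit gives a plausible but possibly suboptimal choice (easy difficulty).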

Edited by Hephaestion

I currently tend to implement behaviours like this:

C.behaviour = {
  start: "whiteflag",
  whiteflag: ["economy", "attack", "defense", "victory"],
  economy:   ["populate", "gather", "hunt", "expand", "research", "fortify", "mobilize", "defense", "victory"],
  mobilize:  ["parade", "analyze"],
  attack:    [...],
  defense:   [...],
  victory:   [],
};

The AI initializes on the start frame, and on each call it checks the listed frames for their entry conditions. If at least one frame qualifies, the AI switches to that new frame. The frames also have exit conditions, which let the AI fall back to the calling frame. So, for example, if economy has an entry condition like canTrain, the AI changes to it directly on startup; its exit condition is e.g. noPopulation && noBuilding. If the enemy destroys everything, the AI falls back to whiteflag and the game is over. Or, if there is nothing left to fight, the AI enters victory.

Some frames have tasks listed that have no behaviour entry, e.g. fortify; these just get executed. The idea is that each difficulty has its own behaviour with certain features. That way the AI does what it does in a convincing way: maybe on an easy level the AI doesn't build towers, but on a hard level I don't want the towers placed in a random fashion.
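The frame hopping described above can be sketched with a small driver. Everything here is hypothetical: the entry/exit conditions are plain predicates passed in from outside, not the real bot's checks:

```javascript
// Sketch: frame-based state machine with entry and exit conditions.
function makeStateMachine(behaviour, conditions) {
    var frame = behaviour.start;
    return {
        current: function () { return frame; },
        tick: function (state) {
            // exit condition met? fall back to the start frame
            var cur = conditions[frame];
            if (cur && cur.exit && cur.exit(state)) frame = behaviour.start;
            // check the listed frames for a satisfied entry condition
            var next = behaviour[frame] || [];
            for (var i = 0; i < next.length; i++) {
                var c = conditions[next[i]];
                if (c && c.enter && c.enter(state)) { frame = next[i]; break; }
            }
            return frame;
        }
    };
}
```

This checks the listed frames left to right; randomizing that order, as discussed below, would only change the loop's traversal.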


Interesting state machine. Wraitii also uses different stages, but it really is a lot of hard work he has done! *hats off*

Yours looks like happy hopping around, quite dynamic. Is the order of states checked for entry within one of the branchable (non-directly-executable) states fixed? Or do you traverse each branchable main state randomly? (All the left-hand "<var>:" keys could also be called "state wrappers"...)

The check whether all requirements are met to execute a state or direct action is useful and looks safe. But won't the problem be how to decide whether the preconditions for attack are given? Or rather whether one should attack at all... hm, I think at this point my idea for a semi-genetic algorithm could step in:

If the raw conditions are met (soldiers can be mobilized/freed without hampering economy rates too much, and the enemy has been scouted), then: [Note: better not check 'are we invaded' more than once; the algorithm that follows should cover this automatically through time and experience, so that it learns it's wise to abort a planned invasion to protect the homeland/own people.]

while --invasion_count_determined_randomly_before > -1:

  • Are we invaded ourselves? YES/NO -> store.
    • Shall we abort and switch to defensive action/state (see agentx's state machine above)? Pick YES/NO randomly. -> store.
  • Determine count of units that should be required for another Invasion. Pick it randomly. Note the number.
  • Guess: The number of available units is enough to succeed? YES/NO. -> store.
    • Decide randomly one out of three modes:
      • Draw some troops from already out of our own territory fielded soldiers? Neutral ground. YES/NO -> Save the outcome.
      • Draw reserves from troops on enemy territory? YES/NO -> Save.
    • Draw reserves from the homeland even though the randomly picked economy threshold is not met? YES/NO -> Save.
    • If those units are engaged, draw troops anyway? YES/NO -> Save.
    • Draw from the backline or the frontline?
    • Order cavalry, melee infantry, archers, catapults or bolt shooters here, or all types, or any combination of them (decided randomly)? --> Save the result in the decision tree.
    • Decide if soldiers are too far to arrive at the destination in time? YES/NO randomly -> Store.
    • Is a unit nearby in trouble?
      • Shall we help? YES/NO --> store.
    • Decide invasion target count. (#target_count = random(0, NUMBER_OF_ATTACK_TARGETS_AT_ONCE_MAX)). Note the random number of positions in a vector or list.
    • While --#target_count > -1:
      • Divide attack on several enemy factions at the same time? #tribes_to_attack_maximum = 0..MAX -> Store.

        #efforts_for_picking_target = -1

        target_tribe_picked = false

        While not target_tribe_picked && not ++#efforts_for_picking_target > maximum_number_of_efforts_to_pick_a_target_tribe:

        • Pick random target_tribe.
        • If not target_tribe.is_ourselves() && not target_tribe.is_ally() && target_tribe.is_scouted():
          • If target_tribes_picked[this_round].isEmpty() || /*this tribe was chosen already at least once [so it's valid, let's pick it again]?*/ targets_picked[this_round].exists(target_tribe) || /*Okay, then another tribe has been attacked somewhen previously this round: so has it been decided to attack more than one enemy? */ count_different_tribes_attacked_this_round() < #tribes_to_attack_maximum):
            • Accept the random target_tribe. -> Store.
      • Attack picked target tribe at random enemy position. [not necessarily near the border, can also be in the middle of enemy territory but losses will be high and the AI will learn from that].
      • Give a random number of attack commands to individual units/formations. (the count of commands is random and is to be noted. So is which command mode is picked. e.g. what the target was (e.g. hero, catapult, ...).)
  • Note the total outcome of the current actions. -> Store units_lost_count (killed + captured), units_returned_average_health, units_lost_average_attack_strength, units_lost_average_defense_strength, ground gained/lost, economy state (growth rate = (production rate = harvest_rate + timber_rate/average_distance_to_tree_per_worker + goods_production_rate) + buildings_built_rate + units_born_rate), ...
Each decision is weighted by the losses and the economy state, compared with all other tribes on the map that have been attacked. If a route in the above tree (which is carefully stored) has been taken once and turned out badly, that counts against it: at each decision we look ahead to the final results and average them over all branches. The average outcome weights the decision, so that an untried path will be taken, as will a previously successful branch.

This way, a branch that results in a better outcome (no matter how) will always be preferred over one with a worse outcome.
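The outcome-weighted branch selection can be boiled down to a small sketch. This is a hypothetical simplification of the full tree: one memory of per-branch outcomes, with an optimistic default so untried branches stay attractive:

```javascript
// Sketch: remember average outcomes per decision branch, prefer the best.
function DecisionMemory(optimisticDefault) {
    this.stats = {};              // branch -> { total, count }
    this.def = optimisticDefault; // score assumed for untried branches
}
DecisionMemory.prototype.record = function (branch, outcome) {
    var s = this.stats[branch] || (this.stats[branch] = { total: 0, count: 0 });
    s.total += outcome;
    s.count += 1;
};
DecisionMemory.prototype.average = function (branch) {
    var s = this.stats[branch];
    return s ? s.total / s.count : this.def;
};
DecisionMemory.prototype.pick = function (branches) {
    var best = branches[0], self = this;
    branches.forEach(function (b) {
        if (self.average(b) > self.average(best)) best = b;
    });
    return best;
};
```

With a positive default, a branch that was never tried beats a branch that ended badly, which gives the "non-gone way will be taken" behaviour; a branch with good recorded outcomes beats both.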

Some frames have tasks listed having no behaviour entry, e.g. fortify, these just get executed. The idea is each difficulty has its own behaviour with certain features. That way the AI does what it does in a convincing way. Maybe on an easy level the AI doesn't build towers, but on a hard level I don't want the towers placed in a random fashion.

Sounds promising. Could you test it in the simulation? Edited by Hephaestion

Excellent input, I like that my post has generated some thoughts.

Let me clarify some of the things I'm implying, though.

I'd personally like to see the AI mimic a human player as much as possible. I think that (with an ideal AI) if you recorded a game and watched the AI go, it would pass the 'Turing Test'. Hence my virtual-mouse idea. A human player is limited in what they can do per command with the current UI (which I'm not complaining about), whereas the AI can issue commands for the women to hunt, 2 men to chop wood, 2 men to mine, and the cavalry to start scouting, all in one frame! This ability is what I believe will make AIs hard to beat, and implementing the virtual-mouse idea might bring them down to our level even with a great interface. But that's for an ideal world.

As for the placement of the buildings, I wouldn't want them just randomly placed; I'd want their placement to be based on a random number.

In impractical demonstrative pseudocode:

double rand = randomMinMax(0, 6);

// make trained champions gather at a randomly-based point
// (somewhere East, between 0-60 units of the CC) until ready for attack
Point championGatherPoint = Point(rand*10, CivilCenter.y);

// build the first house 30 units North of the CC, and 0-30 units to the West
findClosestCitizen().buildHouse(CivilCenter.x - (rand*5), CivilCenter.y - 30);

// skip a lot
if (population.needsHouses) {
    // build houses along a straight line Northwards with spacing of 0-30 units
    // between houses (assuming the house's local origin is in the center)
    findClosestCitizen().buildHouse(lastHouse.x, lastHouse.y - (house.width/2) - (rand*5));
}

This random-based spacing changes the rate at which units traverse the city. If the spacing between the houses is zero, it'll take just that little bit longer for a unit to go around them; if the spacing is 30, units can traverse faster, but any wall building may require more stone and build time to fit the houses in. Add a bunch more little values like this, based off the same random number iterated over thousands of frames, and you have an AI you'll never know what to expect from, and a unique game almost every time.

And I feel that the virtual mouse idea will add predictability to the difficulty while maintaining this random-based strategy.

EDIT: or you could skip the mouse thing and just have a command timer that's high in easy, low in medium, and non-existent in hard. :nod:

Edited by blargin3

Aegis doesn't actually really use state. There is some state logic, but surprisingly little, which is actually part of why it sort of fails in the late game, as my code isn't perfectly extensible yet. I believe that making more general-purpose code leads to a more adaptable and less gameable AI.

What you've got to consider is that anything hardcoded can fail. In your example, blargin, what if north of the CC is the sea?

Also, adding a virtual mouse is a TON of work if you want actual mouse-like dynamics.


Interesting state machine. Wraitii also uses different stages, but it really is a lot of hard work he has done! *hats off*

Yours looks like happy hopping around, quite dynamic. Is the order of states checked for entry within one of the branchable (non-directly-executable) states fixed? Or do you traverse each branchable main state randomly? (All the left-hand "<var>:" keys could also be called "state wrappers"...)

The idea came mostly from reading Wraitii's code. I've spent quite some hours fighting Aegis, and now many things make sense. Without his work I would not have invested a second thinking about a bot, because I had no idea where to start.

I hope the hopping doesn't develop into an issue; the frames should be clearly defined without overlap. The left-over-right vs. random order is a good idea and easy to code, noted. One thing I plan differently are targets: instead of setting goals like population = 100, I'll try flow rates like population growth per minute, metal per minute, etc. I'm still not sure whether the math can be based on template data or needs heuristics, or both. That should work quite well with AI difficulties too, because all beginners start with a low resource-gathering rate.
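Tracking such flow rates instead of absolute goals could be sketched with a sliding-window helper like this (hypothetical, not engine API):

```javascript
// Sketch: measure a flow rate (e.g. population growth per minute)
// over a sliding window instead of targeting an absolute value.
function RateTracker(windowMinutes) {
    this.window = windowMinutes;
    this.samples = []; // array of [minute, value] pairs
}
RateTracker.prototype.sample = function (minute, value) {
    this.samples.push([minute, value]);
    // drop samples that fell out of the window
    while (this.samples[0][0] < minute - this.window) this.samples.shift();
};
RateTracker.prototype.ratePerMinute = function () {
    if (this.samples.length < 2) return 0;
    var first = this.samples[0];
    var last = this.samples[this.samples.length - 1];
    var dt = last[0] - first[0];
    return dt > 0 ? (last[1] - first[1]) / dt : 0;
};
```

The AI would then compare `ratePerMinute()` against a per-difficulty target rate rather than against a fixed population count.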

What I'm really interested in is that an AI has far more possibilities on a map than a user handicapped with a mouse interface. I've never managed to use archers efficiently; they are quite powerful if they are placed correctly and manage to keep themselves out of trouble. In the end I didn't train them at all, or just put them in towers where they are cheaper. An AI should be able to handle 3 or 4 groups of archers and clear an area effectively.

Also tactics: on one end is the guerilla approach with, let's say, 20 micro-groups of 3 hoplites, each attacking one enemy unit and, after the rush, gathering around a healer while another bunch attacks, both alternating; on the other end, the mega strike with 200 warriors per city center. I think there are useful attack patterns in between, and an AI should be able to launch them all to keep the human enemy on the edge of his chair.

The ultimate dream is this:

[Image: map of the Battle of Gaugamela, 331 BC, opening movements]

I really wonder what kind of AI might come up with such a strategy?

PS: Many thanks for your useful and detailed decision tree. Post is bookmarked.

PPS: This is getting off topic, I'll start another thread with AI strategies. Let's discuss AI tournament obstacles here.


@blargin3: I've seen on the other thread the beautiful results of a random map generator, all with completely different topography. I'd guess a hardcoded city layout would lead to funny results on 50% of them. The key is to think differently and put the computation on the cost side.

Somewhere on YouTube there is a video of a group of little car-like robots clearing a room full of scattered table-tennis balls by moving them all to the middle of said room. The algo implemented in each bot is surprisingly simple: move forward until three balls are in front, then stop, drive back a little, turn towards a new random direction, and repeat.

I call that ant programming: simple unit behaviour leads to complex group behaviour. Here in Europe many cities developed without planning into efficient infrastructures. What would an algo for organic village growth look like?
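One toy answer, as a sketch: let each new house attach to a randomly chosen existing one, with no global plan at all. Grid-based and entirely hypothetical, but it shows how little per-unit logic is needed to get an organic, connected layout:

```javascript
// Sketch: organic village growth by a single local rule.
// Each new house is placed adjacent to a randomly chosen existing house.
function growVillage(nHouses, rng) {
    var houses = [{ x: 0, y: 0 }]; // founding house at the centre
    var dirs = [[1, 0], [-1, 0], [0, 1], [0, -1]];
    var taken = { "0,0": true };
    while (houses.length < nHouses) {
        var parent = houses[Math.floor(rng() * houses.length)];
        var d = dirs[Math.floor(rng() * dirs.length)];
        var key = (parent.x + d[0]) + "," + (parent.y + d[1]);
        if (!taken[key]) { // skip occupied plots, try again
            taken[key] = true;
            houses.push({ x: parent.x + d[0], y: parent.y + d[1] });
        }
    }
    return houses;
}
```

Every house is reachable from the founding one by construction; adding simple local vetoes (water tile, too steep, too close to the wall) would keep the same rule working on random maps.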


If you wondered what I did since yesterday... I just started writing our AI for the Roman Republic mod (rather, branch, probably?). It combines agentx's brilliant state machine with my algorithm for decision making.

The decision-making algorithm steps in for each "command". A command can be anything you can think of; generally it's a subdivision of tasks, e.g. one command for the east front and one for the west. Here is a more detailed example I put together. It also shows how we plan to introduce councils and a sympathy/reputation system. Time will also find its way in, to allow for night/day later on. But first the example: you will notice that the AI is present despite us being a human participant. This is intentional. What we plan is a new AI that relays the decision making to citizens (because of performance issues we restrict it to the elite: council members, military leaders, ...).

command_defensive: {
    units_assigned: {
        commanders_in_chief: [
            new CommanderInChief(this.units_assigned.all_officers.chooseRandomly())
                .assignedArmyCommanders.addAll(this.units_assigned.army_commanders)
                /* each army commander has an army assigned; this way we get a realistic command structure. */
        ]
    }
},
command_occupation: {
    occupied_tribe: ai.getTribe(tribe_that_surrendered),
    commanders_in_chief: [],
    decision_leadership: {
        IMPOSE_FOREIGN_LEADER: impose_foreign_leader,
        FORCE_NO_LEADER: force_no_leader,
        CHOOSE_LEADER_FROM_OCCUPIED_ELITE: choose_leader_from_occupied_elite,
        ALLOW_ELITE_OF_OCCUPIED_TRIBE_PROPOSE_A_LEADER: allow_elite_of_occupied_tribe_propose_a_leader
    },
    allow_elite_of_occupied_tribe_propose_a_leader: function() {
        this.occupied_tribe.councils.senate.forbidden = false;
        this.occupied_tribe.councils.senate.getUpcomingSession()
            .setPointOfTime(Math.min(this.occupied_tribe.councils.senate.upcoming_session.getPointOfTime(), NOW() + 3 * 60 * MINUTES))
            .getTopics().addTopic(new Topic(this /* == command_occupation */)
                .addPoll(
                    new Poll(
                        ELITE,
                        POLL_TYPE_YES_NO,
                        this.occupied_tribe.leadership.consuls.addProposal(
                            this.occupied_tribe.getAllUnits(SORT_REPUTATION_DESCENDING)[0]
                        ).getProposals()[LENGTH - 1]
                        .shallBecomeNextLeader // <-- this is what is decided
                    )
                )
            );
        // set up the poll for our own tribe, as the victorious party will have the 'rights' to overrule the subdued tribe's decisions.
        ai.councils.senate.upcoming_session.topics[] = new Topic(this).addPoll(
            new Poll(
                COUNCIL_SENATE,
                POLL_TYPE_YES_NO,
                this.occupied_tribe.leadership.proposals[LENGTH - 1].shallBecomeNextLeader
            )
        );
    },
    ... /* other commands */
So much for a snapshot. Thank you for the interesting AI discussion that made me put that (and 500 lines more) together. Currently I'm still fighting the algorithm at a complicated part, but I'm now convinced that it will work. The logic is a little bit complicated, as we use the AI's 'experience' of previous decisions in similar situations to make the best possible decision. Hopefully we can test it soon. It would be fun to see the AI plan to attack 10 targets at the same time... and shift its troops from one front to the other because a commander requested reserves! I've implemented that! I just need more time to finalize it, set up the 0 A.D. system, and make my work compatible with Aegis / the current system (if possible). The AI might even decide to force a counterattack on the civic center if it is attacked itself.

Ah, I forgot, I added supply lines too. That's also a 'command', so the AI will learn how to control the supply lines as well. I'm really excited.

Edited by Hephaestion

I call that ant programming: simple unit behaviour leads to complex group behaviour. Here in Europe many cities developed without planning into efficient infrastructures. What would an algo for organic village growth look like?

Brilliant! And it matches our council system. The decisions will be taken at a low level: for example, the army commanders have some freedom in fulfilling goals and can request reserves, and even ask the commanders-in-chief to ring the 'army groups' alarm bell so as to trigger the senate to gather for an emergency session. It's true that this sounds rather complicated... I had to fight for at least 24 hours before getting it ready. Especially the decision tree and the weighting required me to start over at least three times.

The idea came mostly from reading Wraitii's code. I've spent quite some hours fighting Aegis, and now many things make sense. Without his work I would not have invested a second thinking about a bot, because I had no idea where to start.

Wraitii's work is very inspiring. As is yours. I enjoy the discussion a lot.

I hope the hopping doesn't develop into an issue; the frames should be clearly defined without overlap. The left-over-right vs. random order is a good idea and easy to code, noted. One thing I plan differently are targets: instead of setting goals like population = 100, I'll try flow rates like population growth per minute, metal per minute, etc. I'm still not sure whether the math can be based on template data or needs heuristics, or both. That should work quite well with AI difficulties too, because all beginners start with a low resource-gathering rate.

I also believe rates are advantageous. I settled on them at the end of the decision tree too, especially unit_rate = units - lost + recruited, because it also helps against variable overflow. In fact that was the main reason why I turned to unit rates: I faced just this problem in the weighted decision tree.

What I'm really interested in is that an AI has far more possibilities on a map than a user handicapped with a mouse interface. I've never managed to use archers efficiently; they are quite powerful if they are placed correctly and manage to keep themselves out of trouble. In the end I didn't train them at all, or just put them in towers where they are cheaper. An AI should be able to handle 3 or 4 groups of archers and clear an area effectively.

Also tactics: on one end is the guerilla approach with, let's say, 20 micro-groups of 3 hoplites, each attacking one enemy unit and, after the rush, gathering around a healer while another bunch attacks, both alternating; on the other end, the mega strike with 200 warriors per city center. I think there are useful attack patterns in between, and an AI should be able to launch them all to keep the human enemy on the edge of his chair.

Oh... oh good gracious... is this a dream? The strategy you outlined made me feel very strange; such a variation in strategy can be decisive. I always dreamt of a system where units learn how to best position themselves, realizing that the enemy has started an outflanking manoeuvre once the units notice ('see', hear, or get told of) it, and not because the AI is omniscient and knows everything.

The goal is that the units try to respond to the enemy movements, no matter how hopeless the situation. As an example: remember William Wallace and his Scots at the battle of Falkirk?

What a horrible battle: immobile Scottish lance bearers delivered to the enemy because their leaders were unable to try ANYTHING to rescue their comrades, who could not manoeuvre in the marshland. Before the battle really started, all the high commanders and all the melee units had left the field because they were 1 against 30. This was all much worse than depicted in the movie Braveheart.

Archers played a crucial role in the battle. But guess why? The Scottish melee units had left the field, and the cavalry, trying to free the lancers, realized there were too many enemies... so off they went, too. So who was left in the marshland? Thousands of lancers! No leader. How could that happen? Is this an AI learning by testing whether the outcome is better if the melee and cavalry abandon the lancers? :D

The enemy of course sent in the archers, against archers and lancers that apparently could not move because of the marshland and their large weapons. It took some time until enough arrows had been brought up to kill them all. Really, this was yet another horrible battle that showed how senseless this warfare all is. There'd have been no difference if the leaders had simply not called up the lancers at all and had left them in the fields harvesting and gathering wood. Instead they:

  • call them in,
  • put them in 4 formations,
  • and abandon them shortly thereafter.

You know ... this really makes a lot of sense!

In the context of the importance of archers in medieval battles, I also remember the battle of Harold of England against William the Norman.

The English were in bad shape because of an almost simultaneous Viking invasion just three weeks before.

Harold had to rush northwards within a single week. Three days after Harold's decisive victory in the north, they heard that the Normans had invaded in the south. So there you go: gather your remaining troops and lead them south. Harold got hold of quite formidable ground on a hill before the battle started early the next day at 9 a.m., the Normans having been on high alert all night long.

The English position was surrounded by rivers and streams on each side, and luckily there was even a way to retreat right between the two rivers. I think we can call this good ground in a military sense.

But hey, the English had no archers... imagine that.

So the Normans ran short of arrows, because the enemy did not send any arrows back. Imagine hordes of archers looking for arrows but finding none!

I wish 0 A.D. had this possibility too: archers having to decide whether to break out of their positions to quickly collect used arrows from the battlefield, or whether that's too risky.

For the English, I'm afraid, this was a very dark day, as they were somehow cheated into demise despite being stronger in numbers and in an excellent position, against an enemy that had not slept the whole night.

As if this had not been enough, the Normans had to attack uphill for hours, with no effect other than fewer soldiers in their rows. Don't forget: the archers could no longer help, as they had no more arrows.

After a 'ceasefire', if you can call it that in this timeframe, the Normans used more and more tricky manoeuvres:

They attacked uphill and fell back quickly, a rather chaotic situation, seemingly fleeing the battlefield. At least, that is what the English on their high ground believed. This tempted the experienced frontline troops into a decisive chase of the fleeing enemy, so as to end the battle. Hence, down the hill they ventured.

So far so good, had this not been a trap. Or at least the Norman commanders managed to convince their troops to stop fleeing the field, smashing the English royal troops from all sides. What a horror. Still this was not decisive, but with experienced soldiers now rare in the frontline, the shield wall began to crumble.

Imagine the AI using tricks like this to make your frontline troops give up the high ground.

This is doable, because by the very nature of the decision tree the enemy AI will realize that losses are high if it attacks uphill, while losses are lower if it starts uphill and aborts the attack shortly thereafter. This then might make the AI that controls your commanders (if those are your subordinates, i.e. you are of higher rank, then it's up to you to control them) realize that ground can now be gained with very few losses. That they face no resistance while the troops shift downhill will support the AI's hypothesis at first. In the end, the enemy now might have the chance to annihilate your army.

I would be very interested in seeing a scenario like this in action in 0 A.D. Especially if we get around to modifying the units' frontline behaviour so as to decrease losses and make units more difficult to outflank or penetrate, since they slow down significantly, depending on armour and stamina, if they have to go uphill while performing the manoeuvre.

The strengths and weaknesses of units and armour would then be much more emphasised.

Combine this with supply lines that can be cut off. Good gracious, how we, or the AI, will profit from this increased importance of strategy!

I tried creating a system like this several times before ... mainly I tried to reenact something like the councils, legends, guild systems and druid gatherings in the woods from fantasy books, using web technologies. My friends and my humble self, we always dreamt of diving into adventures from times long forgotten, while still having the characters in this world react naturally and dynamically.

The units in the end should have local awareness: if I cut down this tree, then the wind might harm the house, and enemy spies might spot it much more quickly. Somehow I always overlooked the potential of AIs for achieving this until recently - before, I favoured a human-only approach. The new hybrid solution opened my eyes (and made me write even more than usual :D ).

The hybrid system allows us to combine RPG and RTS, and allows fellows to change sides if they are unhappy with their tribe or if the AI expelled them because it imposed a dictatorship. ;)

The key to how we want to achieve this is dynamic and random decision making, combined with quickly learning from mistakes.

This will ultimately create almost invincible AIs, especially late in the simulation, when the AI may have evolved into a mastermind of offense or defense - depending on what turned out to work. So it depends not only on how much you check the AI in the beginning, but also on when you check it (by a preventive invasion once the AI amasses troops and your scouts tell you, for example). If you're unlucky, the AI will realize that it's no good idea to amass its troops inside its own territory, and will instead send them somewhere into the mountains to invade your country downhill.

If the powers on the map are balanced, it might well be that the AI figures it's best to adopt a defensive character. The actions in this direction would produce more mutations. This could result in an incredibly difficult to penetrate enemy whose defensive capabilities bring you to the brink of ruin - luckily, an AI that has evolved into a master of defense will not start an invasion too quickly.

For that to happen, we give the process a random variation that is non-static.

That is, the influence of the random part grows with increasing duration of stagnation. This way the mutations get more and more extreme, and once a mutation in the decision making turns out well, the algorithm will probably settle on it the next time too. If the outcome is then no longer significantly better, or even gets worse, the weighting in the decision tree ensures that the AI will take another route next time. This ensures variation in strategy and a never inactive AI.
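A minimal sketch of this stagnation-driven mutation, with all names and constants invented for illustration:

```javascript
// Sketch only (names are made up): the mutation magnitude applied to a
// decision weight grows with the number of turns without improvement,
// and resets once a mutation pays off.
function mutateWeight(weight, stagnationTurns, rand = Math.random) {
    // Base jitter of ±5%, amplified linearly by stagnation.
    const magnitude = 0.05 * (1 + stagnationTurns);
    return weight * (1 + (rand() * 2 - 1) * magnitude);
}

let weight = 1.0;
let stagnation = 0;
let bestScore = -Infinity;

function step(score) {
    if (score > bestScore) {
        bestScore = score;   // the mutation paid off: settle near it
        stagnation = 0;
    } else {
        stagnation++;        // no progress: the next mutation gets wilder
    }
    weight = mutateWeight(weight, stagnation);
}
```

The reset on improvement is what makes the AI "settle" on a good mutation, while the growing magnitude guarantees it never stays inactive during a long stalemate.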

Another option to break the defensive character is the ratio of population to ground. If this ratio gets out of balance, the weighted decision tree will produce worse results, leading the AI to dislike the bad outcomes its defensive decisions produce. Then it's only a matter of time until the AI realizes it has to break the alliance that prevents its military action. That's not even a requirement, though, as it's enough for the AI to simply decide on a massive invasion of an allied target, effectively declaring war with this action.

I'm looking forward to an incredibly hard to beat and unpredictable AI. I think such an invincible AI is no bad thing! It will unite people ...

The ultimate dream is this:

[Image: Battle of Gaugamela, 331 BC]

I really wonder what kind of AI might come up with such a strategy?

PS: Many thanks for your useful and detailed decision tree. Post is bookmarked.

It was an honour. Thank you too!

Our discussions in several threads of this creative forum made clear to me that the combination of an AI interacting with our own human decisions in a more role-tied fashion is what creates the magic. The ally-human interaction has always made me somewhat sad (as little guys, my fellows and I stared at those small settlers wandering around in a different world, but when it came to an ally - ha - no help could be expected. Allies would watch you fall into demise: no troops, no help, no diplomatic efforts, nothing. It really makes one sad, realizing that what we do here is purely virtual and artificial.)

Our AI-human coexistence system is not tied to a 1 AI <-> 1 human relation within one tribe. It's even possible to have an x AI <-> x human configuration within the same player on the map.

As an example: another AI (perhaps seafarers) might have landed on your coast and decided to become part of your empire. They settled down, and suddenly you had animals striding around your countryside that you had never seen before.

The viking even decided to apply for a seat on the council, and helped the neighbouring farmer with the harvest because the ox-wagon had broken down - the weather had turned the street into mud, and the wagon was not the youngest.

'Ah, look there.' A rider is passing down the little stream into the valley. 'Why'd he hurry that much?' the viking asked. 'Doesn't want to help us with the harvest before winter, for sure. Never seen a horse that quick, though,' the farmer answered. 'Never seen a horse before at all.' The bearded giant no longer looked as grim as in the beginning - but wait, something has happened to his face. Where is he looking? Why has this man simply stopped, still carrying three one-metre bundles of ears of wheat on his right shoulder - how can this man lift all that? And how can he stand still under that weight? What's he shouting? Why's he coming back?

'Hey viking, it's no good idea to bring the corn back to the field. Winter will bury it deep in the cold.' He's throwing it away? Running towards me - he wants to attack me? Good gracious, I was just about to trust this man. What's going on? He looks so different. 'Come ... come ...' - that rough voice. He frees the dogs that I tied so carefully to a post in the field. Almighty, what is this? Why does the horizon behind those hilltops look so different? Now I've reached the last tree, I'll see it soon. The man sees a huge cloud ... and a blackening sky. Masses of people are fleeing the valleys below ... ox-wagons stuck in the mud, not one - dozens of wagons. They've invaded. The rider must have been a vanguard - or was it even the rearguard? Mighty, where are those hordes coming from, where are they going? How can I get to my command? Where is my command? Where the consul? I need a horse.

'Our fields no longer need us,' the viking shouted towards the farmer while hastily bringing two horses from the stables. Two horses? For three? Will he go on foot? Your scepticism of this stranger rose. 'I will sail north and fetch help. Come.' 'Wait,' the farmer says; he does not want to come along. You are standing next to them and wonder if it'd not be better if ...

I mean, I might be a simple archer only, never capable of holding back an army, just freed from military service to help this farmer on his field, like many other common soldiers had been. I wonder whose idea that was? Perhaps the consul's, old and rickety as he is, as he warned of an early winter. Oh great benevolent, why did he dissolve the army to help farmers? How can a man who stood in a hundred battles come to such decisions? Why did no one realize that the enemy was preparing an invasion? I have to find my division. My arrows alone can't help us. Where are they? I only see civilians, chased down and stabbed from the rear or trampled by giant horses - where did the enemy get those horses from? Ha, this reminds me of the viking. Where did he come from? Where is he, anyway? He was still here just moments ago. Where has he gone? Where is he going? Shouldn't I come with ..- 'No! I can't leave you with all that work, farmer.' The young man had suddenly gathered his wits.

'They will be here soon. You should go and find ... anyone capable of fighting. I remember times when we were forced onto an expedition as young men. The consul led us into foreign lands and we ran against a wall of enemy lines. No one ever penetrated it. I didn't want to be reminded of that. Now, fifty years later, they have turned against us themselves. In the meantime I had lost all fear; they seemed not interested in us once they had fought us off. Now go, and don't look for me. And at all times don't forget that even the subtlest spark of hope can relieve a whole army, encircled and scared to death. Go and never come back; you will not find me anyway. Because ... they've not forgotten. They will take the hillsides like they did fifty years ago. We will run against a wall, but this time ... this time the wall is in the middle of our country ... they will go where the people are fleeing, surely they will ...' The man headed back to the field, caught the freed, barking dog and tied it to the post it had been on before the viking freed it. The farmer then gathered the corn the viking had thrown on the ground after his freeze. Realizing that this man would soon be overrun, without even trying to escape, made you feel as if quartered - each quarter following another route: one the viking, one protecting the farmer, one helping repair the wagons, and the last ... facing the enemy.

Suddenly you were shivering: how should I - me, the only archer I see around - ah, okay, there are hundreds of black-clad archers atop giant horses ... well, very soothing, hm? You had to laugh, yet felt struck by a bolt as you slowly mounted the old farmer's rickety but trustworthy horse. You couldn't wait any more; you would die from waiting ... and off horse and rider went. They headed neither towards the farmer's hut, where the farmer stood watching them leave - for a short moment you thought you saw him raising some ears of grain towards the clouds to send a dear farewell - nor did they follow the viking's path, which would have meant refusing to face the enemy. The wagons they passed without further notice ... leaving a chaotic situation behind ... The horse started accelerating, for the enemy was approaching from the right flank. You could not see a gap in their lines, as far as you could see in this cloudy weather. Scared, you looked ahead to seek a path to follow, then to the left - only to see another never-ending line. Where do they all come from? You tried to grasp the situation while the battle cry came closer, and still the horse's route was straight - did they want to cross the valley between those two hollows? How would they reach the other end? They must be dreaming ...

Goodness. What a mess I produced. ;)

But that's what it's all about: being in the middle of the happenings, like the archer in the story above - or maybe you want to go the viking's way, whatever that'll be? It's all up to your decisions.

PPS: This is getting off-topic; I'll start another thread for AI strategies. Let's discuss AI tournament obstacles here.

Sorry for all this off-topic.

Edited by Hephaestion

This random-based spacing will change the rate at which units traverse the city. If the spacing between the houses is zero, it'll take just that little bit longer for a unit to go around the houses; if the spacing is 30, units can traverse faster, but any wall building may require more stone and build time to enclose them. Add a bunch more little values like this, based off the same random number iterated over thousands of frames, and you have yourself an AI that you'll never know what to expect from, and a unique game almost every time.
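Here is one way this could be sketched: a single seed fed through a small deterministic PRNG, from which all the little layout values are drawn (mulberry32 is a well-known tiny PRNG; the parameter names are made up for illustration):

```javascript
// A sketch of the idea: one seed drives house spacing and, via further
// draws, any number of other small layout decisions.
function mulberry32(seed) {
    return function () {
        seed |= 0; seed = (seed + 0x6D2B79F5) | 0;
        let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
        t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
        return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
    };
}

function layoutParams(seed) {
    const rand = mulberry32(seed);
    return {
        houseSpacing: Math.floor(rand() * 31), // 0..30 tiles between houses
        wallStoneFactor: 1 + rand() * 0.5,     // wider city -> more wall stone
    };
}
// Same seed, same city; a different seed yields a layout you cannot predict.
```

Using a seeded PRNG instead of `Math.random` also keeps such an AI replayable and tournament-verifiable: the same seed reproduces exactly the same city.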

Before this idea gets lost in my thousand-line posts, I'd better reflect on it now (rather than tomorrow). It sparks another idea that fits perfectly into the algorithm we forged with agentx, niektb, wraitii and all the other colleagues: buildings that are farther apart are more susceptible to weather, so the 'half-life'/decay of buildings/things could be calculated as a function of:
  • the distance to the next building, and
  • the time difference in minutes* since it was constructed (from the start of the build/foundation creation? from the point it was completely erected? --> If the building material itself already degenerates over time, this would solve the issue, as the wood would degenerate following a natural decay function depending on time, starting from the moment the wood was harvested. The building condition would then be twofold:
      • cohesion (#parts_the_building_was_initially_constructed_of / #parts_it_currently_consists_of, where a part is an object, be it a prop or one of the construction items),
      • condition (= the average condition of all individual parts).),
  • [local weather] <-- at a later point, once we have that ready.
The cool thing about this is that the current special case, the outpost, would be covered by this concept too.
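A hedged sketch of such a decay function, assuming an exponential decay whose half-life (in minutes) shrinks as the building gets more isolated; all constants and names here are assumptions for illustration, not engine API:

```javascript
// Condition decays exponentially; isolated buildings weather faster
// because their effective half-life shrinks with distance.
function conditionAfter(minutesSinceBuilt, distanceToNextBuilding) {
    const baseHalfLife = 52560; // 36.5 days in minutes (assumed constant)
    const halfLife = baseHalfLife / (1 + distanceToNextBuilding / 100);
    return Math.pow(0.5, minutesSinceBuilt / halfLife);
}

// cohesion = parts remaining / parts built from, per the list above.
function buildingState(partsInitial, partsCurrent, minutes, distance) {
    return {
        cohesion: partsCurrent / partsInitial,
        condition: conditionAfter(minutes, distance),
    };
}
```

A local-weather term could later multiply into the half-life the same way the distance term does.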

Thoughts on the base time: a timescale more sensitive than minutes makes no sense, as not even a military action or a falling tree will happen, or be decided upon, in less than a minute. The whole process from thinking --> reacting/giving orders --> until those orders have a physical effect (i.e. the order reaches a unit, or the unit decides to go around the fallen tree) takes its time and will probably not happen within milliseconds. On a purely decision-making scale it might happen within milliseconds or seconds [e.g. reaction time], but not if we consider the transmission of the decisions made (the distribution time).

If we settled on seconds instead of minutes, I fear we'd get into serious trouble [variable overflow] once we wanted to store a bigger time span (say, from the foundation of our city, e.g. 200 B.C., until it was destroyed in 0 A.D.).

Here is why:

200 years are 105,189,120 minutes. To store this we need at least 27 bits (so we choose 32 bits, of course, since current technology is digital, i.e. binary, and register widths come in powers of two). In a 32-bit unsigned variable (there is no negative time, so we can drop the sign and use all bits for a bigger positive number) we could store at most 4,294,967,296 minutes. That is about 8,166 years. I think that is enough for most plans we currently have in mind. :)

If we chose seconds as the base unit instead, we could store 4,294,967,296 seconds in a 32-bit variable. That is only about 136 years, which looks quite dangerous - a city can exist much longer ... (the two capacities differ by a factor of 60).
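The back-of-the-envelope numbers can be checked quickly (note that JavaScript numbers are 64-bit floats, so this snippet only verifies the arithmetic; the overflow concern applies to an actual 32-bit counter):

```javascript
// 365.24-day years, as in the calculation above.
const minutesPerYear = 365.24 * 24 * 60;    // ≈ 525,945.6
const secondsPerYear = minutesPerYear * 60; // ≈ 31,556,736

const capacity = 2 ** 32; // distinct values in a 32-bit unsigned integer
console.log(200 * minutesPerYear);                  // ≈ 105,189,120 minutes in 200 years
console.log(Math.floor(capacity / minutesPerYear)); // ≈ 8,166 years if we count minutes
console.log(Math.floor(capacity / secondsPerYear)); // ≈ 136 years if we count seconds
```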

To build up structures out of individual parts:

  • houses out of wood, metal, ...
  • war elephants out of an elephant, a saddle, ...
  • catapults out of wooden planks, metal, ropes, ...
  • ...

will increase the polycount significantly. I am aware of that. Should we run into severe performance issues, we could still:

  • drop GLSL support and use textures only, for a significant speedup;
  • add an LOD (level of detail) system, which saves render time once we (read: the cameras) are far away from an object; such an object would then not be rendered. This would also make all props on a building disappear at very low zoom.

I'm not sure whether 0 A.D. has an LOD system, but the performance lag is significant once the zoom is low (high altitude).

The reasoning behind the full-object principle is that if I really work on walkable meshes, then I want the real thing: e.g. climb into that castle through the window, break open the door, cut the rope of the catapult (a physics system would be helpful, but would introduce another computational problem).

Should we use textures to emulate/fake doors, windows or ropes, our plans would apparently be doomed from the beginning. Thus, reworking the object/entity system in our AI-Human-Hybrid branch is essential, as we wish to make weapons droppable, arrows collectable from the battlefield, ...

@Topic:

AI tournaments between pure AIs will still be possible even if AI and human coexist in a single team: put exclusively AIs onto a map, and no human can ever interfere and falsify the outcome of the tournament.

Ah, this sparked another idea: why not make it possible to start an AI game and join it at a later point? Say your colleague's character died and she/he exits to go for a walk with the dog; later this fellow can rejoin by simply choosing a unit. The AI will not even realize it, because at the low level at which the player joins the game it makes no difference at all.

Edited by Hephaestion

The AI development has made some more progress, and another new idea could be put together:
  • The decision tree is now connected to the commanders-in-chief. That adds yet another level of strategy, as it now matters whether the senate (in the Roman Republic mod branch's peacetime mode) or you (in current 0 A.D.'s emperor/dictator mode) decides to replace a commander. The new commander will have a completely different set of military experience (a different decision tree) and might surprise your enemy with unexpected tactics.
I hope we get the AI into contact with Aegis in a few months. Should such an AI really turn out to be invincible (late in the simulation), then we should try to identify its best commanders and capture them, or wait until the most capable commanders have died of old age. I really can't wait to see this algorithm in action ...

Edited by Hephaestion
