
Advice on tracking AI bot API changes?



There seem to be at least three people thinking about or working on AI bots - wraitii, agentx and me. Last night it occurred to me to wonder what happens when the bot API changes in a non-backward-compatible way.

By now, there have been four APIs (common-api, common-api-v2, common-api-v3 and now the new common-api). Additionally, bots might be analyzing the engine messages directly, bypassing the common-* stuff for performance. So the data formats used by the engine might qualify as "API" as well.

At least the following ways could be used to deal with changed APIs - each of them has its problems:

  1. Do nothing, and if there is a crash, either the author fixes it or, if the bot has been abandoned, the AI is eventually dropped from the repository. This happened to testbot, jubot and scaredybot. Sub-optimal when a bot is only partially broken and no one files a bug report (while it is still fixable).
  2. Advise people against writing bots until 0 A.D. has "gone gold" and the APIs are stable. By then, it is by definition too late to change the API should a common desire arise to do so.
  3. Implement an API version number which increases on changes; bots quit when the version is too new. Something similar already happens with the common-api-v* versions, except that the count has now been "reset to 1". The problem is that even backward-compatible changes may lock bots out, causing unnecessary annoyance (google the Haskell fixed-upper-bounds problem...).
  4. At startup, each bot runs test routines to validate that the API still behaves as expected, and emits warnings/errors if not (a sketch of such a check follows below). This seems promising, but it might take many plays on different maps before a test trips (e.g. a test checking dock placement never fires if the bot is only played on 'dry' maps).
  5. The bot author could run a series of test scenarios, maybe including custom-built maps, to check his or her baby for misbehaviour. I think this is the most effective but also the most laborious way to do it.

While the latter two methods are labour-intensive, they would at least give hints about which parts of a bot are in trouble.
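To make option 4 concrete, here is a minimal sketch of such a startup self-check. The only property tested, state.events["Create"], is taken from the current common-api; a real bot would add one check per API feature it relies on, and reportProblem() stands for whatever logging it has available.

    // Minimal sketch of a startup API self-check (option 4). Which properties
    // to test depends entirely on what the bot actually uses.
    function validateApi(state, reportProblem) {
        var problems = [];

        if (!state || typeof state !== "object")
            problems.push("no state object was passed to the bot");
        else if (!state.events || state.events["Create"] === undefined)
            problems.push("per-type event lists (state.events['Create'], ...) are missing");

        problems.forEach(function (p) {
            reportProblem("API self-check: " + p);
        });

        return problems.length === 0;
    }

A bot could run this on its first OnUpdate call and disable itself with a clear warning, instead of crashing somewhere deep inside its own logic.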

I do some 'dry' testing with Jasmine, because it's faster than starting a game and waiting to spot a particular behaviour/bug, but only as far as the 'innards' of the bot are concerned.
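Such a 'dry' spec looks roughly like this; EconomyPlanner and resourceNeeded() are invented stand-ins for the bot's internal classes, not anything from an existing bot.

    // Invented stand-in for a real bot class, just to make the spec concrete.
    function EconomyPlanner(stock) { this.stock = stock; }
    EconomyPlanner.prototype.resourceNeeded = function () {
        return this.stock.wood < 100 ? "wood" : "none";
    };

    // Jasmine spec exercising the helper in isolation, without the engine.
    describe("economy planner", function () {
        it("asks for wood before it runs out", function () {
            var planner = new EconomyPlanner({ wood: 50, food: 500 });
            expect(planner.resourceNeeded()).toBe("wood");
        });
    });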

Any recommendations?


Note that I've removed all AIs and APIs from SVN except the latest API, Aegis and the tutorial bot, because they are cumbersome to maintain.

It's somewhat unfortunate that breaking changes break things in non-obvious ways, but so far AI developers have been able to update their code accordingly, and such changes don't happen all that often.


I would prefer API freezes long enough before a release to allow adaptation. Can Trac tell me what is planned? Coding against a moving target is much easier with a good forecast. The only two things I currently assume are a tick and an accessible message queue. I don't need serialisation; the bot deals with all maps, saved or designed.


> I would prefer API freezes long enough before a release to allow adaptation. Can Trac tell me what is planned? Coding against a moving target is much easier with a good forecast. The only two things I currently assume are a tick and an accessible message queue. I don't need serialisation; the bot deals with all maps, saved or designed.

The bot gives the same result every turn, no matter whether it has been playing all along or was started from a saved map? Because that's what happens when a player's network connection dies and he rejoins.

Serialization also makes it possible to investigate the source of OOS (out-of-sync) problems in a much nicer way.

For the ticks, don't you get a tick every turn?

And as for planning: we will release v1 when it's ready; there's no schedule for it.


> The bot gives the same result every turn, no matter whether it has been playing all along or was started from a saved map? Because that's what happens when a player's network connection dies and he rejoins.

That's interesting; I didn't know a bot has to take care of this. Could you expand on what happens in this case?

> Serialization also makes it possible to investigate the source of OOS problems in a much nicer way.

Well, that depends on how you got OOS problems in the first place.

>For the ticks, don't you get a tick every turn?

Yes, I rely on them.

> And as for planning: we will release v1 when it's ready; there's no schedule for it.

I was talking about the next alpha; that's my planning horizon.


Next alpha should probably be in a few weeks (end of March, though more likely in April).

And a bot must indeed be deterministic. The scripts are executed on all participating computers, and the results should match. That's also why serialisation is handy: the states can be compared, so if enough data is serialised, you can spot the problem earlier. In past OOS problems with the AI, we only saw a difference in the simulation (due to the AI giving different commands); the position of some units differed by a minimal amount (but enough to make an arrow miss and mess up the entire game). If a big part of the AI state were also comparable, tracing the problem back would be easier.

As for rejoining: the player that was disconnected receives the serialised state on reconnection. That state should contain every part of the dynamic state, so when the bot gets the next turn update, it can continue as if nothing happened. If you don't serialise anything, there also shouldn't be any dynamic state (everything must be calculated from scratch every turn).
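In other words, a bot has two options: keep no dynamic state at all and recompute everything from the state it receives each turn, or serialise whatever it keeps. A crude sketch of the first style; all names here are invented placeholders, and the per-turn entry point is written loosely:

    // Sketch of the "no dynamic state" style: the bot derives everything it
    // needs from the state it receives this turn and stores nothing between
    // turns, so there is nothing to serialise.
    function analyseEconomy(state) { return {}; }   // stub for illustration
    function analyseThreats(state) { return {}; }   // stub for illustration
    function issueOrders(state, economy, threats) { /* push commands here */ }

    function onTurn(state) {
        var economy = analyseEconomy(state);   // recomputed every turn
        var threats = analyseThreats(state);   // recomputed every turn
        issueOrders(state, economy, threats);  // no data survives this call
    }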


> And a bot must indeed be deterministic. The scripts are executed on all participating computers, and the results should match.

I hope I get this right. In a 4-player MP game with 2 humans and 2 bots, each human has two copies of the bots running. Let's say the green bot running on human 1's computer decides to build a field; at the same moment the green bot on human 2's computer decides to build a field at the same place. As a result, both humans see a field pop up at the very same moment and in the very same location.

Which means both copies of a bot running on separate machines are always processing the same message queue and must produce the same outcome, right? So, no Math.random? What is needed to set up an MP game locally with bots?

I've checked out the new common API; ApplyEntitiesDelta is getting closer to O(n), and this looks great!

    var CreateEvents = state.events["Create"];
    var DestroyEvents = state.events["Destroy"];
    var RenamingEvents = state.events["EntityRenamed"];
    var TrainingEvents = state.events["TrainingFinished"];
    var ConstructionEvents = state.events["ConstructionFinished"];
    var MetadataEvents = state.events["AIMetadata"];
    var ownershipChangeEvents = state.events["OwnershipChanged"];
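
Consuming those per-type lists each turn then looks roughly like this. The event field names (evt.entity, evt.to) are just what I currently see in the state and may change; bot.registerEntity, bot.registerCapture and bot.playerID are invented placeholders for the bot's own code.

    // Rough consumption sketch, continuing from the lists above.
    for (var i = 0; i < CreateEvents.length; i++)
        bot.registerEntity(CreateEvents[i].entity);

    for (var j = 0; j < ownershipChangeEvents.length; j++)
        if (ownershipChangeEvents[j].to === bot.playerID)
            bot.registerCapture(ownershipChangeEvents[j]);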
I'm trying to get my head around running multiple copies of 0 A.D. Copy G is used for gaming. With custom shortcuts it can also serve as virtual copy D for development. Now how can I add copy A, running the next alpha from Trac, and let them all share public.zip, without downloading .5 GB every day?

Yes, the order of execution is fixed. Human messages are always applied either before or after bot messages, so if a human is unable to place a field because a bot just did so, the same thing happens on the other computers.

And when the AIs eventually run in a separate thread, only the calculations will; the sending of commands will still happen in a fixed order, so collisions are resolved in the same way everywhere.
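As for Math.random: if a bot wants randomness, the safe approach is to bring its own seeded generator (unless the engine already provides a synchronised one to AI scripts, which I'd have to check), so every copy draws the same sequence. A rough sketch using a basic LCG:

    // Deterministic, seeded RNG a bot could use instead of Math.random.
    // Not a high-quality generator, but reproducible on every machine.
    function makeRng(seed) {
        var s = seed >>> 0;
        return function () {
            // Numerical Recipes LCG constants, modulus 2^32.
            s = (s * 1664525 + 1013904223) >>> 0;
            return s / 4294967296;   // float in [0, 1)
        };
    }

    var rng = makeRng(42);   // same seed on every machine -> same decisions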

For local network games: if all textures are cached, you should be able to run two game instances simultaneously and let them connect to each other. Not all network problems can be found this way, though. You can't give two commands in the two instances at the same time (not easily, at least), and some problems only arise from platform differences (e.g. different rounding on 32-bit vs 64-bit builds).

If you use SVN, you don't have to download 5 GB every day; you only download the differences between the files. That is usually a few kB per day, though it may be more after a big art commit or when a new library is added.

Link to comment
Share on other sites

sanderd17:

"If you use SVN, you don't have to download 5GB every day." - how about the pending git migration? I don't know git at all, but assume its similar to Mercurial (which i use locally). Will i have to download the whole repos again when migration is complete and SVN shut down?

Yes, but it's not for tomorrow ;)



There is a gamestate.js?

I don't think there ever was a release with save game support.

As for the events, there is indeed nothing ready. I have an untested event class somewhere, but there are no events accompanying it.

Maybe the terrainAnalyse script from Aegis/Petra provides something towards passability maps?


The simulation can send messages to the AIInterface or AIProxy when something happens. Those messages can be picked up. Not sure what other events you'd want.

The engine also offers saving to the AI. There's a Serialize method, which should return a simple object, and a Deserialize method, which should reconstruct the state from that object.

If every class in the API and the AI code serialised its state correctly (as we do in the simulation), then saved games would work.

The problem is that some objects are hard to turn into simple ones. E.g. some contain lots of circular references, and some data is just too big to serialise quickly (like the passability maps) while it gets sent on every turn anyway (so there's no need to save it). This means it's very hard to offer a general serialisation method; instead, every class should have its own.
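As a rough illustration of what such a per-class method could look like (the class and its fields are invented for the example, not taken from an existing bot):

    // Invented example: small dynamic state plus a reference to the large,
    // per-turn passability map, which is deliberately not saved.
    function AttackPlan(target) {
        this.target = target;        // entity id, cheap to save
        this.phase = "gathering";    // cheap to save
        this.passabilityMap = null;  // big, resent every turn, rebuilt on load
    }

    AttackPlan.prototype.Serialize = function () {
        // Return only simple data: no circular references, no huge arrays.
        return { "target": this.target, "phase": this.phase };
    };

    AttackPlan.prototype.Deserialize = function (data) {
        this.target = data.target;
        this.phase = data.phase;
        this.passabilityMap = null;  // refilled from the next turn's state
    };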


> If every class in the API and the AI code serialised its state correctly (as we do in the simulation), then saved games would work.

That's not true, and it's also not the issue. The problem with saved games is that when OnUpdate() is called, all of the above-mentioned essential objects are literally undefined, rendering any AI meaningless. What is the fastest plan to make the API complete and functional?


OnUpdate is only called after the Deserialize function has been called, and Deserialize must make sure the object has the same state as before. So if the API and the AI code serialised and deserialised correctly, it would work.

The fastest way to fix this is to add correct Serialize and Deserialize methods to all classes in the API and the AI code.


> OnUpdate is only called after the Deserialize function has been called, and Deserialize must make sure the object has the same state as before. So if the API and the AI code serialised and deserialised correctly, it would work.
>
> The fastest way to fix this is to add correct Serialize and Deserialize methods to all classes in the API and the AI code.

Look, I said deserialization is not the problem. Even if an AI has properly deserialized its past data, the API is not telling the AI what is happening, because there are no events. I can assure you the simulation and UnitAI work properly; each unit picks up the task it had before saving, as can easily be seen. But as I said, there are no events, so if an enemy attacks, there are no attack and no destroy events, and the AI loses all its units without even knowing. AIs are blind and lost at this point.

So again, what can be done to fill this huge API hole?


The AI stores events, e.g. attacks, as attack plans (ongoing plus planned ones). So if this attack class were serialised completely, then saved games should indeed work.

What would such a general serialisation concept look like in practice? You said each simulation component serialises its own state? So we have a recursive serialisation?

Where is the serialised class's data put, and how is it concatenated, I mean? The C++ side has just one method and accepts a single serialised chunk of data, no?
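My current guess is that nothing is concatenated as strings at all: each class's Serialize returns a simple object, the owning class embeds those in its own result, and only the final combined object is handed to the engine. Something like this (all names invented):

    // Guess at how the recursion composes: the bot's top-level Serialize
    // embeds the plain objects returned by its members.
    BotAI.prototype.Serialize = function () {
        return {
            "turn": this.turn,
            "attackPlans": this.attackPlans.map(function (plan) {
                return plan.Serialize();   // each plan returns a simple object
            })
        };
    };

    BotAI.prototype.Deserialize = function (data) {
        this.turn = data.turn;
        this.attackPlans = data.attackPlans.map(function (planData) {
            var plan = new AttackPlan();   // whatever plan class the bot uses
            plan.Deserialize(planData);
            return plan;
        });
    };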

