
AI interface design



I'm currently thinking about the AI interface design again.

An essential aspect to consider is how SpiderMonkey reacts to different ways of sharing data and what constraints multi-threading imposes. Currently it looks like it has significant performance problems with cross-compartment wrappers to the sharedAI data.

For performance in a single-threaded environment it's best to stay within one compartment and one global object to avoid accessing a lot of data through wrappers. On the other hand, having multiple players in one global object most certainly means they can't run in different threads.

I tend to think that we won't have each player in a thread because the overhead of passing data around would be too big. Maybe we could have one sharedAI object per 4 players for example. This way we could have two threads for the AI player calculations in the extreme case of a match with 5+ AI players and the overhead still wouldn't be too big (hopefully).
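To make that grouping concrete, here is a minimal sketch (the helper name and the call are invented for illustration, not engine code) of splitting AI players into sharedAI contexts of up to four players each, so that five or more AI players yield two groups and thus at most two threads:

```javascript
// Hypothetical helper: split AI player IDs into groups of up to four,
// one sharedAI context (and potentially one thread) per group.
function groupAIPlayers(playerIds, maxPerContext = 4) {
  const groups = [];
  for (let i = 0; i < playerIds.length; i += maxPerContext)
    groups.push(playerIds.slice(i, i + maxPerContext));
  return groups;
}

// Five AI players -> two groups, so at most two AI threads.
console.log(groupAIPlayers([1, 2, 3, 4, 5])); // [[1, 2, 3, 4], [5]]
```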

I've started with a UML diagram to show how I think the AI is meant to work with multi-threading, but without this 4-player split. So essentially it just runs the AI calculations off-thread as a whole in the "gap" between simulation updates, but it doesn't have its own thread for each AI player.

It's probably not perfectly valid UML and if you find any obvious errors (content or UML syntax) please tell me.

I should have some more time this Thursday to look over the diagram and improve it.

I'm currently in a discussion with SpiderMonkey developers and I've also created this diagram to show and explain how our interface works.

[Attachment: UML diagram of the AI interface]


A more general question about synchronization: assuming an AI player (or several) were running in a background thread, who deals with the world advancing while the AI is still calculating? I recall that in the days of AoM AI scripts you could add an entity to a "plan" (a native implementation similar to what aegis uses), and while the plan was still initializing, that entity got killed, which caused problems.

So, assuming someone wrote an AI which started "number crunching" (a HandleMessage()/OnUpdate() call takes several seconds to complete), would this just be considered bad practice?


That's a good question, but it should be covered with this design.

Basically the "UpdateComponents" part is where "the world is advancing", as you call it.
At this stage the AI proxy listens to all important events and saves the changes (like units dying) in the AI Interface component.
When the AI starts its calculation it has an up-to-date representation of the world. While it updates, only visual updates happen on the screen, like animations playing or parts of the UI updating that aren't tied to data from the simulation. The simulation interpolation makes everything look less choppy and more fluid, but it's also purely visual.

The next simulation turn (where the world really changes) will have to wait until all AI calculations are done to avoid this and other problems.
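That buffering idea can be sketched roughly like this (the class and method names here are made up for illustration; they are not the actual component API):

```javascript
// Hypothetical sketch: the proxy records simulation events during the
// component update, and the AI later reads a consistent snapshot of
// them before the next turn is allowed to advance.
class AIInterfaceStub {
  constructor() {
    this.pendingEvents = [];
  }

  // Called from component updates, e.g. when a unit dies.
  pushEvent(event) {
    this.pendingEvents.push(event);
  }

  // Called once when the AI update starts: hand over the buffered
  // events and clear the buffer so the next turn starts fresh.
  takeSnapshot() {
    const events = this.pendingEvents;
    this.pendingEvents = [];
    return events;
  }
}

const iface = new AIInterfaceStub();
iface.pushEvent({ type: "EntityDestroyed", entity: 42 });
const snapshot = iface.takeSnapshot(); // one event; buffer is empty again
```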

There actually shouldn't be any difference in that regard compared to single-threading.

At the moment the simulation runs about every 200 ms in singleplayer games. This means that with this multi-threading design the AI won't delay the simulation update as long as it doesn't need more than 200 ms minus the duration of the components update (assuming it can run completely in parallel, unlike on a single-core CPU).
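As a back-of-the-envelope check of that budget (the 50 ms components-update duration below is an invented example; only the 200 ms turn length comes from the post):

```javascript
// How long the AI may run in parallel without delaying the next
// simulation turn: turn length minus the components-update duration.
function aiBudgetMs(turnLengthMs, componentsUpdateMs) {
  return Math.max(0, turnLengthMs - componentsUpdateMs);
}

console.log(aiBudgetMs(200, 50)); // 150 ms left for the AI update
```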


Yves: wouldn't that almost defeat the purpose of multi-threading though? I like the stability it procures AIs, but it might not be perfect.

On the other hand, I see no problem with simply ignoring orders the AI gives to units that have been killed in the meantime. The AI could also skip turns; it doesn't really matter.

(Obviously, though, scripts are far from taking over 200 ms except in really rare cases, so the point is somewhat moot, but it's worth considering.)

The main issue with the current implementation is that we actually have to process messages twice every time: the AIproxy/AiInterface get the messages and update their own state, and then the AI updates its state with that information. That's done entirely in JS, which is very slow and very inefficient.


Yves: wouldn't that almost defeat the purpose of multi-threading though?

No, why?

Let's call (200 ms - TimeOfComponentsUpdate) the "update threshold".

As long as the AI update takes less than the update threshold, this design gets the maximum out of the parallelism.

If the AI update takes longer, it still saves us as much time as the update threshold for each AI update.

You are right, though, that it does not allow the AI to run only every 10th turn, for example, taking more than 200 ms per update without blocking the sim update. But if we wanted to allow sim updates between AI updates, we would need to run the same number of sim updates between AI calculations on all machines in multiplayer games, because anything else would cause OOS issues. That could be feasible, but I don't like the fact that the AI would base its decisions on outdated data. It would have to cope with the fact that all the commands it sends to the simulation could be invalid and ignored. That doesn't only affect the specific case of units dying or entities being destroyed; it could be a lot more.
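One way to cope with that, sketched here with invented names and command shapes, is to validate commands against the live world right before execution and drop any whose target entity no longer exists:

```javascript
// Hypothetical guard: the commands were computed against an older
// snapshot, so drop those whose target entity has since been destroyed.
function filterStaleCommands(commands, liveEntityIds) {
  return commands.filter((cmd) => liveEntityIds.has(cmd.entity));
}

const commands = [
  { type: "attack", entity: 1 },
  { type: "gather", entity: 7 }, // entity 7 died in the meantime
];
const valid = filterStaleCommands(commands, new Set([1, 2, 3]));
console.log(valid.length); // 1
```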

Also keep in mind that we also have users with single-core machines. How long do you think an AI update could take while still allowing relatively fluent gameplay on such machines, and how often is an AI update required? The AI needs to react to attacks, so I doubt the time between updates can be longer than 2 or 3 seconds anyway. It's difficult to estimate how big the impact on a single-core machine (with time-based multiplexing for "fake parallelism") would be if we had an AI update taking half a second that runs every two seconds.

The main issue with the current implementation is that we actually have to process messages twice every time: the AIproxy/AiInterface get the messages and update their own state, and then the AI updates its state with that information. That's done entirely in JS, which is very slow and very inefficient.

Yes, it's not optimal. Do you have any ideas to improve that?

One way would be reducing the amount of data passed to the AI, which is desirable anyway.

A lot of that terrain analysis data will hopefully become obsolete in the future when we get a proper pathfinding interface (which is not the data you're talking about though).


Mh, right, hadn't thought about the multiplayer issue. So that's settled.

Depending on how slow copying is, this might work: we keep a "real state of the world" in JS which stores actual entity state (updated by JS or C++ listening to events) in the simulation runtime, and copy it fully to the AI runtime each time a new turn happens. With a bit of luck, copying everything is quicker than reprocessing it when there is a big number of state changes.
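The two approaches can be sketched side by side. The state and event shapes below are invented, and `structuredClone` (available in Node 17+ and modern browsers) merely stands in for whatever cross-runtime structured copy would actually be used:

```javascript
// Authoritative "real state of the world" in the simulation runtime.
const worldState = {
  entities: new Map([[1, { hp: 100 }], [2, { hp: 80 }]]),
};

// (a) Full copy each turn: hand the AI runtime an independent copy.
function fullCopy(state) {
  return structuredClone(state);
}

// (b) Incremental replay: apply only the buffered change events to the
// AI's existing copy of the state.
function applyChanges(aiState, events) {
  for (const e of events) {
    if (e.type === "HealthChanged")
      aiState.entities.get(e.entity).hp = e.hp;
  }
  return aiState;
}
```

Which of the two wins depends on the ratio of total state size to the number of changes per turn, which is exactly the trade-off described above.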

Another solution would be to process fewer "PositionChanged" messages (and similarly very frequent, low-information messages), for example only once every 4 turns. Obviously this implies that there is no warping.
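A throttle along those lines could look like this (the function name and default period are illustrative):

```javascript
// Hypothetical filter: forward high-frequency, low-information messages
// such as "PositionChanged" only every Nth turn; pass everything else
// through unconditionally.
function shouldForward(messageType, turn, everyNTurns = 4) {
  if (messageType !== "PositionChanged")
    return true;
  return turn % everyNTurns === 0;
}

console.log(shouldForward("PositionChanged", 3)); // false
console.log(shouldForward("PositionChanged", 4)); // true
console.log(shouldForward("EntityDestroyed", 3)); // true
```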

Beyond that we'll need to make our AI-side updating faster, which is closely tied to Entity Collections, which are slow.

The annoying thing being of course that JS is generally slow for all those operations since they involve a ton of arrays.

