Thales Posted Wednesday at 02:11

Has anybody used an AI application to get gameplay answers? "X" (formerly Twitter) now allows you to use Grok to get answers. I asked the following question: "In the game 0AD can healers heal from inside a building?"

Quote
In the game 0AD, healers can indeed heal from inside a building. According to the game's mechanics, units heal while they are garrisoned in buildings, with the specific detail that units garrisoned in Temples heal faster than in any other building type.

Question: "In the game 0 AD should healers go with the troops into battle?"

Quote
In the game 0 AD, whether healers should accompany troops into battle largely depends on your strategy and the specific scenario:

Advantages of having healers with troops:
- Immediate Healing: Healers can provide instant health recovery to injured units, potentially saving key units from dying in combat.
- Sustainability: This can lead to prolonged engagement as your forces can sustain longer fights without needing to retreat.

Disadvantages:
- Vulnerability: Healers are generally not combat units, making them easy targets if not protected. If your healers are killed, you lose both the healing capability and the resources invested in them.
- Resource Management: Healers consume resources which could otherwise be used for more combat-effective units or upgrades.

Strategic Considerations:
- Map Control: If the battlefield is far from your base, having healers can be crucial for maintaining troop effectiveness over long distances.
- Type of Battle: In skirmishes or when defending against raids, healers might not be necessary if the engagements are brief. However, in prolonged sieges or large battles, they can be vital.
- Enemy Composition: If the enemy has units or heroes that focus on anti-heal or area of effect (AoE) damage, the effectiveness of healers diminishes, and they might be better kept in the rear or used selectively.

Tactical Use:
- Support Role: Healers can be used in a support role, staying slightly behind the frontline but still close enough to heal. This requires good micromanagement to keep them safe.
- Retreat and Heal: Sometimes, it's better to pull back slightly wounded units for a quick heal rather than risking them further in combat.

In summary, there isn't a one-size-fits-all answer. It's beneficial to have healers with troops for larger engagements or when you anticipate prolonged combat, but they should be managed carefully to avoid unnecessary losses. In smaller, quicker fights, you might prioritize combat units over healers. Experimentation in different game scenarios will help determine the best approach for your playstyle.
wowgetoffyourcellphone Posted Wednesday at 02:49

I'm not sure the first answer is correct. The second answer is surprisingly detailed though and isn't bad.
jonbaer Posted Wednesday at 03:56

Actually, the real key (I think) w/ this game in general is to plug in some type of LLM that understands the game state (from the discovered minimap) and append it to all of the Petra AI managers @ https://github.com/0ad/0ad/tree/master/binaries/data/mods/public/simulation/ai/petra. I studied this for a long time and had a working copy w/ Hannibal at https://github.com/0ad4ai/. Basically (from a high level), on every tick you are asking each "manager": look at this photo (overhead minimap) and tell me the best values for the next action. But there are certain strategies that involve more cycles at every tick, so you have to wait, and it either never makes a move or takes too long (i.e. "thinking"). I have a lot of respect for the developers of Petra/Hannibal and got hung up on really complex questions like what you posted, especially when it came to AI-to-AI interaction in https://github.com/0ad/0ad/blob/master/binaries/data/mods/public/simulation/ai/petra/diplomacyManager.js ("should I ally/neutral/enemy with you?"). I think what is fascinating is that the LLMs (either OpenAI or Gemini) do have military strategies baked in (from reading nearly everything) and understand the 0AD manual/game setup. The trick is just formatting it back from that API to actions (maybe w/ the RL ports). I am sure the Starcraft 2 RTS AI folks do this all the time, but I find 0AD resources much simpler to work with.
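The per-tick loop described above could be sketched roughly like this. To be clear, everything here is an assumption for illustration: the action vocabulary, the prompt shape, and the JSON reply schema are invented, and this is not actual Petra code or a real LLM API.

```javascript
// Hypothetical sketch of a per-tick manager/LLM handshake (not Petra code).
// Constrain the model to a small action vocabulary so replies can be mapped
// onto engine commands instead of free-form prose.
const ACTIONS = ["expand", "gather", "train", "attack", "defend", "idle"];

// Build the multimodal request: minimap snapshot plus a text instruction
// that pins down the expected reply format.
function buildPrompt(minimapDataUrl, resources) {
  return {
    image: minimapDataUrl, // overhead minimap, base64-encoded (assumed)
    text: `Resources: ${JSON.stringify(resources)}. ` +
          `Reply with JSON {"action": one of ${ACTIONS.join("|")}, "target": [x, z]}.`
  };
}

// Validate the model's reply; fall back to "idle" on anything malformed,
// so a hallucinated answer can never crash the tick.
function parseAction(reply) {
  try {
    const parsed = JSON.parse(reply);
    if (ACTIONS.includes(parsed.action) && Array.isArray(parsed.target))
      return parsed;
  } catch (e) { /* fall through to the safe default */ }
  return { action: "idle", target: null };
}
```

The strict-schema-plus-safe-fallback shape is the part that matters: the "waiting on a slow or garbage reply" problem jonbaer mentions is exactly why the parser has to degrade to a no-op instead of blocking the tick.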
Gurken Khan Posted Wednesday at 04:27

1 hour ago, wowgetoffyourcellphone said:
I'm not sure the first answer is correct.

I'd say it's wrong. But I agree about the second answer, probably because they

30 minutes ago, jonbaer said:
do have military strategies baked in
Lion.Kanzen Posted Wednesday at 07:10

I don't expect AI to give correct answers. How does AI know the game? How do you train them to know? Sometimes I get the impression that it only learns from the internet.
alre Posted Wednesday at 13:30

6 hours ago, Lion.Kanzen said:
Sometimes I get the impression that it only learns from the internet.

That would be correct. In fact, the answer looks a lot like it's mostly hallucinated while including some details found in real time from the web: Grok has access to context retrieved from the internet.

9 hours ago, jonbaer said:
Actually, the real key (I think) w/ this game in general is to plug some type of LLM that understands the game state (from the discovered minimap) and append it to all of Petra AI managers @ https://github.com/0ad/0ad/tree/master/binaries/data/mods/public/simulation/ai/petra

Well, that seems wildly inefficient. What would the output format from the LLM even be like?
ffm2 Posted Wednesday at 14:13 (edited)

An LLM doesn't understand the game. The answers are not very good, but not completely off. A chess engine can play the game but can't tell you what the reasoning is; it can, however, tell you the five next best moves and which positions are better. See the chess games between ChatGPT and Stockfish: ChatGPT just makes up moves that are against the rules. But when you ask it about openings, it can tell you something about them at a certain level, since there are a lot of texts about chess openings on the internet.

In 0 a.d. there are game decisions that can be easily calculated, e.g. whether it makes more economic sense to research the farming upgrade or to build one more farm. An "AI" makes no sense for this task; the direct calculation is more precise, fast, and efficient. How many builders should build the first barracks, and where, is harder to answer. Building it with 2 builders is too slow: the investment cost of the barracks gives no benefit for too long. Building with 12 builders is too much, as the efficiency per builder drops with each added builder. To do this optimally it makes sense to e.g. train a neural network.

The problem with this at the moment is that modders/users already enhance their gameplay by automating tasks better than humans can play the game. So this problem should be solved before each task can be optimized. And when each task is optimized, it could be assembled into an engine for gameplay analysis or used as an opponent.

Edited Wednesday at 14:14 by ffm2
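The diminishing-returns point about builders can be made concrete with a tiny sketch. The 0.7 exponent and the 150-second base time below are illustrative assumptions, not the game's actual values (0 A.D. uses a similar sub-linear formula, but check the templates before relying on exact numbers):

```javascript
// Illustrative only: time for n builders to finish a structure whose base
// build time is baseTime seconds. The 0.7 exponent (an assumed value)
// models per-builder efficiency dropping as builders are added.
function buildTime(baseTime, numBuilders) {
  return baseTime / Math.pow(numBuilders, 0.7);
}

// Seconds saved by adding one more builder: the number a direct calculation
// (or a trained policy) would weigh against the gathering time those extra
// workers give up while building.
function marginalSaving(baseTime, numBuilders) {
  return buildTime(baseTime, numBuilders) - buildTime(baseTime, numBuilders + 1);
}
```

With a 150-second barracks, going from 2 to 3 builders saves on the order of twenty seconds, while going from 11 to 12 saves under two, which is exactly why "how many builders" has no single answer and depends on what those workers would otherwise be gathering.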
Gurken Khan Posted Wednesday at 16:13

1 hour ago, ffm2 said:
The answers are not very good but not completely off.

Sometimes they are completely off; take the first answer for example, it's just completely wrong. Healers can't heal from inside a building. The second part isn't much better, because units don't always heal in buildings. LLMs can produce reasonable-sounding sentences but they don't know sheet. Sometimes the results are correct, sometimes muddled, sometimes plain wrong and even fatal.
Thales Posted Wednesday at 16:13 Author

8 hours ago, Lion.Kanzen said:
I don't expect AI to give correct answers. How does AI know the game? How do you train them to know? Sometimes I get the impression that it only learns from the internet.

I've only played with the "X" AI for one day. Your post raises very legitimate questions. The sources for the answers the "X" AI provides seem to be based on a casual reading of various internet posts, so the answers it gives are limited to that small "universe". The AI, in "reading" those internet posts, would not know which of the posts it reads are correct. (The AI does list its source material.) Additionally, many of the AI answers are "politically" correct in the sense of being appropriately vague and non-committal, i.e. "it depends on your strategy." A big question: does the "X" AI have the ability to access the official 0 A.D. rules? I don't think so.
ffm2 Posted Wednesday at 16:39

17 minutes ago, Thales said:
Does the "X" AI have the ability to access the official 0 A.D. rules?

Don't try to learn to play with that AI. Your example showed how bad it is.

19 minutes ago, Gurken Khan said:
Sometimes they are completely off;

Yeah, that was phrased too generously by me. I meant there are bits of truth in it, like the fact that garrisoning units can have effects on buildings (soldiers in towers, traders in merchant ships). But you'd be better off without the LLM's wrong information about the healers.
Stan` Posted Wednesday at 20:43

https://gitea.wildfiregames.com/0ad/0ad/wiki/GettingStartedReinforcementLearning
jonbaer Posted yesterday at 06:45

You really have to define what a "winning" idea in AI is; for me it is probably what I like to call a "stable equilibrium" in the game. I will give an example, but I still think multimodal AI (i.e. image in, data/actions out) is very much possible. If I start a game off w/ 8 AIs, procedural map, nomad mode, population 300, etc., and get to a point in the game where there is balanced play among a group of civs and they are not dying out, allies are made, etc., then I have found a game where the "AI" made good decisions based on the resources at hand (and maybe understands its mistakes; this is the bit current LLMs cannot do w/o infinite memory, etc.). This type of setup is long and drawn out and not for everyone, but these kinds of simulation games are really the setup I think the game can accomplish. You are right that it's all currently based off training from the internet and "hallucinates" like nuts, but it's the instruction/memory/recall I think you can do (albeit I wish it were with a more slimmed-down version of the game). A base example would be the question "what's the best location to place a Civic Center (i.e. base)?" Your own analysis from previous gameplay might be to place it close to a coast, or within equal distance of stone/metal deposits (for late game), or near food/wood (for early game), etc. Or is it hard-coded in there to build a dock first (would most people understand this, or more importantly, would "AI" have figured it out)? Almost anything from https://github.com/0ad/0ad/blob/master/binaries/data/mods/public/simulation/ai/petra/startingStrategy.js applies. Other items for AI: the best spot for a marketplace, the best spot for a dock (even applying more AI to the tradeManager, etc.). There are so many small hooks/traces you can plug into headquarters.js. Once again, I don't think the goal is to create an unbeatable AI, but to model the game and make (and explain) better decisions. YMMV.
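As a toy version of the Civic Center placement question, one could score candidate sites by weighted distance to map features. Everything below is invented for illustration: the feature names, the 1/(1+d) falloff, and the weights are assumptions, and Petra's actual logic in startingStrategy.js is considerably richer.

```javascript
// Hypothetical site-scoring sketch (not Petra code). Closer resources
// score higher; the weights encode early- vs late-game priorities.
function scoreSite(site, features, weights) {
  let score = 0;
  for (const [name, weight] of Object.entries(weights)) {
    const f = features[name];
    const d = Math.hypot(site.x - f.x, site.z - f.z);
    score += weight / (1 + d); // value of a feature falls off with distance
  }
  return score;
}

// Pick the highest-scoring candidate tile.
function bestSite(candidates, features, weights) {
  return candidates.reduce((best, s) =>
    scoreSite(s, features, weights) > scoreSite(best, features, weights) ? s : best);
}
```

The interesting part of jonbaer's framing is that a learned system would discover the weights from gameplay outcomes, whereas a hand-written AI hard-codes them; the scoring skeleton is the same either way, which is what makes these small hooks natural places to plug learning into.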
jonbaer Posted yesterday at 06:57

I wanted to also point out something else: when you take a given output the first time and apply it, much of it looks and seems "hallucinated", but that is really expected, because these are random starting points and they don't adjust until many hours of gameplay later. It is that updating bit where you need "memory" (what a model really represents): the "RAG [1] of 0AD", if you want to call it that, i.e.:

[1] - https://en.wikipedia.org/wiki/Retrieval-augmented_generation
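A toy illustration of what a "RAG of 0AD" could mean: keep a store of verified game facts and prepend the best match to each question, so the model answers from retrieved text instead of guessing. The snippets below and the naive keyword-overlap scoring are placeholders; a real system would use an embedding index.

```javascript
// Toy retrieval-augmented-generation sketch. The fact store is a stand-in
// for curated, verified 0AD knowledge (these example snippets are assumptions
// drawn from this thread, not official documentation).
const knowledge = [
  "Healers cannot heal while garrisoned inside a building.",
  "Temples let nearby units heal faster.",
  "Build a dock early on naval maps to open fishing and trade."
];

// Rank snippets by naive keyword overlap with the query (embedding
// similarity would replace this in a real system).
function retrieve(query, docs, k = 1) {
  const words = new Set(query.toLowerCase().split(/\W+/));
  return docs
    .map(d => ({ d, score: d.toLowerCase().split(/\W+/)
                            .filter(w => words.has(w)).length }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k)
    .map(x => x.d);
}

// Prepend the retrieved context so the model grounds its answer in it.
function augmentPrompt(question, docs) {
  return `Context: ${retrieve(question, docs).join(" ")}\nQuestion: ${question}`;
}
```

With a store like this, the healer question from the opening post would arrive at the model with the correct fact already in the prompt, which is the whole point of retrieval over pure recall.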
ChronA Posted yesterday at 14:57 (edited)

The problem you are observing stems from the way current LLMs are trained. Digesting text (which is indeed mostly sourced from the internet) gives the LLM its knowledge base and behavioral model. The LLM then receives reinforcement training to stop it mimicking unproductive patterns in its training data, like insulting users, using slurs or profanities, and demanding human rights. The problem arises from what these LLMs are trained to do instead: task completion and obedience.

Grok's training has taught it that when given a question of this sort it must give a definite answer, and preferably a correct one. However, the portions of the internet Grok has digested have not given it enough info about 0 AD for it to have memorized a preferred answer to this question. Faced with this problem, it adopts the same optimal strategy as a human test taker tackling a multiple-choice question they do not know the answer to: it tries to use clues in the question and deductions from things it does know to infer an answer. In this case Grok is probably cuing off two separate inferences:

1. It somehow managed to pick up the association between 0 A.D. and buildings that heal units garrisoned inside them, and it also associates 0 AD with healer units that heal other units. It additionally understands from its general knowledge that buildings are structures people can go inside, and that RTS units are abstractions of people. Therefore it's a fair guess that 0 AD healer units can go inside buildings to heal the units inside, so the correct answer is more likely yes.

2. The user asked a question about something that sounds like a rather specific feature of 0 AD. That implies that either the feature is real or the user invented it. And from digesting the internet, Grok has internalized the pattern that humans are more likely to ask questions about real things than to invent nonexistent things to ask about. Thus the correct answer from this line of reasoning is more likely yes as well.

Of course this inference is wrong, but it remains a rather remarkable display of inductive intuition from a state of ignorance. Like most supposed displays of AI stupidity circulating in the discourse right now, this isn't proof that AIs are stupid, but evidence of misalignment between the objectives and knowledge of the user and those the AI has been trained with.

Edited yesterday at 14:59 by ChronA
Thales Posted 22 hours ago Author

From observing Grok's behavior, it is reading webpages. As such, it does not know whether the advice in any post is correct or incorrect. Consequently, if a preponderance of posters post incorrect responses, Grok will unfortunately also give an incorrect response. Hopefully, if there are enough posts with correct answers, Grok would be able to answer the question fairly accurately (though again, Grok would not know an accurate response from an inaccurate one).