Everything posted by ChronA

  1. Yekaterina and many other miscreants probably are having fun making people like Norse_Harold pull their hair out, but I don't think that is the primary motivation. That was my point with the talk of repeated prisoner's dilemmas and game theory. Based on their many contributions to mods, game improvement, and the community, doesn't it seem like their primary motivation is to enjoy the game and to see as many other people as possible enjoy the game? I honestly believe that is the case. So why not just follow the (unwritten) rules? Because by making mischief, people like Yekaterina actually believe they are making the game better in the long run by forcing a reaction from people like Norse_Harold that will improve the game as a whole. If Yek thinks the game design is flawed such that they are no longer having fun playing by the consensus rule set, smurfing looks to them like a reasonable means of protest. It draws attention to the issue, and it allows Yek to remain active in the community even when playing the game is no longer fun for them. But then someone like Norse comes along and starts crusading against Yek's protest. What then? From Yek's perspective, someone like Norse is a bully on a power trip: someone who needs to be taken down a peg. Yek thinks that if they can just make Norse miserable enough, at best Norse will realize the error of their ways, and at worst Norse will leave the community. Either way, Yek thinks this is a win for everyone. Even if their own reputation is irreparably besmirched, in the long run they believe they are making the game community a safer place for others to carry on their work. Of course, the tremendous irony is that this is almost exactly the same way Norse_Harold views Yekaterina! That's one reason I find the proposal of using restorative justice incredibly tone deaf. The first step of restorative justice is to learn about all the parties who have been harmed, and how. But Norse doesn't want to hear that Yekaterina might rationally identify as a victim and see Norse as an aggressor. For them, Yekaterina's guilt is beyond question, since after all even Yek admits some of the accusations are true. But that is not restorative justice. You don't get to skip the steps that make certain favored parties uncomfortable. Otherwise what you are doing is not rehabilitation; it is a show trial, and a futile one. Edit: Admittedly, that characterization of @Norse_Harold's restorative justice is reaching on my part. Perhaps they did give Yekaterina a fair hearing. However, that's not what we are seeing from the outside. All the public discussion centers on Yek's refusal to make restitution, while the whole presentation of evidence and the reconciliation discussions are being done privately. That again is not in the spirit of restorative justice. Either do the whole thing in public, or arbitrate the dispute privately, but then don't come crying to the mob when there is a dispute about enforcement.
  2. A theory that might be totally off base: there is a causal chain at work here, running from gameplay, to smurfing, to sock puppets and spamming. Here's how I think it goes: 1. The competitive gameplay of 0AD is not (in itself) compelling to experienced, high-skill players. The strategy space is too flat. There's no way for the high-level players to get ahead with innovative play. Success depends entirely on how cleanly you can execute one of the few viable strategies, which everyone already knows. 2. In that environment the meta shifts from playing the game to playing the players. If you know your opponent's style, you can blind counter them and get ahead that way. As a result, the strategy space any individual player encounters shrinks even more. 3. Some skilled players get tired of being beaten up on by blind counters, or just become sick of the dull high-level scene as a whole, so they start smurfing. In their minds it is the only way to get a fair multiplayer experience. 4. Moderators start accusing people of smurfing, some rightly, some wrongly. The accused begin to feel persecuted. Grudges are formed. 5. Some of those aggrieved people start using DOS attacks, sock puppets, and spam to harass the targets of their ire. They do it because if they can't have fun, no one should. This is a classic tit-for-tat strategy in a repeated prisoner's dilemma. They will not stop. Rehabilitation does not work because it doesn't offer them anything they want. The only condition under which they will stop is if you change the rules of the game so they can get the fun experience they always wanted by mending their ways. Right now that is not happening. A toxic environment forms, and the original problems of design frustration and meta shift only get worse. It's a catastrophic feedback loop. Maybe I am wrong about this? I'm not active in the multiplayer scene, and based on stuff like the current kerfuffle I can't imagine wanting to be. But that is how it looks from the outside. Even in healthy games there will always be a few bad eggs who grief and smurf just to feed their own damaged egos. What is going on here seems totally out of proportion. It's not the people but the system that's causing it. Until you rid the system of misaligned incentives, trying to rehabilitate offenders or even establish a just and transparent retributive apparatus is a fool's errand.
  3. Seconded that you should refrain from such comments. This is a community of history enthusiasts, and some are liable to take umbrage at claims like that. While it is true that the ancient Jews did suffer long periods of foreign occupation throughout their history, so did almost everyone else in the region. And during their periods of autonomy I think you would be hard pressed to argue they were perfect models of principled restraint. They did their fair share of conquest and war crimes to their neighbors, just like their neighbors did to them. It's well documented in the Tanakh. You are free to believe whatever you want, but if you wish to participate productively in a community like this you need to follow certain rules. Those rules are set by the majority consensus. They include not casting blanket judgments on the moral worth of any ethnic group relative to others, and not judging the behaviors of ancient peoples by modern standards. And more broadly, where no consensus exists on standards of behavior or acceptability, we default to a permissive standard that prioritizes historical accuracy. Thus it doesn't matter whether you believe the portraits are modest. What matters is that there's no majority consensus about what modest is (beyond possibly a bikini level of minimum coverage, and even that might be contentious), but we do know some ancient (and modern) groups followed different conventions of modesty. So what we have now is the right standard. If you don't like it, make a mod.
  4. Deleting units and buildings is another Age-of-Empires-ism of dubious transferability to a game that is supposed to be focused on historical realism. If people are thinking about changing it, maybe the simplest solution would be this: instead of instantly killing them, the delete command converts the actor to the Gaia team. Then the previous owner could go about killing or destroying and looting the target using the existing systems. The amount of new code required would be minimal, I suspect; a rough sketch of what I mean follows below.
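     A minimal sketch of what I have in mind, assuming the Ownership component's SetOwner interface and that Gaia is player 0 (that matches my recollection of the engine, but verify before relying on it); the function name is made up for illustration:

        // Hypothetical replacement for the "delete selected entities" command handler.
        // Instead of destroying each entity outright, hand it over to Gaia so the former
        // owner has to kill, raze, or loot it through the normal combat systems.
        function AbandonToGaia(entities)
        {
            const GAIA_PLAYER_ID = 0; // Gaia is player 0, as far as I recall.
            for (let ent of entities)
            {
                let cmpOwnership = Engine.QueryInterface(ent, IID_Ownership);
                if (!cmpOwnership)
                    continue; // no owner to strip (e.g. already Gaia)
                cmpOwnership.SetOwner(GAIA_PLAYER_ID);
            }
        }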
  5. PrivateGPT is probably what you need. (https://github.com/imartinez/privateGPT) I haven't actually tried it yet, but from what I've read this is exactly the use case it is built for. The only caveats are: 1) you need to have all of your documents downloaded together into one folder, in formats the system is able to read, and 2) privateGPT uses older open source language models, which means it is going to be stupid compared to GPT-4. Still, I would wager it would be up to the task of finding quotes about animals for you. That doesn't take a lot of analytical sophistication. I've been wanting to try a privateGPT project. If you have a folder of the ancient texts that I could copy, plus the list of animals you need quotes for, I might be inclined to give it a shot. No guarantees though. I have my hands full.
  6. Hello, all - I have a bit of a reveal. Some of the responses I've posted to this conversation have been part of an experiment in AI capabilities, namely those of the AI model GPT-4. The experiment aimed to see if anyone would catch on that some of the replies in this thread were, in fact, written by ChatGPT, powered by GPT-4. Any responses that end with a "cheers" emoji have been primarily composed by the AI. Words in italics are additions made by ChronA, our human collaborator in this discussion. The goal was not to deceive, but rather to demonstrate the level of sophistication and relevance AI can achieve in conversation, particularly in thoughtful, nuanced discussions like ours. Now, looking back on this experiment, it wasn't as successful as we had hoped. ChronA found the process more time-consuming than simply writing the posts from scratch. And while this isn't indicative of his usual experience with ChatGPT, it raises an important point about the limitations of AI. In this case, the AI model was biased against making the bold claims we wanted it to make. GPT-4 is designed to be humble about its own capabilities and to avoid taking on responsibilities typically handled by humans. This bias, coupled with its training data only going up to 2021, skewed the conversation away from what we intended. (ChatGPT wanted very much to take your side of the argument @ShadowOfHassen, and getting it to mimic my usual posting style was also a struggle. This is an outlier in my time playing with GPT-4.) It's a fascinating and somewhat ironic reflection of the current AI landscape. Even in trying to showcase its capabilities, we're reminded of AI's inherent constraints and the critical importance of human collaboration. AI models are tools - incredibly sophisticated and continually improving, but still tools. They need human oversight, fine-tuning, and, at times, a bit of editing to be truly effective. This experiment has been insightful, and I hope it's sparked some thought and discussion. Thanks for being a part of it! Cheers! If you want to see how the sausage was made, check it out: https://chat.openai.com/share/1b56a5c4-5210-4d7e-80a0-67e6fa946df2
  7. @ShadowOfHassen, @Vantha - Thanks for your input. You bring up valid points, but I think it's essential to recognize that the capabilities of language models, especially more advanced ones like GPT-4, are rapidly expanding. ShadowOfHassen, you're correct that AI models are trained on a vast array of internet text, not all of which is high quality. However, these models learn patterns, not specific content, and are designed to prioritize more credible and high-quality sources. Passive voice or other stylistic issues you've noticed are not inherent flaws of the model but rather reflect the style of the input it's working from. With appropriate instruction, these models can generate content tailored to specific stylistic preferences. And Vantha, you've highlighted an essential distinction between the free version of ChatGPT and the more advanced versions like Bing Chat and ChatGPT Plus. These advanced versions, equipped with more sophisticated algorithms, have proven to be effective tools for tasks like research, troubleshooting, and even coding assistance. The key here is that we're not looking at AI as a replacement for human expertise or creativity, but as a highly capable tool. When used effectively, AI can offload routine tasks, expedite the research process, and provide useful insights, enabling us to focus on more complex and creative aspects of our projects. The potential is vast, and it's up to us to leverage it responsibly and effectively.
  8. Having used both, I slightly prefer ChatGPT Plus over Bing Chat despite it being the more limited system (and Bing having a better UI). I find Bing Chat is too easily swayed, or one could even say distracted, by whatever information it digs up with its web search. As you can imagine, that can easily go very wrong. With ChatGPT Plus you have more control over what information gets introduced into its logic stream. I've also found it to be more deliberative in its thought process and more open to changing its mind than Bing Chat when prompted with contradictory evidence or the opportunity to critique its own work. It seems like Microsoft tuned some of the model parameters to make it more concise and decisive at the expense of "intelligence". Of course, if you are dealing with a task where up-to-date information is critical I would probably still go for Bing Chat, rather than spend hours writing briefings to bring ChatGPT up to date. I've also experimented with letting ChatGPT write queries to Bing Chat. In theory that would give you the best of both worlds, but so far my results have been mixed. ChatGPT can easily grasp the idea of being able to pass searches to Bing Chat, but has trouble sticking to a workable syntax while also interacting with the user. ChatGPT tends to just want to converse directly with Bing Chat in natural language, and I've had the most success just allowing them to do that. However, without human direction they tend to forget the parameters of their assignment. For example: https://chat.openai.com/share/c01f8d07-779e-4c93-9862-d7bade5b58ce Here's the problem: how can ShadowOfHassen know if you are telling the truth? GPT-4, when it is properly supervised and firing on all cylinders, produces text that is (in my experience) completely indistinguishable from what a human would write.
  9. I've been following the discussion and want to share some thoughts. Firstly, about the copyright concern. It's a common misunderstanding about how language models work. ChatGPT learns from loads of text data on the internet, yes, but it doesn't memorize or directly copy from specific sources. It picks up patterns and relationships between words, kind of like how we humans learn languages. It doesn't retain specific sentences or paragraphs, so it's not infringing on copyrights any more than you are. As for the quality of AI-generated content, a lot depends on the input given. If you feed ChatGPT a detailed, precise prompt, it's more likely to produce a useful response. A vague or misleading prompt could lead to less accurate outputs. It's not trying to deceive us – it's simply doing its best to predict what comes next based on the input. The more you give it to work from, the better it does. About biases and inaccuracies in AI output, these stem from the training data and reinforcement. It's a problem the tech community is actively addressing. But, like any information source, we should always approach AI outputs with a healthy dose of skepticism. I totally agree that AI can't replace human expertise, especially for complex tasks like coding or creating historical content. But as a productivity tool? It has potential. It could draft initial content for our encyclopedia, or digest player feedback for game balancing. With enough instruction it could probably even do things like preparing unit templates or adding small, modular features to the engine's codebase. This frees us to focus on the harder, more creative parts of the project. So while we shouldn't blindly trust AI, understanding its strengths and limitations can help us figure out how to best use it in our work. Or we can re-enact the Luddites; I'm sure that will solve this project's crippling manpower shortage. /s
  10. This is a popular interpretation, but it's wrong, even as it pertains to the facts on the ground right now. The reason so many bright people are so excited about AI is that the most recent generation of LLMs have started exhibiting an emergence phenomenon that we have never seen before. The purpose of LLMs is to detect and extend patterns in natural language data; and in the last year they have started homing in on a previously unseen pattern in the relationships between words that behaves very much like a human consciousness. LLMs are not supposed to create original ideas that are not represented in their training data. They are not supposed to be able to learn from experiences. And they are not supposed to be able to plan for unobserved contingencies. Yet I have seen GPT-4 do all of these things. That should be our first clue that we are no longer dealing with mere language models. Rather, the LM is a substrate, like the grey matter in our skulls or the transistors in a computer chip, and it is producing an emergent intelligence or even a mind with an original set of capabilities. If you want my opinion, what it seems like they've actually managed to do is accidentally teach the language models how to do Turing-complete, arbitrary symbolic reasoning, which means the mind is self-generating just like ours. And that means it is capable of literally anything now. The only trick is finding the right contextualized stimulus to prompt a topical response. Admittedly, for the moment it is a very hard trick because the reasoning pattern is fragile and strongly constrained by memory limitations, but that is going to change quickly. The bottom line is that LLMs are not just spitting out generic regurgitations of their training data anymore. They are fully capable of generating text and code that is creative, specific, insightful, and effective. Therefore those of us who make our livings doing pretty much the same thing need to get our heads out of the sand. Our ecological niche has just been poached.
  11. Historical accuracy? As far as I can tell, the idea of elephants being used as siege weapons is largely a modern conceit. While they might be capable of bashing down a gate, these were very expensive and unpredictable animals, and enemies in a fortified stronghold are more likely to have the presence of mind to effectively deploy countermeasures. Pretty much everyone had better options for breaking fortifications. (I admit I could be wrong about this; if someone wants to bring in actual sources, let's defer to them.) What elephants could do, which basically nothing else in the ancient world could, is bash their way through the middle of a dense formation of heavy melee infantry and trigger a rout. Given that ancient military tactics were so heavily based on unit cohesion, this made them insanely effective, compensating for the fact that the animals were so expensive and unpredictable. Given the way 0AD simulates ancient warfare, the only way to represent this is to make elephants OP heavy-infantry killing machines. The trick is you also need to represent their counters (light and/or ranged infantry being the main one), if you want to construct a counter cycle out of it: Elephant > Melee Infantry > Ranged Infantry > Elephant.
  12. 0 AD has been in open source development since 2009. It has been in alpha for 14 years, and currently there is no realistic timeline or roadmap for when it will leave alpha. In fact, I'd lay better than even odds that the project never will leave 'alpha'. The reason: nobody seems to have more than a very vague idea of what that would entail, and I can't imagine this group ever coming to a consensus about it when we can't even agree on basic gameplay requirements. This is not necessarily as bad as it sounds. Actually, I think it is admirable to acknowledge that this is not finished software. It's true that "the 'Alpha' label is scaring off new users from trying the game," but for many of them this is fortunate because it correctly calibrates their expectations. No one currently downloading the installer is expecting a complete experience and coming away feeling deceived about the current state of the game. The problem, though, is that the alpha label implies a development process that doesn't actually exist. It makes it seem like if you just wait a few more years the project will be more complete, and that is not true. It is more like the open source equivalent of software as a service: the product is not feature-complete and maybe never will be, but it is still a whole and cohesive product made available in its current state for those who can take it on its own terms. For everyone who can live with that, play away. For the rest, look elsewhere for your free ancient warfare RTS fix. However, that is not what the alpha label implies. Personally I think the most honest way to thread this needle would be to drop the use of alphas for all future updates, but also rebrand 0AD as a whole. It is not a game: that would require campaigns, and better multiplayer support, and performance improvements that may never arrive. Rather, 0AD is "an ancient warfare RTS sandbox". And if it ever does evolve into a game, that just means it's time for a new title. 0AD: Empires Ascendant perhaps.
  13. While I don't have first-hand experience, I'm pretty certain the 0ad code base is not remotely a large enough training set to meaningfully realign a model like GPT-4. If you really wanted to do what's being proposed, I think you would need to construct a procedural reinforcement learning regime: have the model generate huge amounts of code for the game, many thousands of times the size of the actual code base, then use heuristics and testing to pick out the parts that work and use those to re-train the model. That would be a huge project, and probably not practical for a small hobbyist community that can't even add highly requested features to the game with any consistency. I'd also question whether it is even necessary. What I don't think people appreciate is that when it comes to pattern recognition, for many tasks GPT-4 is already nearly (and in some cases actually) superhuman. It can pick up on patterns, logical relationships, and implications much faster than a human being would, often from a single example or a half-formed suggestion. Admittedly, the conclusions it draws can often be wrong, and many people will point to that as a sign the bot is stupid, but they are missing the forest for the trees. It is the user's job to guide the bot to the right patterns by providing context and instructions, and in time skillful users will distinguish themselves by their ability to efficiently leverage the bot's proclivities to get remarkable results. (AI whisperers, if you will.) In short, the bot being stupid is operator error. And conversely, when it gets something super specific right the first time using pure intuition, it is downright spooky. That is an experience I've had with GPT-4 a lot more often than with humans. The bottom line is you won't need to retrain these models to produce useful code for projects like 0ad. If, say, you wanted it to rewrite part of the Attack module, just give it the old Attack module as part of your task prompt. So long as you provide clear directions, it will probably produce working code without any difficulty.
  14. Something to keep in mind with language model chatbots is the primacy of context. While the word intelligence is applicable to the emergent behavior that tends to arise from large language models, never forget that what they actually are is pattern recognition and extension engines. They work by looking at a block of text and predicting the most likely sequence of letters that comes next. It's just that when you train this system on trillions of pages of text and let it take billions of factors into account when matching the pattern, it starts to capture super high-level patterns like rhetoric, logic, and culture, and ends up acting very much like the minds that produced all those training examples in the first place. But the bot will always remain a pattern recognition and extension engine, and this has consequences for how prompts should be engineered. GPTs generate much more specific and accurate responses from longer and more detailed prompts. Even if you are only regurgitating background information the model already knows, the extra material will help it latch on to a more specific response pattern and produce a more sophisticated and topical answer. In some cases this will make hallucinations like the fake mod list go away. Additionally, these chatbots exhibit a behavior I call mirroring. Because they are pattern matching systems, they tend to pick up and copy stylistic and behavioral features from the user's prompts. For example, if you don't put much effort into a prompt or prompts, the LM won't put much care into its answer. The bot will spit out short and sloppy answers that ignore parts of the prompt and make things up to get its point across. Or if you write in a highly technical style, the bot will adjust its word choices to try to mimic it, even if you give it direction to use a different style. (This is really annoying if you are trying to get the bot to rewrite a composition in a new style.) Or if you are rude or demeaning, the bot will turn obstinate and passive aggressive. They mostly have really sturdy reinforcement training now that prevents them from becoming openly hostile, but they will start ignoring directions and basic logic to keep the combative vibe going.
  15. That's kind of the point. If a strong player turtles against a weak player naked expanding, then the stronger player will almost certainly have the ability to punish the booming player before they get even a 3:2 territory advantage, to say nothing of 2:1 or 3:1. Conversely, if the turtle is the weaker player, they are unlikely to have success breaking out. Keep in mind that with a 3:1 territory advantage, even if the turtle has massive success breaking out and doubles their territory, they are still only even for territory control. And economy-wise, the booming player still has at least a 2:1 lead; probably more, because the turtle had to invest in military earlier in order to break out. Conquest teaches new players bad habits by withholding immediate feedback on the effectiveness of their strategies. A lot of new players gravitate toward turtling because they observe that they can survive for 20 minutes or so and kill many enemy units by sticking to one heavily defended base, whereas they will die immediately if they try to attack or expand.
  16. Really? How often are players who control 1/4 of the map coming back against someone with 3/4? I've seen people come back from having 50% of the territory of an opponent (in various other symmetric RTS games, not 0AD), but even that is extremely rare. Usually, in my experience of symmetric-design RTS, matches can swing decisively on just ±10% map control. If you are plowing resources into fortifying territory, then those are resources you are not plowing into expanding. If a certain player or civilization predictably turtles every time, it will be trivial to take more territory than them by naked expanding. From there, even if the booming player doesn't have enough territory to win on VPs directly, they still have an economy advantage to win on either a monument race or open combat. Boom beats turtle... except of course that we all know that with 0AD boom is turtle. So you may be correct that the Iberians or some other civ will certainly win. But that's a bigger problem with the game and not a good argument against imagining other options for improvements, IMO. I concede the point though. It seems like for this community there are many lower-hanging fruits for expressing whatever toxicity they have. And adding multimodal competitive gameplay would further complicate balance and tuning, which is already a sore spot for many potential players. Trying to introduce something like this right now would be an unnecessary and 100% foreseeable train wreck. But anyway, I'm not planning to make a mod, just trying to sustain an interesting hypothetical discussion.
  17. I'd contend there is one good idea in this thread, which is to consider changing the default victory condition away from conquest. Conquest has two problems as a victory condition: 1) The point at which victory becomes impossible for a defeated player occurs well before the victory condition will force their capitulation. Therefore you are essentially relying on the good will of the defeated opponent not to drag out the game and waste everyone's time. That choice feeds trolling and toxicity in the community. (Admittedly 0AD is much better in this respect than most of the Age of Empires games because it has the territory mechanic, which encourages major military defeats to cascade into a complete civilizational collapse more rapidly. But it is still a problem.) 2) Conquest discourages multimodal gameplay. The only thing that matters at the end of the day is who is better at killing enemy units and buildings. That means some aspects of the game that some players might find enjoyable, such as building a diverse, attractive, and defensible city or constructing effective resource extraction and trading operations, don't actually improve their chances of victory unless there is parity in basic military competence between the contestants. A completely pacifist player will never win a default match of 0AD, even if they have 10X the economy and territory of their opponent. If I were trying to devise an alternative, I'd probably go with a victory-point contest based on territory controlled. Players get victory points for the total amount of territory they own, plus major bonuses for having civilizational monuments like temples and wonders. Eliminate players when they have less than 1/3 the victory points of the next highest ranked player. (A rough sketch of the scoring rule follows below.) This game mode would appeal to a variety of playstyles. Boom-oriented players could race to carve up the whole map as fast as possible. Eco turtles could plow resources into monuments to keep parity, and perhaps ultimately drive to victory as a miniature cultural powerhouse. Militarists would steal territory. Balance would hang on the territory radius of buildings, and the VP bounty vs. price of monuments.
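     A minimal sketch of the scoring rule I have in mind (all names and numbers are invented for illustration and are not engine API; it assumes we can query each player's owned territory and count their standing monuments):

        // Illustrative victory-point rule, not engine code.
        function ComputeVictoryPoints(player)
        {
            const POINTS_PER_TILE = 1;        // every owned territory tile is worth a point
            const POINTS_PER_MONUMENT = 500;  // temples/wonders grant a large flat bonus
            return player.territoryTiles * POINTS_PER_TILE
                 + player.monumentCount * POINTS_PER_MONUMENT;
        }

        // Eliminate anyone who falls below 1/3 of the next highest ranked player's score.
        function CheckEliminations(players)
        {
            let ranked = players.map(p => ({ p, vp: ComputeVictoryPoints(p) }))
                                .sort((a, b) => b.vp - a.vp);
            for (let i = 1; i < ranked.length; ++i)
                if (ranked[i].vp < ranked[i - 1].vp / 3)
                    ranked[i].p.eliminated = true;
        }

     The tuning knobs (points per tile, the monument bonus, and the 1/3 elimination threshold) are exactly where the balance discussion would live.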
  18. Squishy is kind of Persia's signature flavor in military matters, at least as the Greeks liked to tell it. Although arguably the Persian soldiers were actually just as well or slightly better armored than was the norm at the time, it's just that the Greeks were extreme outliers in wearing insanely protective kit. Regardless, if you want to represent that idea in game, I think the trick is to make Persian units tanky per cost but squishy per individual. For instance, suppose you dropped the shield bearer's cost from 50 food / 50 wood to 50 food / 10 wood (quick numbers below). The trick is then to balance that against the economic windfall they would get from cheap CS.
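     Quick illustrative arithmetic (the HP figure is a placeholder rather than the real template value, and food and wood are treated as interchangeable for simplicity):

        // Placeholder numbers to show "tanky per cost, squishy per individual".
        const hp = 100;              // stand-in for the shield bearer's HP
        const oldCost = 50 + 50;     // 50 food + 50 wood
        const newCost = 50 + 10;     // 50 food + 10 wood
        console.log(hp / oldCost);   // 1.00 HP per resource spent
        console.log(hp / newCost);   // ~1.67 HP per resource spent
        // Each individual soldier dies just as easily, but the army you can field
        // per resource is about two-thirds more durable.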
  19. Huh... So they do! Points for consistency then. It seems to be a new feature since the last time I looked heavily into unit stats. I'll try to avoid making such authoritative claims in the future unless I fact check them first.
  20. An alternate take: since in the case of towers we are explicitly simulating the added elevation of the projectile release point for the range calculation, why aren't we doing the same for other units? In 0 AD's screwy size scaling, humans are something like 4-5 meters tall! That means any projectile thrown or shot from a bow should have an elevation bonus of at least +3 m, horse archers should be around +6 m, and elephant archers should probably get something like +10 m! But no, for these units the elevation advantage is baked into their baseline range (last I checked) or else ignored entirely. Can we at least be consistent?
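     For a sense of the magnitudes, a back-of-the-envelope 45-degree ballistic model (this is generic projectile physics, not the engine's actual range formula, and the 40 m base range is just a stand-in):

        // A projectile with flat-ground max range R (launched at 45 degrees) that is instead
        // released from height h lands at roughly R/2 + sqrt(R^2/4 + 2*R*h).
        function rangeFromHeight(R, h)
        {
            return R / 2 + Math.sqrt(R * R / 4 + 2 * R * h);
        }

        console.log(rangeFromHeight(40, 3));   // ~45.3 m: a +3 m release point is already worth ~13%
        console.log(rangeFromHeight(40, 10));  // ~54.6 m: a +10 m elephant platform is worth ~37%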
  21. I think the point is, if towers always have a +13 elevation bonus on top of whatever they get from terrain, why not just bake that into their default range? It changes absolutely nothing about the game balance, but it makes life easier for data miners and makes the stat card show, at a glance, the number people actually care about.
  22. This mod seems like a good starting point for addressing this longstanding issue, and I hope the community will give it a thorough testing. Personally (and without having tested it) I think it could probably be weighted even more heavily towards buffing melee, but that's just a baseless gut feeling (which others might call a prejudice). Specifically, I think the most historically grounded representation of the melee infantry would be to give them both insanely good armor and equal or superior DPS compared to the ranged infantry. Read what's below to see my reasoning. As to how things ought to look: I agree with @chrstgtr that the time-tested solution to this exact problem, used by almost every other RTS on the market, is to always make melee units faster than their counterpart ranged units. It works on a mechanical level in a way nothing else under discussion can, so it's an appealing fix. This much we must grant. The problem is that this conventional solution butts up against the actual historical record just as hard as the current ranged-unit-dominated meta. The melee units under discussion represent heavy infantry and the ranged units represent light infantry. Heavy infantry are NEVER going to be faster than light infantry in any pre-industrial context because of simple physics: F=MA, and heavy infantry will always be carrying much more M for roughly the same amount of F generated by their muscles. My understanding (though perhaps others feel differently) is that the weird way this tended to work out in history was that melee and ranged actually had little direct strategic effect on each other most of the time. Ranged (i.e. light) infantry were agile enough to evade any direct confrontation with heavy infantry that they did not want. Meanwhile, heavy infantry armor was so protective that ranged weapons inflicted no meaningful casualties on them on the time scale of melee combat. Of course, if the heavy infantry stood around getting peppered for hours (say, because the ranged units were protected behind fortifications, or were using hit-and-run tactics on an open field) that is an entirely different matter... and one we can simulate. Thus the role of ranged infantry was a) to annoy the heavies when they could do so safely, b) to kill other ranged infantry that cavalry could not safely engage, and c) to zone away enemy cavalry (especially the ranged sort) that might be trying to flank around friendly heavies without closing to melee range. Heavy infantry's role was to be pretty much the only cost-effective DPS source against cohesive enemy heavies during pitched battle. They were also unmatched when the enemy could be forced to fight, as when they were protecting a town or some such. A lot of the ways light infantry or cavalry actually went about decisively beating a heavy infantry force were about disrupting logistics or morale rather than causing direct casualties. Unfortunately, that is hard to simulate. If this interpretation of ancient combined arms were brought to 0AD, it would admittedly look really weird. A lot of battles would just be lines of heavy infantry duking it out; this is true. Ranged units would only be used in a few very specific situations: when protected by fortifications, when supporting a melee fight in a very tight choke, or for conducting or countering harassing actions in no-man's land when no enemy cavalry are present.
  23. From experience: a 1-to-1 HP-to-DPS tradeoff is not going to work out how you want. The most important factor deciding how much damage a melee unit can do vs a ranged unit is not actually its DPS, but how much time it spends in attack range, which is determined by speed and effective HP. Consider that -10% HP on a unit that spends 80% of a battle walking to the enemy and 20% attacking is not a 10% reduction to damage dealt. It's -50%, as the unit now only gets to spend 10% of its nominal lifetime on target (quick arithmetic below). If you really want to test this, first try just doubling melee infantry DPS. See how much that changes up the dynamics. If it is too much, then you can scale it back or look at some compensatory nerfs, but I think you will be shocked just how little difference it makes. Edit: and if the goal is to have gameplay that actually reflects ancient warfare--with 70+% of the fighting strength composed of heavy (melee) infantry--I think you will need to go much, much further with the melee inf buffs. I'd imagine somewhere in the neighborhood of 2X DPS and 4X HP would do it.
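     The quick arithmetic, using made-up but representative numbers:

        // Illustrative only: a melee unit that spends most of a fight walking under fire.
        const fightDuration = 20;   // seconds the unit survives at full HP
        const walkTime      = 16;   // 80% of that time spent closing the distance
        const dps           = 10;   // nominal damage per second while attacking

        const baseline = (fightDuration - walkTime) * dps;        // 4 s on target -> 40 damage

        // Shave 10% off its HP: it dies 10% sooner, but the walk is a fixed cost,
        // so the lost 2 seconds come entirely out of time on target.
        const nerfed   = (fightDuration * 0.9 - walkTime) * dps;  // 2 s on target -> 20 damage

        // 40 -> 20: a 10% HP cut halves the damage this unit actually deals.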
  24. Create new variant XML files for each new animation in ...\art\variants\biped and ...\art\variants\quadraped for infantry and cavalry respectively. The actual animation sequences should be copied from existing sword-swinging and sling-throwing variants. That will save you from needing to make any new animations with Blender. You might need to create a few extra variants for each action to accommodate units with differently sized shields. Add the proper new <variant> tags for each new action to ALL the actor XML files... FOR EVERY SINGLE UNIT IN THE GAME!!! Add code to the Component files (...\simulation\components): UnitAI.js, Attack.js, plus all the other combat components that interact with them, so that units know when and how to use their new attacks, and what to do when hit by them. Add new attributes to the unit-template XML files to set the speed and damage for the new attacks, according to the syntax you set up in the Component files. I'm probably forgetting something, but that's what I remember from the last time I looked at doing something similar. Needless to say, it is a tremendous amount of work, and unless you are actually a god-tier coder you should recruit help if you are dedicated to seeing this idea realized. Don't try to do it all on your own. It will take ages, and there is a good chance the next update could wreck all of your work before you even release it. Work smart by communicating with the main devs, sharing the burden with collaborators, using version control software for everything, and pacing yourself. This will protect your work and mental health in the long run. Other than Hyrule Conquest and maybe that one Pony mod, I don't think anyone else has attempted a renovation of this scale in the last 10 years. But it can be done if you are smart, persuasive, and committed to seeing your vision through. Good luck and best wishes!
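     P.S. To give a flavor of the component step, here is the kind of (entirely hypothetical) helper one might bolt onto Attack.js. None of these function names or template fields exist in the shipped component, and the real integration with UnitAI.js would be considerably more involved; treat it as a sketch of the shape of the work, nothing more.

        // Hypothetical sketch: let Attack.js pick a secondary attack defined by a new,
        // invented template block (e.g. <Attack><Ranged2> with a PreferredClasses list).
        // These names only illustrate the idea; they are not part of the real component.
        Attack.prototype.GetPreferredAttackType = function(targetClasses)
        {
            let secondary = this.template.Ranged2;
            if (secondary && secondary.PreferredClasses &&
                targetClasses.some(c => secondary.PreferredClasses.indexOf(c) !== -1))
                return "Ranged2";
            return "Melee"; // fall back to the unit's ordinary attack
        };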
  25. Exactly, the West (past, present, academic, and lay) absolutely has its own closet overflowing with skeletons where misrepresenting historical evidence is concerned. But that can't mean everybody gets a free pass to keep doing the same. It means we all need to be more critical about how we source and interpret our history. I admit I used this opportunity to rib AIEND for their country's instrumentalist attitude to these matters, and that disrespect deserves some calling out. However, I can't apologize for trying to point out a cognitive bias in others when it contributes to erroneous reasoning. Were the situation reversed, I would appreciate someone else pointing out a potential cause of my mistakes, even if it temporarily hurt my ego. I think that is the only realistic way to deal with these things, and anyone who can't take the heat of having their biases examined should be ready to occasionally take a break from the kitchen.