
Flamming_Python

Community Members
  • Content Count

    4
  • Joined

  • Last visited

Community Reputation

3 Neutral

About Flamming_Python

  • Rank
    Tiro
  1. Yeah, that's my line of thinking too; I can't predict whether the CLIPS or another rule-based approach will be faster than the current one, but it's definitely worth a shot and worth branching out in this direction. Rule-based systems are indeed fairly efficient when dealing with vast quantities of data; they have been employed in a variety of industrial applications, some of them real-time. In fact, rules engines are not slowed down so much by the sheer number of facts and rules; it's the partial matches generated by sub-optimal or poorly defined rules that kill them.
I'll give you an example. Say a rule has five conditions to match. You would want to test the most specific conditions first, because those are the least likely to be true: if the first (most specific) condition isn't found, the search can be called off without wasting any more processor time. A poorly written version of the same rule, however, might have the least specific conditions near the top, so they get checked first. It could well turn out that the first four of the five conditions are found, but the final, most specific condition isn't. You end up in the same place as in the first case, except that instead of having spent processor time only on the search for the most specific condition, you have spent it searching for all the other conditions too. What's more, the algorithm doesn't look for just one match of each of the first four conditions; it finds all of them, so you end up with a whole load of partial matches before you even get around to checking for a single match of the final condition. The specifics differ from rules engine to rules engine; this is just one example of what can happen.
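To make the ordering point concrete, here is a toy sketch (my own illustration, not code from any real rules engine): the same three-condition rule is checked against a set of unit "facts" with the conditions ordered two ways, counting predicate evaluations. All names (`isAlive`, `hasBanner`, etc.) are hypothetical.

```javascript
// Toy illustration: why testing the most specific (least likely)
// condition first wastes less work. Not real Rete internals.
const units = [];
for (let i = 0; i < 100; i++) {
  units.push({
    alive: true,                                 // unspecific: true for everyone
    type: i % 2 === 0 ? "infantry" : "cavalry",  // ~50% match
    hasBanner: i === 41,                         // very specific: one unit only
  });
}

let evaluations = 0;
const count = (pred) => (u) => { evaluations++; return pred(u); };

// Conditions from least to most specific
const isAlive   = count((u) => u.alive);
const isCavalry = count((u) => u.type === "cavalry");
const hasBanner = count((u) => u.hasBanner);

function matchRule(conditions) {
  evaluations = 0;
  // every() short-circuits: a unit stops being tested at its first failed condition
  const matches = units.filter((u) => conditions.every((c) => c(u)));
  return { matches: matches.length, evaluations };
}

const worst = matchRule([isAlive, isCavalry, hasBanner]); // least specific first
const best  = matchRule([hasBanner, isCavalry, isAlive]); // most specific first

console.log(worst.evaluations, best.evaluations); // 250 vs 102 predicate calls
```

Both orderings find the same single match, but the bad ordering does well over twice the work, and the gap only grows with more conditions and more facts.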
Anyway, as long as the system is coded efficiently and the rules are constructed to avoid such pitfalls, it should be able to handle the gameworld without too much trouble. By the way, attempting to write our own rules engine is just asking for trouble with slow-downs: today's rules engines have been worked on extensively to make them efficient, and by rolling our own we would be depriving ourselves of all those improvements. Trying to make an AI out of a bunch of if-else statements isn't going to cut the mustard either. The only solution I see is to use a ready-made rules engine; I think I saw some for JS, so hopefully that won't be a problem. One slight fear is that data might have to be converted twice: once from C++ to JS, and then from JS into the representation used by the JS-based rules engine. But that probably won't be the case; a modern JS-based rules engine should accept plain JS objects without any additional processing. If the team adds support for AI development in C++, I could start using CLIPS, which would be even better, as I have some experience with it and its syntax.
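This is roughly the interface shape I'd hope a JS rules engine gives us: facts are plain JS objects and rules are condition/action pairs over them, so no conversion layer is needed. The sketch below is entirely hypothetical (the naive fixed-point loop exists only to make it runnable); a real Rete-based engine would match far more efficiently, which is exactly why we shouldn't write our own.

```javascript
// Hypothetical interface sketch: plain JS objects as facts, no conversion step.
// The naive loop is for illustration only -- use a real engine in practice.
const facts = [
  { kind: "resource", name: "food", amount: 40 },
  { kind: "unit", name: "worker", task: "idle" },
];

const rules = [
  {
    name: "forage-when-food-low",
    when: (fs) =>
      fs.some((f) => f.kind === "resource" && f.name === "food" && f.amount < 100) &&
      fs.find((f) => f.kind === "unit" && f.task === "idle"),
    then: (fs) => {
      const idle = fs.find((f) => f.kind === "unit" && f.task === "idle");
      idle.task = "forage"; // mutate the plain object directly, no marshalling
    },
  },
];

// Naive fixed-point loop: fire any rule whose condition holds, until quiescent.
let fired = true;
while (fired) {
  fired = false;
  for (const rule of rules) {
    if (rule.when(facts)) {
      rule.then(facts);
      fired = true;
    }
  }
}

console.log(facts[1].task); // "forage" -- the idle worker has been retasked
```

The point is that `facts` could be handed over straight from the engine's gamestate snapshot, with no second conversion step.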
  2. C++ would be nice, at least for me; it would mean being able to call C tools/libraries such as CLIPS and FuzzyCLIPS almost directly. Well, whatever you guys decide, I guess. I would keep the JS AI structure as well for now, for backwards compatibility. As for gamestate duplication, I presume you mean converting C++ objects to another language (e.g. JS) and vice versa? To a certain extent that may be unavoidable in my approach even if you switch to C++, because rules engines often have their own specific representations of objects/facts. Well, we'll see. One property of rule-based systems, or rather of the Rete algorithm that most of them are based on, is that the speed of the final code depends largely on the number of partial matches generated within each rule before a complete match is found, along with several other subtle things. Programmers rarely write such code optimally at first, which means there is usually a great deal of scope for optimisation once they focus their attention back on it. The good side is that it's very straightforward to optimise: just reorder or reconstruct the rules, and test your changes regularly to see whether you're heading in the right direction. Put your head to it and you're almost guaranteed, sooner or later, to get your AI running as fast as it possibly can, without having to bash your head against the desk hunting for the hidden details that are bogging down your Aegis or qbot AIs.
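Here is a small sketch of what "partial matches" means and why join order is the thing to reorder (again my own toy model, not CLIPS internals): when two conditions are joined left to right, every match of the first condition becomes a partial match that is carried forward, so putting the broad condition first builds many partials that are later discarded.

```javascript
// Toy model of a two-condition join in a Rete-style network.
// Partial matches = tuples built up from the first condition's results.
const soldiers = Array.from({ length: 50 }, (_, i) => ({ id: i, type: "soldier" }));
const heroes   = [{ id: 99, type: "hero" }];

function joinCount(firstSet, secondSet, joinOk) {
  let partials = 0;
  let complete = 0;
  for (const a of firstSet) {
    partials++;                       // every first-condition match is a partial match
    for (const b of secondSet) {
      if (joinOk(a, b)) complete++;   // survives the join => complete match
    }
  }
  return { partials, complete };
}

const sameTeam = () => true; // trivial join test, just for the sketch

const broadFirst  = joinCount(soldiers, heroes, sameTeam); // 50 partial matches
const narrowFirst = joinCount(heroes, soldiers, sameTeam); // 1 partial match
console.log(broadFirst.partials, narrowFirst.partials);
```

Both orderings produce the same complete matches; only the intermediate memory and work differ, which is why reordering conditions is such a cheap, mechanical optimisation.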
  3. Thanks for the replies, guys. I already had a look at that thread, and at the AI source files too. I didn't look too deeply; the conversations seem to hover around enhancements to the Aegis AI. In terms of getting up to scratch, I think I'd accomplish it more quickly with a hands-on approach: beginning from scratch on a new AI myself. From my brief 20-minute skim of the source, the AIs are written in a somewhat rule-based fashion, albeit too procedurally for my tastes. The approach I want to try first is a fully rule-based one, with a completely declarative style of coding, as far as JS allows me one anyway. A rule-based system, in other words, that can later be expanded into an expert system where the AI formulates its own unique strategies (some of them completely weird, but possibly devastatingly effective) from its own knowledge base. I thought up such an expert-system variant at work, but I never had the chance to put it into code. I'll elaborate on my ideas in due course, but for now it's too early to go into details; first things first and all that. A shame that we're limited to JS, really, but with a little cleverness it should be perfectly adequate for the time being (a rules-engine extension for JS might be just the thing I need a little later on, assuming it won't tank the processor). Well, I say from scratch, but in reality I'll just copy and paste the qbot folder, gut most of it immediately, begin steadily replacing the rest with my own code, and build it up from there. Competing design bureaus are a good thing. There is already a not-insignificant host of programmers working on the current AI systems; by adding another AI to the mix and developing it concurrently, I would be able to draw some lessons from them, and they might gain some insight from my approach.
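To show the declarative style I mean versus the procedural one, here is a minimal sketch (all rule names, fields, and numbers are hypothetical, not taken from qbot or Aegis): strategy knowledge lives in rule *data*, and a generic loop interprets it, instead of hand-written per-situation update functions.

```javascript
// Hypothetical declarative-style sketch: rules as data, interpreted generically.
const gameState = { food: 80, wood: 500, idleWorkers: 3, barracksCount: 0 };

// Each rule is pure data: a readable name, a guard, and a derived order.
const strategyRules = [
  {
    name: "gather-food",
    when:  (s) => s.food < 100 && s.idleWorkers > 0,
    order: (s) => ({ action: "assignWorkers", target: "food", count: s.idleWorkers }),
  },
  {
    name: "build-barracks",
    when:  (s) => s.wood >= 300 && s.barracksCount === 0,
    order: (s) => ({ action: "build", target: "barracks" }),
  },
];

// Generic interpreter: collect the orders of every rule whose guard holds.
const orders = strategyRules
  .filter((r) => r.when(gameState))
  .map((r) => r.order(gameState));

console.log(orders); // both guards hold here, so both orders are emitted
```

The appeal is that growing the AI means adding and reordering rules, not threading new branches through procedural update code, and it's the form a real rules engine could later consume directly.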
  4. Hi guys, I wanted to introduce myself. I love history, and I'm a Java programmer by trade, based in St. Petersburg, Russia. I did a joint degree in Computer Science and Artificial Intelligence, so I have some experience with AI coding from back in my uni days, although mostly it was theory and reading; I still refresh my knowledge from time to time. I've also gained some experience with rule/knowledge-based systems at work (JBoss Drools and CLIPS/JESS), although what we have is not an expert system in the true sense of the word. Anyway, 0AD looks really promising and I would like to be a part of it. I'm particularly enthusiastic because I can see the potential here for putting theory into practice in terms of AI: symbolic AI techniques, expert systems, ANNs, genetic algorithms, fuzzy logic. I really want to see what might work here in terms of putting together effective computer behaviour (without breaking the bank/processor time, of course). I understand that the AI code here is all based on JS; I have some experience with JS from web programming, although it's fairly rudimentary. Hopefully I can bring myself up to scratch quickly, if you guys will have me. In the longer term I might be interested in branching out into other areas: graphics processing, or perhaps UI development, which I might know a thing or two about as well. So tell me, what's the first step? Download the source and research the AI files? Anything in particular I would do well to focus on?