
Ykkrosh

Everything posted by Ykkrosh

  1. I'm pretty sure V8 snapshots aren't portable, since V8 doesn't even have the concept of portable bytecode (unless it all changed recently) - everything gets compiled directly to x86/x64/ARM/MIPS machine code, so the snapshot would have to include that. (Apparently that's the case.) It'll probably even vary between debug and non-debug modes on the same machine, since it's just dumping the data structures directly. It looks like a snapshot could contain arbitrary unchecked machine code, so we couldn't safely let players share saved games (which is necessary if we'll e.g. allow players to rejoin an in-progress multiplayer game - clients won't want to let the server execute arbitrary code on them). Also it probably won't give precisely the same output for every user (since it'll depend on e.g. GC and allocation order, I think), so we can't use it to verify that multiplayer games stay in sync. So it won't really work for any of our uses. (Also, there's no way we're rewriting all of our existing SpiderMonkey-using code to use V8, because that'd be far too much work.)

Objects won't be passed between AI players, but they'll be passed from the engine to the AI players (to provide the list of entities etc). Those objects can be quite large (thousands of entities, each with dozens of data fields), and duplicating those objects for every AI player's context would be a waste of time and memory, so that input data should be shared by all player contexts. I think that's the only place where scripts might notice that arrays don't match their own context's Array.

Closures are still an issue - if the script does a setTimeout and then gets serialised and deserialised, there's no way we can reconstruct the timer state, because we don't have enough information or enough API to set up the closure and its bindings again.

Expose in what sense?
It's all just scripts and anyone can edit any of it, so I'm not sure how useful it is to try hiding parts from AI scripters (and distinguishing them from AI API library script developers). Probably we should strongly encourage people (via documentation) to use the standard higher-level APIs, since they're easier and more stable, but I don't know that adding artificial barriers to hide the low-level API would help - it might just be unnecessary complexity.

Seems like quite a few game developers actually don't know it, and are much happier with Lua. (E.g. some people forked the Syntensity project and are moving away from its already-implemented JS support, to Lua.) But yeah, JS is pretty popular and only seems likely to grow, so it's good if we can exploit that.

That'd be great. I'll try committing my currently primitive working code in the next couple of days, and then it should be possible to experiment with new designs.
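The closure problem above can be demonstrated directly. A minimal sketch (the `aiState` object and its fields are hypothetical, for illustration only): JSON round-trips plain data fine, but functions and the variables they capture are silently dropped, which is exactly why timer callbacks and other closure state can't survive a save/load cycle.

```javascript
// Sketch: why AI state containing closures can't survive serialisation.
var aiState = {
  turn: 42,
  targets: [10, 17, 23],
  // A closure capturing local state -- this is what can't be reconstructed:
  onTimeout: (function () {
    var retries = 3;
    return function () { return --retries; };
  })()
};

var saved = JSON.stringify(aiState);
var restored = JSON.parse(saved);

// Plain data survives the round trip:
//   restored.turn === 42, restored.targets[1] === 17
// ...but the closure (and its captured 'retries') is gone entirely:
//   typeof restored.onTimeout === "undefined"
```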
  2. Thanks, interesting thoughts. I believe that's not possible (which is unfortunate, since it causes pain throughout the rest of the design) - serializing closures requires deep access to engine internals, so it can't be done by application code. SpiderMonkey has support for XDR, which seems to basically do this, but it's almost totally undocumented, and I don't trust it to provide the security and portability and determinism that we need. (It's designed mainly for optimising Firefox startup by caching parsed scripts, where it's okay if the cache gets invalidated by changes to the machine or to SpiderMonkey - we have stricter requirements for saved games and for network synchronisation.) Also it would be inefficient, since it'd be serializing all the bytecode for all the functions that are defined, instead of merely serializing the variable data. I'm not aware of any reasonable solution to these problems, other than restricting the state stored by AI scripts to the subset that can be safely serialized.

That's using closures as part of the script state, so the serializer can't allow it.

My goal was to avoid having multiple global scope objects in the AI system, because that's often a pain. In SpiderMonkey you need a separate JSContext per global (unless you do certain tricks which are ugly hacks, cause JIT problems, and are likely to break in the future), each with its own set of standard global values like 'Array' and 'undefined'. That means you get weird behaviour when passing objects between contexts (e.g. "x instanceof Array" fails if x was an array constructed in a different context). Maybe that's not such a big deal, though... Using a separate context per AI player would prevent name clashes between them, and if the serializer was extended it could probably support global variables (while skipping over global function definitions), so it should simplify life for AI scripters, and is probably worth the added engine complexity.
When all AI players share a global object, we need constructors so the engine can create multiple independent instances. If they're each in independent global scopes then maybe that could be simplified. CommonJS requires that each script is run in its own local scope. It seems that's usually achieved by running in a function scope (i.e. wrapping with "(function(){ ... })()"), but that makes serialization impossible (we can't look inside the executed function to see what variables were declared). I suppose we could run in global scope and have a new global object (hence new context) per 'module', but that sounds a bit complex and I'm not sure we really need that much modularity. We could do everything in a single global scope and have an "include('name');" function instead of specifying includes ahead of time in the .json - I'm not sure either way is particularly better.

I want to do as much as possible in JS instead of C++, so the API provided by the engine is very raw and low-level. My vague plan is that higher-level wrappers can and should be implemented in JS - have a script library doing something like

```javascript
function Unit(entities, id)
{
    this.entity = entities[id];
}
Unit.prototype = {
    get hp() { return this.entity.hitpoints; },
    get isHurt() { return this.entity.hitpoints < this.entity.maxHitpoints; },
    move: function (x, z) {
        Engine.PostCommand({"type": "walk", "entities": [this.entity.id], "x": x, "z": z, "queued": false});
    },
};

var u = new Unit(entities, 10);
if (u.isHurt)
    u.move(100, 100);
```

so you can construct a Unit object based on the raw data provided by the engine, and the object implements whatever API you want without changing any C++. In practice scripts will probably want objects that represent and control squads of units, which they'll have to define themselves, so they might as well define objects for individual units too.
There should eventually be some standard wrapper library like this, which all AI scripts can use, so hopefully the ugliness of the raw API can be hidden. I don't really know what it should look like, though, or whether it will really be possible without extra engine support. Maybe the best approach is to just do the raw API for now, experiment with it, encourage people to design higher-level APIs for it, and see what the major complaints are.
  3. There wouldn't be any CPU cost, the blending would just be done on the GPU (probably via a second render pass on all buildings, which shouldn't be particularly expensive). I think the main problems are the complexity of implementing any new features in the renderer and the effort of producing new art, versus the value the feature provides.
  4. If we reward players for training different types of units (e.g. they get 10XP for the first unit of type X they ever train, 9XP for the second X, 8XP for the third, 10XP for the first of type Y, etc), then a player who's nearly won a match will think "I should drag this out for as long as possible and use my spare resources to train a bunch of those rubbish ships that I never otherwise bother using, in order to get the XP". Training units needlessly isn't a fun gameplay mechanic, so our reward system would be encouraging players to be bored, and I'm not quite sure how to avoid that.

(I think there's plenty of evidence that players will willingly do boring things if you reward them for it (even with essentially meaningless tokens) - see e.g. grinding for experience or loot in MMORPGs, which is sometimes so boring that players will pay other people to do it for them, or see people who buy terrible Xbox games to get achievements and boost their gamerscore. The reward structure is addictive and makes people play the games for longer, which is good business sense when you charge for subscriptions, but if the player's not actually enjoying the game then it seems exploitative and dishonest and should be discouraged. And it's not just a problem in games - people will aim to improve measurements regardless of the real value of what's being measured.)

((I'm definitely vulnerable to this exploitation myself. When Team Fortress 2 had a week-long Halloween event where a present would randomly spawn in the level every few minutes, and whoever (out of ~24 players per server) picks it up first wins a wearable paper bag with a face painted on the front (from a set of 9, where if you collect them all you can convert them into a rare new paper bag with a slightly different face on it), I'd set a stopwatch and make sure I was ready to run around the level searching for the present as soon as it was about to spawn, instead of actually playing the game.
It wasn't fun and it kind of ruined the game, and I recognised that but I kept on doing it. And I only got two of the hats.))

If we can fine-tune the system so it only gives players incentives to do things that are inherently fun, then I think it'd be great. I just don't think I've seen any suggestions yet for accumulating experience points that achieve that, without also giving incentives to exploit the system by playing in non-fun ways. Do people agree with this as a general principle? If so, that probably makes it much easier to evaluate specific ideas - imagine a devious player who'll do anything to maximise all their scores and unlock everything as quickly as possible, and then see if they'll be doing non-fun things to achieve their goal; if so, then it's a bad idea. If the principle's wrong or confused and I'm just crazy then I'd like to understand why I'm mistaken.

That'll cause the devious player to play easy maps and wait until victory is assured, and then keep playing and rack up experience points by doing whatever grants them. So it'd probably be even less fun than granting the same points regardless of victory, because the player will waste extra time defeating easy opponents.

In that case, why have the experience system at all? For single-player we can reward players for winning (which is always a fun thing for them to do, unless our gameplay is fundamentally broken) with a traditional campaign structure, or with some kind of non-linear conquer-the-world mode where there's multiple paths leading to regions with new maps/enemies/units, or with the more abstract thing you're suggesting where we count number of victories vs each faction. What value would an experience points system add in this case, without distracting the player into doing non-fun things?
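For concreteness, the diminishing-reward scheme described above (10 XP for the first unit of a type ever trained, 9 for the second, and so on) reduces to a one-line formula. A hypothetical sketch - the function name and the per-type counter are inventions for illustration, not a proposed API:

```javascript
// Hypothetical sketch of the diminishing XP scheme: the n-th unit of a
// given type ever trained grants max(0, 10 - n) XP, so grinding the same
// unit type quickly stops paying off (but, as argued above, still
// rewards grinding *different* types).
function xpForTraining(trainedCounts, unitType) {
  var n = trainedCounts[unitType] || 0;  // units of this type trained so far
  trainedCounts[unitType] = n + 1;
  return Math.max(0, 10 - n);
}

var counts = {};
// xpForTraining(counts, "ship")  -> 10  (first ship ever)
// xpForTraining(counts, "ship")  -> 9   (second ship)
// xpForTraining(counts, "spear") -> 10  (first spearman)
```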
  5. Interesting thoughts. I think an important factor is whether the meta-game (of collecting achievements and unlocking features and beating high scores etc) reinforces the primary gameplay (which we design to be fun by itself), or undermines it, or is indifferent to it. With racing games, the primary gameplay involves winning races and getting around tracks as quickly as possible, and that involves components like accurate steering and accelerating and handling particular corners of particular tracks. It sounds like the GT5 license tests reinforce those components, and the unlocks reinforce the main goal of winning races, so they're encouraging you to play the game more and better, which is a good thing. In our game, the basic goal is to win matches by defeating your enemies, and that involves various low-level micromanagement tactics and some higher-level strategies. We need to be careful that anything we add reinforces that gameplay.

In that light, I think it could be good to follow the license tests approach and have lots of short missions that focus on particular aspects of the gameplay (optimising your economy, choosing troops to counter your enemy, deploying your troops for battles, etc) that can function as an introductory tutorial and also as ways for skilled players to practice and improve. We'll have to have tutorials anyway, and if they can be reused as advanced challenges (by tightening the time/resource constraints etc) then that's a cheap way to provide more value to players. But our game isn't about winning matches as quickly as possible, or using as few or as many resources as possible, or training as many units as possible. If we rewarded players for those things, outside of specific training missions where those things are directly relevant, I think it would undermine the gameplay: players will always use rush strategies, or always train huge numbers of expendable troops, because it gives them more experience points.
Players who want to take a different approach, e.g. slowly building a heavily defended city then doing a sneak attack on the enemy's vital buildings, would be penalised in the XP system even though they're doing something good and fun within the main gameplay. The result would be harmful to the game and to the players' enjoyment - the meta-game would be distracting them from the game itself. So I think we shouldn't reward the player for artificial achievements; we should just reward them for playing the game properly and well, which pretty much means rewarding them for winning matches. Single-player campaigns do that - the reward for winning on one scenario is access to the next. A conquer-the-world mode would do the same - the reward for winning is expansion into new areas and new maps, and perhaps new civs and new units. It doesn't matter how you win, so players are free to take whatever approach they fancy without worrying that they're missing out on anything. Victory is a very binary thing, though - it can't tell the difference between an hour-long struggle in which the player nearly wins several times and is unlucky in the end, and a match where they fell asleep after five minutes and eventually got slaughtered, so it doesn't seem an ideal measure. Are there other ways we could measure progress and reward players, that wouldn't have the danger of introducing artificial goals that distort the fundamental gameplay?
  6. Good point - will do that instead. Yeah, I suppose it's unnecessary complexity here. We should probably let players change difficulty between campaign maps, but it's not really needed to change it during a game, and AI will probably be less buggy if we avoid dynamic changes like that. I think the problem with this is we might want classes other than the main AI class, in order to have more modular code design - e.g. an economy manager class that doesn't care about the rest of the AI, or a class representing a group of units. Making those classes be members of the AI class would require ugly syntax ("DummyBotAI.prototype.UnitGroup.prototype.Foo = function() { ... }", "new DummyBotAI.prototype.UnitGroup()", etc), and also it would be hard to share functions or constants between classes (UnitGroup.Foo would have to say "DummyBotAI.prototype.MaxHouses" instead of "this.MaxHouses"). It doesn't seem good to add that complexity to all AI scripts forever, when it's only going to help the rare cases of creating a new AI based on an old one. For campaign maps, we want the user to have some control over difficulty, but we don't want the AI to act radically differently depending on the setting (because that's unnecessary and it makes testing harder). So I think we need some numerical difficulty setting for the AIs we use in campaigns (which will cause them to lower their pop cap, slow down production, skip some more advanced tactics, etc), and might as well support that in skirmish matches too. If we have multiple independent AIs then I agree we should let skirmish players select from those, but that should be in addition to the difficulty control for each AI.
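The modularity argument above can be made concrete with a small sketch (class names taken from the post's own example; the `MaxHouses` constant and `needsHousing` method are hypothetical): separate top-level classes can share constants directly, which nesting everything inside the AI class would make awkward.

```javascript
// Sketch: top-level classes sharing a constant directly, instead of the
// nested "DummyBotAI.prototype.UnitGroup.prototype.Foo" style criticised
// above. (MaxHouses and needsHousing are invented for illustration.)
var MaxHouses = 10;  // shared constant, visible to every class in the scope

function UnitGroup(units) {
  this.units = units;
}
UnitGroup.prototype.needsHousing = function () {
  return this.units.length > MaxHouses;  // no "DummyBotAI.prototype.MaxHouses"
};

function DummyBotAI(playerID) {
  this.playerID = playerID;
  this.mainArmy = new UnitGroup([1, 2, 3]);
}

var ai = new DummyBotAI(1);
// ai.mainArmy.needsHousing() -> false (3 units, cap of 10)
```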
  7. There's two relevant things that have changed: console commands shouldn't be started with ":" or "?" any more (just type in the JS expression directly with no prefix), and the simulation system was rewritten so none of those examples work any more. (And most of the examples aren't possible any more - the console commands run in the GUI script context, and the GUI can't directly access the simulation state since they're isolated from each other.)
  8. Yeah, name conflicts are an issue with this approach - that's probably the aspect I'm least confident in. Putting everything in the global scope seems to have a number of benefits. It simplifies serialisation of AI state - the engine can save an object's class name, then when deserialising it can look up the name in the global scope to reconstruct it correctly. I think it lets us prevent dynamic changes to global objects (which would break the serialisation system) by 'freezing' the global scope after loading (so all dynamic state must be stored in the AI player objects). It works nicely with SpiderMonkey JITs (which dislike fancy tricks with fake global scopes). But name conflicts are bad, and I suppose they're sometimes hard to avoid - e.g. if I copy-and-paste one AI player into a new directory because I want to tweak it a bit to make a new version, then want to play the old version and new version against each other, I'd have to rename every single function and constant in the new version. We could perhaps avoid that problem by running every AI player in a completely independent JS context with its own global scope, so we don't load both AI versions into the same scope, but that's not great for performance (we'd need to clone the input data into multiple JS contexts, and they couldn't share JIT caches, etc), so I'd prefer to avoid it until it's possible to measure that it's not a significant problem. I'm not sure what else to try, though.
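A hedged sketch of the serialisation scheme described above - save an object's class name alongside its plain data, then on load look the constructor up by name in the (frozen, therefore trustworthy) global scope. The class, function names, and the plain-object stand-in for the real global scope are all inventions for illustration, not the engine's actual API:

```javascript
// Sketch: class-name-based serialisation against a frozen global scope.
function EconomyManager() { this.wood = 0; }
EconomyManager.prototype.gather = function (n) { this.wood += n; };

var globalScope = { "EconomyManager": EconomyManager }; // stand-in for the real global object
Object.freeze(globalScope);  // dynamic changes would break deserialisation

function serialise(obj) {
  // Record which class to reconstruct, plus the plain data fields.
  return JSON.stringify({ "class": obj.constructor.name, "data": obj });
}

function deserialise(str) {
  var parsed = JSON.parse(str);
  // Look the constructor up by name and rebuild the prototype link,
  // then copy the plain data fields back in.
  var obj = Object.create(globalScope[parsed["class"]].prototype);
  for (var key in parsed.data)
    obj[key] = parsed.data[key];
  return obj;
}

var em = new EconomyManager();
em.gather(50);
var restored = deserialise(serialise(em));
// restored.wood === 50, and its methods work again after the round trip
```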
  9. There isn't a directory called "binaries" - I think you need /usr/share/games/0ad/mods, then create the directory "china" and put the mod file in there. (Looking at the Ubuntu packages, the game seems to put data files in /usr/share/games/0ad/ (equivalent to binaries/data/ in SVN), and puts executables in /usr/games/ and /usr/lib/games/0ad/ (equivalent to binaries/system/ in SVN).)
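The steps above can be rehearsed as shell commands. This sketch uses a scratch directory as a stand-in for `/usr/share/games/0ad` (swap in the real path when installing for real), and `china.zip` is a purely hypothetical archive name - the post doesn't say what the mod file is called:

```shell
# Sketch of the mod install layout described above.
DEST="$(mktemp -d)"                   # stand-in for /usr/share/games/0ad
mkdir -p "$DEST/mods/china"           # create the "china" mod directory
touch "$DEST/mods/china/china.zip"    # put the mod file (hypothetical name) in it
ls "$DEST/mods/china"
```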
  10. Terrain analysis is expensive and preferably should only be done once regardless of the number of players, but if that's handled outside of AI scripts then I think there won't be any other significant computation that ought to be shared between scripts for efficiency - it's fine if they each run totally independently. Code should be shared between AI player implementations: some utility functions might be widely useful; some AI modules (e.g. an economy manager) might be reusable by multiple player designs; some AI scripts might simply be minor customisations of others. We need to be able to serialise the complete AI script state (for saved games, network sync stuff, etc), which may restrict the ways in which AI scripts can be written (e.g. the state can't contain closures, and global variables should not be modified).

An attempt at a design: Say I want to create an AI called Dummy Bot. I put all the files in a directory "mods/public/simulation/ai/dummybot/". First I create a file "dummybot.json" containing various metadata, like

```json
{
  "name": "Dummy Bot",
  "description": "An AI that does nothing very interesting.",
  "constructor": "DummyBotAI",
  ...
}
```

The game setup screen will search for simulation/ai/*/*.json to find the options it should offer to the player. Then I create "dummybot.js":

```javascript
function DummyBotAI(playerID)
{
    // The constructor for the AI player code

    // Initialise some stuff for testing:
    this.playerID = playerID;
    this.turn = 0;
    this.suicideTurn = 10;

    /* There are some read-only global values that can be used here or later,
    along the lines of:

    var g_EntityTemplates = {
        "units/celt_cavalry_javelinist_a": {
            "Health": { "Max": "130" },
            ... all the other stuff from the XML file ...
        },
        ...
    };

    var g_MapSettings = {
        "Difficulty": 0.5, // chosen by the user; possibly could be changed in the middle of a game
        "GameType": "conquest",
        ... all the other settings from the game setup screen etc ...
    };
    */
}

DummyBotAI.prototype.HandleMessage = function(game, entities, events, terrainAnalysis)
{
    /* This gets called once per simulation turn.

    'game' is like
        { "Time": 1.5, "Players": [ { "Resources": { "wood": 100, ... }, ... }, ... ], ... }
    'entities' is like
        { "10": { "Template": "units/celt_cavalry_javelinist_a", "Health": 100, "Owner": 2, ... }, ... }
    'events' is like
        [ { "Type": "EntityCreated", "Id": "10", ... }, { "Type": "PlayerDefeated", ... }, ... ]
    'terrainAnalysis' is not designed yet
    */

    var commands = [];

    if (this.turn == this.suicideTurn)
    {
        // Suicide, for no particular reason
        var myEntities = [];
        for (var ent in entities)
            if (entities[ent].Owner == this.playerID)
                myEntities.push(+ent);
        commands.push({"type": "delete-entities", "entities": myEntities});
    }

    this.turn++;
    return commands;
};
```

The engine will load this script. It will execute something equivalent to "var ai = new DummyBotAI(1)" (based on the "constructor" specified in the .json file), and then "var commands = ai.HandleMessage(...)" each turn (for each AI player) and push the commands into the command queue for the next turn.

Now let's say we want a new improved AI player based on the old one. Create superdummybot/superdummybot.json:

```json
{
  "name": "Super Dummy Bot",
  "description": "An AI that does nothing very interesting, but for longer.",
  "include": ["dummybot"],
  "constructor": "SuperDummyBotAI",
  ...
}
```

Then create superdummybot/superdummybot.js:

```javascript
function SuperDummyBotAI(playerID)
{
    // Call superclass constructor
    DummyBotAI.call(this, playerID);

    // Make this subclass super-strong
    this.suicideTurn = SUPER_LIFETIME;
}

// Inherit superclass's methods
SuperDummyBotAI.prototype = new DummyBotAI;
```

and superdummybot/constants.js:

```javascript
const SUPER_LIFETIME = 100;
```

The "include" line in the .json means that before loading any superdummybot files, the dummybot files must be loaded if they aren't already (so that "prototype = new DummyBotAI" doesn't happen before DummyBotAI was defined). (Everything just gets loaded into a single global scope, so the files can refer to each other's contents like this.) Most AIs will probably include "commonutils" or similar. An AI might have lots of .js files that define classes, which all get instantiated by the main AI class constructor. All the superdummybot/*.js files get loaded in an arbitrary order - it's rarely important to control the order they're loaded (e.g. this example works either way), and this avoids the need to make people explicitly list the order to load files. (I'm not confident this file/module loading thing is the best way to do it, but I think it's reasonably simple and sufficiently powerful for now - it could all be changed later if we find problems.)

I think this is probably enough of a design to start implementing things and get something primitive basically working, which hopefully shouldn't take long, and then I should have a better idea of how to break down all the remaining tasks.
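One side note on the inheritance line above, not part of the original design: "SuperDummyBotAI.prototype = new DummyBotAI;" runs the superclass constructor (with an undefined playerID) just to build a prototype object. A sketch of the ES5 alternative, which builds the prototype link without that side effect:

```javascript
// Sketch: inheriting without running the superclass constructor.
function DummyBotAI(playerID) {
  this.playerID = playerID;
  this.suicideTurn = 10;
}

function SuperDummyBotAI(playerID) {
  DummyBotAI.call(this, playerID);  // superclass constructor, run on purpose
  this.suicideTurn = 100;
}

// Object.create links the prototypes directly, so DummyBotAI is never
// called with an undefined playerID just to make the prototype object.
SuperDummyBotAI.prototype = Object.create(DummyBotAI.prototype);
SuperDummyBotAI.prototype.constructor = SuperDummyBotAI;

var ai = new SuperDummyBotAI(2);
// ai.playerID === 2, ai.suicideTurn === 100, and ai instanceof DummyBotAI
```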
  11. Referencing the main game's content is fine - you could make and sell a proprietary commercial mod if you wanted (though everyone would probably pirate it). The problem is just when looking at the game's CC-BY-SA model/texture/XML/etc files and modifying them or reusing any parts of them to make your new content, because then the new content must be released as CC-BY-SA too. The easiest solution is just to release everything as CC-BY-SA anyway, since there's no problem doing that for either new or derived work, unless you specifically want it to be proprietary instead of open.
  12. Sounds neat. (Maybe you could add a mirror for the download for people who want to try it out now, without waiting for ModDB to authorise the release?) That's not a slightly different license - it's a fundamentally different license. The NC and ND clauses mean it's not open source (see the definition, points 1 and 3), and would prevent it being distributed by most Linux distros (who only want open source content) or e.g. magazine coverdiscs (if such things still exist) or similar channels. Also it prevents you from basing any of your mod's content on the game's CC-BY-SA content (without permission from whoever originally made the game's content), since the SA clause means any derived work must be CC-BY-SA too, as well as preventing the game from ever making use of any of your content. It's about as incompatible as it's possible to be.
  13. Currently we just have unit AI, where each unit independently looks for e.g. nearby enemies to attack. We'll keep this for AI players - the player AI code will handle the higher-level tasks of moving groups of units into the right places and telling them what orders to carry out, while the unit AI code handles the low-level details of moving and gathering and hitting the target etc.

Hmm, interesting - it may be useful for human players, but probably not in the way you're thinking. Saving already works (it just needs testing and UI), so that won't be affected. Finding idle workers is trivial, and it wouldn't hurt to write that code twice. But maybe some terrain analysis stuff shouldn't be AI-only - e.g. the article in GPG3 about terrain analysis suggests using it to find forests (which may change over time), so that if the player right-clicks inside a forest their units can decide to gather trees from the edge of that forest. Similarly we could use forest area detection for hiding certain units. Also it can detect shore tiles, which a player's transport ships can search for when unloading. So maybe terrain analysis is a special thing that should be part of the common simulation code, not part of the AI, so it can be used by more than just the AI. Then the input to the AI scripts won't be raw terrain data - it'll be the output of the terrain analysis (a list of forests, towns, choke points, etc, and tile data annotated with shores and islands and hills etc). That takes some flexibility away from AI scripts (they'll all be given the same post-analysis data instead of analysing it themselves), but maybe not much (we can give them most of the raw terrain data too, in case they really want to do something unique), and I think it allows the design to be simplified in some ways.
So that's probably a good idea. AI can still use random numbers - we just need to make the random number generator synchronised between all machines (which we already do for Math.random() in the component scripts). There shouldn't be any other sources of non-determinism, so synchronisation shouldn't be any more difficult than it is for the other simulation components. Sending AI commands over the network means slightly greater latency (the AI's computed commands can't be executed until they've propagated across the network, rather than running immediately on the next turn) and greater bandwidth requirements, and it makes it harder to save multiplayer games (the AI state needs to be saved, but not everyone will have been computing the AI state), and I'm not sure if there are advantages to make that worth the cost.
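The synchronised random number generator mentioned above can be sketched with a simple seeded LCG. This is hypothetical - not the engine's actual Math.random replacement - but it shows the principle: every machine seeds the generator identically, so every machine draws the same sequence and the simulation stays in sync.

```javascript
// Sketch: a deterministic seeded PRNG (a linear congruential generator
// with the Numerical Recipes constants; hypothetical, for illustration).
function SeededRandom(seed) {
  this.state = seed >>> 0;  // keep state as an unsigned 32-bit integer
}
SeededRandom.prototype.next = function () {
  // state = (state * 1664525 + 1013904223) mod 2^32
  this.state = (Math.imul(this.state, 1664525) + 1013904223) >>> 0;
  return this.state / 4294967296;  // uniform in [0, 1)
};

// Two machines with the same seed draw identical sequences forever:
var host = new SeededRandom(12345);
var client = new SeededRandom(12345);
// host.next() === client.next(), on every draw, on every machine
```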
  14. This article is quite interesting on the value of a simple approach to processing game statistics.
  15. Player AI seems pretty useful for testing the game, and I think we've got sufficiently stable gameplay to make it feasible to start now, so I'd like to do that. This post is an overly wordy attempt to understand and explain my initial thoughts for the design. Comments welcome.

Currently I'm not particularly interested in the details of designing a competitive AI player in terms of strategy etc - I want to focus more on the interface design, sort out how AI will interact with the rest of the engine, and develop a simplistic AI player to prove the system. The goal is that it should be quite straightforward for anyone to develop their own AI within the system, so we can get people trying a range of approaches (from hard-coded barely-dynamic build orders (like in Bos Wars (near the bottom)) to complex reactive algorithms).

On open source RTS games, it looks like people like writing AIs in lots of different languages (e.g. Spring has at least C, C++, Java, Python, C#). I think we should focus on only supporting JS: it's easy to distribute (just drop in the files, and it doesn't need any extra language runtimes in the engine), safe to run (we can download AIs automatically and not worry about security), fast (with JIT, and typed arrays to save memory), and can be automatically saved and loaded (for saved games and network syncing). C++ fails at distribution (AIs would have to be bundled with engine releases) and at security, and saving/loading is usually a pain, so it's probably not worth supporting that natively. As much AI code as possible should be in scripts rather than in the engine, because script code is easier to write and more flexible, and I don't think there's a good reason to do things in C++ instead.

(If it's not much extra effort, it'd be nice to support people doing AI research in other languages (as opposed to making AIs for real players to use).
I think that should be fairly easy with IPC (like with the Broodwar proxy): run the AI in a separate process, have it connect to the game engine over a TCP socket, then send and receive data with some kind of JSON-based API that's equivalent to the normal internal JS<->engine interface. That means we just need to document the API and expose a socket, and people can use whatever language they fancy, and we can stay focused on JS.) We shouldn't run all AI code as a part of the normal simulation turn update function. If the renderer is going at 60fps, but the AI spends 100ms computing stuff, it'll be noticeably jerky, so we'd have to be careful to ensure the AI never takes more than a few milliseconds per update. It seems better if we run it asynchronously, so it can be in parallel with the renderer - if it takes 100ms then that's okay since it won't delay anything. So the AI should take a read-only snapshot of the world state at turn n, then run in a background thread to compute a set of commands to execute in turn n+1. (Then we can benefit from multi-core CPUs, too.) Output should be a set of commands exactly like what the GUI generates (via PostNetworkCommand), so we can reuse the same command-processing code and ensure the AIs don't accidentally cheat or break the game state. I think we probably want AIs to run on every player's computer in multiplayer games, rather than just running on one and sharing its command output over the network, but if AI is expensive and some players have very slow CPUs and fast networks then maybe that's a tradeoff to explore later (the design shouldn't make it impossible). I think one main design concern is exactly what the input to the AI scripts should be. 
Since AIs will run in threads, and since shared-memory concurrency is evil, they won't be able to directly query the simulation system to pull details about the current state (which seems to be what most games do) - at the end of a turn we'll have to push the relevant data to them. There's potentially quite a lot of data (a hundred thousand map tiles, thousands of entities, maybe 8 AI players), so I expect we'll have to be at least a bit careful about performance.

Probably best to start from what features the AI might want to implement:

* Planning economic and military strategies - if the AI sees the enemy has lots of cavalry, it can search for counter unit types, find the spearmen, and recognise it needs to build a barracks to train them. (The AI scripts shouldn't have to hardcode these details, else they'll break when we change the unit design or add mods.)
* Building placement - find free space, with various constraints (e.g. near resources, near friendly units, far from enemy units, a certain distance from friendly buildings of the same type, etc).
* Resource gathering - needs to know where the resources are, so it can choose where to construct dropsites and send workers.
* Finding idle workers - needs to detect them and give them something to do.
* Terrain analysis - finding islands, hills, choke points, forests, towns, etc, for use with strategic decision-making.
* (I can't think of anything else relevant here.)

So we could do with the following data:

* Static data about entity templates: what can be trained/built, what role it can fulfil, what it will cost, what the prerequisites are (builders, buildings, phases), etc. This would basically be the collection of all entity template XML files.
* Basic game state: current resource counts, pop counts, time, list of players, diplomacy status, etc. Also a list of recent special events: defeats, tributes, chat messages, etc.
* Entity data: type, owner, location, health, stamina, current task, resource count, construction percentage, etc, for every entity in the world.
* Maybe a list of entity events: entities newly trained/built, entities destroyed, entities garrisoned, recent attacks, etc. These could mostly be derived from the complete set of entity data, but that would be relatively painful, and the engine already has the event data, so it's better to expose it as events.
* A grid approximation of obstructed tiles (like what the pathfinder uses), for finding free space. (We don't need the precise geometric obstruction shapes - the tile approximation is adequate and simpler, probably.)
* Other terrain tile data: heights, wateriness, movement speeds, etc, for terrain analysis.
* FoW/SoD grid data, so the AI can respect visibility constraints (unless it wants to cheat). Rather than doing this explicitly, it could be merged into e.g. the obstruction/terrain/entity data so we don't tell the AI about anything it can't see, but that makes it harder to share the obstruction/etc data between AI players, and it's probably cleaner to keep the different data sources separate.

I think that's about all, and it doesn't sound too bad. So the basic idea is that the game engine will gather some data (entity templates) at the start of the game, gather some other data (e.g. entity states) after every simulation turn, and gather some other data (e.g. terrain states) only when necessary (e.g. not unless it's changed, and not more than once every few seconds); then it will send the data to the AI thread, which will process it and produce a list of commands before the start of the next turn.

Next: need to design the precise API, and the way of writing and executing AI scripts.
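To make that data list concrete, the per-turn input object might look something like the sketch below. Every field and function name here is invented for illustration; only the categories of data (templates, basic game state, entities, events, terrain grids) come from the list above. It also illustrates the point from earlier in the thread about sharing one read-only input across all AI player contexts rather than duplicating it.

```javascript
// Illustrative shape for the per-turn AI input; all names are invented.

function buildAIInput(game) {
  const input = {
    templates: game.entityTemplates,   // static; gathered once at game start
    gameState: {                       // cheap; gathered every turn
      time: game.time,
      players: game.players.map(p => ({ id: p.id, res: p.resources, pop: p.popCount })),
    },
    entities: game.entities,           // full per-entity state, every turn
    events: game.recentEvents,         // trained/destroyed/attacked since last turn
    terrain: game.terrainGrids,        // heights, obstruction, FoW; refreshed rarely
  };
  // One frozen object shared by all AI player contexts, instead of copying
  // thousands of entities once per AI. (Object.freeze is shallow - a real
  // engine would need to protect the nested data too.)
  return Object.freeze(input);
}
```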
  16. Start the game and then press alt+enter
  17. I think you shouldn't use libtxc_dxtn at all, since it's old and unmaintained (so much so that the official page is a 404 now), and the game doesn't need it any more - we'll automatically compress and decompress textures as needed. The only difference it makes is to a warning message when you start the game, and I think we should just disable that warning for the next release since it's not helpful (#712). So it'd be easier and better to just skip installing that library entirely.
  18. That doesn't look like a circular map - it looks like a thin rectangular map displayed in a circular area. It seems what's common is a desire to rotate the minimap so it always matches the camera orientation - AoM does it by rotating the whole square minimap, CoH does it by rotating the rectangular map within a circular area (which can waste a lot of screen space), and AoE3 (and now 0 A.D.) does it by rotating a circular map in a circular area.
  19. Is the mouse movement slow in the menu, or just in the game? Do you get better performance if you disable shadows and/or fancy water (via the menu button in the top-right of the game screen, then 'options', I think)? I think the game ought to work with the free drivers, and they should be doing hardware acceleration, but we might be hitting a few features they don't implement properly.
  20. The problem I had personally with the gathering aura concept is that I couldn't see how to make it really clear to players. If they tell their people to gather some trees and it just pops up an error message saying "Sorry, you need to build a wood dropsite within an unspecified distance of these trees before I'll let you gather them", that's not much fun. If they build a dropsite and the units chop down all the trees within range, then stand around idle for five minutes because all the remaining trees are one tile too far away, without the player even noticing, that's not much fun either.

Shuttling is much easier for new players (since they don't have to bother creating any dropsites at all - resources just get shuttled back to their civ center and take longer), and it also makes it easier for players to learn more advanced strategy (if they see their units taking 30 seconds to return resources, the advantages of building a dropsite are immediately and visually very obvious), and we don't need to add any artificial graphical effects to represent maximum ranges or gathering efficiencies or anything. Nobody seemed to strongly disagree with that, so that's what we've got for now. And now that we've got it, I think it makes the game look more dynamic, alive and interesting, with the units constantly moving around by themselves, which is nice.

(As far as I could tell, the main reason for the original gathering aura concept (~7 years ago) was to save on pathfinding cost. But pathfinding is pretty cheap in this case, since the units are sparse and rarely colliding (unlike e.g. combat, which is much more expensive), so I don't think that's a relevant reason.)

If you have ideas for how to solve the problem of making resource auras clear to users, it probably shouldn't be technically hard to implement them as an alternative to the shuttling system.
  21. I'd vote against Ruby, primarily because I don't know it, don't want to bother learning it, and am terrible at relinquishing control of things. Also I expect I'll end up hosting and sysadmining it myself, so I'd want to understand it regardless of who writes it. Everyone but me hates Perl, JS doesn't have any mature web frameworks, and PHP is a mess with no redeeming features that I've ever heard of, so that leaves Python by process of elimination. (Also, Python is a nice language with good libraries.)

SQLite is worse than I expected for storage. There's necessarily a transaction each time a piece of user-provided data is saved, and SQLite flushes to disk on each commit, so I can only process about 5 POST requests per second on my local machine (and some of them fail, complaining the database is locked). With flushes disabled I can get about 100/sec, but then the database will probably be corrupted whenever the server crashes, and there are no recovery tools. Also, any slow read query (e.g. backing up the database, extracting the data for further processing, or even just looking at the data in an admin interface) will block any writes, which is not good when the aim was to save data quickly. MySQL with InnoDB with innodb_flush_log_at_trx_commit=2 gets around 100/sec, and should recover from crashes; and it can seemingly execute queries concurrently with inserts, so new data shouldn't get held up. So that's probably better. (I imagine Postgres would work similarly, but I already run a MySQL server, so it's easier to reuse that.)

I'm thinking data would be like:

{
  "version": "8832-release", // or "custom build" if it's SVN since we can't tell the revision
  "generated_date": "2010-12-12T03:14:32Z",
  "data_type": "hwreport",
  "data_version": 1,
  "data": ... // structure depends on data_type and data_version
}

The user will typically have more than one piece of data like this, and will upload them individually.
Each contains one type of data (plus a version number in case we change the structure and want to tell the difference). For the "hwreport" type, the "data" field can be like:

{"os_unix":1,"os_linux":1,"os_macosx":0,"os_win":0,"gfx_card":"Tungsten Graphics, Inc ","gfx_drv_ver":"OpenGL 2.1 Mesa 7.9","gfx_mem":0,"gl_vendor":"Tungsten Graphics, Inc","gl_renderer":"Mesa DRI Mobile Intel® GM45 Express Chipset GEM 20100330 DEVELOPMENT ","gl_version":"2.1 Mesa 7.9","gl_extensions":"GL_ARB_copy_buffer [...] GL_OES_EGL_image","video_xres":1024,"video_yres":768,"video_bpp":24,"uname_sysname":"Linux","uname_release":"2.6.35-gentoo-r5","uname_version":"#1 SMP Wed Sep 1 11:53:07 BST 2010","uname_machine":"x86_64","cpu_identifier":"Intel Pentium Dual T3400 @ 2.16GHz","cpu_frequency":-1,"ram_total":3924,"ram_free":2221}

which is what we already construct for our hwdetect.js script (and basically the same as system_info.txt). The game would generate and transmit that data once a month (or whatever) so we can usually tell if it changed. Other data types can be added whenever we feel like it.
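Since hwdetect.js already builds that data in script, wrapping it in the envelope is a one-liner. The sketch below is illustrative, not actual game code: makeReport is an invented name, and only the envelope fields (version, generated_date, data_type, data_version, data) come from the post.

```javascript
// Hedged sketch: wrap one piece of report data in the JSON envelope.
// makeReport is an invented name; the fields are the ones described above.

function makeReport(gameVersion, dataType, dataVersion, data) {
  return JSON.stringify({
    version: gameVersion,      // e.g. "8832-release", or "custom build" for SVN
    generated_date: new Date().toISOString().replace(/\.\d+Z$/, "Z"),
    data_type: dataType,       // e.g. "hwreport"
    data_version: dataVersion, // bumped whenever the data structure changes
    data: data,                // structure depends on data_type/data_version
  });
}
```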
  22. What framerate do you get? (Shift+F should display it in the top left of the screen, I think.)
  23. This is a driver bug - we ask it to enable S3TC texture compression, and it says okay, but then it doesn't actually support it. I think you can fix it by creating the file ~/.config/0ad/config/local.cfg containing the line

nos3tc = true

which will stop the game trying to use S3TC. (It'll show a performance warning message when starting the game, which you should ignore.)
  24. I was thinking in a bit more detail about how this could be implemented. For simplicity and flexibility, I think the basic idea should be that the client sends the server an HTTP request containing a JSON document and optionally some binary files, with a pseudonymous user ID and timestamp. The binary files are needed for error reports with crash logs, or other relatively large pieces of opaque textual data (e.g. simulation command logs). The JSON document is an unconstrained structure, and depends on what type of data is being transmitted (error reports, hardware settings, various types of gameplay stats, etc) and on what game version the user is running (we'll still accept data from users on old versions and SVN versions). The server will blindly store all this data.

About scalability: currently we get something on the order of 10K downloads per month. Assume they all successfully install and run the game, and use it long enough to send us 10 pieces of data (saying what maps they've played etc). In total that's about 2 pieces of data per minute, and about a million over a year. If each is maybe 1KB, that's 1GB per year. That all seems fairly easy to cope with, and even if we become 10x more popular it shouldn't be much of a worry if we have a sensible storage architecture.

Then we need to analyse the data, which is probably the hard part. I don't know exactly what reports we'll need - they'll probably be relatively arbitrary queries over the JSON data, e.g. counting the number of users over the past month who had <10FPS on the main menu, grouped by their hardware report's GPU, or whatever, to let us search for patterns of problems. It doesn't matter if the reports lag behind newly reported data by a few hours. So I think it'd make sense to store the incoming data in a simple non-queryable database (e.g. SQLite with the JSON in a text field, with binary files on the filesystem), then batch-convert it into a queryable database (extract the records of a certain type for a certain time period, then parse the JSON and push the interesting fields into a new SQLite/etc database with indexed columns), so we can easily throw away the queryable database and redesign it without disturbing the data collection. Stick on a simple web front-end (probably using Django, because Python is less objectionable than most other languages) with some graphs and it should be alright.

For users' privacy, I currently think we shouldn't expose the raw data (particularly crash logs, which may contain random RAM content) to public users, but aggregated data should be public as far as possible. Seems like it should be reasonably straightforward...
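The two-stage design can be sketched in a few lines. This is written in JavaScript only for consistency with the rest of the thread - the post proposes Python/Django with SQLite/MySQL - and rawStore merely stands in for the non-queryable database; every name here is invented.

```javascript
// Rough sketch of the pipeline: blindly store raw JSON on ingest, then
// batch-extract queryable fields later into a throwaway indexed database.
// All names invented; rawStore stands in for the non-queryable store.

const rawStore = [];

function ingest(userId, timestamp, jsonText) {
  // Stage 1: accept and store blindly - never reject on content, so data
  // collection is undisturbed by schema changes or bad clients.
  rawStore.push({ userId, timestamp, jsonText });
}

function extract(dataType) {
  // Stage 2 (run in batch; may lag by hours): parse the stored JSON and pull
  // interesting fields into rows for the queryable database, which can be
  // thrown away and rebuilt at any time.
  const rows = [];
  for (const rec of rawStore) {
    let doc;
    try { doc = JSON.parse(rec.jsonText); } catch (e) { continue; } // tolerate garbage
    if (doc.data_type !== dataType) continue;
    rows.push({
      user: rec.userId,
      version: doc.version,
      gpu: doc.data && doc.data.gl_renderer,
    });
  }
  return rows;
}
```

The design choice this illustrates is that only stage 2 needs a schema; redesigning the reports just means re-running extract over the untouched raw store.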
  25. Fixed some. That warning is intentionally kept, to remind me that there's missing functionality in that function.