Ykkrosh

WFG Retired

Everything posted by Ykkrosh

  1. Looks like it's drawing shadows for transparent textures as if they were fully opaque. I've no idea why, and don't have any suggestions other than updating graphics drivers.
  2. Yeah, a lobby server would definitely be useful. I haven't thought about the technical design much, but I'd guess it may be best for the server to provide a JSON-over-HTTP API - that way it can be used easily by the game engine (we just need to add libcurl and give the GUI scripts access to it), and could also be used by web-based interfaces or other tools if people find those more convenient (a rough sketch of the client side is below). There are lots of other things we could do with a central server too - update notifications, firewall checking, NAT punchthrough, collecting gameplay statistics, etc.

Ideally we'll have automatic downloading of new maps, and they'll be fairly small files, so that shouldn't be a problem. (We need to worry a lot about security before we implement that, though.)

Is this information available? I thought people usually just waited until there were enough players and/or they got fed up, and then started the game immediately - they don't explicitly plan to start the game in n seconds, so there's no timer to show to other players.

The engine design makes that technically feasible, and it would be a good thing to have, though it'll probably take a lot of effort to make it work reliably.

I think testing isn't the highest priority now - we're already aware of a lot of errors and missing features and have plenty to work on - but it'll become more important in the hopefully-not-too-distant future, once we've fixed the obvious problems, so that testers can find new, subtler problems. It would be good to have a system set up by that point so that people can test as easily as possible.
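To illustrate the engine side of that idea, here is a minimal sketch of fetching a JSON game list with libcurl - the server address and the /games endpoint are made up for the example, and this is not real 0 A.D. code:

```cpp
// Rough sketch (hypothetical): fetch a JSON game list over HTTP with libcurl.
#include <curl/curl.h>
#include <iostream>
#include <string>

static size_t OnData(char* ptr, size_t size, size_t nmemb, void* userdata)
{
    // Append each chunk of the HTTP response body to a std::string.
    static_cast<std::string*>(userdata)->append(ptr, size * nmemb);
    return size * nmemb;
}

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL* curl = curl_easy_init();
    if (!curl)
        return 1;

    std::string response;
    curl_easy_setopt(curl, CURLOPT_URL, "http://lobby.example.org/games"); // hypothetical endpoint
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, OnData);
    curl_easy_setopt(curl, CURLOPT_WRITEDATA, &response);

    CURLcode res = curl_easy_perform(curl);
    if (res == CURLE_OK)
        std::cout << response << "\n"; // JSON game list, for the GUI scripts to parse
    else
        std::cerr << curl_easy_strerror(res) << "\n";

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return 0;
}
```

The returned JSON string would then be handed to the GUI scripts for parsing and display.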
  3. Should be fixed in r8026. Alternatively you could recompile wxGTK with USE=tiff
  4. The error message isn't in that part of the log, it'll be somewhere higher up - could you post more of it? What version of x11-libs/wxGTK do you have installed?
  5. I think some people have compiled on Slackware before, but I don't know exactly what they had to do. The build instructions page has all the information we have.
  6. #144 - having a solution to that would be good. Extending the map past the playable area seems better than the abrupt edge we currently have. I'd like to have that eventually, but it might take a bit of work - we'd need to update the camera movement code, the pathfinder (we don't want units walking outside the bounds), the map editor (it needs to show designers where the map's edge is), etc.
  7. Hmm... Does it look better if you add "noframebufferobject=true" to the config file (see binaries/data/config/default.cfg)? I guess we need someone with an ATI card to debug it and work out what's triggering the problem.
  8. Black background should be fixed in SVN now.
  9. Do you have enough free disk space on D? Do you have a virus scanner that might be interfering with the installation? (I can't think what else the problem might be.)
  10. Can you run it in the debugger and get the call stack?
  11. The idea is that you can hold shift and click a number of times to queue larger batches. The batch only gets completed (and added to the training queue) when you release the shift key. (I don't like that idea, but it's problematic to increase batches once they're already added into the queue (because we want time/cost discounts for larger batches), which is why we do this delayed-addition thing, and I'm not really sure what the best solution is. A hypothetical sketch of the delayed addition is below.)
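For illustration, a hypothetical sketch of that delayed-addition behaviour, with a made-up discount curve standing in for whatever time/cost discounts we'd actually use (none of these names are real engine code):

```cpp
// Hypothetical sketch: shift-clicks grow a pending batch, and only on shift
// release is the batch priced (with a size discount) and pushed onto the queue.
#include <cmath>
#include <cstddef>
#include <iostream>
#include <queue>

struct Batch { int count; float totalCost; };

class TrainingQueue
{
public:
    void OnShiftClick(int unitsPerClick) { m_PendingCount += unitsPerClick; }

    void OnShiftRelease(float unitCost)
    {
        if (m_PendingCount == 0)
            return;
        // Made-up discount curve: total cost grows sublinearly with batch
        // size, which is exactly why a batch can't simply be enlarged after
        // it has been queued - its price would have to be recomputed.
        float total = unitCost * std::pow(static_cast<float>(m_PendingCount), 0.9f);
        m_Queue.push({ m_PendingCount, total });
        m_PendingCount = 0;
    }

    std::size_t QueuedBatches() const { return m_Queue.size(); }

private:
    int m_PendingCount = 0;
    std::queue<Batch> m_Queue;
};

int main()
{
    TrainingQueue q;
    q.OnShiftClick(5);       // two shift-clicks accumulate...
    q.OnShiftClick(5);
    q.OnShiftRelease(50.0f); // ...into one discounted batch of 10
    std::cout << q.QueuedBatches() << " batch queued\n";
}
```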
  12. Hmm, I'm afraid I'm not sure what to try then (other than installing an older version of Boost if that's possible). It looks like we'll have to deal with this API change some time soon, but I can't easily install the latest Boost to try it and fix it myself right now.
  13. That's the textureconv tool, which isn't particularly good at compression quality and should preferably be avoided. It could fairly easily be replaced with a tool that has the same effect but uses the NVIDIA texture tools library for compression; that wouldn't solve the problem of wanting to change the mode/mipmapping/filtering/etc. after the artist has uploaded the file, though, so I'd still prefer the conversion to be run automatically by the engine rather than being run by artists. Most of the GUI textures appear to be compressed.
  14. Looks like it's probably related to the version 3 changes to Boost.Filesystem in 1.44. But it says version 2 is still the default in that release, so it ought to still work fine. Does it work any better if you add #define BOOST_FILESYSTEM_VERSION 2 into source/lib/external_libraries/boost_filesystem.h just before the "#include" line? (See the snippet below.)
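For clarity, the suggested edit would look something like this (the rest of the header is elided):

```cpp
// source/lib/external_libraries/boost_filesystem.h
#define BOOST_FILESYSTEM_VERSION 2 // request the version-2 API under Boost 1.44
#include <boost/filesystem.hpp>
```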
  15. What version of Boost do you have installed?
  16. That's one place where I imagine texture atlases would be helpful - we can create the icons as separate files, then some conversion process will pack them together into a single texture for more efficient rendering, once we care about that level of optimisation. That would still be possible when we only have .dds textures (you can pack them together without having to recompress, as long as they're the same DXTn format - a rough sketch of why is below), but it's probably much easier to add if we've already got an automatic texture-converting process set up.
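To show why repacking without recompression works, here's a rough sketch of a hypothetical block-copy helper - DXTn data is just a grid of independent 4x4-pixel blocks, so same-format textures can be blitted at block granularity:

```cpp
// Sketch (hypothetical helper, not engine code): copy one DXT-compressed
// texture into a larger atlas without recompressing. DXTn stores independent
// 4x4-pixel blocks (8 bytes each for DXT1, 16 for DXT3/5), so if both
// textures use the same format and the destination offsets are multiples of
// 4 pixels, repacking is a pure block copy.
#include <cstddef>
#include <cstring>
#include <vector>

void BlitDxtBlocks(const std::vector<unsigned char>& src,
                   std::size_t srcWidth, std::size_t srcHeight,
                   std::vector<unsigned char>& atlas, std::size_t atlasWidth,
                   std::size_t dstX, std::size_t dstY,
                   std::size_t blockBytes /* 8 for DXT1, 16 for DXT3/5 */)
{
    std::size_t srcBlocksX = srcWidth / 4, srcBlocksY = srcHeight / 4;
    std::size_t atlasBlocksX = atlasWidth / 4;
    for (std::size_t by = 0; by < srcBlocksY; ++by)
    {
        // Copy one full row of 4x4 blocks into the atlas at the block offset.
        const unsigned char* from = &src[by * srcBlocksX * blockBytes];
        unsigned char* to = &atlas[((dstY / 4 + by) * atlasBlocksX + dstX / 4) * blockBytes];
        std::memcpy(to, from, srcBlocksX * blockBytes);
    }
}
```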
  17. I have some (minor) concerns about the way we're using DDS files. Currently they're compressed using a variety of tools, with a variety of options for mipmap filtering, and only the lossily-compressed version is stored in SVN. There are a few things I'd like to be possible:
     * Files should be compressed using the best available compression algorithm. (The textureconv tool supplied with the game, in particular, is far from optimal. Some of the textures in the game have ugly compression artifacts, most visibly on the GUI buttons (button_wood.dds).)
     * We shouldn't have people failing to get their textures into the game because they haven't got the right DDS exporter or the right settings for it. (That seems to have been a common issue recently.)
     * We should reliably generate mipmaps whenever they're needed. (A few of our current textures don't have mipmaps, which slows down loading and causes serious rendering errors on Linux systems that can't automatically generate compressed mipmaps.)
     * We should be able to modify the mipmap filtering - in particular, it'd be nice to remove our current LOD bias hack (which was added to make textures look sharper, but sometimes causes ugly texture swimming effects) and instead sharpen the mipmaps so that terrain and objects still look good at a distance. This needs to be done consistently for all textures, and we need to be able to experiment and tweak the settings.
     * We should be able to easily modify textures (e.g. add a new icon to an icon sheet) without losing quality every time they are saved.
     * We might want to automatically combine some files into a texture atlas to improve renderer performance, so there won't be a strict 1:1 correspondence between source texture files and the files loaded by the engine.
I really want to keep the process simple on the artist side, particularly for modders - they shouldn't be expected to run some special art pipeline tool before they can see their stuff in the game. That means conversions should either be automatic at runtime, or shouldn't be needed at all. (It seems quite common for commercial games to have complex pipelines for preprocessing data before it reaches the game engine, but they don't care about giving all their tools to modders, and they have plenty of network bandwidth to share frequently-updated large binary files. I think our requirements are simple enough that we can get away with not doing that.)
A proposal:
     * Only store lossless files (.tga, .png) in SVN.
     * Always load and render the uncompressed textures when running the game from SVN. (Developers should probably have enough VRAM that it'll be okay - we only use about 20-30MB on most maps with the initial units, and that would expand by about 4x without compression.)
     * Alternatively, the game could compress and cache textures at runtime, so that the first load takes longer (about thirty seconds on a dual-core Core 2) but subsequent loads run at normal speed and use less VRAM. (This is the same way it converts and caches .dae and .xml files into a more efficient runtime format on first load - see the sketch after this list.)
     * When packaging the game for distribution, automatically compress all the textures (using some metadata stored in text files to say what DXTn mode, mipmapping, etc. to use) and only distribute the compressed form, so that normal players will get the efficiency benefits, fast loading times and small downloads.
Benefits:
     * We'll have the uncompressed textures readily available, so the lossy compression will be easily repeatable and tweakable without compounding any quality loss. This will let us fix the problems with compression algorithms, mipmapping and filtering, improving the game's visual quality.
     * It'll be easier for artists and modders to get textures into the game.
Drawbacks:
     * It'll take some coding effort (maybe a week) to get the compression tools integrated into the engine and the packaging process.
     * It'll take some work to replace the lossy .dds files in SVN with the original lossless versions. We don't have to do this immediately (the engine can continue loading the old .dds files until they're replaced) or for every texture (we might have lost some of the uncompressed versions), but the goal should be to eventually fix as many textures as we can.
     * If the game uses the uncompressed textures directly: it'll typically run at a lower framerate when run from SVN (particularly on lower-end hardware), and lossy compression artifacts won't be visible when testing the game.
     * If the game automatically compresses: it'll take a long time the first time you start the game from SVN, while it's compressing and caching all the visible textures. (I think I prefer this option, since it's only a small short-term pain with long-term benefits.)
     * SVN checkouts will be a bit bigger. (If we use .png then it'll probably add about 40MB.)
Thoughts?
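To make the compress-and-cache option concrete, here's a minimal sketch of how the lookup could work - none of these names are real engine APIs, and the hashing and compression are stand-ins:

```cpp
// Hypothetical sketch: derive a cache key from the source file and its
// conversion settings, and only run the slow DXT compression when no
// cached .dds exists yet.
#include <cstdint>
#include <filesystem>
#include <functional>
#include <string>

namespace fs = std::filesystem;

// Stand-in key: real code would hash the file contents plus the metadata
// (DXTn mode, mipmap filter, etc.); this just combines path, size and settings.
static std::string CacheKey(const fs::path& src, const std::string& settings)
{
    std::size_t h = std::hash<std::string>{}(src.string() + "|" + settings)
                  ^ std::hash<std::uintmax_t>{}(fs::file_size(src));
    return std::to_string(h);
}

// Placeholder: real code would invoke a DXT compressor (e.g. NVIDIA's
// texture tools) here; a plain copy stands in so the sketch compiles.
static void CompressToDds(const fs::path& src, const fs::path& dst)
{
    fs::copy_file(src, dst, fs::copy_options::skip_existing);
}

fs::path GetOrCreateCachedTexture(const fs::path& src, const std::string& settings)
{
    fs::path cached = fs::path("cache") / (CacheKey(src, settings) + ".dds");
    fs::create_directories(cached.parent_path());
    if (!fs::exists(cached))
        CompressToDds(src, cached); // slow on first load only
    return cached; // later loads hit the cache and start at normal speed
}
```

Keying the cache on the settings as well as the file would mean that tweaking the DXTn mode or mipmap filter automatically invalidates stale entries.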
  18. I think this description is accurate - that .psa file has 29 bones, 97 frames, and BoneState is 7 floats, so that's 29 × 97 × 7 × 4 bytes = 78764 bytes (plus 37 for the header), which seems to match the file. (There's no mesh data in a .psa file.)
  19. I tried one example (hele_gym_a.dae) in Blender 2.53 and it looks like it dislikes the <library_effects> - if I delete that then it imports fine. I've no idea why that happens. It'd probably be good to investigate Blender's Collada plugin and work out where the problem is (since it may be a fixable bug in the importer).

m_tunic_long.dae works if I change the TEXCOORD inputs to say set="0" instead of set="1". I don't know whether that should be fixed in our data or in the importer. With that fix, the skeleton and skin import properly - I can rotate a bone and the mesh deforms properly, so that's good. (I haven't tried exporting back into the game.)

For .pmd files, the work here might be useful for at least static models (buildings etc).

Currently that's non-public since it's huge (~6GB when I last checked) and mixed in with a load of copyright-infringing reference images and suchlike. Maybe we should spend some time working out what files can be released and then put them up on a (non-SVN) server for anyone who's interested.
  20. Do we ever want the camera to look above the horizon and see the skybox? It'd be easy to simply disable the skybox and fill the background with black instead. (Water reflections would still render the skybox as normal.)
  21. Ah, glad it works now. Do we need some documentation to prevent other people doing it wrong? If you register on Trac and then attach it to the ticket, that'd probably be best.

Skeletal meshes and animations should be possible (I think), but not easy. One problem is that the skeleton hierarchy gets flattened in the PMD/PSA files - you'd probably need to check the <standard_skeleton>s in binaries/data/mods/public/art/skeletons/skeletons.xml for the hierarchical version, then figure out which skeleton each file uses (they're probably almost all "Standard biped", but some might differ, and the files don't actually say what skeleton they are), then unwrap the bone data arrays back into the nested structure (a rough sketch of that step is below). Then we still need all the skin weights and bind pose stuff. If you check something like meshes/skeletal/f_tunic.dae then I think everything there (except the materials etc) is needed.

Hmm... Looks like there are only 39 skeletal .pmd files. Do we have all the original .max files, so that someone could load them and export as .dae? Maybe that'd be less effort than writing the conversion tool for non-static meshes. Similar question for .psa files, though I see 100 of those.
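As a starting point for the unwrapping step, here's a hypothetical sketch that rebuilds parent links by walking a skeleton hierarchy depth-first - it assumes the PMD flattening order matches a depth-first walk of the skeletons.xml hierarchy, which would need to be verified against the exporter:

```cpp
// Hypothetical sketch: walk the nested <standard_skeleton> bones from
// skeletons.xml depth-first and record each bone's parent index, so the
// flattened PMD/PSA bone arrays can be mapped back onto the hierarchy.
#include <string>
#include <vector>

struct SkeletonNode            // one <bone> element and its nested children
{
    std::string name;
    std::vector<SkeletonNode> children;
};

struct BoneDef { std::string name; int parent; }; // parent == -1 for the root

static void Flatten(const SkeletonNode& node, int parent, std::vector<BoneDef>& out)
{
    int index = static_cast<int>(out.size());
    out.push_back({ node.name, parent });
    for (const SkeletonNode& child : node.children)
        Flatten(child, index, out);
}
// If the depth-first assumption holds, the resulting BoneDef order lines up
// with the flattened bone arrays, giving each bone the parent needed to
// rebuild the nested Collada node structure.
```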
  22. That's quite strange... What graphics card do you have? Have you updated the graphics drivers recently? (If not, it might be good to try that.) Could you open the game's logs folder (either run OpenLogsFolder.bat, which is in the location where you installed the game, or enter "%appdata%\0ad\logs" in the location bar in Explorer) and upload system_info.txt?
  23. (By "will" I mean "really should". I don't see why it'd fail - if you could post the problems you get (maybe in a separate thread) it'd be good to try debugging that.)
  24. The .dae files will be readable by any 3D software, so there shouldn't be any need to do anything software-specific. What I might like to do is use this tool to convert all the .pmd files that are currently in the game (and also convert .psa), and then we won't have any more .pmd files and can throw the tool away. Then the engine will only ever be loading .dae files, and we won't have to bother maintaining compatibility with the old file formats, which would make it a bit easier to change our graphics engine (to use a more efficient mesh/animation format or whatever). (The tool would have to support skeletal meshes before we could convert everything, and that doesn't sound trivial. If the current one works well for static meshes then perhaps it could be uploaded (as a Trac attachment or something) for people who want to play with it now?)

It'd be possible to approximate smoothing groups based on the normals: if two triangles share an edge, and have the same normals for the shared vertexes, then they may be in the same smoothing group; you then search for the largest sets of triangles that may all be in the same group as each other (a rough sketch is below). But I don't know if smoothing groups can be represented in Collada files. And can't 3ds Max do this automatically? From the documentation it sounds like the Collada importer has a smoothing-group option that should recreate them.
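For anyone who wants to experiment, here's a rough sketch of that normal-matching idea using union-find - note it greedily merges adjacent triangles rather than truly searching for the largest consistent sets, and all the names are made up:

```cpp
// Rough sketch: merge triangles that share an edge and agree on the normals
// at both shared vertices; connected components approximate smoothing groups.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <map>
#include <numeric>
#include <utility>
#include <vector>

struct Vec3 { float x, y, z; };

static bool NearlyEqual(const Vec3& a, const Vec3& b)
{
    return std::fabs(a.x - b.x) + std::fabs(a.y - b.y) + std::fabs(a.z - b.z) < 1e-5f;
}

struct Triangle
{
    std::array<int, 3> verts;    // indices into a shared position array
    std::array<Vec3, 3> normals; // per-corner normals
};

// Returns the per-corner normal a triangle uses at a given vertex, if any.
static const Vec3* NormalAt(const Triangle& t, int vert)
{
    for (int i = 0; i < 3; ++i)
        if (t.verts[i] == vert)
            return &t.normals[i];
    return nullptr;
}

static bool SharedNormalsMatch(const Triangle& a, const Triangle& b, std::pair<int, int> edge)
{
    const Vec3* a1 = NormalAt(a, edge.first);
    const Vec3* b1 = NormalAt(b, edge.first);
    const Vec3* a2 = NormalAt(a, edge.second);
    const Vec3* b2 = NormalAt(b, edge.second);
    return a1 && b1 && a2 && b2 && NearlyEqual(*a1, *b1) && NearlyEqual(*a2, *b2);
}

static int Find(std::vector<int>& parent, int i)
{
    while (parent[i] != i)
        i = parent[i] = parent[parent[i]]; // path halving
    return i;
}

std::vector<int> GuessSmoothingGroups(const std::vector<Triangle>& tris)
{
    std::vector<int> parent(tris.size());
    std::iota(parent.begin(), parent.end(), 0);

    // Map each undirected edge (pair of vertex indices) to the triangles using it.
    std::map<std::pair<int, int>, std::vector<int>> edges;
    for (std::size_t t = 0; t < tris.size(); ++t)
        for (int e = 0; e < 3; ++e)
        {
            int a = tris[t].verts[e], b = tris[t].verts[(e + 1) % 3];
            edges[{ std::min(a, b), std::max(a, b) }].push_back(static_cast<int>(t));
        }

    // Merge edge-adjacent triangles whose normals agree at both shared vertices.
    for (const auto& [edge, users] : edges)
        for (std::size_t i = 0; i + 1 < users.size(); ++i)
            if (SharedNormalsMatch(tris[users[i]], tris[users[i + 1]], edge))
                parent[Find(parent, users[i])] = Find(parent, users[i + 1]);

    std::vector<int> group(tris.size());
    for (std::size_t t = 0; t < tris.size(); ++t)
        group[t] = Find(parent, static_cast<int>(t));
    return group; // equal ids mark triangles that may share a smoothing group
}
```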