
Ykkrosh

WFG Retired
  • Posts

    4.928
  • Joined

  • Last visited

  • Days Won

    6

Everything posted by Ykkrosh

  1. Me neither - did you forget to include the error report here? It ought to be impossible for an application to lock up the whole computer, so it might be a problem with drivers or with X. It sounds like you can't alt-tab or click on other windows at all? Can you do ctrl+alt+f1 to switch to a text console (and maybe ctrl+alt+f7 to switch back)? Or are you able to connect to the computer over a network with SSH, from another machine? If you can do either of those, it may be possible to use a debugger to see where things are failing. Otherwise, you could try disabling some of the graphical features - create a file in .../binaries/data/config/local.cfg containing the settings fancywater=false, shadows=false and renderpath=fixed, and maybe it'll avoid graphics driver bugs. Not sure what else you could try.
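For clarity, the suggested local.cfg contents go one setting per line:

```
fancywater=false
shadows=false
renderpath=fixed
```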
  2. We want to focus on getting the first two civs working properly first, which will probably take quite a while. Once we've done that, we'll need to get the other civs' art and data files up to the same standard, which will probably also take quite a while. I don't think we can be any more definite than that.
  3. Hmm, I don't understand why that'd happen, but I suppose I shouldn't complain. The only change aimed at performance was avoiding a little memory allocation in the GUI, which shouldn't have much effect...
  4. (You can use Valgrind's suppression files to remove spurious results.) Looks like the main problem is that it's trying to use an entity after it's been deleted, which is a reasonably common problem with the old simulation system (since there's lots of pointers and a lack of robustness in the design). I'm currently working on rewriting the entire simulation system, partly due to this kind of thing, so these problems should go away (the new design tries to minimise the chances for these errors, and also has a lot more automated tests), so I think I'd be happy to ignore this problem for now.
  5. Hmm, that could work - looks like it was added in Boost 1.35. But the latest stable version of Ubuntu (9.10) only has Boost 1.34, so I wouldn't want to make 1.35 a dependency. And it looks like their implementation for Windows just calls _fpclass, which is only defined for doubles and will give incorrect results for denormal floats, so it wouldn't solve the problem anyway. (glibc seems to implement fpclassify by looking at the float/double as uint32s.)
  6. I've added some tests and fixes for our POSIX fpclassify/isfinite/etc emulation on Windows (in lib/posix/posix.h), but it's revealed a problem I'm not sure how to fix. The tests for denormalized numbers fail, seemingly because the fpclassify implementation loads them into 80-bit FPU registers where they're no longer denormalized, so it always thinks they're normal numbers. I don't rely on this functionality (my code only cares about finiteness), but if we claim to implement the POSIX functions then we ought to give the right answers. But I've got no idea how it should be implemented.
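A bit-level classification along the lines of glibc's approach sidesteps the x87 problem entirely, since the value is never loaded into an 80-bit register. This is only a sketch with illustrative names, not the actual posix.h emulation layer:

```cpp
#include <cstdint>
#include <cstring>

// Classify a 32-bit float by inspecting its bit pattern directly (the way
// glibc implements fpclassify), so denormals are detected correctly even on
// x87, where loading the value into an 80-bit register would normalize it.
enum BitFpClass { BFP_ZERO, BFP_SUBNORMAL, BFP_NORMAL, BFP_INFINITE, BFP_NAN };

inline BitFpClass ClassifyFloatBits(float f)
{
    uint32_t bits;
    std::memcpy(&bits, &f, sizeof bits); // safe type-pun, no FPU involved
    uint32_t exponent = (bits >> 23) & 0xFFu;
    uint32_t mantissa = bits & 0x7FFFFFu;
    if (exponent == 0)
        return mantissa == 0 ? BFP_ZERO : BFP_SUBNORMAL;
    if (exponent == 0xFFu)
        return mantissa == 0 ? BFP_INFINITE : BFP_NAN;
    return BFP_NORMAL;
}
```

The same bit layout trick extends to doubles (11-bit exponent, 52-bit mantissa) if the emulation needs them too.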
  7. Hmm, hadn't thought of that, and I suppose SHA-1 could work - apparently it's only almost twice as slow as MD5, and the bottleneck is likely to be elsewhere. But if we used the DLL for SHA-1, we still couldn't use Crypto++'s zlib, and it seems a bit silly to have a 1.2MB DLL for a single hash function... Seems like the quickest solution for now though, so I guess I'll go with that.
  8. (Asian characters are generally in the BMP, via Han unification; the other planes are used for historic and obscure scripts and characters.) If I copy-and-paste a U+10000 character into a filename in Explorer on Vista, then it acts like a single character in Explorer (for rendering and cursor movement etc), but FindFirstFileW returns it as two wchar_ts (0xD800 0xDC00), and it matches the wildcard "??" (not "?"). If I call CreateFileW with a name containing 0xD800 0xD800 (sic), it roundtrips fine through FindFirstFileW, and Explorer displays the name as containing two characters, and I can copy-and-paste them and they remain as 0xD800. If I create 0xDC00 0xD800 then Explorer displays two characters, but the first character is impossible to select (the cursor skips straight over it and treats it as part of the previous character, unless it's the first character; if it's the first character and I paste a 0xD800 in front of it then the pair starts looking and acting like a single character).

So... NTFS really doesn't care. It just stores arbitrary 16-bit values. They get passed unchanged (and unvalidated) through the *W functions. Explorer's filename rendering handles properly-paired surrogate code units as if they were the appropriate non-BMP code point, and its filename editing UI seems to work by basically ignoring any high surrogate. So on Windows/NTFS you can have filenames that contain unpaired surrogates and therefore cannot be losslessly converted to valid UTF-8. But if you use the normal UI for creating files, you'll end up with valid UTF-16 names (which can always be converted to UTF-8).

It's not just those two choices, it can be any byte-based encoding. I expect some people use e.g. Shift-JIS in practice. (It can even be EBCDIC (which turns most of my filenames into question marks), though locale-gen warns "not ASCII compatible, locale not ISO C compliant". At least it can't be UTF-16/UTF-32.)
That doesn't sound like it solves the roundtripping problem - if we see we're in the path /home/andré/... and decode it as UTF-8 or Latin-1 (depending on whether é is one byte or two), and then we want to write the file /home/andré/.config/..., how do we know how to encode the path again?

So I still think the only 'proper', robust, theoretically mostly correct approach is:
 * On Windows, treat paths as strings of arbitrary 16-bit values, as much as possible. (Characters like "/" still have special meaning; but 0xD800 is just an arbitrary number, it's not a Unicode anything.)
 * On Linux, treat paths as strings of arbitrary 8-bit values, as much as possible. (Characters like "/" still have special meaning; 0xC0 is just a number.)
 * In both cases, conversion to a Unicode string (in any encoding) may be lossy, so don't do that unless lossiness is acceptable.
 * When filenames have to be exposed to the user (e.g. in log file output, or in saved game names (if we don't just give them numerical filenames)):
   - On Windows, encode/decode as UCS-2. (Our user input can't contain non-BMP characters, because we're restricted to the BMP internally, so we don't need to bother properly encoding as UTF-16.)
   - On Linux, encode/decode based on the current locale environment settings (defaulting to UTF-8 if unspecified). (The settings might be wrong, but this is the best we can do.)
   - In both cases, encode/decode as little of the path as possible (e.g. encode the saved game name before concatenating it onto the opaque data directory pathname, rather than decoding the pathname first then concatenating then encoding).
 * And use numerical filenames for saved games, so we don't have to worry about users entering Unicode or slashes or quotes etc.

(We don't have to do things correctly - it should work in 99% of cases if we just hard-code it as UTF-8 or whatever. But I think there are non-zero cases where it would break, because the user has a weird environment, and it's possible for us to make it work more reliably, and I don't like leaving intentional bugs.)
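The surrogate-pair arithmetic behind the behaviour described above (U+10000 ↔ 0xD800 0xDC00) is simple; a minimal sketch, with illustrative names rather than anything from the game's codebase:

```cpp
#include <cstdint>

// UTF-16 surrogate handling: a high surrogate (0xD800-0xDBFF) followed by a
// low surrogate (0xDC00-0xDFFF) encodes one code point in U+10000..U+10FFFF.
// Anything else (including the unpaired surrogates that NTFS happily stores)
// is not valid UTF-16 and cannot be losslessly converted to UTF-8.
inline bool IsHighSurrogate(uint16_t u) { return u >= 0xD800 && u <= 0xDBFF; }
inline bool IsLowSurrogate(uint16_t u)  { return u >= 0xDC00 && u <= 0xDFFF; }

// Combine a valid surrogate pair into its code point.
inline uint32_t CombineSurrogates(uint16_t hi, uint16_t lo)
{
    return 0x10000u + ((uint32_t(hi) - 0xD800u) << 10) + (uint32_t(lo) - 0xDC00u);
}
```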
  9. I want to compute the MD5 of a stream of bytes (for simulation state hashing), feeding in many arbitrary-sized (typically small) chunks of bytes. Currently I'm using Crypto++ which makes it trivial (create an MD5 object, call md5.Update(data, len)). The problem is that Crypto++ seems to be a pain on Windows: it can compile as both dynamic and static, except the dynamic library only includes FIPS Approved algorithms (which don't include MD5), and the static library is 45MB (and we want to provide pre-built libraries for Windows in our SVN, and that's unreasonably big). Also its .h files trigger a load of warnings in MSVC. (There are some future cases where other Crypto++ algorithms might be useful - zlib for stream compression, and some kind of proper encryption for networking. Currently it's just being used in the one place, for MD5.)

Some options I can think of:
 * Strip out all the bits of Crypto++ that we don't use, before compiling it as a static library, to save space. (But that's a pain with dependencies, upgrades, testing, etc, and recompiling when we upgrade the standard compiler.)
 * Use a lighter-weight crypto library which provides similar functionality but doesn't have zillions of templates. (Which ones are good?)
 * Copy the MD5 parts from an existing library and hack them to remove unnecessary dependencies, then import them into our code. (That solves all the bother with external libraries, and MD5 is pretty straightforward so there's not much code involved. Got to be very careful not to break padding and endianness etc, though. Doesn't help us with non-MD5 algorithms, but zlib is probably easy enough to use as a raw API or with a custom wrapper, and secure multiplayer is a complete unknown at the moment so we can't plan for it anyway.)

Any thoughts on the best approach?
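The incremental Update(data, len) pattern in question looks like this. As a self-contained stand-in (Crypto++ isn't available here), this sketch uses FNV-1a rather than real MD5; the names are illustrative, and the property that matters is that feeding the same bytes in differently-sized chunks yields the same digest:

```cpp
#include <cstdint>
#include <cstddef>

// Illustrative streaming hasher with the same Update(data, len) shape as
// Crypto++'s MD5 object. FNV-1a 64-bit stands in for MD5 so the example is
// self-contained; chunk boundaries must not affect the final digest.
class StreamHasher
{
public:
    void Update(const uint8_t* data, size_t len)
    {
        for (size_t i = 0; i < len; ++i)
        {
            m_State ^= data[i];
            m_State *= 1099511628211ull; // FNV-1a 64-bit prime
        }
    }
    uint64_t Final() const { return m_State; }
private:
    uint64_t m_State = 14695981039346656037ull; // FNV-1a 64-bit offset basis
};
```

(A real MD5 implementation additionally buffers input into 64-byte blocks and appends padding plus a length field in Final, which is where the padding/endianness pitfalls mentioned above live.)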
  10. As far as I'm aware, that doesn't make sense. NTFS just deals with strings of arbitrary 16-bit values, and Win32's *W functions deal with strings of UCS-2/UTF-16 characters (depending on the version of Windows), and there doesn't need to be any conversion in there, and there's no UTF-8 or other encodings involved. Am I misunderstanding how these things work, or what the problem is?

Not on Linux - filesystems don't have encodings, they just handle paths as strings of bytes. Encoding is an application-level concept. Even if your system consistently uses UTF-8 everywhere, a rogue application could create a file whose name is not valid UTF-8. And even if filesystems did have consistently-used encodings that we could detect, any subdirectory could be symlinked to a different filesystem with different rules. These things aren't very likely in practice, but nor are they unimaginable (and filesystem encoding on Linux is a bit of a mess and easy to break), and I think we ought to implement something that's actually correct rather than something that's similarly complex but not quite right. (Technically, some filesystem drivers (e.g. the NTFS ones) let you configure how characters on disk get encoded before they're returned via the byte string APIs, so encoding is sometimes also a filesystem concept. But that's just an implementation detail.) (This article seems usefully informative about filesystems.)

(I'm not entirely sure how these concepts apply to OS X. HFS+ stores filenames as UTF-16 (in NFD) and probably converts to UTF-8 for the POSIX APIs; I've no idea what it does with non-ASCII on FAT filesystems. I guess the safest assumption is that if you get some bytes through the API and send them back out through a similar API, they'll be handled consistently, but you can't rely on anything else.)

Yeah, that's a bit of a pain. We can probably require that none of our own data files use non-ASCII characters, but I'm not sure how much that simplifies things.
  11. I see something strange with texture mipmaps on Linux when forcing S3TC - I assume you're getting the same problem. Haven't looked into what causes it - I guess we might just be lacking some mipmap levels and the drivers can't compress automatically, or something like that, so it can probably be fixed later. -quickstart works on Linux too. The sound quality is a known problem with OpenAL and PulseAudio on Ubuntu.
  12. Try running driconf and turn on the "Enable S3TC texture compression ..." option.
  13. I think it would, because they might be wrong. E.g. the user may have LANG=en_GB.UTF-8 but try to run the game from a CD or a FAT32-formatted USB drive that has ISO-8859-1 names instead. If we try to decode them as UTF-8 we'll get unrecoverable errors. The OS and filesystem don't care about the encoding (they just see bytes), and there's no reason we need to care (except in a few cases where we print filenames to users, like in log files), so it seems best for us to avoid encoding conversions as much as possible.
  14. Why risky? The only problems I imagine with roundtripping through UTF-8 are if the input contains noncharacters or surrogates, in which case a lot of our code will be unhappy, and which should be extremely rare. Still, avoiding needless conversions sounds like a good idea.

I expect it'll actually be a lot more risky on Linux, because (as far as I'm aware) paths are strings of arbitrary 8-bit bytes (whose interpretation as characters by applications typically depends on environment variables) and we incorrectly assume they're always UTF-8 and convert them to Unicode. As soon as someone runs the game on a filesystem with ISO-8859-1 names (which is quite common) containing non-ASCII characters, it's going to break. Could we avoid unnecessary string conversions in that case too? i.e. handle pathnames internally as fs::path on Linux and fs::wpath on Windows, so they're stored in their native format with no conversions, with some wrapper functions for e.g. appending ASCII strings to native paths; and then have a "convert native path to std::wstring for display" function that accounts for locale settings on Linux (and if the user's got the wrong locale settings it'll just display funny, it won't break the game)?
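To see concretely why decoding arbitrary filenames as UTF-8 can fail hard, here's a minimal structural UTF-8 check (a sketch: it validates lead/continuation byte patterns only, ignoring overlong encodings and surrogate ranges). A Latin-1 é (the single byte 0xE9) is rejected outright, while the UTF-8 encoding of the same character (0xC3 0xA9) passes:

```cpp
#include <cstdint>
#include <cstddef>

// Minimal structural UTF-8 check. It verifies that every lead byte is
// followed by the right number of continuation bytes (0b10xxxxxx); it does
// NOT reject overlong encodings, surrogates, or values above U+10FFFF.
inline bool LooksLikeValidUtf8(const uint8_t* s, size_t len)
{
    size_t i = 0;
    while (i < len)
    {
        uint8_t c = s[i];
        size_t cont;
        if (c < 0x80)                cont = 0; // ASCII
        else if ((c & 0xE0) == 0xC0) cont = 1; // 2-byte sequence
        else if ((c & 0xF0) == 0xE0) cont = 2; // 3-byte sequence
        else if ((c & 0xF8) == 0xF0) cont = 3; // 4-byte sequence
        else return false; // bare continuation byte or invalid lead byte
        if (i + 1 + cont > len)
            return false; // truncated sequence
        for (size_t j = 1; j <= cont; ++j)
            if ((s[i + j] & 0xC0) != 0x80)
                return false; // missing continuation byte
        i += 1 + cont;
    }
    return true;
}
```

Any decode-as-UTF-8 path would have to handle the rejected case somehow, which is exactly the unrecoverable-error situation described above.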
  15. "free for non-commercial use" sounds to me like it's not open source, so we wouldn't want to use that. But given that the last blog post was in October, in which he says he "will be announcing some of those details in the coming week", it's hard to work out what's actually going to happen. Guess we just have to wait and see.

(Incidentally, this is an interesting perspective on Google's Chromium - it's not really a well-behaved system, at least on Linux, because it's provided as a monolithic lump of code (an 800MB download) with everything bundled and little attempt to integrate properly with the host system. It's less of a problem when we just want the WebKit part, not the browser UI, but it's still a problem. Other ports of WebKit seem to be much better at that.)

The basic idea is to render the web page to a GL buffer once per frame, so it's always up-to-date, and pass mouse events from the game back to WebKit so it can update its internal representation of the page, which will be used to render the next frame. When you render, WebKit can tell you what region of the screen has actually changed since last time, so you only need to re-render that region into the GL buffer and upload it to the GPU, and if little is changing then it should be fast.
  16. Did you run the automated tests? I'd have thought some of those would create files (indirectly even if there's nothing specifically testing the VFS code). If they don't, I guess we need more tests.
  17. That's a recently-introduced bug - see this thread.
  18. Hmm, I get that error too. Sounds like an issue for Jan. (Call stack is CXeromyces::Load -> VFS::CreateFile -> RealDirectory::Store -> File::Open -> FileImpl::Open -> sys_wopen with mode=0xffff -> _wsopen_s with mode=0xffff.)
  19. The "update" function should work - it'll keep downloading whatever files are currently out-of-date or missing. The problem you're getting is that you put the code in a location containing space characters ("c:\Program Files"), which causes errors. You should move the files to a location with no spaces, and then it should work better. (We really ought to fix the bug with spaces - I guess it just needs a few quotes added into the commands generated by Premake...) (By the way, the game's name is spelt with a 0 (zero) not an O (capital o), but that shouldn't break anything serious.)
  20. Hmm, those errors look the same as before - it might be that Visual Studio is getting confused (not an uncommon occurrence), so you should try exiting VS, running update-workspaces.bat again, opening it in VS again, doing a "Clean Solution", and then building again. Hopefully it'll then pick up the project changes correctly.
  21. Did you run update-workspaces.bat after updating from SVN? Sounds like you forgot to do that and so your project settings are out of date.
  22. It's an openly documented format and there are several open source tools to work with the files - e.g. NVIDIA has some, the DevIL library can handle DDS too, and our game has its own code for reading and decompressing them. We need to use compressed textures (to save video memory), and the only option supported by the hardware is S3TC/DXTC (they're the OpenGL/DirectX names for the same thing), and DDS is the only common format for storing S3TC/DXTC textures, so it seems to be the best choice. It's less widely supported than formats like PNG but it's not too obscure or hard to convert so I think it works fine. (The game does support PNG/BMP/JPEG/TGA too, so people modding the game can use those if they want.)
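For reference, sizing S3TC/DXTC data is straightforward: the image is stored as 4x4 texel blocks, 8 bytes per block for DXT1 and 16 bytes for DXT3/DXT5. A sketch with illustrative names (not the game's actual DDS loader):

```cpp
#include <cstddef>

// Size in bytes of one mip level of S3TC/DXTC-compressed data. The image is
// divided into 4x4 texel blocks (rounding up), each taking bytesPerBlock
// bytes: 8 for DXT1, 16 for DXT3/DXT5. Even a 1x1 mip level occupies one
// full block, which is why small mip tails still cost a little memory.
inline size_t CompressedMipSize(size_t width, size_t height, size_t bytesPerBlock)
{
    size_t blocksWide = (width + 3) / 4;
    size_t blocksHigh = (height + 3) / 4;
    if (blocksWide == 0) blocksWide = 1;
    if (blocksHigh == 0) blocksHigh = 1;
    return blocksWide * blocksHigh * bytesPerBlock;
}
```

So a 256x256 DXT1 texture is 32KB per top-level mip versus 256KB uncompressed at 32bpp, which is the video-memory saving mentioned above.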
  23. I believe so. Why wouldn't it be?
  24. I don't think there's any strong technical reasons to stop working on Win2K, so we'll probably continue to aim to run on it. (The only problem is finding someone to test it, though I do have my old Win2K computer in a cupboard somewhere so I could probably dig it out in the future.)
  25. I think it's very unlikely the game will work in Cygwin (since nobody's ever tried that before, and the game does lots of system-dependent things). Hmm, it sounds like the JS_THREADSAFE option is missing from the project settings - I made some changes to that in the past few days, so you may have an outdated project file. Have you run update-workspaces.bat immediately before building? I think that might be caused by using a directory name containing a space ("h:\Documents and Settings\") - it would work better if you moved it to a different location with no spaces.