
Ykkrosh

WFG Retired
  • Posts

    4.928
  • Joined

  • Last visited

  • Days Won

    6

Ykkrosh last won the day on October 31, 2013

Ykkrosh had the most liked content!

3 Followers

About Ykkrosh

Profile Information

  • Gender
    Male

Recent Profile Visitors

3.382 profile views

Ykkrosh's Achievements

Primus Pilus (7/14)

Reputation: 42

Community Answers

  1. Ah, looks like the ALPHA_PLAYER is the problem - that still needs to be 8-bit alpha. I think there are approximately no cases where we can use 1-bit alpha formats (since even if the original texture has only 1-bit alpha, its mipmaps will need higher alpha resolution). I think we currently use no-alpha DXT1 (4bpp) for any textures whose alpha channel is entirely 0xFF, and DXT5 (8bpp) for everything else. (A rough sketch of that format choice follows after this list.)
  2. Mmm... Having read the ASTC spec, I'm not sure there's much chance of a real-time encoder with decent quality - there's a huge number of variables that the encoder can pick, and if it tries to pick the optimal value for each variable then it's probably going to take a lot of time, and if it doesn't then it's just wasting bits. ETC2 might be more appropriate for that case, since it's much simpler and will presumably take much less encoding time to get close to optimal output. Might be interesting to do a test of quality vs encoding speed though, to see if that's true.
  3. The Android Extension Pack requires ASTC. The AEP is not mandatory, but GLES 3+ isn't mandatory either - what matters is what the GPU vendors choose to support. (I think even GLES 2.0 wasn't mandatory until KitKat, but pretty much everyone supported it long before that.) ASTC does seem like it will be widely supported in the future - Mali-T622, PowerVR Series6XT, Adreno 420, Tegra K1. Support is not great right now, but any decent Android port of the game is going to be a long-term project, and if we're going to spend effort on new texture compression formats I think it may be better to spend the effort on one with better long-term prospects. (And ASTC has better quality at the same bitrate, and much more flexibility in choosing bitrates (possibly the most useful feature - we can trade off quality against memory usage and texture bandwidth), so if compatibility was equal, it would be the better choice.)

I still don't see how that's possible - S3TC and ETC2 are both 4bpp for RGB (and for RGB with 1-bit alpha), and both are 8bpp for RGBA, and the DDS/KTX headers should be negligible, so there should be no difference, as far as I understand.

Slightly tangential, but a possible use case for on-the-fly compression: currently we render terrain with a lot of draw calls and a lot of alpha blending, to get the transitions between different terrain textures - I think a blended tile needs at least 7 texture fetches (base texture + shadow + LOS, blended texture + shadow + LOS + blend shape), sometimes more. That's not very efficient, especially on bandwidth-constrained GPUs (e.g. mobile ones), and it's kind of incompatible with terrain LOD (we can't lower the mesh resolution without completely destroying the blending). I suspect it may be better if we did all that blending once per patch (16x16 tiles) and saved the result to a texture, and then we could draw the entire patch with a single draw call using that single texture (see the rough caching sketch after this list). It'll need some clever caching to make sure we only store the high-res texture data for patches that are visible and near the camera, and smaller mipmaps for more distant ones, to keep memory usage sensible, but that shouldn't be impossible. But if we do generate the blended terrain textures at runtime like that, we need to compress them, to get the best quality and memory/bandwidth usage. So we'll need a suitably-licensed compression library we can use at runtime (though one that focuses on performance over quality; and if I can dream, one that runs in OpenCL or a GLES 3.1 compute shader, since the GPU will have way more compute power than the CPU, and this sounds like a trivially parallelisable task that needs compute more than bandwidth). (Civ V appears to do something like this, though its implementation is a bit rubbish (at least on Linux) and has horribly slow texture pop-in. But surely we can do better than them...)

DDS is also flexible enough to be used in combination with any texture format, so I don't think that's an argument in favour of KTX. Using two different texture containers seems needlessly complex. We could switch the generated textures to KTX on all platforms, but we still need to support DDS input because lots of the original textures in SVN are DDS (from the days before we supported PNG input), and it sounds like more work than just updating our DDS reader to support a new compression format.
We currently use cube maps for sky reflections, though the actual inputs are single-plane image files and we generate the cube maps in a really hacky and very inefficient way. That ought to be rewritten so that we load the whole cube map efficiently from a single compressed DDS/KTX file.

In general, changing our texture code is perfectly fine - it has evolved slightly weirdly, and it supports features we don't need and doesn't support features we do need, so it would be good to clean it up to better fit how we want to use it now. (I don't know enough to comment on your specific proposals off the top of my head, though. Probably need to sort out the high-level direction first.)

NVTT hasn't had a release for about 5 years, and we want to rely on distro packages as much as possible, and they only really want to package upstream's releases, so changing NVTT is a pain. (The distros include a few extra patches we wrote to fix some serious bugs, but I doubt they'll be happy with a big non-upstream patch adding KTX support and changing the API.) It'd probably be less effort to write some code to transcode from DDS to KTX than to get NVTT updated.
  4. That looks like a non-GPL-compatible (and non-OSI) licence, e.g. you're only allowed to distribute source in "unmodified form" and only in "software owned by Licensee", so it's not usable. That just calls into etcpack, so it has the same licensing problems. Licence looks okay, but it sounds like quality may be a problem (from the blog post: "As for the resulting image quality, my tool was never intended for production usage").
  5. Hmm, ETC2 sounds like it might be good - same bitrate as S3TC (4bpp for RGB, 8bpp for RGBA) and apparently better quality, and a reasonable level of device support (there's a quick size calculation after this list). (Integrating it with the engine might be slightly non-trivial though - we use NVTT to do S3TC compression but also to do stuff like mipmap filtering and to generate the DDS files, so we'd need to find some suitably-licensed good-quality good-performance ETC2 encoder and find some other way to do the mipmaps and create DDS etc.)
  6. Oh, sure, but then it's just a problem of the game being fundamentally not designed to run on that kind of hardware, which is a totally different problem to a memory leak bug (though probably a harder one to fix). It's not available on any devices I've used (Adreno 320/330, Mali-400MP, VideoCore) - as far as I'm aware, only NVIDIA supports it on mobile. But I think the game will use it automatically whenever it's available. (public.zip always contains S3TC-compressed textures, and the game will detect whether the drivers support S3TC, and if not then it will just decompress the textures in software when loading them.)
  7. 1GB isn't that much, depending on the map and game settings - if I run on desktop Linux with nos3tc=true (to mirror the lack of working texture compression on Android), on "Greek Acropolis (4)" with default settings, I see resident memory usage ~1.3GB. With S3TC enabled it's ~0.6GB. I think that indicates there's a huge volume of textures, and we don't (yet) have any options to decrease the texture quality or variety, so you just need a load of RAM. (Texture compression is kind of critical for Android, but ETC1 is rubbish and doesn't support alpha channels, and nothing else is universally available. It would probably be most useful to support ASTC, since that's going to become the most widely available in the future, and has good quality and features.) The problem with having a very large number of GL calls is usually the CPU cost of those calls, so the bottleneck might still not be the GPU.
  8. 12.04 is supported by Canonical for 5 years, i.e. until 2017. Over the past 6 months we had roughly the same number of players on 12.04, 13.10, and 14.04 (~3000 on each, who enabled the feedback feature). That time period overlaps the release of 14.04, so there would probably be different results if we waited another few months and checked the latest data again, but it suggests there's likely still a significant number of people on 12.04 today. So we shouldn't drop support for 12.04 without significant benefits (and I don't think there are any significant benefits here, since 12.04 / GCC 4.6 should be good enough for the SpiderMonkey upgrade).
  9. About 10% of our Windows players over the past 6 months were on WinXP (and that's about 2/3 as many players as for all versions of OS X), so I think it's important to continue supporting running the game on XP for now. I'm not too concerned about dropping it as a development platform though.
  10. To integrate properly and straightforwardly with distro package managers, we should only use the default compilers in the distros we care about, which I think currently means GCC 4.6 (for Ubuntu 12.04). On Windows, VS2010 should be sufficient for compatibility with SpiderMonkey (since I think that's what Mozilla still uses), but many of the more interesting C++11 features are only available in VS2012 or VS2013. One problem there is that VS2012 won't run on WinXP, so any developers currently using WinXP (I've got data that suggests there are still a few) will have to upgrade to Win7 or above. (Players can still use XP; this only affects people wanting to compile the code.) Is there anyone here who would be seriously affected by that? Also, updating VS would be a bit of a pain for e.g. me, since I use Visual Assist (and can't stand VS without it) but only have a license for a VS2010 version. And the VS2012/2013 IDEs have a terrible UI design (uppercase menus, really?). Those aren't blocker issues, but they are downsides.
  11. The alpha 16 package is apparently in wheezy-backports - can you install it from there (possibly with these instructions)? That should make sure you get all the dependencies too.
  12. There might be ways to use a GPU to implement certain kinds of pathfinding efficiently, but that article is not one - it looks like about the worst possible way you could use a GPU. If the maximum path length is N (with maybe N=1000 on our maps), it has to execute N iterations serially, and GPUs are very bad at doing things serially; and each iteration is a separate OpenGL draw call, and draw calls are very expensive. And it's only finding paths to a single destination at once, so you'd need to repeat the whole thing for every independent unit. It would probably be much quicker to just do the computation directly on the CPU (assuming you're not running a poorly-optimised version written in JS on the CPU) - there's a minimal CPU sketch of that kind of single-destination grid flood after this list. AMD's old Froblins demo used a similar (but more advanced) technique ("Beyond Programmable Shading Slides" on that page gives an overview). So it can work enough for a demo, but the demo has much simpler constraints than a real game (e.g. every unit in the demo has to react exactly the same way to its environment; it can't cope with dozens of different groups all wanting to move towards different destinations) and I doubt it can really scale up in complexity. (Nowadays you'd probably want to use OpenCL instead, so you have more control over memory and looping, which should allow more complex algorithms to be implemented efficiently. But OpenCL introduces a whole new set of problems of its own.) And performance on Intel GPUs would be terrible anyway, so it's not an option for a game that wants to reach more than just the high-end gamer market.
  13. I believe I tried that when first implementing the algorithm. If I remember correctly, the problem was that sometimes a unit starts from slightly inside a shape or precisely on the edge, and the silhouette-point-detection thing will mean it doesn't see any of the points on that shape at all. That makes the unit pick a very bad path (typically walking to the corner of the next nearest building and then back again), which isn't acceptable behaviour.
  14. The 4x4-navcell-per-terrain-tile thing and the clearance thing were part of the incomplete pathfinder redesign - #1756 has some versions of the original patch. None of it has been committed yet. (I think the redesigned long-range pathfinder mostly worked, but it needed integrating with the short-range pathfinder (which needed redesigning itself) and with various other systems, and that never got completed.)
  15. You probably need "./update-workspaces.sh ..." rather than just "update-workspaces.sh ..."
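
Sketch for answer 1: this is not code from the engine, just a minimal illustration (with made-up names) of the format choice described there - scan the alpha channel and only fall back to DXT5 if any sample isn't fully opaque.

```cpp
// Hypothetical helper, not the engine's actual code: pick no-alpha DXT1 (4bpp)
// when every alpha sample is 0xFF, otherwise DXT5 (8bpp).
#include <cstddef>
#include <cstdint>

enum class TexFormat { DXT1, DXT5 };

// 'rgba' points at width*height 8-bit RGBA pixels.
TexFormat ChooseCompressedFormat(const uint8_t* rgba, size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; ++i)
    {
        if (rgba[i * 4 + 3] != 0xFF)   // any non-opaque alpha sample?
            return TexFormat::DXT5;    // needs full 8-bit alpha
    }
    return TexFormat::DXT1;            // fully opaque, no alpha channel needed
}
```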
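
Sketch for answer 3 (the per-patch terrain baking idea): a rough outline only, assuming a hypothetical render-to-texture bake step. None of these types or functions exist in the engine; they just show the caching shape - keep one pre-blended texture per 16x16-tile patch and re-bake when the wanted LOD changes with camera distance.

```cpp
// Illustrative cache of pre-blended terrain patch textures (all names made up).
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>

struct PatchKey
{
    int px, pz; // patch coordinates (each patch covers 16x16 terrain tiles)
    bool operator==(const PatchKey& o) const { return px == o.px && pz == o.pz; }
};

struct PatchKeyHash
{
    size_t operator()(const PatchKey& k) const
    {
        return std::hash<int>()(k.px) * 31u ^ std::hash<int>()(k.pz);
    }
};

struct BakedPatch
{
    uint32_t textureHandle = 0; // GPU texture holding the pre-blended result
    int lodLevel = 0;           // 0 = full resolution; higher = smaller bake
};

class PatchTextureCache
{
public:
    // Return the baked texture for a patch, re-baking if the cached LOD no
    // longer matches what the camera distance asks for.
    const BakedPatch& Get(PatchKey key, int wantedLod)
    {
        auto it = m_Cache.find(key);
        if (it == m_Cache.end() || it->second.lodLevel != wantedLod)
        {
            BakedPatch baked;
            baked.lodLevel = wantedLod;
            baked.textureHandle = BakePatch(key, wantedLod);
            it = m_Cache.insert_or_assign(key, baked).first;
        }
        return it->second;
    }

private:
    // Placeholder for the render-to-texture pass: bind an offscreen target,
    // draw the base/blend layers for this patch once, and (ideally) compress
    // the result before storing it.
    uint32_t BakePatch(PatchKey, int) { return 0; }

    std::unordered_map<PatchKey, BakedPatch, PatchKeyHash> m_Cache;
};
```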
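
Worked numbers for answers 5 and 7: a back-of-the-envelope calculation of texture memory at the bitrates mentioned (32bpp uncompressed RGBA, 8bpp DXT5/ETC2 RGBA, 4bpp DXT1/ETC2 RGB), with the usual ~1/3 overhead for a full mipmap chain. Purely illustrative, not measured from the game.

```cpp
// Texture memory for a single 2048x2048 texture at different bits per pixel.
#include <cstdio>

double TextureBytes(double width, double height, double bitsPerPixel)
{
    return width * height * bitsPerPixel / 8.0 * 4.0 / 3.0; // +1/3 for mipmaps
}

int main()
{
    const double MiB = 1024.0 * 1024.0;
    printf("uncompressed RGBA8 (32bpp): %.1f MiB\n", TextureBytes(2048, 2048, 32) / MiB); // ~21.3 MiB
    printf("DXT5/ETC2 RGBA (8bpp):      %.1f MiB\n", TextureBytes(2048, 2048, 8) / MiB);  // ~5.3 MiB
    printf("DXT1/ETC2 RGB (4bpp):       %.1f MiB\n", TextureBytes(2048, 2048, 4) / MiB);  // ~2.7 MiB
    return 0;
}
```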
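
Sketch for answer 12: a minimal single-destination grid flood (plain breadth-first search) done directly on the CPU - the kind of computation the GPU approach in that article spends N serial draw calls on. The function name and grid layout are made up for illustration; this is not the game's pathfinder.

```cpp
// Compute, for every passable cell, the step distance to (dstX, dstY).
#include <cstdint>
#include <queue>
#include <utility>
#include <vector>

// 'passable' is row-major, width*height entries; unreachable cells stay UINT32_MAX.
std::vector<uint32_t> FloodDistances(const std::vector<uint8_t>& passable,
                                     int width, int height, int dstX, int dstY)
{
    std::vector<uint32_t> dist(width * height, UINT32_MAX);
    std::queue<std::pair<int, int>> open;
    dist[dstY * width + dstX] = 0; // assumes the destination is inside the grid
    open.push({dstX, dstY});

    const int dx[4] = {1, -1, 0, 0};
    const int dy[4] = {0, 0, 1, -1};

    while (!open.empty())
    {
        auto [x, y] = open.front();
        open.pop();
        for (int i = 0; i < 4; ++i)
        {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                continue;
            int idx = ny * width + nx;
            if (!passable[idx] || dist[idx] != UINT32_MAX)
                continue; // impassable or already visited
            dist[idx] = dist[y * width + x] + 1;
            open.push({nx, ny});
        }
    }
    return dist;
}
```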