
DDS compression process



I have some (minor) concerns about the way we're using DDS files. Currently they're compressed using a variety of tools, with a variety of options for mipmap filtering, and only the lossily-compressed version is stored in SVN. There are a few things I'd like to be possible:

* Files should be compressed using the best available compression algorithm. (In particular, the textureconv tool supplied with the game is far from optimal: some of the textures in the game have ugly compression artifacts, most visibly on the GUI buttons (button_wood.dds).)

* We shouldn't have people failing to get their textures into the game because they haven't got the right DDS exporter or the right settings for it. (That seems to have been a common issue recently.)

* We should reliably generate mipmaps whenever they're needed. (A few of our current textures don't have mipmaps, which slows down loading and causes serious rendering errors on Linux systems which can't automatically generate compressed mipmaps.)

* We should be able to modify the mipmap filtering - in particular it'd be nice to remove our current LOD bias hack (which was added to make textures look sharper, but sometimes causes ugly texture swimming effects) and sharpen the mipmaps so that terrain and objects still look good at a distance. This needs to be done consistently for all textures, and we need to be able to experiment and tweak the settings.

* We should be able to easily modify textures (e.g. add a new icon to an icon sheet) without losing quality every time they are saved.

* We might want to automatically combine some files into a texture atlas to improve renderer performance, so there won't be a strict 1:1 correspondence between source texture files and the files loaded by the engine.

I really want to keep the process simple on the artist side, particularly for modders - they shouldn't be expected to run some special art pipeline tool before they can see their stuff in the game. That means conversions should either be automatic at runtime, or shouldn't be needed at all. (It seems quite common for commercial games to have complex pipelines for preprocessing data before it reaches the game engine, but they don't care about giving all their tools to modders, and they have plenty of network bandwidth to share frequently-updated large binary files. I think our requirements are simple enough that we can get away with not doing that.)

A proposal:

* Only store lossless files (.tga, .png) in SVN.

* Always load and render the uncompressed textures when running the game from SVN. (Developers should probably have enough VRAM that it'll be okay. (We only use about 20-30MB on most maps with the initial units, and it would expand by about 4x without compression.))

* Alternatively the game could compress and cache textures at runtime, so that it will take longer on first load (about thirty seconds on dual-core Core 2) but will be normal speed on subsequent loads and will use less VRAM when running. (This is the same as how it converts and caches .dae and .xml files on first load into a more efficient runtime format.)

* When packaging the game for distribution, automatically compress all the textures (using some metadata stored in text files to say what DXTn mode and mipmapping etc to use) and only distribute the compressed form, so that normal players will get the efficiency benefits and fast loading times and small downloads.
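For illustration, the compress-and-cache step in this proposal could key the cache on the source file plus its conversion settings, so that tweaking the settings automatically triggers a one-off reconversion. A minimal sketch in C++; all names here (ConversionSettings, CachedTexturePath) are hypothetical, not existing engine code:

```cpp
#include <cstdint>
#include <cstdio>
#include <string>

// Hypothetical settings that would live in a per-texture metadata text file.
struct ConversionSettings {
    int dxtMode;          // 1, 3 or 5
    bool generateMipmaps;
    float mipmapSharpen;  // mipmap filter sharpening amount
};

// FNV-1a hash, so the cache key changes whenever the source file
// or the conversion settings change.
static uint64_t HashBytes(const void* data, size_t len,
                          uint64_t h = 14695981039346656037ull)
{
    const unsigned char* p = static_cast<const unsigned char*>(data);
    for (size_t i = 0; i < len; ++i) {
        h ^= p[i];
        h *= 1099511628211ull;
    }
    return h;
}

// Map "art/textures/ui/button_wood.png" (+ settings + mtime) to a cache path.
// On a cache miss the engine would run the slow DXT compression once and
// write the result here; on a hit it just loads the cached .dds.
std::string CachedTexturePath(const std::string& sourcePath,
                              uint64_t sourceMTime,
                              const ConversionSettings& s)
{
    uint64_t h = HashBytes(sourcePath.data(), sourcePath.size());
    h = HashBytes(&sourceMTime, sizeof sourceMTime, h);
    h = HashBytes(&s.dxtMode, sizeof s.dxtMode, h);
    h = HashBytes(&s.generateMipmaps, sizeof s.generateMipmaps, h);
    h = HashBytes(&s.mipmapSharpen, sizeof s.mipmapSharpen, h);
    char buf[32];
    std::snprintf(buf, sizeof buf, "%016llx", (unsigned long long)h);
    return "cache/textures/" + std::string(buf) + ".dds";
}
```

Because the settings are part of the key, experimenting with DXTn modes or mipmap sharpening just reconverts the affected textures on next load, with no manual cache invalidation.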

Benefits:

* We'll have the uncompressed textures readily available, so the lossy compression will be easily repeatable and tweakable without compounding any quality loss. This will let us fix the problems with compression algorithms and mipmapping and filtering, improving the game's visual quality.

* It'll be easier for artists and modders to get textures into the game.

Drawbacks:

* It'll take some coding effort (maybe a week) to get the compression tools integrated into the engine and the packaging process.

* It'll take some work to replace the lossy .dds files in SVN with the original lossless versions. We don't have to do this immediately (the engine can continue loading the old .dds files until they're replaced) or for every texture (we might have lost some of the uncompressed versions) but the goal should be to eventually fix as many textures as we can.

* If the game uses the uncompressed textures directly: It'll typically run at a lower framerate when run from SVN (particularly on lower-end hardware), and lossy compression artifacts won't be visible when testing the game.

* If the game automatically compresses: It'll take a long time the first time you start the game from SVN, while it's compressing and caching all the visible textures. (I think I prefer this option, since it's only a small short-term pain and has long-term benefits.)

* SVN checkouts will be a bit bigger. (If we use .png then it'll probably add about 40MB.)

Thoughts?


I think it is good overall. Not having access to the originals has bothered me. (Performance is an issue, but I think the advantages are worth it; I'll have to get a new computer soon enough anyway.)

One thing I would like to see is all the icons split out to separate files instead of the icon sheets (they can be organized in folders).


That's one place where I imagine texture atlases would be helpful - we can create the icons as separate files, then some conversion process will pack them together into a single texture for more efficient rendering once we care about that level of optimisation. That would still be possible when we only have .dds textures (you can pack them together without having to recompress as long as they're the same DXTn format), but it's probably much easier to add if we've already got an automatic texture-converting process set up.
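To make the "pack without recompressing" point concrete: DXTn data is stored as independent 4x4-pixel blocks (8 bytes each for DXT1, 16 for DXT3/5), so same-format textures can be blitted into an atlas block-by-block with no extra quality loss. A rough sketch, assuming dimensions that are multiples of 4; the function name is hypothetical:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// DXT1 stores each 4x4 pixel block as 8 bytes.
static const size_t DXT1_BLOCK_BYTES = 8;

// Copy a DXT1-compressed icon into a DXT1 atlas at (destX, destY),
// one row of blocks at a time, without touching the compressed data.
void BlitDxt1(const std::vector<uint8_t>& src, int srcWidth, int srcHeight,
              std::vector<uint8_t>& atlas, int atlasWidth,
              int destX, int destY)
{
    int srcBlocksW = srcWidth / 4, srcBlocksH = srcHeight / 4;
    int atlasBlocksW = atlasWidth / 4;
    for (int by = 0; by < srcBlocksH; ++by) {
        const uint8_t* from = &src[by * srcBlocksW * DXT1_BLOCK_BYTES];
        uint8_t* to = &atlas[((destY / 4 + by) * atlasBlocksW + destX / 4)
                             * DXT1_BLOCK_BYTES];
        std::memcpy(to, from, srcBlocksW * DXT1_BLOCK_BYTES);
    }
}
```

The same approach works for DXT3/DXT5 by swapping in a 16-byte block size, as long as source and atlas share one format.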


Sounds good to me. I haven't had much success opening .dds files (not to speak of saving them - not sure I tried that the times I actually managed to open one though =) ), so leaving the trouble to the engine/automated process seems like a huge step forward. The consistency etc. benefits sound good as well, and I think the coding time is outweighed by the ease of modding alone.


Philip, do you still have that nifty little .dds tool in the binaries, where artists can drag and drop a variety of file types onto the .exe and, with some minor command-line tweaks, it spits out a .dds file optimised for what the game uses?

For GUI graphic elements are the artists using the lossless .dds format or lossy or were those inefficient?


That's the textureconv tool which isn't particularly good at compression quality and should preferably be avoided (y)

It could fairly easily be replaced with a tool that has the same effect using the NVIDIA tools library for compression, but that wouldn't solve the problems of wanting to change the mode/mipmapping/filtering/etc after the artist has uploaded the file, so I'd still prefer the conversion to be run automatically by the engine rather than being run by artists.

Most of the GUI textures appear to be compressed.


Nobody's objected to this yet, and I think it's still a worthwhile thing to do. So I'll suggest the following:

* From now on, artists should upload all new textures as PNG, not as DDS. The GUI and actor XML files can reference the .png filenames and it should already work fine.

* Gradually, DDS files should be replaced with the original uncompressed PNG files. (Don't open the DDS and save it as PNG - only replace it when we have a higher-quality uncompressed copy.)

* (If we also have the PSD/etc source files then those should go in the art SVN repository instead, but I don't think they're so important.)

* I should do this in the next month or so.


I see no harm in leaving the current DDS files in place. Part of the reason we picked DDS in the first place was file size, so switching everything over would not only take a considerable amount of effort, but would drastically increase checkout time and the size of complete packages.

Would it be possible to implement something like textureconv that detects when PNGs and TGAs are present in the mods folder when the game starts up and then converts them to the proper DDS as necessary? The only major issue I can foresee is converting a file to DXT3 when it needed to be DXT5, etc. This could present the player with a loading bar at the beginning to let them know that the extra loading time is due to their mods being converted.


Would it be possible to implement something like textureconv that detects when PNGs and TGAs are present in the mods folder when the game starts up and then converts them to the proper DDS as necessary?
That's pretty much exactly what I'm proposing here (y) (applied to the official content too, not just user-made mods)

The harm of the current DDS files is described in the first post - compression artifacts, incorrect mipmaps, non-ideal mipmap filtering. Most textures are okay, so there's no rush to replace every single one, but some cause compatibility problems and should be fixed soon, and if we can incrementally replace many of the rest then we can improve the game's graphical quality.

Switching every texture from DDS to PNG only adds about 10% to the total checkout size. The release packages will just contain the automatically-converted DDS, not the PNG, so it wouldn't make any difference there.


Since we don't have most of the originals
I find it odd that you say we don't have the originals... I believe they should all be in the art SVN repository or files folder on the server.
SVN checkouts will be a bit bigger. (If we use .png then it'll probably add about 40MB.)
I think this was the main drawback of why we didn't do this previously. 7 years ago, internet connection speeds and bandwidth were not what they are today.

By converting existing textures to PNG, we wouldn't be solving any of those problems. Since we don't have most of the originals, the existing artifacts and ugly compression would remain; the files would just be bigger... and lossless for future changes (i.e. icon sheets).

Yeah, when we don't have a higher-quality PNG version then there's no reason to convert the current DDS files. I don't know what proportion we can't easily find the originals for, but it looks like there's quite a few PNG/TGA/BMP/PSD files in the art SVN that we can use without much bother.

Unfortunately it's at something like 100+ MB
By my count it's at something like 7.5GB for the entire art/trunk/ directory, which is indeed 100+ MB.

(Probably best to use the web interface to browse, and then only check out subdirectories that contain particularly interesting things.)


Hi guys. I just got A0D built and working on Ubuntu on VBox on Vista, but ran into a huge performance hit due to the missing terrain XML files for the mmap values for almost all of the terrain tiles for the public mods. Presumably because VBox's OGL driver decided to use the textures from system memory rather than video memory due to the minimap's call into CTextureEntry::GetBaseColor which used glGetTexImage to get a 1x1 scaled base color for the tile's texture.

So, would it be possible to get the XMLs automagically generated for the artists, either in the map editor or via a tool called during the packaging process, just to make sure none of them are missing? What I did was add code to generate the missing XML in-game; maybe that's an option too, since you're considering autogenerating DDS files?

Anyway, thanks for making all of this open-sourced/free/etc., btw. I'm having lots of fun making small tweaks and hope to soon be up to speed enough in order to begin helping out dev-wise.


Hmm, I wasn't aware of that problem, but I see a comment in the code saying "this is horribly inefficient (taking 750ms for 10 texture types)" and I can imagine some drivers will be particularly bad at these operations. It looks like this only happens on first load, so it shouldn't affect runtime performance - is that right?

It might be nicer to continue getting the default colour from the texture data rather than requiring an XML file for every single texture, since the default is often reasonable. That sounds like it should be straightforward with a slightly rejigged texture loading system - it can just grab a pixel from the lowest mipmap level (since we'll guarantee all terrain textures are DDS with explicit mipmaps) before we send the data to OpenGL. That way we can avoid needing an extra file for metadata about each texture. I'll see if I can do this when I change that code.
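Locating that lowest mip level is simple arithmetic over the DDS layout (levels are stored largest-first, each rounded up to whole 4x4 blocks). A sketch of the idea, with hypothetical function names; the base colour here is approximated by averaging the two RGB565 endpoint colours of the 1x1 level's DXT1 block, which avoids decoding the whole block:

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>

// Byte offset of mip `level` within the compressed data: levels are stored
// largest first, each max(1, w>>l) x max(1, h>>l), rounded up to 4x4 blocks.
size_t MipOffset(int width, int height, int level, size_t blockBytes)
{
    size_t offset = 0;
    for (int l = 0; l < level; ++l) {
        int w = std::max(1, width >> l), h = std::max(1, height >> l);
        offset += ((w + 3) / 4) * ((h + 3) / 4) * blockBytes;
    }
    return offset;
}

// Rough base colour from the smallest (1x1) mip of a DXT1 texture: decode
// the block's two RGB565 endpoints and average them.
void BaseColorFromSmallestMip(const uint8_t* dxtData, int width, int height,
                              int mipCount, uint8_t rgb[3])
{
    const uint8_t* block = dxtData + MipOffset(width, height, mipCount - 1, 8);
    uint16_t c0 = block[0] | (block[1] << 8);  // endpoints are little-endian
    uint16_t c1 = block[2] | (block[3] << 8);
    auto expand = [](uint16_t c, int shift, int bits) -> int {
        int v = (c >> shift) & ((1 << bits) - 1);
        return (v * 255) / ((1 << bits) - 1);   // expand to 0..255
    };
    rgb[0] = (expand(c0, 11, 5) + expand(c1, 11, 5)) / 2;
    rgb[1] = (expand(c0, 5, 6) + expand(c1, 5, 6)) / 2;
    rgb[2] = (expand(c0, 0, 5) + expand(c1, 0, 5)) / 2;
}
```

Since this reads the file data before it's uploaded, it never touches glGetTexImage and so sidesteps the VBox readback problem entirely.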


It looks like this only happens on first load, so it shouldn't affect runtime performance - is that right?

When running within VBox, it affects all renderings after the 1st game load. FPS drops from a playable 20~ (w/shadows+fancy-water) to 2 (w/o shadows or fancy water), whether in-game or back to the main menu or map selection screens. Everything's fine as long as the glGetTexImage isn't called.

The VBox driver also crashed the app and sometimes the VM itself on the minimap's glInterleavedArrays call when using color, e.g., GL_C4UB_V2F or GL_C3F_V3F. GL_V2F worked fine.

Overall, it's just bad driver issues I had to work around to do some dev on the game.


Ah, so the call to glGetTexImage causes some state change in the texture that makes all subsequent usages slow? I guess that makes sense if it's a VM forwarding its GL calls to the host machine (using Chromium?), since reading back texture data would break its attempts to optimise rendering by storing all the texture data on the host. Sounds like it's possibly an architectural feature rather than a driver bug, so it's worth fixing our code. (The crashes sound more like bugs, though). Presumably the same problem will be triggered by Atlas's terrain texture previews (in GetTerrainGroupPreviews), so ideally we should try to avoid that too.

Yes, it's true that we have many of the originals.. but not all.
Yes, and that's fine (y). When we don't have the original we don't need to change anything. We'll still get improved quality for the hundreds where we have the originals, and for all textures added in the future.

  • 2 weeks later...

Starting to implement some of this now. Current plan is to add a new texture manager, which can construct a texture object given a filename and set of texture parameters (wrap modes, etc), and then can perform the actual loading (possibly involving slow conversion/compression/packing/caching based on the source textures plus some metadata files) either synchronously or asynchronously (not multithreaded, just spreading the set of textures over multiple frames). The asynchronous bit means you won't be blocked for a minute when first starting on an empty cache - it can display a placeholder and let you test the game while it's doing the compression in small bursts between frames. Hopefully that'll reduce the biggest pain with this approach.
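The "spread over multiple frames" part might look something like this single-threaded job queue with a per-frame time budget (names are illustrative, not the actual texture manager API):

```cpp
#include <chrono>
#include <deque>
#include <functional>

// Conversions are queued and processed in small bursts between frames,
// bounded by a per-frame time budget, so an empty cache costs many short
// pauses instead of one minute-long stall.
class TextureConverterQueue {
public:
    void Enqueue(std::function<void()> convertJob)
    {
        m_jobs.push_back(std::move(convertJob));
    }

    // Called once per frame from the main loop. Runs queued jobs until the
    // budget is exhausted; always makes progress on at least one job.
    // Returns how many jobs ran.
    int ProcessBudget(double budgetMs)
    {
        using clock = std::chrono::steady_clock;
        auto start = clock::now();
        int ran = 0;
        while (!m_jobs.empty()) {
            m_jobs.front()();   // e.g. compress one texture and cache it
            m_jobs.pop_front();
            ++ran;
            double elapsed = std::chrono::duration<double, std::milli>(
                                 clock::now() - start).count();
            if (elapsed >= budgetMs)
                break;
        }
        return ran;
    }

    size_t Pending() const { return m_jobs.size(); }

private:
    std::deque<std::function<void()>> m_jobs;
};
```

While a texture's job is still pending, the manager would hand out the placeholder; once the job runs, the real texture replaces it.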


Sounds good!

I am wondering about the synchronous bit, though - is there any reason to tell code whether the texture has already been loaded? There is the mipmap color hack, but we shouldn't be doing that anyway for performance reasons.

Instead, I think it'd make sense to always load asynchronously (ideally in a background thread), return a handle that is immediately valid, and at a defined point in Frame(), upload the textures that have just finished and update the underlying "texture" object. (It'd be even nicer to do the uploading in the background thread, but multithreaded OpenGL has some gotchas and is probably better avoided.)

A thread is definitely helpful in that it'd reduce the performance impact of loading new stuff (which happens at runtime) - regardless of whether they need extra (de)compression. That's fine on single-processor systems too, since it is idle most of the time.


For normal players (who will have pre-converted cache files), I think rendering the placeholder texture (currently solid black) for a few frames would be ugly and disturbing, and worse than just pausing for a fraction of a second. We don't know every single texture that a frame will use until we actually render that frame, so the renderer will have to ask for the texture then synchronously wait if it's not loaded yet.

To fix the performance impact of loading new stuff, I think it's better to focus on prefetching. Players are very likely to see almost all the terrain and all the units in the world, by the end of a game, so we'll have to load it eventually and design it to fit in their memory. So we might as well load it earlier - the first frame can render as quickly as it does now, and then there'll be some background activity for a while as it's prefetching the rest, and then it should be perfectly smooth when you scroll around. (Prefetching involves looping over all terrain tiles, and all entities in the world, and peeking in the command queue for new entities that are about to be trained, etc, and pushing all the needed resources onto a queue (and maybe doing part of the processing in a background thread) and spending up to n msecs per frame loading them into the main thread). Then it should be very rare for the renderer to need a resource that hasn't been loaded yet, so it doesn't matter that that's synchronous, and in those rare cases it'll just pause for a tiny bit instead of rendering something incorrect.
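The prefetching pass described above could be as simple as a deduplicating queue that the main loop drains a little each frame (hypothetical names, not engine code):

```cpp
#include <queue>
#include <set>
#include <string>

// Walk everything in the world that will eventually be rendered (terrain
// tiles, entities, queued training orders) and queue each distinct texture
// once, skipping duplicates, so the renderer rarely hits a cold texture.
class Prefetcher {
public:
    void Request(const std::string& textureName)
    {
        if (m_seen.insert(textureName).second)  // only queue each texture once
            m_queue.push(textureName);
    }

    // Each frame the main loop pops a few entries and loads them, within a
    // time budget. Returns false when nothing is left to prefetch.
    bool NextToLoad(std::string& out)
    {
        if (m_queue.empty())
            return false;
        out = m_queue.front();
        m_queue.pop();
        return true;
    }

private:
    std::set<std::string> m_seen;
    std::queue<std::string> m_queue;
};
```

Deduplication matters here because hundreds of entities share a handful of textures, so the actual prefetch queue stays short.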

(There's some special cases like the terrain previews in Atlas which need to extract the texture data after it's loaded, and making that asynchronous would be nice but probably a lot of complexity since it's crossing threads and DLLs and languages (C++ to JS) and I'd rather not bother yet. So there should be a synchronous API at least for that case.)

Is it feasible to load files and textures in a background thread with the current lib code? I thought that was all non-threadsafe and probably very hard to fix. We could do the texture compression in a thread but that wouldn't be used by normal players and doesn't seem worth the complexity.


I have a surprisingly strong feeling that the synchronous API would be a mistake - perhaps because I've always found this topic interesting and have been thinking about it off and on.

Is it feasible to load files and textures in a background thread with the current lib code? I thought that was all non-threadsafe and probably very hard to fix. We could do the texture compression in a thread but that wouldn't be used by normal players and doesn't seem worth the complexity.

It's certainly feasible. The biggest culprit is probably the file cache, that'd blow up fairly quickly.

It'll need to be locked, but that's certainly doable and I don't foresee too many problems.

I think rendering the placeholder texture (currently solid black) for a few frames would be ugly and disturbing, and worse than just pausing for a fraction of a second. We don't know every single texture that a frame will use until we actually render that frame, so the renderer will have to ask for the texture then synchronously wait if it's not loaded yet.

Let's take a step back here. The problem is that the renderer suddenly decides it needs a texture.

If we have a fairly studly prefetcher (a non-trivial task), it might reduce the load, but surely it won't always keep up (either due to high system load, e.g. antivirus, or lots of in-game activity, e.g. a transport unloading new stuff or lots of training or viewing a new base for the first time, etc.).

Therefore, we have to be prepared for the worst case, i.e. wait for everything to load, or provide for placeholders. What happens if the load takes a while (HD is busy, conversion to do, temporary read failure while the HD allocates a replacement sector, ...)? If the API is synchronous, the main loop (renderer) is frozen and we're unreactive to input and also network messages (and other players may think we lagged out).

Let's instead consider an asynchronous API that provides placeholders and swaps them out transparently, and provides a notification that the loader is still working on stuff that's needed now (as opposed to just prefetching). With this notification, you could achieve the same perceived effect as the synchronous API (graphics are paused for a bit), but you're still running the main loop and are responsive to network/input events (boss key ;) ).

When the prefetcher can't keep up, we could reduce the wait time by leaning a little bit on the placeholders. I agree that black stuff flickering in would be distracting, but it needn't be that bad. A nice example is Jak & Daxter, which makes the player stumble if the loader isn't keeping up (silly idea for us: cover the terrain with clouds :)). Instead, we could go with a more neutral color, or a rough approximation of the actual hue. That of course suggests a level-of-detail scheme, where a really low-res texture might be loaded first and replaced by the full version when available.

Let's forget about fancy placeholders not being discernible, that's probably too much work. The important point is that an async API allows the same behavior as the sync version, but avoids blocking and has better information (how many files remain, vs. always having to wait until the next texture has been read). It seems a shame to have this nice centralized loader, only to waste it via an API that just has the renderer call SyncLoadTexture().
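The asynchronous API being argued for here might look roughly like the following: requesting a texture always returns an immediately usable handle (initially a placeholder), and the loader reports how many needed-now requests are still outstanding, so the game can pause the scene while staying responsive to input. All names are illustrative:

```cpp
#include <map>
#include <string>

enum class TexState { Placeholder, Ready };

class AsyncTextureLoader {
public:
    // Always returns a valid handle immediately; it renders as a
    // placeholder until the background load completes.
    int Request(const std::string& name)
    {
        (void)name;  // a real loader would kick off an async read here
        int handle = m_next++;
        m_state[handle] = TexState::Placeholder;
        ++m_outstanding;
        return handle;
    }

    TexState State(int handle) const { return m_state.at(handle); }

    // How many requested-but-not-loaded textures remain. The game can show
    // "loading" instead of rendering placeholders while this is > 0, while
    // the main loop keeps handling input and network messages.
    int Outstanding() const { return m_outstanding; }

    // Called by the loader at a defined point in the frame when a texture
    // has finished loading/converting.
    void MarkReady(int handle)
    {
        if (m_state[handle] == TexState::Placeholder) {
            m_state[handle] = TexState::Ready;
            --m_outstanding;
        }
    }

private:
    int m_next = 0;
    int m_outstanding = 0;
    std::map<int, TexState> m_state;
};
```

The key difference from a synchronous API is that `Outstanding()` lets the caller decide whether to wait, render placeholders, or something in between, instead of the loader imposing a blocking wait.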

Players are very likely to see almost all the terrain and all the units in the world, by the end of a game, so we'll have to load it eventually and design it to fit in their memory.
Disagree - only one of several terrain biomes would be active at a time, non-water maps skip all the naval stuff, and we have a rather large number of units/buildings, of which 50% are probably unused (think back to our game a few weeks back - did we really see even a quarter of ALL units and buildings?).
