
Progress reports on funded work



Shouldn't the red zones be less angular at this point? At the corners of the fence, building, and coast, shouldn't the impassable-navcell shape resemble an arc?

The corners of the impassability fringe could be rounded; loci rather than just grown bounds?

Yeah, it would probably be a bit nicer to do rounded corners, though it shouldn't affect the correctness of the pathfinding (it just might look a bit nicer when units move in smoother paths around corners). Currently I'm effectively doing a convolution with a box shape over the terrain passability grid; convolution with a circle would give rounded corners around the terrain (though I guess it'd be a bit slower since the circle isn't a separable filter; I need to measure it to see whether it matters). Static obstruction passability is done by rasterising a square (after adding the clearance radius to the width/height), and could be fairly easily changed to rasterise a rounded square instead.
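
For concreteness, the grid-growing looks roughly like this (just a sketch over a toy boolean grid, not the engine's actual code): the square case corresponds to the current box convolution, and the circle case is the rounded variant.

#include <vector>

// Sketch only: `Grid` is a toy stand-in for the passability grid. Growing
// blocked cells by Chebyshev distance r reproduces the box convolution;
// switching to a Euclidean distance test gives the rounded corners.
struct Grid {
    int w, h;
    std::vector<bool> blocked; // row-major, true = impassable
    bool At(int i, int j) const { return blocked[j * w + i]; }
};

Grid Dilate(const Grid& in, int r, bool rounded)
{
    Grid out = in; // originally blocked cells stay blocked
    for (int j = 0; j < in.h; ++j)
        for (int i = 0; i < in.w; ++i)
        {
            if (in.At(i, j))
                continue;
            // Block this cell if any blocked cell lies within radius r of it
            for (int dj = -r; dj <= r && !out.At(i, j); ++dj)
                for (int di = -r; di <= r; ++di)
                {
                    int si = i + di, sj = j + dj;
                    if (si < 0 || sj < 0 || si >= in.w || sj >= in.h)
                        continue;
                    if (!in.At(si, sj))
                        continue;
                    if (!rounded || di * di + dj * dj <= r * r)
                    {
                        out.blocked[j * in.w + i] = true;
                        break;
                    }
                }
        }
    return out;
}

The square case can be split into a horizontal pass then a vertical pass (that's the separability), whereas the circle test as written costs O(r²) per cell, which is why it needs measuring.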

Could the foot of the siege tower be non-axis-aligned?

No. It's not really axis-aligned anyway, since it's sort of a circle (though sometimes sort of a square, and it gets rendered as an axis-aligned square) - the pathfinder and unit-movement code assumes units can turn on the spot instantly, and won't get stuck when turning, so orientation is ignored entirely. (This should only really matter for things like ships, which are long and narrow and slow, and they probably need a largely separate implementation of the movement code anyway to deal sensibly with momentum and ramming and turning circles etc.)


A slight concern: what about horses? Are they assumed to be able to turn 90° in an instant?

Yes, same as what's currently implemented (and same as in AoM/AoE3 etc). (The 3D model only rotates at a constant slow rate, which makes the instant turns much less noticeable, but it's still instant as far as the simulation code is concerned.)

They will one day be able to charge, how to implement this?

Maybe with some simple hack (like applying an attack bonus if the unit was moving recently), or maybe a special movement mode that's not based on pathfinding and just moves in a straight line until it hits something (which is fine as long as we can fall back on proper pathfinding when the unit has to navigate a tricky environment without getting stuck).
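
The straight-line idea would be something like this (an entirely hypothetical sketch; every name here is made up, and the real thing would live in the unit-motion code):

#include <algorithm>
#include <cmath>
#include <functional>
#include <optional>

// Hypothetical charge mode: step directly towards the target, and return
// nullopt (meaning "fall back to proper pathfinding") as soon as the straight
// line is blocked. `isPassable` stands in for the real obstruction test.
struct Vec2 { float x, y; };

std::optional<Vec2> ChargeStep(Vec2 pos, Vec2 target, float speed, float dt,
                               const std::function<bool(Vec2)>& isPassable)
{
    float dx = target.x - pos.x, dy = target.y - pos.y;
    float len = std::sqrt(dx * dx + dy * dy);
    if (len < 1e-6f)
        return pos; // already at the target
    float step = std::min(speed * dt, len);
    Vec2 next = { pos.x + dx / len * step, pos.y + dy / len * step };
    if (isPassable(next))
        return next;     // keep charging in a straight line
    return std::nullopt; // hit something: caller reverts to pathfinding
}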


Day 15

Worked on various minor fixes to make the pathfinding less broken, and updated the JPS A* to work with the real technical requirements (correct diagonal movement, non-point goal areas, etc), so it's nearly possible to test performance in a proper gameplay environment.

Some more thoughts on JPS:

Intuitively (i.e. very informally and maybe wrongly), JPS works because of two observations:

* Shortest paths will wrap tightly around the corners of obstructions. (Imagine the path is a piece of string from the source to the destination, winding around various obstructions, then pull the string tight. It will get pulled until it's going in straight lines between the corners of the obstructions. If it's not already tight around the corners, it's not a shortest path.)

* In a grid (only allowing horizontal/vertical/diagonal steps), when there are multiple equal-length paths between the same two points, you have to make an arbitrary choice between them, and you can always prefer the one that moves diagonally as early as possible.

Therefore, if the pathfinding algorithm is extending a path in a horizontal/vertical direction, it can continue in a straight line until it reaches a corner to wrap around. If there is no corner, there's no point turning diagonally or 90 degrees from the current direction, because it would always be possible (and preferable) to have turned diagonally on an earlier step. (That continuance in a straight line is the "jump" in "JPS", and is what makes JPS a worthwhile optimisation.)

If the algorithm is extending a path in a diagonal direction, it can continue in that direction; or it can turn 45 degrees and go horizontally/vertically (since that might still be a most-preferred shortest path); or it can wrap around a corner and turn 90 degrees.
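
To make the "jump" concrete, a horizontal scan looks roughly like this (a sketch against a toy grid, not the actual implementation): keep moving until the path dies, reaches the goal, or passes a corner that a shortest path might need to wrap around.

#include <optional>
#include <vector>

struct Grid {
    int w, h;
    std::vector<bool> open; // row-major, true = passable
    bool IsPassable(int i, int j) const {
        return 0 <= i && i < w && 0 <= j && j < h && open[j * w + i];
    }
};

struct Point { int i, j; };

// Scan from (i, j) in horizontal direction di (+1 or -1). Returns the next
// jump point, or nullopt if the scan hits a wall first. (This uses the
// original paper's connectivity rule; the stricter rule described below
// shifts where the jump points occur.)
std::optional<Point> JumpHorizontal(const Grid& g, int i, int j, int di, Point goal)
{
    while (true)
    {
        i += di;
        if (!g.IsPassable(i, j))
            return std::nullopt;  // ran into a wall: dead end
        if (i == goal.i && j == goal.j)
            return Point{ i, j }; // reached the goal
        // "Forced neighbour": a blocked cell beside us with an open cell
        // diagonally ahead of it means a shortest path may need to turn here,
        // so this is a jump point worth adding to the A* open list.
        if ((!g.IsPassable(i, j - 1) && g.IsPassable(i + di, j - 1)) ||
            (!g.IsPassable(i, j + 1) && g.IsPassable(i + di, j + 1)))
            return Point{ i, j };
    }
}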

The definition of "corner" depends on how you define connectivity between grid cells. The original JPS paper says that any horizontally/vertically/diagonally adjacent passable cells are connected. Here are some diagrams of the shortest paths (in blue) from the red blob to every other cell, where black cells are impassable:

pathfinder9.1.png

pathfinder9.2.png

In the second diagram, note that the paths can squeeze through the diagonal gap between the two impassable areas. That's a bit nasty because unit movement is not tile-based: units move along the path in arbitrary-length steps, and they might end up standing precisely on the impassable tiles' corners or (due to non-infinitely-precise maths) actually stand inside an impassable tile, which is bad. But we don't want to forbid diagonal movement entirely, because it allows much higher quality paths than purely horizontal/vertical movement.

To fix that, we can declare that diagonal moves are only allowed between two passable cells (i, j) and (i+1, j+1) if the other two adjacent cells (i+1, j) and (i, j+1) are also passable. This means the diagonals do not connect any two cells unless they are already connected by horizontal/vertical steps (which incidentally simplifies other tasks like reachability testing). Now the shortest paths are like:

pathfinder9.3.png

pathfinder9.4.png

This means the paths always stay at least half a tile away from the obstructions, so we won't suffer from numerical precision problems as before. Interestingly, the path around the corner in the very first diagram has two turns (from horizontal to diagonal, then to vertical), whereas in the new approach it only has one turn (from horizontal to vertical), so the new version might result in fewer processed A* nodes and work a bit faster.

The changes to the original JPS algorithm are straightforward (unless I'm misunderstanding it and introducing bugs) - horizontal/vertical jump points occur one cell later, and have 2 forced neighbours instead of 1, while diagonals have no forced neighbours at all, so it just needs small tweaks to the code.
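
For concreteness, the stricter connectivity rule amounts to a three-cell test before allowing any diagonal step (a sketch, with `passable` standing in for the real navcell lookup):

#include <functional>

// A diagonal step from (i, j) to (i+di, j+dj), with di, dj = +1 or -1, is
// only allowed if the two cells it would squeeze between are passable too,
// so diagonals never connect anything that horizontal/vertical steps don't.
bool CanMoveDiagonally(int i, int j, int di, int dj,
                       const std::function<bool(int, int)>& passable)
{
    return passable(i + di, j + dj)  // the destination cell
        && passable(i + di, j)       // the two cells the move
        && passable(i, j + dj);      // would otherwise cut between
}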


Keep up the good work, Philip.

It seems tedious to do, but the potential advantages make it worth it. Also remember, if you succeed you will be the first open source contributor to program a decent RTS AI. :banana: In my eyes that is very significant groundwork for all future open source RTS games.

History is yours, my friend :cheers:



It's nice to see the game improve and advance at a steady rate every day thanks to your hard work, Philip :)

And since the Pledgie donation campaign has finished at least $13 over its goal, I wonder: will you continue to work full-time for another month?

Yes, great to see another Pledgie campaign successfully funded.


Also remember, if you succeed you will be the first open source contributor to program a decent RTS AI. :banana: In my eyes that is very significant groundwork for all future open source RTS games.

Could the new pathfinder be released as a standalone library, so that other open source projects can use it (and, eventually, also improve it for us :))?


Well, either way the code will always be there for others to use.

Yes, but this way it's more difficult to keep them in sync; they would eventually diverge, and improvements would become incompatible. Obviously, developing a library that can be used by different projects is more difficult.

In other news, Spring 0.86 was recently released with many pathfinder improvements: 0.86 changelog.


Not sure how relevant it is, but I thought I'd forward this comment from the ModDB repost of the day 15 writeup:

The other, less computationally intensive approach is to just make sure that your obstacle assets have a 1-cell passable border around them, to avoid that annoying diagonal pass-through. This is what the old Command & Conquer games did... even things like StarCraft and TA do it.

Granted, you've got more processing time to work with than they did, so it'll be interesting to see how it works out. :-D

Nice bit of work and enjoyed the post, will be following this game.



Days 16 and 17

Trying to do something other than pathfinding for a bit, since I'm not very good at concentrating on that (I kept getting distracted by doing the release, reviewing, life, and other stuff).

The game's current renderer is hard-coded to support three different materials:

* Plain old diffuse maps - a mesh has a single RGB texture, which is just multiplied by lighting and shadows etc and then drawn.

* Diffuse maps plus player-colouring - a mesh has an RGBA texture, where the A channel determines where to use the RGB channels and where to use the player colour (blue, red, etc) instead. (Most units use this.)

* Diffuse maps plus alpha-blending - a mesh has an RGBA texture, where the A channel determines where the mesh should be drawn as opaque or transparent or semi-transparent. (We use this a lot for trees and other vegetation.)

Also, each model is rendered in several different modes:

* The basic mode that draws a visible model onto the screen.

* Shadow-map generation: the scene is rendered from the direction of the sun, computing only the depth (i.e. distance from sun) of each pixel, not the colour, to support shadowing calculations.

* Silhouette blocking: to support silhouettes, i.e. units being rendered as solid colour when behind a building/tree/etc, the buildings/trees/etc are drawn to a 1-bit stencil buffer (no colour, it just wants to know which pixels were covered).

* Silhouette display: after rendering the blockers, it then renders the units that will display a silhouette, as a solid colour, using the depth and stencil buffers so it's only drawn when behind a blocker.

Different materials behave differently in each mode. E.g. in shadow-map generation we ignore colour, so non-alpha-blended models don't have to load their texture at all, which can improve performance; but alpha-blended models do have to load their texture so that the transparent areas don't cast a shadow.

The renderer therefore has a "shader effect" per mode per material, where a shader effect defines a "shader technique" that defines how to render a mesh (i.e. what vertex data it needs, what textures it needs, what computation to perform to produce the colour of each pixel, what OpenGL state to set up (blending, depth test, masks), etc). The renderer stores a separate list of models for each material, and in each mode it renders each of those lists with the appropriate technique.

Alpha-blending is special because you have to draw polygons in order from furthest to nearest to get correct rendering. Graphics cards store a depth buffer so that you can draw opaque objects in any order; if you try to draw a pixel that's behind a previously-drawn nearer pixel, it will be rejected (so you'll end up with only the nearest object visible). If the pixel in front is meant to be semi-transparent, you actually do want to draw behind it, but the hardware doesn't store enough data per pixel to detect that case. In practice, you have to sort transparent objects by distance from the camera and draw each one twice to get it working well enough, which is not fast.
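
The sorting half of that workaround is simple enough (a toy sketch; `Model` here is a stand-in for the engine's model class):

#include <algorithm>
#include <vector>

struct Vec3 { float x, y, z; };
struct Model { Vec3 centre; }; // stand-in for the real model class

float DistSq(Vec3 a, Vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

// Draw the furthest transparent model first, so nearer semi-transparent
// pixels blend over everything behind them.
void SortBackToFront(std::vector<Model*>& models, Vec3 camera)
{
    std::sort(models.begin(), models.end(),
        [&](const Model* a, const Model* b) {
            return DistSq(a->centre, camera) > DistSq(b->centre, camera);
        });
}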

As an extra complication, there's an "instancing" optimisation for non-animated models. Animated models store a separate copy of their mesh data for every unit (since we compute the vertex positions on the CPU, and each unit will be in a slightly different animation state, so they can't share), but for non-animated models we only need to store a single copy of the mesh data and can easily tell the GPU to translate/rotate it as necessary for each instance, which saves memory and helps performance.

As yet another complication, we want to maintain support for old graphics cards that don't support shaders at all (or that have unusably buggy or slow support), since there's a non-trivial number of them. Every shader effect actually defines three techniques: one that doesn't use real shaders (for maximum compatibility), one that uses GLSL shaders (for compatibility with OpenGL ES, currently just for Android), and one that uses GL_ARB_fragment_program shaders (for typical use, since GLSL is less widely and more buggily supported than ARB shaders).

The problem with this rendering system is its inflexibility. Say we wanted to add specular lighting to some models to make them look shiny - that would be a new material, and it would require a significant number of changes to the C++ code and a new shader effect (with non-shader/GLSL/ARB variants). But actually we'd want to combine the specular lighting with all the other materials: diffuse+specular, diffuse+playercolour+specular, diffuse+alphablend+specular. Add in the instancing vs non-instancing versions of each of those, and the number of combinations explodes and becomes unmanageable.

The most useful new material (and what prompted me to work on this) would be one that uses alpha-testing instead of alpha-blending: that is, it has a texture with a 1-bit alpha channel and every rendered pixel is either fully opaque or fully transparent. (The image here gives an example - compare the sharp edges of the tree on the left, vs the softly faded edges of the tree on the right). That means you avoid all the ordering problems of semi-transparent blending, so performance can be much better. If we could use that for most of the game's vegetation, framerates should improve significantly. The compromise is that artists probably have to be more careful to make it look good - light fluffy branches are generally out, but you can still do things like this/this/this/this/this/this etc (if I'm not mistaken) without alpha-blending, and it's what basically every other game seems to do.
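
At the GL level, the alpha-tested material would come down to something like this (a sketch of the fixed-function state, which also matches the renderer's compatibility path; the 0.5 threshold is an arbitrary choice):

#include <GL/gl.h>

// With alpha testing, each pixel is either kept or discarded outright, so
// there's no blending and no need to sort anything by depth.
void SetupAlphaTestState()
{
    glDisable(GL_BLEND);          // no semi-transparency at all
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GEQUAL, 0.5f); // keep pixels with alpha >= 0.5, reject the rest
    glDepthMask(GL_TRUE);         // surviving pixels are opaque, write depth normally
}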

As well as materials, we sometimes need to render models in slightly different modes. E.g. if you're constructing a building and dragging the placement preview object around the screen, it looks a lot like the normal rendering of a building, but if you drag it into fog-of-war or shroud-of-darkness then it shouldn't turn grey/black like normal buildings do (it should remain visible and bright red to indicate you can't build there). That would require a change to the shader code used to render that model (to stop it applying the LOS texture), but we currently have no way to implement that other than creating yet another variant of every single material and shader effect.

To improve on that, I've been changing the renderer to work more flexibly, and to be more data-driven rather than code-driven.

Every model is associated with a single material (via its actor XML file). The material refers to a shader effect, and also defines a set of, uh, "definitions". (Need a better name for them...). E.g. the material XML file might say


<material>
<shader effect="model"/>
<define name="USE_PLAYERCOLOR" value="1"/>
</material>

The material might include some dynamically-generated definitions too, e.g. "PLACEMENTPREVIEW=1". The C++ rendering code will have its own definitions, e.g. "MODE_SHADOWCAST=1" when it's rendering the shadow map. It will collect all models into a single list, regardless of material. Then, for each model, it combines the mode definitions with the material definitions, and loads the material's shader effect.

The "model.xml" shader effect file might say


<effect>
<technique>
<require context="MODE_SHADOWCAST || MODE_SILHOUETTEOCCLUDER"/>
<require shaders="glsl"/>
<pass shader="glsl/model_solid"/>
</technique>
<technique>
<require shaders="glsl"/>
<pass shader="glsl/model_common"/>
</technique>
</effect>

so it will select the "model_solid" shader if one of the relevant modes was defined, else it'll pick the next suitable technique. Then the shader might say


...
#ifdef USE_PLAYERCOLOR
color *= mix(playerColor, vec3(1.0, 1.0, 1.0), tex.a);
#endif
...

which depends on the USE_PLAYERCOLOR defined by the material.

So the renderer loads the appropriate technique for each model based on the material and mode. Then it can group together models that use the same shader (to improve performance by minimising state changes), then group by mesh, then by texture, and render them all.
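
The grouping can be as simple as a lexicographic sort over (shader, mesh, texture) handles, so models sharing state end up adjacent (a sketch; the handle fields are hypothetical):

#include <algorithm>
#include <tuple>
#include <vector>

struct ModelEntry {
    int shaderId, meshId, textureId; // hypothetical small integer handles
    // ... plus whatever is needed to actually submit the draw call
};

// Sorting by (shader, mesh, texture) gives exactly the shader-then-mesh-
// then-texture grouping, minimising state changes between consecutive draws.
void SortForRendering(std::vector<ModelEntry>& models)
{
    std::sort(models.begin(), models.end(),
        [](const ModelEntry& a, const ModelEntry& b) {
            return std::tie(a.shaderId, a.meshId, a.textureId)
                 < std::tie(b.shaderId, b.meshId, b.textureId);
        });
}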

There's lots of caching so that loading shaders for every model for every mode, every frame, has a very small cost. It's not perfectly fast but it seems no worse (and sometimes better) than the old renderer implementation, and it allows much more flexibility, which is nice.

Still to do: Clean up the code; merge with the non-shader-based code as much as possible; add the new materials (at least alpha-testing); document stuff; then it should probably be alright.


So there is an XML file for every model, and an XML file for every material? Is that different from the current setup?

That's the same - each model has an actor XML file (of which we have hundreds) that points at a material XML file (currently there's about four). The difference now is that the material XML file explicitly points at the shader effect XML file, whereas the old renderer had C++ code that picked which shader effect to use for each material.

Also, I noticed in that Unity link they mentioned alpha blending, but with alpha testing to minimise the sorting problems. Is that viable here?

That's what we currently do. The downside is that you have to draw every model twice - the first pass draws with alpha testing, and the second pass with alpha blending. That means twice as many draw calls and twice as many polygons to render, which hurts performance when there's a lot of transparent models.
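
In GL terms, the current two-pass scheme is roughly the following (a sketch, with DrawModel standing in for actually submitting the mesh):

#include <GL/gl.h>

void DrawModel(); // stand-in for submitting the mesh geometry

void DrawTransparentModelTwoPass()
{
    // Pass 1: the solid core, alpha-tested, writing the depth buffer
    glDisable(GL_BLEND);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GEQUAL, 0.5f);
    glDepthMask(GL_TRUE);
    DrawModel();

    // Pass 2: only the semi-transparent fringe that pass 1 rejected,
    // blended, without depth writes
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glAlphaFunc(GL_LESS, 0.5f);
    glDepthMask(GL_FALSE);
    DrawModel();

    glDisable(GL_ALPHA_TEST); // restore defaults
    glDepthMask(GL_TRUE);
}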


Day 18

I finished and committed those renderer changes (74 files changed, 2315 lines inserted, 3503 lines deleted).

In general, there ought to be no visible changes. The exception is that I fixed the non-shader-based rendering mode so that its lighting matches the shader mode - the difference is that it allows the sunlight to be brighter than pure white (by a maximum factor of 2). I've also been experimenting with specular lighting, so this seems like a good opportunity to show some vaguely pretty pictures of the lighting system. (This is all very technically simple - other games have been doing this for most of a decade, but at least we're advancing a little bit.)

The first component is the basic textures for models and terrain: (click images for larger higher-quality versions)

lighting1.tex.jpg

Then there's the diffuse lighting - surfaces that are facing towards the sun are bright, surfaces that are perpendicular to the sun or facing away are dark:

lighting1.diffuse.jpg

The scenario designer can control the colour and brightness of the sun, which affects this diffuse lighting.

Surfaces that aren't lit directly by the sun shouldn't be totally black - they'd still be lit by light bouncing off nearby objects. As a (very rough) approximation, we add an ambient lighting component:

lighting1.ambient.jpg

The scenario designer can control the colour and brightness again, with separate values for terrain and for models to give them more control over the final appearance.

Finally there's the shadows:

lighting1.shadow.jpg

All these components get multiplied and added to produce the final result:

lighting1.comb1.jpg

This is what the game currently looks like. If you compare it against the first image, you can see that some parts of the scene are brighter than the unlit textures - that's what happens when the ambient plus diffuse lighting is brighter than pure white. (OpenGL generally clamps colours to the range [0, 1] so you can't exceed white, so what we actually do is compute all the ambient and diffuse lighting at 50% of its desired value and then multiply everything by 2 just before drawing it onto the screen.)
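
Written out, the per-pixel combination (including the 50%/x2 trick) is roughly the following; Vec3 is a toy colour type here and the real computation of course happens on the GPU:

struct Vec3 {
    float r, g, b;
    Vec3 operator+(const Vec3& o) const { return { r + o.r, g + o.g, b + o.b }; }
    Vec3 operator*(const Vec3& o) const { return { r * o.r, g * o.g, b * o.b }; }
    Vec3 operator*(float s) const { return { r * s, g * s, b * s }; }
};

// final = texture * (ambient + diffuse * shadow), with the lighting computed
// at half strength and doubled at the end so it can represent up to 2x white
// inside OpenGL's clamped [0, 1] range.
Vec3 ShadePixel(Vec3 texture, Vec3 ambient, Vec3 diffuse, float shadow)
{
    Vec3 lighting = ambient * 0.5f + diffuse * 0.5f * shadow;
    return texture * lighting * 2.0f;
}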

I also added some shader code to do specular lighting, to simulate the sun reflecting off shiny surfaces. For testing I've applied it to every model, which looks like:

lighting1.spec.jpg

and that gets added to all the previous lighting so you end up with:

lighting1.comb2.jpg

Unlike diffuse lighting, specular depends on the position of the camera, so it looks better in motion. Also it obviously shouldn't be applied to every model, and should preferably be controlled by a new specular texture for models that want it (so e.g. the metal parts of a soldier's texture could be marked as highly reflective and the cloth parts as non-reflective), but that should be easy to add thanks to the changes I made to the renderer, and then it might allow some nicer artistic effects.
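
The specular term itself is small - something like this Blinn-Phong-style sketch (the real shader may differ in details), which also shows where the per-pixel normalisation and exponentiation mentioned below come from:

#include <algorithm>
#include <cmath>

struct Vec3f { float x, y, z; };

float Dot(Vec3f a, Vec3f b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3f Normalize(Vec3f v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// normal, toSun, toCamera are unit vectors at the shaded point. The
// dependence on toCamera is why the highlights move with the camera.
float SpecularTerm(Vec3f normal, Vec3f toSun, Vec3f toCamera, float shininess)
{
    Vec3f half = Normalize({ toSun.x + toCamera.x,
                             toSun.y + toCamera.y,
                             toSun.z + toCamera.z }); // the normalisation
    float nDotH = std::max(0.0f, Dot(normal, half));
    return std::pow(nDotH, shininess);                // the exponentiation
}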

Performance of fancier lighting is a potential concern, since it does extra computation (in this case a vector normalisation and an exponentiation) for every single pixel that's drawn. In practice, with specular lighting applied across an entire 1024x768 screen, the extra cost on an Intel GMA 4500MHD on Linux (which is barely fast enough to run the game decently anyway) looks to be about 2msec/frame, while on Intel HD Graphics 3000 on Windows it's too small to easily measure. So it should probably be optional to help the bottom-end hardware, but is fine for anything slightly more advanced than that.


so e.g. the metal parts of a soldier's texture could be marked as highly reflective and the cloth parts as non-reflective
Cool, shiny trees :victory:. So you would probably have to do that with a separate image, correct? In the typical soldier texture, the alpha channel is already being used for player colour.

When we want normal maps, I imagine it might be easier for artists if we keep them in separate files from the specular maps, given the apparent troublesomeness of editing alpha channels; the game can then automatically combine them both into a single DXTC-compressed DDS file on first load for efficient memory usage. (For better DXTC compression of normal+specular it actually seems common to use the RGBA channels to store not XYZS but something more like 0XSY, so the game will need to do some swizzling when importing the textures anyway.)
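
The import-time swizzle would be something along these lines (a sketch; the exact channel layout would depend on the compressor we end up using):

#include <cstdint>
#include <vector>

struct RGBA { uint8_t r, g, b, a; };

// Repack a normal map (XYZ in RGB) plus a separate single-channel specular
// map into the 0XSY layout mentioned above. Z isn't stored at all; the
// shader can reconstruct it from X and Y since the normal has unit length.
std::vector<RGBA> PackNormalSpec(const std::vector<RGBA>& normalMap,
                                 const std::vector<uint8_t>& specMap)
{
    std::vector<RGBA> out(normalMap.size());
    for (size_t i = 0; i < normalMap.size(); ++i)
    {
        out[i].r = 0;               // unused
        out[i].g = normalMap[i].r;  // normal X
        out[i].b = specMap[i];      // specular intensity S
        out[i].a = normalMap[i].g;  // normal Y
    }
    return out;
}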

