
'Very Easy = too hard' frustration.


DanW58


9 hours ago, DanW58 said:

2)  Deer have become hard to kill, to the point that my cavalry ends up galloping half way across the map before the deer dies, which is so far away it's not even worth processing the meat.  I manually sent my cavalry to kill a deer closer to the base, and the same thing happens again:  The deer escapes fast and its health comes down slowly and my cavalry ends up almost in enemy territory before the deer dies.  Had to give up on deer hunting altogether.  Which is actually okay;  I'm vegetarian, and I hate the obligation to play as if I'm not;  but according to the tutorial videos, hunting is mandatory...

5 hours ago, Stan` said:

2) You can build farms :) But yeah, hunting is a bit harder. Is there something planned, @Nescio @Freagarach @wraitii? (I mean other than the turrets patch, where cavalry will be able to attack-move.)

https://code.wildfiregames.com/D3397


Been winning a bit more often... By building walls and turrets galore and researching ranged everything before anything else.  Gonna start looking at the code now.

What environment do you guys use in Linux?  And, do you have project files for the preferred environment?  I'm going to be opening files in gedit for now...

EDIT:  Reading Coding Conventions.  Glad you indent {} the way you do;  that was my biggest fear...

EDIT2: Re: "Prefer global variables over singletons, because then they're not trying to hide their ugliness"...  The purpose of using a singleton is to ensure proper order of initialization, even during static initialization; it is not a cosmetic concern.

I like the solution packaged in the Eiffel language. Bear with me, because understanding how another language solved a problem sometimes clarifies what the problem is. In Eiffel there is a class of function called the "once function", which only computes its result the first time it is invoked, and subsequently always returns the same value it returned that first time. Once functions are used to replace constants (and I know that globals and constants are not the same thing, but MOST globals are constants). The difference between once functions and simple constants is that once functions are NOT computed in some arbitrary order; they are invoked following an initialization tree. Many once functions compute their results on the basis of other once functions, to arbitrary levels of depth. In other words, constants in Eiffel benefit from "lazy initialization". In C or C++, the order in which constants in different translation units are initialized is undefined: if one constant is computed from another constant but happens to be initialized first, the other constant may still be zero at that point.
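A minimal sketch of that hazard, with two hypothetical files (not from the 0 A.D. codebase):

// ---- b.cpp ----
double ComputeBase();                        // defined elsewhere, not constexpr
extern const double BASE = ComputeBase();    // requires dynamic initialization

// ---- a.cpp ----
extern const double BASE;                    // defined in b.cpp
extern const double DERIVED = 2.0 * BASE;    // if a.cpp's initializers happen to
                                             // run first, BASE is still 0.0 here

The standard does not specify whether a.cpp's or b.cpp's dynamic initializers run first, so DERIVED may silently end up as 0.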

The only thing in C++ that resembles lazy initialization is template resolution.  However, templates resolve at compile time, NOT at static initialization time.  Eiffel's once functions resolve at static (initialization) time, meaning that they CAN be computed on the basis of, say, hardware attributes.

EDIT3:  What I was driving at is three things:

1) There's a role for globals;  and it makes code more readable and execution faster to use them, rather than passing them on the stack millions of times a second over a philosophy concern.  Use without abuse should be the beacon.

2)  It is important to distinguish between global constants and variables.  The latter can be very dangerous;  not usually the former.  Where global variables seem to be needed, it is often the case that a "header inversion" idiom is screaming to be implemented.

3)  The problem with global constants depending on other global constants across translation units is that order of initialization is not guaranteed in the language.  Singletons should not be demonized, as they offer a solution to that, if a bit clunky...  They are the closest thing to Eiffel once functions C++ has to offer.
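For what it's worth, the nearest thing in standard C++ seems to be the "construct on first use" idiom, a function-local static.  A minimal sketch (illustrative only, not engine code):

#include <cmath>

// Each value is computed once, on first call; dependent "constants" pull in
// their prerequisites in the right order, much like Eiffel once functions.
const double& BaseScale()
{
    static const double value = std::sqrt(2.0);     // lazy, runs exactly once
    return value;
}

const double& DerivedScale()
{
    static const double value = 3.0 * BaseScale();  // safe to call even from
    return value;                                   // another static initializer
}

Since C++11, initialization of such function-local statics is also thread-safe, which a hand-rolled singleton has to arrange for itself.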

Edited by DanW58

43 minutes ago, DanW58 said:

What environment do you guys use in Linux?  And, do you have project files for the preferred environment?  I'm going to be opening files in gedit for now...

Some use Eclipse, some are more conservative and use the default text editor. I usually use Visual Studio Code. Premake can generate some workspaces; I've never tried it.

I'm not sure how to address your second point. Nice to see you haven't given up :)


I LOVE your code, guys.

Never seen anything so clean and thorough and well organized (since last time I looked at MY code, of course ;-)).

Also reading the Timing Pitfalls and Solutions document.

I hit that wall before;  just didn't know how thick it was.

 

PRIVATE NOTE:  The engine I worked with, before, was Vegastrike.  If you've never looked at it, thank your favorite god and perform the appropriate sacrifices.

Edited by DanW58

 

Quote

Write pointer/reference types with the symbol next to the type name, as in


void example(
  int* good,
  int& good,
  int *bad,
  int &bad
);

 

There's one gotcha there, for declaring multiple variables...

int& a, b, c;     //only a is a reference.

int &a, &b, &c;   //3 references

Same goes for pointers.

Seems to me it pays to advocate the exact opposite, when accounting for that.

Edited by DanW58

2 hours ago, Angen said:

nah, that's function :)

You mean for function definitions only?  But it's nice to have var declarations and function definitions in a common style.

Anyways;  not important, really.

The Timing Pitfalls and Solutions paper at the end uses a double-precision time tracker;  I thought that was a sad choice, as the precision of floating point depends on the magnitude of the value.  Floats don't make for good accumulators;  they make terrible accumulators.  Always use fixed point for accumulation.  I'm sure double precision has so much precision that it masks the horrible choice, but it is a horrible choice anyhow.  I hope you guys haven't adopted that solution verbatim.
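A tiny sketch of what I mean (illustrative numbers, not from the engine): accumulating a small per-frame dt in a float drifts as the total grows, while counting integer ticks and converting once at the end stays exact.

#include <cstdint>
#include <cstdio>

int main()
{
    const float dtf = 1.0f / 60.0f;     // ~16.67 ms frame time
    float floatClock = 0.0f;            // floating-point accumulator
    std::uint64_t ticks = 0;            // integer (fixed-point) accumulator

    for (int i = 0; i < 600000; ++i)    // ~10,000 simulated seconds
    {
        floatClock += dtf;              // rounding error grows with magnitude
        ++ticks;                        // exact
    }

    std::printf("float accumulator: %f\n", floatClock);
    std::printf("ticks / 60:        %f\n", ticks / 60.0);
    return 0;
}

On typical hardware the float clock ends up off by a noticeable amount, while the tick count converts exactly.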

By the way, my own engine I was working on, 20 years ago, was going to represent the world using integers, and motion through the world was going to be integer-computed as far as translation.  Only upon conversion to perspective (once every 64 frames or so) was everything going to be "floatified" (and held in video memory for the next 64 frames).  This way I was going to be able to have continuous worlds;  the origin of the current "floatified" world geometry was to always be not far from the camera.

Unlike the game Strike Commander (Origin), where you could fly far off the map, but the farther you went the coarser your flight dynamics became.  At 200 miles or so, your altitude would change in increments/decrements of whole meters...

But none of this is applicable to an RTS;  just mumbling...

This is relevant to exactly one little bit, in MathUtil.h:

#define DEGTORAD(a)                    ((a) * ((float)M_PI/180.0f))

could be more precisely defined as

#define DEGTORAD(a)                    ((a) * (float)(M_PI/180.0))

Though for performance you'd want to make sure the division is done only once;  maybe using a global constant of type float...  Too bad C++ doesn't have once functions...

once float DEGTORADF () { return (float)(M_PI/180.0); }   // hypothetical "once" syntax, borrowed from Eiffel

#define DEGTORAD(a)                    ((a) * DEGTORADF())

 

EDIT:

Every time I svn up, it takes an hour.  Are you guys really working so hard?  Or are you adding and deleting a space in a Collada header?  :-)

 

EDIT 2:

Quote

Favour early returns where possible. The following:




void foo(bool x)
{
    if (x)
    {
        /* lines */
    }
}

is better when written like:




void foo(bool x)
{
    if (!x)
        return;

    /* lines */
}

This may have a negative impact on performance. The AMD manual on code optimization, which applies to Intel 90% of the time, recommends avoiding more than one return per function. I can't remember what they said about compilers' troubles optimizing this themselves, but they recommend a single return from functions at the source level, or the result is usually return cache pollution.
Edited by DanW58

2 hours ago, DanW58 said:

The Timing Pitfalls and Solutions paper at the end uses a double-precision time tracker;  I thought that was a sad choice, as the precision of floating point depends on the magnitude of the value.  Floats don't make for good accumulators;  they make terrible accumulators.  Always use fixed point for accumulation.  I'm sure double precision has so much precision that it masks the horrible choice, but it is a horrible choice anyhow.  I hope you guys haven't adopted that solution verbatim.

I ended up deleting all of that code since it misbehaved with new Ryzen CPUs. We use doubles because that's what the function returns.

2 hours ago, DanW58 said:

 

By the way, my own engine I was working on, 20 years ago, was going to represent the world using integers, and motion through the world was going to be integer-computed as far as translation.  Only upon conversion to perspective (once every 64 frames or so) was everything going to be "floatified" (and held in video memory for the next 64 frames).  This way I was going to be able to have continuous worlds;  the origin of the current "floatified" world geometry was to always be not far from the camera.

The simulation uses fixed-point calculation (Fixed, CFixed, CFixedVector2D, etc.).

But you have to be very wary of overflows, and the performance isn't always that great. For instance, perf will show that we spend quite a long time in isqrt64 (the integer square root function).

We need to do this to ensure exact calculations on every machine. Otherwise we'd go out of sync, since they all compute the state at the same time.
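Roughly the idea behind such a type (a hypothetical sketch, not the actual CFixed implementation):

#include <cstdint>

// Hypothetical 16.16 fixed-point value: plain integer math is bit-identical
// on every machine, which is what keeps the lockstep simulation in sync.
struct Fixed16_16
{
    std::int32_t raw;                                   // value * 65536

    static Fixed16_16 FromInt(int v) { return { v * 65536 }; }

    Fixed16_16 operator+(Fixed16_16 o) const { return { raw + o.raw }; }

    Fixed16_16 operator*(Fixed16_16 o) const
    {
        // widen to 64 bits first; this is where the overflow care comes in
        return { static_cast<std::int32_t>(
            (static_cast<std::int64_t>(raw) * o.raw) >> 16) };
    }
};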

 

2 hours ago, DanW58 said:

#define DEGTORAD(a)                    ((a) * (float)(M_PI/180.0))

Though for performance you'd want to make sure the division is done only once;  maybe using a global constant of type float...  Too bad C++ doesn't have once functions...

You can use constexpr, which is evaluated at compile time, IIRC.
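A minimal sketch of that suggestion (illustrative names, not the actual MathUtil.h code):

// The conversion factor is computed at compile time; no runtime division.
constexpr float DEGTORAD_FACTOR = static_cast<float>(3.14159265358979323846 / 180.0);

constexpr float DegToRad(float degrees)
{
    return degrees * DEGTORAD_FACTOR;
}

static_assert(DegToRad(180.0f) > 3.141f && DegToRad(180.0f) < 3.142f,
              "evaluated at compile time");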

2 hours ago, DanW58 said:

EDIT:

Every time I svn up, it takes an hour.  Are you guys really working so hard?  Or are you adding and deleting a space in a Collada header?  :-)

These days, yeah: we're getting ready for a release and trying to nuke bugs and improve balancing as fast as possible. Feature freeze is tomorrow.

2 hours ago, DanW58 said:

This may have a negative impact on performance. The AMD manual on code optimization, which applies to Intel 90% of the time, recommends avoiding more than one return per function. I can't remember what they said about compilers' troubles optimizing this themselves, but they recommend a single return from functions at the source level, or the result is usually return cache pollution.

I'd like to see actual proof of this, since it highly depends on the compiler and flags, and it may only have been true a long while ago. Also, you might want to share links when making such statements :)

Could use godbolt to see how compilers optimize that stuff.


You may be right;  it was many years ago, the days of the K6;  Athlon was the new thing.

I found this in the software optimization guide for family 19h processors (the latest Ryzen, etc.):

Quote

2.8.1.3 Return Address Stack

The processor implements a 32-entry return address stack (RAS) per thread to predict return addresses from a near call. As calls are fetched, the address of the following instruction is pushed onto the return address stack. Typically, the return address is correctly predicted by the address popped off the top of the return address stack. However, mispredictions sometimes arise during speculative execution that can cause incorrect pushes and/or pops to the return address stack. The processor implements mechanisms that correctly recover the return address stack in most cases. If the return address stack cannot be recovered, it is invalidated and the execution hardware restores it to a consistent state.

The following sections discuss some common coding practices used to optimize subroutine calls and returns.

So, the return address stack is small, and easily polluted if there are multiple return points in each function.

The reason a return address stack is needed is so that speculative execution does not halt at returns.  The processor doesn't know where a function begins and ends;  all it sees is a stream of instructions, some of which are return instructions, and it needs to know where to return to in its speculative execution, to keep the pipelines full.  The more returns, the more entries are used up in the stack.  However, I can't see why the compiler couldn't optimize this.  This AMD document is written for assembly programmers, it seems.

https://www.amd.com/system/files/TechDocs/56665.zip

 

EDIT:
 

Quote

 

The simulation uses fixed-point calculation (Fixed, CFixed, CFixedVector2D, etc.).

But you have to be very wary of overflows, and the performance isn't always that great. For instance, perf will show that we spend quite a long time in isqrt64 (the integer square root function).

We need to do this to ensure exact calculations on every machine. Otherwise we'd go out of sync, since they all compute the state at the same time.

 

Are you sure you NEED a square root?  90% of the time square roots are performed unnecessarily.

If comparing distances, for example, which are always positive,

A = a.x*a.x + a.y*a.y;  B = b.x*b.x + b.y*b.y;
return sqrt(A) < sqrt(B);

can be replaced by

return A < B;

or  return sqrt(A) < 5  can be replaced by  return A < 25;

In fact, IEEE floats, if they are positive, can be compared in magnitude using integer arithmetic, by reinterpreting their bits as unsigned ints of the same size.  That is only useful for sorting big tables, though, as otherwise you incur the cost of moving data from the floating-point pipeline to the integer pipeline;  not worth it for just one or two comparisons.
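A sketch of that trick (illustrative only): for non-negative, non-NaN IEEE-754 floats, the unsigned bit pattern orders the same way as the value, so a sort key can be a plain integer.

#include <cstdint>
#include <cstring>

// Assumes value >= 0 and not NaN; then the raw bits compare like the float.
inline std::uint32_t FloatSortKey(float value)
{
    std::uint32_t bits;
    std::memcpy(&bits, &value, sizeof bits);   // well-defined bit reinterpretation
    return bits;
}

Handy as a radix-sort key over a big table; for one or two comparisons the round trip isn't worth it, as said above.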

Renormalizing vectors is another area where people feel they need square roots, but in many cases the amount of renormalization is small.  Rotating a matrix or vector can introduce tiny errors.

In such a case, you only need a quick renormalization.
 

#include <cmath>

class Cvec3
{
public:
    float x, y, z;

    // Full renormalization: exact, but pays for a square root and a divide.
    void normalize()
    {
        float correction = 1.0f / std::sqrt( x*x + y*y + z*z );
        x *= correction;  y *= correction;  z *= correction;
    }

    // Quick renormalization: one Newton step for 1/sqrt(s) around s == 1;
    // good enough when the vector is already very close to unit length.
    void quick_renormalize()
    {
        float correction = 1.5f - 0.5f*( x*x + y*y + z*z );
        x *= correction;  y *= correction;  z *= correction;
    }
};

 

Edited by DanW58

I WAS a programmer, many years ago;  decades.  Back then I was using Visual Studio.  I don't know what to use now;  I would need something that can search through the code, tell me all the places a function is called from, etc.  Right now I've no idea where to start.  Heard about Eclipse, but never used it.

Alright, I'm going to look into it right now.


2 minutes ago, DanW58 said:

I WAS a programmer, many years ago;  decades.  Back then I was using Visual Studio.  I don't know what to use now;  I would need something that can search through the code, tell me all the places a function is called from, etc.  Right now I've no idea where to start.  Heard about Eclipse, but never used it.

Alright, I'm going to look into it right now.

You can still use Visual Studio if you use Windows :P

Feel free to drop by IRC and ask questions :)


52 minutes ago, Loki1950 said:

Have a look at Code::Blocks IDE

You must be psychic, Loki;  that is EXACTLY the one I'm trying to use.  I installed it 2 or 3 months ago but never used it.

Now, I managed to edit the Tools menu to svn up, update-workspaces, make, test and run.  Everything works.  The problem is I don't know how to add the code files to the project;  can't find a menu to do it;  not in File, not in Project.  I tried importing the Visual Studio workspace, and nothing happened.  I'm totally confused...

EDIT:

Which "premake" are you referring to?  I found a folder "premake", and inside there's a premake4.lua and premake5.lua;  no idea what that is...

Edited by DanW58

@DanW58 Sorry, it seems I missed you by a few :/

I think you should be able to generate CodeLite project files using Premake, according to this: https://github.com/premake/premake-core/wiki/Using-Premake

echo "Premake args: ${premake_args}"
if [ "`uname -s`" != "Darwin" ]; then
	${premake_command} --file="premake5.lua" --outpath="../workspaces/gcc/" ${premake_args} gmake || die "Premake failed"
else
	${premake_command} --file="premake5.lua" --outpath="../workspaces/gcc/" --macosx-version-min="${MIN_OSX_VERSION}" ${premake_args} 		gmake || die "Premake failed"
	# Also generate xcode workspaces if on OS X
	${premake_command} --file="premake5.lua" --outpath="../workspaces/xcode4" --macosx-version-min="${MIN_OSX_VERSION}" ${premake_args} 	xcode4 || die "Premake failed"
fi

To do so, just replace the code above in https://trac.wildfiregames.com/browser/ps/trunk/build/workspaces/update-workspaces.sh with the following:

echo "Premake args: ${premake_args}"
if [ "`uname -s`" != "Darwin" ]; then
	# ${premake_command} --file="premake5.lua" --outpath="../workspaces/gcc/" ${premake_args} gmake || die "Premake failed"
  	${premake_command} --file="premake5.lua" --outpath="../workspaces/codelite/" ${premake_args} codelite || die "Premake failed"
else
	${premake_command} --file="premake5.lua" --outpath="../workspaces/gcc/" --macosx-version-min="${MIN_OSX_VERSION}" ${premake_args} 		gmake || die "Premake failed"
	# Also generate xcode workspaces if on OS X
	${premake_command} --file="premake5.lua" --outpath="../workspaces/xcode4" --macosx-version-min="${MIN_OSX_VERSION}" ${premake_args} 	xcode4 || die "Premake failed"
fi

EDIT: @Loki1950 Do you know if CodeLite is compatible with Code::Blocks? It seems the latter isn't supported.


Yeah, many thanks.  Last night I hit a snag, though.  These errors in ScriptTypes.h are triggered:
 

#if MOZJS_MAJOR_VERSION != 78
#error Your compiler is trying to use an incorrect major version of the \
SpiderMonkey library. The only version that works is the one in the \
libraries/spidermonkey/ directory, and it will not work with a typical \
system-installed version. Make sure you have got all the right files and \
include paths.
#endif

#if MOZJS_MINOR_VERSION != 6
#error Your compiler is trying to use an untested minor version of the \
SpiderMonkey library. If you are a package maintainer, please make sure \
to check very carefully that this version does not change the behaviour \
of the code executed by SpiderMonkey. Different parts of the game (e.g. \
the multiplayer mode) rely on deterministic behaviour of the JavaScript \
engine. A simple way for testing this would be playing a network game \
with one player using the old version and one player using the new \
version. Another way for testing is running replays and comparing the \
final hash (check trac.wildfiregames.com/wiki/Debugging#Replaymode). \
For more information check this link: trac.wildfiregames.com/wiki/Debugging#Outofsync
#endif

No idea what to do.

EDIT:  never mind;  problem solved in IRC

Edited by DanW58
