
olsner

WFG Retired
  • Posts

    476
  • Joined

  • Last visited

Everything posted by olsner

  1. Attached is a patch that, along with my recently committed header guards and misc fixes, makes unity builds work as well as normal builds. The reason for not just committing it is that some of the stuff had to be hacked around a bit, so I'd like to get these non-obvious fixes reviewed before simply committing them... Oh, and this is untested in Visual Studio, btw.

     "Documentation": The patch adds an option to the premake script (pass --unity to update-workspaces to enable it). When enabled, it generates one source file per library/sub-project according to the existing structure: it takes the list of sources for each project and generates a foo_unity_unit.cpp file that #includes all the other files, then registers that single source file as the only source of the project as far as the generated makefile/VS project is concerned. This means that no changes in premake itself were required, only a few isolated changes in the Lua scripts (which is nice given that we plan to upgrade premake...).

     A couple of hacks were involved to make this work (beyond the actual code fixes):
     - LOG_CATEGORY is defined by many source files, so it is automatically #undefed before each source file is included. I contemplated adding an #undef to each source file instead, but rejected that idea based on the number of source files involved.
     - The lib code has a macro for generating "unique IDs" based on line numbers. This causes problems when many similar source files include about as many headers and then define a couple of error code associations just after those headers. The generated unity unit therefore redefines the macro to give different unique IDs before each file.
     - Errors.cpp is currently ill-suited for compilation along with any code that includes headers that declare error types. A better patch should update the generation of Errors.cpp to work well in unity builds.
     - Only .cpp files are included in the unity unit; all other kinds of files (e.g. assembly files) get added to the source file list as usual.

     One problem is that the "engine" build seems to be a bit too big (gcc uses a lot of memory for it), so my parallel builds don't work as well on the unity build. I had expected to be able to run at least one gcc per core, but I quickly become I/O-limited rather than CPU-limited. I'm guessing that's either from linking the objects into .a's in parallel (which seems hard to teach make not to do - it doesn't categorize actions), or simply that the total memory use of the gcc instances causes swapping.

     I'm also getting mahaf link errors from acpi.cpp - but that code looks like it could never work on Linux, so I don't think it's caused by my changes but rather by an actual error in the source - especially since I get the same error in both unity and non-unity builds.

     As for build time results: with -j1 I get a build time of 1m10s, -j2 gives 37s and -j4 25s (4-core machine). On the same machine, a non-unity build with -j4 takes around 1m15s, so I guess that means a theoretical speedup of around 4x, although the actual speedup for me was only about 2x.

     unity.patch.txt
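     The generation described above can be sketched roughly as follows. The real patch does this in the premake Lua scripts; this is just the same idea in Python, for illustration. LOG_CATEGORY is the real macro, but the file names and the "UNIQUE_RANGE" macro name are placeholders I made up, not the actual identifiers in lib/.

```python
# Sketch (not the actual patch code) of generating a per-project unity unit.
def generate_unity_unit(project_name, sources):
    cpp = [s for s in sources if s.endswith(".cpp")]    # only .cpp goes in the unit
    other = [s for s in sources if not s.endswith(".cpp")]
    lines = []
    for i, src in enumerate(cpp):
        lines.append("#undef LOG_CATEGORY")             # many files define this
        lines.append(f"#define UNIQUE_RANGE {i * 1000}")  # placeholder name: keep
        lines.append(f'#include "{src}"')                 # per-file IDs distinct
    unit_name = f"{project_name}_unity_unit.cpp"
    # The build system then sees only the unit plus the non-.cpp files.
    return unit_name, "\n".join(lines), other

unit, contents, rest = generate_unity_unit("engine", ["A.cpp", "B.cpp", "x.asm"])
```

     The generated unit is what gets registered as the project's single source, which is why no changes in premake itself were needed.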
  2. SpiderMonkey on Linux/Unix/OS X is now bundled with the sources (actually, the bundling has been done for some time, I just didn't activate it in the build system until now), so the procedure for building and installing SpiderMonkey has changed. The rationale is that getting the proper version compiled with the correct settings is messy for everyone who isn't lucky enough to have their distribution provide a package for it, so it's easier to bundle our own build script with the proper configuration - and we'd prefer to use one specific JavaScript version everywhere to avoid accidentally producing incompatible scripts.

     When checking out a new tree (and of course for any existing trees), you'll now have to do this to build and install SpiderMonkey within your tree:

     cd trunk/libraries/spidermonkey/src
     ./build.sh

     (http://trac.wildfiregames.com/wiki/BuildInstructions has also been updated for Linux and OS X.)

     Actually, I also made update-workspaces.sh compile/update the bundled external libraries on every run while writing this post, so those instructions are already out of date. Hopefully the usefulness of not having to manually build the external libraries outweighs the extra time spent checking that the libraries are up to date and the extra spam from those build scripts.
  3. That's weird - Valgrind should not be required unless you explicitly enable that code by passing --with-valgrind to update-workspaces.sh... Is the tree properly updated from svn?
  4. That's why depending on struct layout is a bad idea. Given the error messages, you definitely seem to be getting the Apple installation of OpenAL. If the headers in /opt/local don't have the same problem, maybe you can change the compilation settings for OpenAL to use the MacPorts OpenAL instead of the "framework". To try that out, change extern_libs.lua and just remove the line where we include the OpenAL framework (I think the line will look like osx_frameworks = {"OpenAL"}). If the MacPorts-installed headers are in /opt/local/include/AL/alc.h rather than OpenAL/alc.h, you will also have to change the openal.h file in our sources to include AL/* instead of OpenAL/*. If that fixes it, we may want to explicitly use the MacPorts OpenAL instead of Apple's preinstalled one.
  5. Hmm, weird error... What errors did you get from the default compiler? It could be that GCC 4.3 is causing the OpenAL compile error (it wouldn't surprise me if that header depends on some Apple-specific GCC extension, for example) - and in either case, I think the default GCC from Apple should be supported.
  6. I'm on Tiger (using apple's gcc version), so that configuration should work fine as far as I know.
  7. Cached resources (data/cache and the mod archives): since these depend (only) on the resources themselves, it'd be nice if the installation automatically generated them from the installed resources - unless you actually modify something yourself, you shouldn't have to generate them into the user-local directory. I think that when resources are modified, we only have to store user-local cached versions of the actually changed resources rather than all of them?

     Screenshots and logs: should be moved into the user-local directory, methinks. We should also have automatic (re-)creation of these directories (including copying the resources for the log HTML).

     Profiles: although these are kind of meant to be global (as in, every user on the computer creates their own profile), the right thing is probably to put the set of profiles entirely inside the user directory.

     User-local mods: while we don't write these, the directory to put them in will depend on all the other stuff. I think if we just mount ~/.0ad as data/ in the VFS, user-local mods and overrides for individual files just go in ~/.0ad/mods/ and that's it. Having this VFS is pretty sweet.

     As janwas mentions in the trac ticket, we should also differentiate between user-local cache and user-local config/data. On Unix we could just let the cache and config directories be the same (~/.0ad or ~/.pyrogenesis or whatnot), since there's AFAIK no "application data" distinction there. If we implement those and set up the VFS to disallow write access to the non-user-local directories, all other things we try to write should be pretty obvious, since the VFS should then just fail.
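     To make the write-policy idea concrete, here's a minimal sketch of it, assuming a mount-table design. All the names here (Vfs, mount, resolve_write) are mine for illustration, not the engine's actual VFS API:

```python
from pathlib import PurePosixPath

# Hypothetical sketch: mount points map VFS prefixes to real directories,
# later mounts shadow earlier ones, and only user-local mounts accept writes.
class Vfs:
    def __init__(self):
        self.mounts = []  # (vfs_prefix, real_root, writable)

    def mount(self, vfs_prefix, real_root, writable=False):
        self.mounts.append((vfs_prefix, real_root, writable))

    def resolve_write(self, vfs_path):
        # Search newest-first so the user-local mount overrides the global one.
        for prefix, root, writable in reversed(self.mounts):
            if vfs_path.startswith(prefix):
                if not writable:
                    raise PermissionError(f"VFS path {vfs_path!r} is read-only")
                rel = vfs_path[len(prefix):].lstrip("/")
                return str(PurePosixPath(root) / rel)
        raise FileNotFoundError(vfs_path)

vfs = Vfs()
vfs.mount("data/", "/usr/share/0ad/data")    # global install, read-only
vfs.mount("data/", "~/.0ad", writable=True)  # user-local overrides
```

     With this setup any write outside the user-local mount raises immediately, which is exactly the "the VFS should then just fail" behavior that would flush out the remaining write sites.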
  8. The way things are set up now, non-windows builds always use the system-installed DevIL library and never the one in the tree (and like you have noticed, the version in the tree doesn't work on linux). If the code can't find the system-installed devil headers it will just fail to compile (which seems to be the original problem). To fix that, you might need to change the definition of devil in extern_libs.lua - if you have devil installed in a non-standard location for instance. (We currently add no include paths for devil, but if it's required and if devil comes with a config-program we could use that to set the proper cflags)
  9. Since the link options are added in an arbitrary order, all the library link options are already inside a --start-group/--end-group pair, which is supposed to fix that... I suppose we could change the order in which LDFLAGS is generated by premake (in gnu_cpp.c), but that feels kind of hackish, and it still wouldn't guarantee any specific internal ordering between libraries (only between the chunk of package.links libraries and the explicit link flags in linkoptions).
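     For context on why the grouping helps: within --start-group/--end-group, the linker rescans the archives until no new undefined symbols can be resolved, so circular dependencies between static libraries link regardless of order. A toy model of the difference (the archives and symbol names are made up, and real ld pulls in individual archive members, not whole archives):

```python
# Toy model of static-library symbol resolution, NOT how ld actually works
# internally: each archive is (symbols it defines, symbols it needs).
def link(archives, undefined, grouped):
    undefined, defined = set(undefined), set()
    changed = True
    while changed:
        changed = False
        for defs, needs in archives:
            # An archive is pulled in when it satisfies an undefined symbol.
            if undefined & defs:
                undefined -= defs
                defined |= defs
                undefined |= needs - defined
                changed = True
        if not grouped:   # a plain left-to-right link makes only one pass
            break
    return undefined      # anything left is an "undefined reference" error

# libB comes first but needs a symbol from libA, which needs one from libB:
archives = [({"b"}, {"a"}), ({"a"}, {"b"})]
```

     In the ungrouped single pass the wrong ordering leaves "b" unresolved, while the grouped fixpoint scan resolves everything - which is why wrapping the whole arbitrary-ordered set in one group sidesteps the ordering problem.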
  10. FYI, I've submitted a portfile for enet here: http://trac.macports.org/ticket/20363
  11. The scripts have been changed to always include the -mt libraries (since we couldn't find an example of someone having only the non-mt variants installed), so this shouldn't be required any longer. Basically, all these warnings require some developer to go through them one by one and fix them - not terribly difficult but someone needs to take the time to do it. This should probably be reported to the bug tracker and handled there.
  12. Been through the patch and merged all the stuff that corresponded to compile errors I could reproduce. (Strike-through means I've committed fixes for it - it's on the list just to note that it was in the patch; the rest is todo.)

     - premake config: collada and boost
     - fcollada: stringbuilder fixes
     - fcollada: pragmas (I didn't see any issues with this when compiling on my Mac - was this only warnings?)
     - fcollada: use of snprintf (should be fixed, especially if long is more than 32 bits, but didn't cause any compile errors)
     - fcollada: ifdefs in filemanager (fixed by changes in the fcollada makefile - it shouldn't define Linux when on Mac)
     - fcollada: makefile changes
     - openal: string type, should be fixed as trac issue #268
     - lib: timer type, debug_DumpStack, secure_crt functions
     - wxWidgets regexp stuff: I assume that passing the "advanced" argument enables some regexp extensions - but do these regexps really mean the same thing if we use default rather than advanced regexps? If so, we should change them to actually use the default flag for everyone; otherwise we should probably rewrite the regexps to only use "default" features.
     - wxUSE_DEBUG_REPORT: looks trivial enough, should be merged I guess
     - atlas/ScenarioEditor: Apple-ifdeffed _UINTxx defines. I didn't see any related compile errors on my Mac, but if it's a problem with another version of something (OS X, for instance), it looks like we'd want a cleaner solution for the problem anyway.

     Since the patch was a lot of small changes, I might have missed something, but what's in SVN now actually compiles (and runs!) on my MacBook. (Well, the collada integration still doesn't compile for me, due to changed APIs in libxml2 and general confusion between the four separate sets of libxml2 headers that are reachable - but I think that's a slightly larger problem and I'll leave it to tomorrow and/or someone else to sort out.)

     In other news, Valgrind is now optional and support must be explicitly enabled by passing --with-valgrind to update-workspaces.sh.
  13. About that assertion: the game seems to launch after just suppressing it. (But of course it should also be fixed.)

     About the cxxtest error:

In file included from ../../../libraries/cxxtest/include/cxxtest/StdValueTraits.h:10,
                 from ../../../source/lib/self_test.h:189,
                 from ../../../source/pch/test/precompiled.h:24,
                 from ../../../source/pch/test/precompiled.cpp:25:
../../../libraries/cxxtest/include/cxxtest/ValueTraits.h:281: error: redefinition of ‘class CxxTest::ValueTraits<long unsigned int>’
../../../libraries/cxxtest/include/cxxtest/ValueTraits.h:266: error: previous definition of ‘class CxxTest::ValueTraits<long unsigned int>’

     I have a fix for this locally, but as usual I don't really know if it will break Windows:

Index: include/cxxtest/ValueTraits.h
===================================================================
--- include/cxxtest/ValueTraits.h	(revision 6954)
+++ include/cxxtest/ValueTraits.h	(working copy)
@@ -276,10 +276,8 @@
     CXXTEST_COPY_TRAITS( const unsigned char, const unsigned long int );
 
     CXXTEST_COPY_CONST_TRAITS( signed int );
-    //CXXTEST_COPY_CONST_TRAITS( unsigned int );
-#ifndef __APPLE__ // avoid redefinition errors on mac
-    CXXTEST_COPY_TRAITS( size_t, const unsigned int ); // avoid /Wp64 warnings in MSVC
-#endif
+    CXXTEST_COPY_CONST_TRAITS( unsigned int );
+
     CXXTEST_COPY_CONST_TRAITS( signed short int );
     CXXTEST_COPY_CONST_TRAITS( unsigned short int );
     CXXTEST_COPY_CONST_TRAITS( unsigned char );
  14. I've now added Valgrind and enet to the dependency list on the Build Instructions wiki page, together with some instructions for building fcollada. Thanks for the reports
  15. Apparently, HOSTTYPE is not exported by default (so premake sees the variable as undefined). I've added a hack in update-workspaces.sh that should forward the proper value to premake. I also added some code to pass the proper elf64 format to nasm conditionally, so the assembly stuff should work out of the box on 64-bit Linux now.

     About the fcollada compile errors: a work-around that worked for me (I'm also on 64-bit Linux) was to just remove the #else of that #ifdef WIN32 - it seems Visual C++ thinks 'int' is a different type from both int32 and int64. I went ahead and checked that change in, so let's just hope it didn't break 32-bit Linux and Mac OS.
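     The HOSTTYPE forwarding boils down to picking the right object format for nasm from the machine type. Roughly this logic, sketched in Python rather than the actual shell/Lua code (and with the 64-bit detection simplified to the two common machine strings):

```python
import platform

# Simplified sketch of the format selection; the real logic lives in
# update-workspaces.sh (forwarding HOSTTYPE) and the premake Lua scripts.
def nasm_format(machine=None):
    machine = machine or platform.machine()
    return "elf64" if machine in ("x86_64", "amd64") else "elf32"

# The build then invokes: nasm -f elf64 foo.asm   (or -f elf32 on 32-bit)
```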
  16. That's a pretty large question ;-) It can't really be answered up front without knowing exactly what your game's code looks like, but I can give you a few pointers nonetheless. Here's the forum FAQ for gamedev's network and multiplayer forums - it contains some good links and might be a good place to get started: http://www.gamedev.net/community/forums/sh...asp?forum_id=15 Another forum I've found very useful is the UNIX Socket FAQ - although it is a bit more advanced and probably not very useful until you've actually written your first network code and tried out some different things (btw, even though it says UNIX, most of the info is also applicable to Windows networking, which is actually based on UNIX code [from sometime in a past decade]): http://www.developerweb.net/forum/ Good luck! And I assure you: networking is the most fun part of programming! (Some people just haven't noticed yet )
  17. Is it just me, or is that popup impossible to get rid of except by registering or logging in? I say at least provide a way to close it, with a cross in the corner or something - that kind of popup could be considered quite annoying by potential future members
  18. The FreeBSD install will automatically re-install Linux on your BSD partitions, then start the Linux installer!
  19. The mighty list of late-night creativity (or is that just brain-noise temporarily seeking verbal expression?) -- Overuse of "ye" and "yield" "Ye shall be crushed under the might of mine army!" "Yield now! And I shall spare you the humiliation!" "Yield! You filthy little maggot!" "Ye shall see my power, and ye shall be afraid. Ye shall be very afraid." -- A few ironic "hey, I just killed half your army, but that's no reason not to joke about it" "Oops.. Was that one of your guys?" "Oh, was that an army? Must've missed it..." "Are you sure your guys have weapons? My guys have..." "There's nothing wrong with what you're doing. I'm just doing it better." -- Monty Python (This could be a side-dish trigger-finger contest - the first user to send the counter-question wins ) "What's the average flight speed of a swallow?" "An african or a european swallow?"
  20. Then laugh a little more at your making a fool out of yourself =) See the what's one thing that makes you really angry thread for one very uncomfortable situation.
  21. None of my media players are in the list!! Music: XMMS Video: MPlayer (not Windows media player, but www.mplayerhq.hu) Both for Linux =) Trailers = Spoilers. Don't watch. =)
  22. Naah.. By the way, just raw computing power doesn't really interest me. I'd rather have a crew of minions, completing all the ideas I've had but never realized, and I'd hire some brainiacs to figure out how I could drive Microsoft into destruction. After that, I'd broadcast the live-feeds of the Bill Gates Official Spanking Session (Bill being the subject of spanking, of course), Muahahaha! Not that any of this, besides the first idea of starting some risk capital business, is realistic.. And even that thing would probably require more money than just One Million Dollars...
  23. - xServe - xServe Raid - A decent workstation (SGI, perhaps ;-) And some screen switchers and cables, so that I don't have to have the noise around me all the time... Probably a couple of nice big 24" screens as well =) And, to top it up, Nice Phat Broadband (a gigabit should suffice for a while) so that I can use all my terabytes of storage... I'd need somewhere to live too.. somewhere close to the broadband =)