Teiresias Posted September 16, 2020

In yesterday's IRC I mentioned using jasmine tests for AI development, and the topic was considered possibly of interest to others. So I hereby provide a trimmed-down demonstrator of how such a test suite can be set up.

The attached zip archive contains a copy of the API3 AI high-level interface with two test cases (one of them discovered the problem discussed here), plus the necessary infrastructure (the jasmine 3.6.0 release and a driver html page). To execute the test cases, just extract the zip content to some directory and load the common-api/jasmine-runner.html file into a scriptable webbrowser. Said html file is commented to show how the jasmine framework, the API3 under test and the test files interact.

Items to consider:
- For the demo, I used the official jasmine standalone release, which contains a version number in its path. In "production use" I rename the directories to get rid of the version number so no changes are needed when upgrading jasmine.
- It is also possible to execute the unit tests via the SpiderMonkey js shell by loading all of the scripts and running the jasmine bootstrap code.
- I also managed to connect the jasmine test suite to the JSCover code-coverage measurement tool, but this requires a more complicated setup, i.e. a list of all source and test case js files plus a platform-dependent batch job and a hacked-up JSCover driver js script. I can provide a demonstration if there is interest.

Any comments welcome.

jasmine-demo.zip
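To give a flavour of what such a spec file looks like, here is a minimal sketch. The describe/it/expect structure is jasmine's; the fixture and names below are made up for illustration and are not the actual specs from the archive:

```js
// common-api/spec/example-spec.js -- hypothetical file name.
// The runner html page (or a shell driver) loads jasmine, the code under
// test and every spec file; jasmine then executes each registered describe().
describe("API3 resource bookkeeping (illustrative example)", function() {
	let resources;

	beforeEach(function() {
		// Plain JS fixture; no engine or browser APIs involved.
		resources = { food: 300, wood: 300, stone: 100, metal: 100 };
	});

	it("reports every resource type", function() {
		expect(Object.keys(resources).length).toBe(4);
	});

	it("never reports negative stock", function() {
		for (let type in resources)
			expect(resources[type]).toBeGreaterThanOrEqual(0);
	});
});
```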
Stan` Posted September 16, 2020

Cc @Itms (unit tests). Having an idea of how much coverage we have would be nice. A transition guide would also be nice in case we go that route.
Itms Posted September 19, 2020

Hi Teiresias, this is very interesting! Our use of C++ unit testing to run JS tests is really subpar. If we can use jasmine through SpiderMonkey, it would give us a better tool for testing JS code.

Would it be possible for you to demonstrate how to run, for instance, the simulation component tests (in binaries/data/mods/public/simulation/components/tests/) using jasmine? Those tests rely on "Engine" methods which are defined in the 0ad engine, so I'd like to see how that works out.

Coverage information would be nice to have. I wouldn't say it is mandatory (we have engine coverage and we don't really look at it), but if we have it, it's definitely a plus, and it would push us into analyzing coverage information more often.

Thanks a lot for your proposal!
Teiresias Posted September 26, 2020

Itms, thanks for the "pat on the shoulder". I admit I might be a bit extreme regarding testing, since I partially do this for a living. Regarding your question:

On 9/19/2020 at 10:30 AM, Itms said:
Would it be possible for you to demonstrate how to run, for instance, the simulation component tests (in binaries/data/mods/public/simulation/components/tests/) using jasmine? Those tests rely on "Engine" methods which are defined in the 0ad engine, so I'd like to see how that works out.

I'm afraid not. When writing jasmine tests I am in a JS-only world. Since a unit test is about testing the smallest isolatable parts of a software system - usually a single function or a class - this is not a problem for me. But if you intend to include the Pyrogenesis engine activities together with the JS code, that is actually an integration test, and usually harder to achieve than unit testing. I don't know of any off-the-shelf solution, in particular if multiple runtime environments are involved (native code vs. the SpiderMonkey JS environment).

In my AI experiments I faced similar problems with the common AI and currently use two approaches:
- Include the common AI code in the JS space where the jasmine tests execute (see the sketch below). Since both are JS this is possible. However, I still try to avoid this as much as possible since it introduces an external dependency.
- Lift up to system test level, i.e. run the whole Pyrogenesis executable in autoplay mode with special script files to generate a scenario map tailored for testing and evaluating the AI behavior. This is a very cumbersome method and I try to avoid it wherever possible. No JSCover analysis is done with this method.
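For the first approach, here is a rough sketch of what a SpiderMonkey js shell driver can look like. The file names are placeholders, and the hand-rolled bootstrap approximates what jasmine's browser-oriented boot.js does, so the exact calls may differ between jasmine releases:

```js
// run-specs.js -- hypothetical driver for the SpiderMonkey js shell:
//   js run-specs.js
load("lib/jasmine/jasmine.js");           // defines the global jasmineRequire

// Hand-rolled bootstrap, replacing boot.js (which expects a browser window).
var jasmine = jasmineRequire.core(jasmineRequire);
var env = jasmine.getEnv();
var jasmineInterface = jasmineRequire.interface(jasmine, env);
for (var name in jasmineInterface)
	globalThis[name] = jasmineInterface[name];   // describe, it, expect, ...

// Minimal console reporter.
env.addReporter({
	specDone: function(result) { print(result.status + ": " + result.fullName); },
	jasmineDone: function(result) { print("overall: " + result.overallStatus); }
});

// Code under test first, then the specs (placeholder paths).
load("common-api/entity.js");
load("common-api/spec/example-spec.js");

env.execute();
```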
Krinkle Posted January 10, 2021 (edited)

I like where this is going. It would certainly improve familiarity for JavaScript developers if we use a more common unit test framework rather than something in-house. I would suggest a few alterations so as to increase the value of the unit tests:

- Run them from the command-line rather than from a browser. This will make it easier to run them from Jenkins and extract the results in an automated fashion, and it will be easier that way for developers to continue to (also) run them as part of "make test". The actual code is not meant to run inside a web browser, which means that trying to make it work there, including file tracking and loader mechanisms, would significantly limit future developments and add on-going costs without actually benefiting the game or code quality in any way. There is also the risk of false positives from tests passing inside a browser but not in the actual game, due to differences in environment.
- Run them using the SpiderMonkey engine (the mozjs engine that 0AD uses) and not V8, Chrome, or Node.js. There are significant differences in how these engines work internally, also with regard to syntax features and quirks. There is no need to run the tests in a different engine; that's kind of like testing a Firefox extension inside Internet Explorer.

Quote (Teiresias):
When writing jasmine tests I am in a JS-only world. Since a unit test is about testing the smallest isolatable parts of a software system […] this is not a problem. But if you intend to include the Pyrogenesis engine activities together with the JS code - that is actually an integration test, and usually harder to achieve than unit testing. I don't know of any off-the-shelf solution, in particular if multiple runtime environments are involved (native code vs. SpiderMonkey JS environment).

I believe a unit test is a test where you control (or inject) any and all dependencies and only act on and observe a single subject (e.g. a class, module, or method), thus not causing side effects in global state elsewhere and not causing dependencies to be pulled in implicitly. An integration test is one where, for example, you act on and observe multiple subjects, or where you let your application's entry point implicitly construct its dependencies and inject them, rather than doing so explicitly. In complex code bases we often achieve this by mocking the dependencies and/or not loading the primary engine, but to me this is by no means a logical requirement for something to be considered a unit test.

When testing a Library class, it seems perfectly valid to me to test it by injecting an array of (stateless) Book objects into its constructor, and then only calling Library methods and observing its return behaviours (sketched below). If Book is not stateless, or if Book is difficult to instantiate, then it may be easier to mock it, but otherwise that would needlessly reduce the accuracy of the unit test.

Likewise, I don't see an issue with exposing the Pyrogenesis engine to the unit test. Its native methods are conceptually no different from the JavaScript runtime's own utility methods, so long as we use them within the spirit of a unit test. Beyond that, duplicating a lot of code would be busy work with little to no added value, unless for some reason we can't figure out how to load the engine, in which case we'd take on that added cost as a way to make unit tests possible at all.

I think it's quite feasible to pull this off.
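To make the Library/Book point concrete, here is a sketch of the kind of test I mean; Library and Book are the hypothetical classes from the paragraph above, not existing game code:

```js
// Hypothetical subject under test.
class Book {
	constructor(title) { this.title = title; }   // stateless value object
}

class Library {
	constructor(books) { this.books = books; }
	has(title) { return this.books.some(book => book.title === title); }
	count() { return this.books.length; }
}

describe("Library", () => {
	// Real Book instances are injected; no mocking needed, because Book is
	// trivial to construct and carries no state of its own.
	const library = new Library([new Book("A"), new Book("B")]);

	it("finds a book it was given", () => {
		expect(library.has("A")).toBe(true);
	});

	it("does not invent books", () => {
		expect(library.has("Z")).toBe(false);
		expect(library.count()).toBe(2);
	});
});
```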
We can decouple the current test target and load a test framework like Jasmine there. We can also add code coverage reports by instrumenting the code first through Istanbul and extracting the data afterwards (a rough sketch below). After the run we can use Node.js tooling to produce nice web pages as a build artefact to visualise the code coverage reports. We can make compiling and importing the Pyrogenesis engine optional, so that we have a fast "make" target for unit tests and a slower one for integration tests, but otherwise use the same JS engine, test framework, and code coverage pipeline.

I'll try to put together a proof-of-concept over the next couple of weeks to show what I have in mind.
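As a rough sketch of the coverage part, continuing the shell-driver idea from earlier in the thread and assuming Istanbul/nyc instrumentation (which by default collects its counters in a global __coverage__ object; the file names and commands in the comments are illustrative, not a tested setup):

```js
// Goes into the shell driver before env.execute(). After the sources have been
// rewritten with "nyc instrument", the instrumented code fills the global
// __coverage__ object while the specs run; we serialise it when jasmine is done.
jasmine.getEnv().addReporter({
	jasmineDone: function() {
		if (typeof __coverage__ !== "undefined")
			print(JSON.stringify(__coverage__));   // redirect stdout into e.g. .nyc_output/coverage.json
		else
			print("no coverage data collected - were the sources instrumented?");
	}
});
// A Node.js step such as "nyc report --reporter=html" can then render the
// collected JSON as browsable web pages.
```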