Leaderboard
Popular Content
Showing content with the highest reputation on 2024-07-14 in all areas
-
Can you upload your logs, system info, and hardware userreport? Do you get any warning when changing the sample number? And 0 A.D. supports 4K without problems, so it's something else.
-
I don't mind if access via SVN continues to be possible, as long as contributors don't have to use it.

Good point; it doesn't apply to contributing to library upgrades though.

So now the nightly-build SVN isn't a compilation of data from different sources anymore, but feeds back into its source again (even though those changes aren't meant to be committed there). IMO cleaner approaches for that would be fetching translations directly from Transifex (which doesn't work, because that requires a Transifex API token not every contributor has), managing them in a separate repository and pulling them from there into the 0ad git repository and the nightly-build SVN repository, or just storing them in the 0ad git repository.

Wouldn't that foster a culture where certain changes (library upgrades) are done without reviewing them, as there won't be a review tool for SVN anymore? Wouldn't it also make it pretty difficult for contributors without SVN write access to contribute to them?

It should be no problem to do that later on. I'd obviously prefer having it from the start, but later on would work for me too.

Yes, that probably needs some more consideration. I believe the translations are already even larger than 600 MiB. Updating them more often shouldn't increase their size much faster though, because while that would result in more commits, the amount of changes would stay more or less the same (only strings changed more than once between our twice-a-week updates would generate additional changes, but that would be true for the nightly-build SVN setup as well). As git only stores changed data chunks, those additional commits shouldn't do much regarding the repository size. With git, commits are cheap. Looking at the trade-off between the additional size the translations cause versus not having them in the git repository and the added complexity of the nightly-build repository not being a 1:1 mirror of the git repository, I'd choose keeping them in git. People who pay particular attention to the size the data takes up on their hard drive could still use SVN.

What's the reason for your negative stance on bot commits to the source repository? Right now that sounds a bit arbitrary to me. I do get the reason for not committing the SPIR-V shaders to the 0ad git repository, but what's the connection to the translations here?

Right now you're one of the very few people who can actually change something about it. I'd be happy to support with that though.

I believe nobody proposed a single large git repository, with large as in "contains everything the current SVN repository includes", and we all agree that removing built artifacts from the 0ad repository is a good idea. Using submodules we could make it appear as if it were one large repository though, which would remove the need to use SVN during development and would make having a 1:1 SVN mirror easier.

To sum up, I still believe a 0ad git repository (plus submodules for git repositories with compiled artifacts) with a 1:1 SVN mirror would, if possible, be a great win to reduce the complexity that the new setup introduces.
-
I'd like to add some details to this (@vladislavbelov might correct me). Vulkan shaders (also called SPIR-V shaders) are artifacts produced by compiling the GLSL code (text files) into SPIR-V files (binary files). They should only get rebuilt if new shaders are added or old ones are modified. That compilation process takes one hour on a fast machine, and about five on the CI. The produced output also differs depending on the version of glslc (this caused a few disagreements). There is no incremental build.

So you might wonder if we're doing something stupid when compiling them. Actually, it's the opposite: we're minimizing the number of files we have to generate, using these rule files: https://releases.wildfiregames.com/spir-v/ You see, unlike GLSL, which supports conditional compilation using #if and the like, SPIR-V does not, so you have to compile every single permutation of those macros (and there are a lot). Those JSON files only list the permutations that are actually used, in order to prevent generating too much useless code. Brute-forcing the generation would add a couple of gigabytes of shaders.

When Vulkan was first added, the generation script and the relevant rule files were not present anywhere, so we couldn't automate the process. That's one of the things that caused a delay, because I didn't want to go the easy route and just dump the spir-v mod as-is into the release bundles.

In the future we probably should rebuild the shaders only if something in binaries/data/mods/**/shaders changed, or if the rules file changed (we could store a hash of the files or something). That's still a significant size. The mod itself has the advantage of being compressed.
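Following up on that last idea, here is a minimal sketch of what such hash-based rebuild detection could look like. It is only an illustration: the rules-file and stamp-file paths are assumptions, not the project's actual layout, and a real version would live next to the existing SPIR-V tooling.

```python
#!/usr/bin/env python3
"""Sketch: only regenerate SPIR-V shaders when the GLSL sources or the
rules file changed. The RULES_FILE and STAMP_FILE paths are illustrative."""
import hashlib
from pathlib import Path

RULES_FILE = Path("rules.json")            # illustrative: the permutation list
STAMP_FILE = Path("spirv-sources.sha256")  # illustrative: stores the last hash


def sources_hash() -> str:
    """Hash every shader source plus the rules file, in a stable order."""
    h = hashlib.sha256()
    shader_files = sorted(
        p
        for shader_dir in Path("binaries/data/mods").glob("**/shaders")
        for p in shader_dir.rglob("*")
        if p.is_file()
    )
    for path in shader_files + [RULES_FILE]:
        h.update(str(path).encode())
        h.update(path.read_bytes())
    return h.hexdigest()


def needs_rebuild() -> bool:
    """Compare the current hash with the stored one and update the stamp."""
    current = sources_hash()
    if STAMP_FILE.exists() and STAMP_FILE.read_text().strip() == current:
        return False
    STAMP_FILE.write_text(current + "\n")
    return True


if __name__ == "__main__":
    print("rebuild needed" if needs_rebuild() else "shaders up to date")
```

The CI job would then skip the expensive glslc run entirely whenever the stamp matches, which is the "only when needed" behaviour described above.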
-
Thanks for the detailed point of view, I'll try to address it from my perspective. (I read on IRC about feeling sorry for giving me more work; don't be! I was aware of that possibility since I worked on this migration on my own. Some parts of it are a bit opinionated: I tried to set up my own ideal setup. That said, I am not against anything in principle, but I'm not going to overwork myself: whenever I am convinced by a proposal, I'll perform it with pleasure, but if I'm not, I'll let someone else do it.)

My number one expectation is: ditch Phabricator (well, they ditched us first, as implodedok put it). Stan hinted at that above. We need to continue having a collaboration platform, and Phabricator was the only big one that provided support for SVN. But apart from that, I have nothing against SVN. It's still a very usable tool, with many upsides, and as long as we had a forge allowing us to collaborate, we could do without branches and PRs. That's not the case anymore, so we have to switch to git. You (and others) dream of stopping using SVN entirely, but I also had to convince people who don't want to use git at all. I tried to find a middle ground here, exploiting the main advantages of both tools, and I'm quite happy with the result.

I disagree with that part. You'd need to have Subversion installed on your system (just like now), but you wouldn't have to interact with it anymore. You'd just run the libraries/build-source-libs.sh script and be done with it.

No, I added a source/tools/i18n/get-nightly-translations.sh script to export the latest translations into your git clone. It was a valid concern though; bb was the one to bring it to my attention first.

That is true, but the majority of devs will not have to interact with those repositories. The upgrade of libraries is fundamentally non-collaborative: Stan and I are the only ones who may upgrade Windows libs these days; you can add wraitii and s0600204 to the list for SpiderMonkey upgrades, and that's all. Being one of the involved devs, I agree that the new setup is not a huge improvement over what we did on SVN, but it feels way more organized and streamlined to me.

I have never used git submodules, so basically I wouldn't dare to set that up. I have nothing against it in principle, but I had not thought of submodules before yesterday. If I'm not mistaken, this is a change that could be performed in the future without rewriting history, right? It could be an incremental improvement over the future setup, performed by a knowledgeable contributor. Would that be acceptable to you?

Same as above, but I believe the directory structure of the repository would make that much harder; it would create similar problems to the engine/mods split, I think, but I'm not sure.

That, on the other hand, I am against. The history of po/pot files in the git repo, when I started my experiment, was HUGE. It was several times larger than the current size (LFS blobs excluded) that I got to (I can't remember how much, but off the top of my head 600 MiB). And even that is not satisfactory, because you only get new translations twice a week. With the nightly-build system, testers obtain everything "as in a release" daily, which is a big improvement! Of course, committing translations daily to the git repo would be possible, but the size would skyrocket. Generally speaking, I'm also a bit averse to letting a bot commit to the source repo.

It makes more sense to me to have a git repo managed entirely by humans, and then a repo generated by the CD system, which provides a ready-to-play state to testers. Let me link once again to the glorious diagram I'm really proud of, in which it looks like concerns are cleanly separated.

But the main thing that convinced me to strip the translations from the git repo is the SPIR-V shaders for Vulkan. Right now we are blocked with a Vulkan support that depends on a mod which cannot be committed because of its size, the generated nature of its contents, and the huge amount of time/computing needed to generate it (and I believe that's part of the reasons for the stalled A27, as including the mod would complicate the release process). Vulkan cannot be tested out of the box by users without some knowledge. I want to demonstrate next week (unfortunately I can't show that to support my claim yet) that this can be fixed by my nightly-build approach. The git repo does not have to contain those large and ever-changing shader files. Instead, the nightly-build generation can provide those files daily, with the CD system taking care of generating them, and we'd have them ready for releases in advance. I only have to make sure that the generation can be efficient (we mustn't spend 5 hours generating them every night, but only when needed), and that will be my task for the next week.

In a nutshell:
1. I don't think SVN is so bad that we should avoid using it entirely.
2. I am open to replacing my source-libs and windows-libs SVN repos with git submodules, but that can be done in the future, and I'd be happy if someone else did it.
3. I believe that a nightly build for testers and for generating release bundles, generated from a lightweight git repo for the developers, is a superior approach to a 1:1 SVN mirror of a (very?) large git repo.
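For readers wondering what the get-nightly-translations step amounts to in practice, here is a rough sketch: export the current translation files from the nightly-build SVN into the git clone. The actual helper is the source/tools/i18n/get-nightly-translations.sh script mentioned above; the l10n paths below are assumptions made for illustration.

```python
#!/usr/bin/env python3
"""Sketch: pull the latest translations from the nightly-build SVN into a
git working copy. The l10n paths are assumptions; the real helper is
source/tools/i18n/get-nightly-translations.sh."""
import subprocess

NIGHTLY_URL = "https://svn.itms.ovh/nightly-build/trunk"

# Assumed locations of the generated translation files inside the tree.
L10N_PATHS = [
    "binaries/data/l10n",
    "binaries/data/mods/mod/l10n",
    "binaries/data/mods/public/l10n",
]

for path in L10N_PATHS:
    # 'svn export' fetches the files without creating a working copy;
    # '--force' overwrites whatever is already present in the git clone.
    subprocess.run(
        ["svn", "export", "--force", f"{NIGHTLY_URL}/{path}", path],
        check=True,
    )
```

This keeps the day-to-day git workflow SVN-free apart from this one export step, which is the point of the nightly-build repository.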
-
There is a big topic left I want to discuss, and that's the continued use of SVN. Up until yesterday I was under the impression that the goal of the migration is to replace the use of SVN for all development of 0ad (while keeping read-only SVN access). I obviously should've read the documentation in detail earlier, but after doing so, and after a lot of questions answered by @Itms and @Stan` (thanks again for that), I believe I understand the planned setup and its motivation now. Unfortunately I feel like the migration is one step forward and one step back. Let me explain why.

Right now, when working on patches and trying out development snapshots, I use the git mirror of 0ad on GitHub (https://github.com/0ad/0ad). The only time I have to use SVN is when I want to actually commit something. With the planned setup that would not be the case anymore: I'd need SVN just to compile 0ad, because the contents of the "source-libs"/"windows-libs" SVN repositories are needed for that. I'd also need SVN to test different locales with a development snapshot, as translations aren't planned to be included in the git repository anymore.

I believe it's fair to say the new setup is much more complex. Previously it was a single SVN repository with a 1:1 git mirror; now it's one git repository and three SVN repositories, which share different subsets of code and/or binaries. The only one of these repositories which contains everything (albeit with more coarse-grained commits) is the "nightly-build" SVN repository. As that's read-only, every developer will have to interact with both git and SVN in the future. And while one of the big benefits of Gitea is the possibility to use pull requests, that won't be possible for changes to the "source-libs" and "windows-libs" SVN repositories, as Gitea doesn't support SVN.

Learning about these constraints pulled me down a bit, but let's see if we can get some ideas for improvement out of that. Going back to square one, what are our expectations for the migration? Mine are:

- All development for 0ad happens in git afterwards.
- A comfortable web frontend is available for browsing git repositories, collecting issues, discussing and merging patches, and managing documentation.
- Build artifacts aren't included in the source code repository anymore.
- A 1:1 SVN mirror is available (essentially the equivalent of the git mirror we had so far).

Based on that, here is what I'd suggest:

- Keep the planned 0ad git repository and the use of Gitea.
- Commit translations and translation credits into the 0ad git repository again (I don't see a good reason why translations should be handled differently from assets or "programming.json", and having them in git makes everything easier).
- Integrate the contents of the "source-libs" SVN repository into the 0ad git repository again.
- Instead of using a "windows-libs" SVN repository, put its contents into a git repository, utilizing Git LFS, and include it in the 0ad git repository as a submodule (see the sketch below).
- Add an additional git repository for build artifacts, utilizing Git LFS, and include it in the 0ad git repository as a submodule.
- Make an SVN mirror available with git-as-svn (https://github.com/git-as-svn/git-as-svn).

By doing all of the above there would be no SVN repository in use anymore, but everything would still be accessible through SVN. The contents of the 0ad git repository and the SVN mirror would be identical, and the mirror would always return the most recent content from the git repository. Changes for all repositories could be handled via pull requests in Gitea.

While the git repository would be larger than with the currently planned setup (because of the added translations and source-libs), I believe that's warranted given the history of the project and the size of the code base. Partial checkouts using SVN would still be possible, and the necessary storage space on the server might even be lower, as git-as-svn is just a bridge between git and SVN. What do you think about that? What did I miss? Do my ideas even make sense?
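For the submodule/LFS part of the suggestion, here is a rough sketch of the commands involved, wrapped in a small Python script. The repository URLs and target paths are made up for illustration; they do not refer to existing repositories.

```python
#!/usr/bin/env python3
"""Sketch of the proposed layout: a windows-libs git repository tracking
binaries with Git LFS, plus a build-artifacts repository, both included in
the 0ad repository as submodules. URLs and paths are illustrative only."""
import subprocess


def git(*args: str, cwd: str) -> None:
    """Run a git command in the given repository and fail loudly on errors."""
    subprocess.run(["git", *args], cwd=cwd, check=True)


# In the (hypothetical) windows-libs git repository: track binaries with LFS.
git("lfs", "install", cwd="windows-libs")
git("lfs", "track", "*.dll", "*.lib", "*.pdb", cwd="windows-libs")
git("add", ".gitattributes", cwd="windows-libs")
git("commit", "-m", "Track prebuilt Windows libraries with Git LFS", cwd="windows-libs")

# In the 0ad git repository: pull both repositories in as submodules, so a
# single 'git clone --recurse-submodules' yields a buildable tree.
git("submodule", "add", "https://example.org/0ad/windows-libs.git", "libraries/win32", cwd="0ad")
git("submodule", "add", "https://example.org/0ad/build-artifacts.git", "build-artifacts", cwd="0ad")
git("commit", "-m", "Add windows-libs and build-artifacts as submodules", cwd="0ad")
```

Contributors would then get a buildable checkout from git alone, and changes to the library repositories could go through regular Gitea pull requests.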
-
Hello folks! I have been hard at work on this, and things are starting to look usable. Here is a (probably incomplete) changelog:

- The CI works! It is still a bit raw and I deactivated it to save space on Jenkins, but I am now working on improving it. It is the last big chunk of work I have to perform, and then we will be ready to migrate.
- The nightly build now exists! Get it via SVN at https://svn.itms.ovh/nightly-build/trunk. It was generated a month ago, but I'll update it through Jenkins in the upcoming days.
- All the documentation was updated and improved during the JDLL event in May, where I presented the project and received some feedback. I also improved the Privacy Policy based on feedback. Please head over to the wiki, especially BuildAndDeploymentEnvironment.
- I need help! The glorious FAQ has been defaced by the conversion to Gitea, which uses GitHub-flavored Markdown. Is anyone interested in working on manually restoring the appearance of the FAQ, adapting it to its new home on Gitea? Please let me know if you wish to help. Basically it would be necessary to 1) clean up the structure of the page, fixing the tables, the raw HTML, and the hard links, and 2) improve the appearance just like it was done on Trac, but using Markdown features instead of Trac features.
- The git repository now has a script for getting translations straight from the nightly build (the same will be done for Vulkan shaders).
- Links to changesets in the format [25001] on Trac are correctly converted to 04ec75ed7e on Gitea (and I fixed the commit text in that specific revision). A sketch of that conversion follows below.
- Important Trac keywords (regression, pathfinding, design, ...) now have a Gitea label, but I can't easily convert keywords to labels automatically, so I will add the labels manually after the migration.

Thanks in advance for your feedback!
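For the curious, the changeset-link conversion boils down to a text substitution driven by a revision-to-hash map. The sketch below is an illustration only: the mapping dictionary and the helper name are hypothetical, and only the [25001] → 04ec75ed7e pair comes from the example above.

```python
import re

# Illustrative mapping from SVN revision numbers to git commit hashes;
# in practice this would come from the migration tool's revision map.
REV_TO_HASH = {25001: "04ec75ed7e"}

CHANGESET_RE = re.compile(r"\[(\d+)\]")


def convert_changeset_links(text: str) -> str:
    """Replace Trac-style changeset references like [25001] with the
    corresponding git commit hash, leaving unknown revisions untouched."""
    def repl(match: re.Match) -> str:
        rev = int(match.group(1))
        return REV_TO_HASH.get(rev, match.group(0))
    return CHANGESET_RE.sub(repl, text)


print(convert_changeset_links("Fixed in [25001], see also [12345]."))
# -> "Fixed in 04ec75ed7e, see also [12345]."
```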