Everything posted by myconid

  1. halcyonXIII, give this one a try. (Generated from git code, applies cleanly to SVN code, for me.) mmpatch.diff.zip
  2. Hm, that's probably my fault (it seemed to work when I tried it on the SVN code...). I'll have to test this myself and get back to you.
  3. Ok, looking forward to seeing it in action. What sort of look are you aiming for? Something like the AoE3 video that Mythos_Ruler posted perhaps?
  4. I'm starting new branches for each thing, that have everything merged into one commit so it's easier to extract SVN-friendly patches. Here's modelmapping (basically, the same code as above) for starters: https://github.com/m...elmapping-patch Modelmapping + smoothLOS merge: https://github.com/myconid/0ad/tree/merged-patch1 Nope, they'll have to be a separate patch.
  5. Patch time! This patch contains all the changes relating to the new model effects and materials. It includes code, shaders, example materials, meshes with extra UVs for "Roman Civil Centre" and "Roman Temple Big", example actors for the Roman CC and Mars Temple 2, Wijitmaker's textures, and prebaked AO textures for these buildings.

     It adds support for AO/parallax/specular/normal/emissive mapping (with tangent generation and all that), windy trees, a system where effects can disable themselves with distance (and, in theory, other conditions), a basic "time manager" component, "render queries" (so materials can explicitly request information from the renderer), and some other stuff to control effects from the config.

     The patch was created against the latest SVN. Apply with patch -p0 -i file.diff in the main 0ad folder. The images need to be copied to binaries/data/mods/public/art/textures/skins/structural/. All who can test, please do! modelmapping-final1.zip
  6. Well, if you use high-speed, high-quality cameras and markers on the actors then I agree it would work (and even then, two cameras won't be enough). But $5 webcams? For full-body pose estimation? No way. Maybe for hand gestures...
  7. I think they are all finished. Imho, it'll be easiest (for me) if modelmapping is done first, which is why I'm currently going through that branch and cleaning it up. Been kind of busy this week tbh so progress has been a little slow, but I'll definitely find time for it over the weekend.
  8. Haha, good luck with that. I'd use one high-quality camera, but try a method like this one, which is similar to what the Kinect does, but in 2d. I doubt you'd get the same quality of results as a 3d sensor, though.
  9. Wow! This is an amazing suggestion! (My username actually came from a monster type in the Baldur's Gate games - it's a type of giant mushroom from the DnD universe. The BG games are two of my favourite video games from the late 90s, hands down!) The 0ad engine is probably perfect for that style of game. There'd need to be major changes to the simulation code (definitely not as simple as replacing the AI scripts), and indoor scenes could be done with only a few hacky modifications here and there. It's totally doable from a technical perspective, imho, and because most code would be reused, 0ad would also benefit hugely from the added development. Anyway, if there are other people who are seriously interested in this, count me in! (Just imagine, it could even be done semi-professionally if it went the Kickstarter route and all that.)
  10. I see what you mean. We don't need to have one looong wave all across the beach, it can be a lot of small waves here and there moving independently. Also, we might use several triangles to deform the shape of the waves as needed (this would need to be made a special case, though). Either way, it's doable!
  11. Ah ok. My thought was to calculate the wave positions/directions in Atlas and insert polygons at those locations with the appropriate rotation. Then we create a material that moves one or more textures across the polygon surface and transforms them as needed to look wave-like. Alternatively, we create the animation in Blender and it could work just as well (but is probably less efficient). This material draws to two buffers simultaneously (look up MRT), one being a normalmap that is combined with the water normals and maybe an additional buffer for diffuse foam or such. The extra buffers are passed into the water shader and combined with the usual water effect.
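     To make the MRT idea above concrete, here is a rough GLSL sketch of a wave material that draws to two buffers at once. This is purely illustrative — the uniform and varying names (waveNormalTex, waveFoamTex, time, v_uv) are assumptions for the example, not the actual 0ad shader interface:

     ```glsl
     // Hypothetical wave-material fragment shader (needs GL_ARB_draw_buffers).
     // It scrolls a wave texture across the inserted polygon and writes to two
     // render targets simultaneously: attachment 0 gets a normal perturbation
     // for the water shader, attachment 1 gets a foam/diffuse term.
     #version 120
     #extension GL_ARB_draw_buffers : require

     uniform sampler2D waveNormalTex; // scrolling wave normalmap (assumed name)
     uniform sampler2D waveFoamTex;   // foam texture (assumed name)
     uniform float time;              // e.g. from the "time manager" component

     varying vec2 v_uv;

     void main()
     {
         // Move the textures across the polygon surface over time.
         vec2 scrolledUV = v_uv + vec2(time * 0.05, 0.0);
         vec3 normal = texture2D(waveNormalTex, scrolledUV).rgb;
         float foam  = texture2D(waveFoamTex, scrolledUV).r;

         gl_FragData[0] = vec4(normal, 1.0);     // buffer 0: wave normals
         gl_FragData[1] = vec4(vec3(foam), 1.0); // buffer 1: foam/diffuse
     }
     ```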
  12. As I don't know exactly how you're trying to do it, I can't be of much help... If you release some code, I'll have a look (later tonight) and make some suggestions.
  13. You don't need to do any of that. Both the water plane and the waves are in screen coordinates when they are combined. You draw the waves in perspective, just as you would draw them on screen, except you store them separately in a texture. When you are drawing the water plane, after the perspective transformations, the gl_FragCoord of a fragment on the water surface will match that position in the waves texture. It's exactly the same thing you did with the depth texture, isn't it?
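     The screen-space lookup described above can be sketched like this in GLSL — again just an illustration of the idea, with assumed uniform names (wavesTex, screenSize), not the actual water shader:

     ```glsl
     // Hypothetical fragment of the water shader: sample the offscreen waves
     // buffer at this fragment's window position, exactly like the depth-
     // texture lookup mentioned above.
     #version 120

     uniform sampler2D wavesTex;  // normals written by the wave pass (assumed)
     uniform vec2 screenSize;     // viewport size in pixels (assumed)

     void main()
     {
         // gl_FragCoord.xy is in window coordinates; normalize to [0,1] UVs.
         vec2 screenUV = gl_FragCoord.xy / screenSize;
         // Unpack the stored normal from [0,1] back to [-1,1].
         vec3 waveNormal = texture2D(wavesTex, screenUV).rgb * 2.0 - 1.0;
         // ...this would then be combined with the animated water normal...
         gl_FragColor = vec4(waveNormal * 0.5 + 0.5, 1.0);
     }
     ```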
  14. Both the waves and the water plane are already in perspective (think about it), so you don't need to worry about it at all. What if you create animations of waves and then have a script to place them in Atlas where needed (basically remove them and reinsert them on terrain/water update). There's already a way to make objects float on water...
  15. I'm assuming you're drawing new geometry that superimposes a wave texture on the water? Here's what I'd do: instead of drawing the waves directly to the screen, I'd draw them to a separate buffer (size of the screen) to create an extra normalmap that is then added to the animated water normalmap in the water shader. This will also let us do other more complex water effects, like ship trails and so forth.
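     One simple way to add the extra normalmap to the animated water normals, as suggested above, is to sum the two and renormalize. This is only a sketch of one common quick-and-dirty approach, not necessarily how 0ad would do it:

     ```glsl
     // Hypothetical helper for the water shader: blend the wave-buffer normal
     // into the existing animated water normal. Both inputs are assumed to be
     // unpacked, unit-length tangent-space normals.
     vec3 combineWaterNormals(vec3 waterNormal, vec3 waveNormal)
     {
         // Summing the perturbations and renormalizing is cheap and looks
         // reasonable, though it is not a physically exact blend.
         return normalize(waterNormal + waveNormal);
     }
     ```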
  16. OpenCV is awesome (I had to use it for some work stuff a while back), but it's mostly for 2d things like tracking of items across a scene, face detection, stereo matching, etc... The Kinect actually has 3d input from its sensor, which is what lets it "pattern-match" a stick figure to a person relatively accurately.
  17. That would need more work from the artists, though. Maybe if we did the movements at half-speed we'd get more accurate results? Dunno.
  18. That doesn't do full-body motion capture, though. Niiice! Looking further in the thread it looks like this idea isn't nearly as wild as I thought!
  19. I'm sure everyone knows what the Kinect is and what it's for. So, it just occurred to me: since people have been writing open source drivers and plugins for Blender that can use it in realtime, why can't we push the envelope and use it for motion capture of actual game animations?

      The motivation is that this could lower the barrier of entry for modders wishing to produce their own animations, and also allow the team to produce a larger volume of animations for large-scale campaigns like those in AoE/AoM. Of course there's a price: the captured animations will often need to be tweaked manually by the artists, multiple takes will be necessary, and the hardware costs money. However, I think that if such a system could work it would be a huge step forward for the hobbyist game-dev scene (most modern commercial games depend on mo-cap for much of their animation work, and we are at a disadvantage), and aside from the increased productivity it would certainly get 0ad some headlines.

      I know it's a wild idea (one can certainly dream). I'm just saying, think of the implications if it really works... (To be clear, I don't own a Kinect, so I'm interested in opinions of both artists and technical people who might have tried something related to this.)
  20. I've never built the engine on Windows either, but it sounds like something went wrong when you executed update-workspaces.bat. Since you need to build Atlas to use this feature, you need to resolve the wxWidgets dependency first. Did you compile or install wxWidgets?
  21. Depends on what you want to do next. If it's water stuff, then work from the 0ad master branch. If it's ARB stuff, then from my merged branch (which, NB, may be a bit behind atm, so let me know first).
  22. Yes, the patch on trac is completely out of date. I'm not too fussed about which Alpha these changes are included in, as long as someone can upload some Windows binaries for people to work with!

      As for hierarchy, most things are actually independent of each other! I have occasionally merged some branches together for integration testing, but most changes are separate and the relevant commits shouldn't be hard to pick out. There are four different things that can be reviewed more or less simultaneously. IIRC, these are: modelmapping (merge of modelmapping + effectsdistance2), terrainmapping (merge of terrainmapping + terrainalpha2), smoothlos, and importheightmap. Tell me the order in which you want these prepared and I'll put the relevant commits in new branches for you to sink your teeth into.

      In the meantime, I updated from upstream yesterday and started merging with the modelmapping branch (not pushed yet), and I'm dealing with a strange regression bug in the commit to support older ATI cards like fabio's (the kind of bug where putting four values in some vector causes the tree shadows to disappear, which seems more like a driver bug). If this can't be resolved quickly, I'll just reset modelmapping to before that commit and people with old hardware will have to stick with ARB. Sorry, guys, you really need to upgrade already.