Wild idea: Kinect for motion capture


myconid

I'm sure everyone knows what the Kinect is and what it's for.

So, it just occurred to me: since people have been writing open source drivers and Blender plugins that can do this kind of skeletal tracking in realtime, why can't we push the envelope and use it for motion capture of actual game animations?

The motivation is that this could both lower the barrier to entry for modders wishing to produce their own animations and allow the team to produce a larger volume of animations for large-scale campaigns like those in AoE/AoM.

Of course, the price is that the captured animations will often need to be tweaked manually by the artists, multiple takes will be necessary, and the hardware costs money. However, I think that if such a system could work, it would be a huge step forward for the hobbyist game-dev scene (most modern commercial games depend on mo-cap for much of their animation work, so we are at a disadvantage), and aside from the increased productivity it would certainly get 0ad some headlines.

I know it's a wild idea (one can certainly dream). I'm just saying, think of the implications if it really works...

(To be clear, I don't own a Kinect, so I'm interested in opinions of both artists and technical people who might have tried something related to this)


The motion capture demos that Emjer did were definitely a big improvement over anything we can do by hand, and I think it would make a significant difference if something like that made it into the game.

I'm a bit doubtful about how good the results from a Kinect can be, but I obviously don't have any evidence to base that on.


Sounds epic. On a side note, though, have you heard of Leap? It's more accurate. It hasn't been fully released yet, but they seem open to developers.

That doesn't do full-body motion capture, though.

The motion capture demos that Emjer did were definitely a big improvement over anything we can do by hand, and I think it would make a significant difference if something like that made it into the game.

I'm a bit doubtful about how good the results from a Kinect can be, but I obviously don't have any evidence to base that on.

Niiice! Looking further in the thread it looks like this idea isn't nearly as wild as I thought!


I'm no expert, but from what I've read the quality from the Kinect is pretty poor, so you could only use it as a rough template for the animation. It would probably be easier just to use normal video to "trace" over.

The Kinect way is significantly cooler, though.


How about putting colored tags on crucial parts of the motion capture actor's body and then tracking the movement of the tags with computer vision software? I've only heard good things about OpenCV.

OpenCV is awesome (I had to use it for some work stuff a while back), but it's mostly for 2d things like tracking items across a scene, face detection, stereo matching etc... The Kinect actually has 3d input from its sensor, which is what lets it "pattern-match" a stick figure to a person relatively accurately.
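To make the colored-tag tracking idea concrete, here's a minimal sketch of the principle in pure Python on a synthetic frame (the `find_marker` name and the toy frame are invented for illustration; a real pipeline would use OpenCV's color thresholding and contour functions instead of nested loops):

```python
# Toy 2d marker tracker: find the centroid of the pixels that match the
# marker color in a frame. Real code would use OpenCV thresholding, but
# the core idea is just "threshold, then average the hit coordinates".

def find_marker(frame, is_marker_color):
    """frame: 2d list of pixels; returns the (row, col) centroid of the
    matching pixels, or None if no pixel matches."""
    hits = [(r, c)
            for r, row in enumerate(frame)
            for c, px in enumerate(row)
            if is_marker_color(px)]
    if not hits:
        return None
    return (sum(r for r, _ in hits) / len(hits),
            sum(c for _, c in hits) / len(hits))

# Synthetic 5x5 frame: 0 = background, 1 = a 2x2 "red tag"
frame = [[0] * 5 for _ in range(5)]
frame[2][3] = frame[2][4] = frame[3][3] = frame[3][4] = 1
print(find_marker(frame, lambda px: px == 1))  # -> (2.5, 3.5)
```

Run per frame, this gives one 2d track per tag, which is exactly the input the multi-camera idea below needs.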


OpenCV is awesome (I had to use it for some work stuff a while back), but it's mostly for 2d things like tracking items across a scene, face detection, stereo matching etc... The Kinect actually has 3d input from its sensor, which is what lets it "pattern-match" a stick figure to a person relatively accurately.

Set up two cameras at 90 degrees; after a quick calibration, the maths for getting 3d points out of 2d tracking is pretty simple.


I dare either of you to turn your living room into the official Wildfire Games mocap studio. The programming-related work would certainly be an interesting challenge.

Edited by zoot

Set up two cameras at 90 degrees; after a quick calibration, the maths for getting 3d points out of 2d tracking is pretty simple.

Haha, good luck with that. ;)

I'd use one high-quality camera, but try a method like this one, which is similar to what the Kinect does, but in 2d. I doubt you'd get the same quality of results as a 3d sensor, though.


Haha, good luck with that. ;)

I'd use one high-quality camera, but try a method like this one, which is similar to what the Kinect does, but in 2d. I doubt you'd get the same quality of results as a 3d sensor, though.

It would genuinely be really easy. The hardest bit is calibration, which in the spirit of all good maths textbooks is left as an exercise for the reader. Once you know the two camera positions, every point gives you two lines in space that it lies on. Just find their intersection (or, in reality, the point halfway along the common perpendicular between the two lines). With two high-quality cameras this should give good results; more cameras give a more accurate point.
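That "halfway along the perpendicular" step really is just a few lines of vector maths. Here's a sketch, assuming calibration has already turned each camera's 2d track into a ray (origin plus direction) in world space; `triangulate` is an invented name for illustration:

```python
# Triangulate a 3d point from two calibrated cameras: each camera
# contributes a ray p + t*d through the tracked marker, and the point
# is taken as the midpoint of the shortest segment between the rays.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Rays: p1 + t*d1 and p2 + s*d2. Returns the midpoint of the
    common perpendicular; raises if the rays are (near-)parallel."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel")
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * x for p, x in zip(p1, d1)]  # closest point on ray 1
    q2 = [p + s * x for p, x in zip(p2, d2)]  # closest point on ray 2
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Two cameras whose rays cross exactly at (1, 1, 1):
print(triangulate([0, 0, 0], [1, 1, 1], [2, 0, 0], [-1, 1, 1]))
# -> [1.0, 1.0, 1.0]
```

With noisy real tracks the rays never quite intersect, which is exactly why the midpoint (rather than an exact intersection) is used, and why more cameras tighten the estimate.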


Maybe some of our outreach guys would like to ask around in the Kinect and computer vision communities? I don't know how fruitful it would be, but it would certainly be a boon for indie developers if it could be made to work.


It would genuinely be really easy. The hardest bit is calibration, which in the spirit of all good maths textbooks is left as an exercise for the reader. Once you know the two camera positions, every point gives you two lines in space that it lies on. Just find their intersection (or, in reality, the point halfway along the common perpendicular between the two lines). With two high-quality cameras this should give good results; more cameras give a more accurate point.

Well, if you use high-speed, high-quality cameras and markers on the actors, then I agree it would work (and even then, two cameras won't be enough).

But $5 webcams? For full-body pose estimation? No way. Maybe for hand gestures...


  • 1 year later...
  • 6 months later...

I think it's possible. Though what about the flying animations? Any stunt(wo)man around?

My worry is how to clean up the clutter of animation data so that only the essential frames remain. Otherwise the file size might be big, though I could be wrong...
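One common way to clean that clutter is keyframe decimation: drop every sampled frame that linear interpolation between its kept neighbours already reproduces within a tolerance. A minimal sketch (the `decimate` name and the 1d value-per-frame shape are invented for illustration; real mocap has one curve per bone channel):

```python
# Keyframe decimation: mocap sampled at 30-60 fps is mostly redundant.
# Keep a frame only if skipping it (i.e. interpolating straight from
# the last kept frame to the next sample) would err by more than `tol`.

def decimate(frames, tol=0.01):
    """frames: list of (time, value) samples; returns the reduced list."""
    if len(frames) <= 2:
        return list(frames)
    kept = [frames[0]]
    for i in range(1, len(frames) - 1):
        t0, v0 = kept[-1]
        t1, v1 = frames[i]
        t2, v2 = frames[i + 1]
        # value predicted by interpolating straight past frame i
        u = (t1 - t0) / (t2 - t0)
        predicted = v0 + u * (v2 - v0)
        if abs(predicted - v1) > tol:
            kept.append(frames[i])
    kept.append(frames[-1])
    return kept

# A straight-line motion sampled at 5 frames collapses to 2 keyframes:
samples = [(0, 0.0), (1, 1.0), (2, 2.0), (3, 3.0), (4, 4.0)]
print(decimate(samples))  # -> [(0, 0.0), (4, 4.0)]
```

This greedy pass can accumulate error over long runs; a production tool would check the whole skipped span against the kept endpoints, but the file-size win is the same idea.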


But before doing anything like this, you want to make sure we will not have to fix the skeletons.

Definitely, it's on my list, don't worry. Unfortunately, I doubt it will make sense to do anything other than wait for Blender's GaiaClary to help us out; he already did for Maya. Perhaps we should go from 3DSMax to Maya and then to Blender?

Afterwards, Kinect. Time will tell whether it will multiply our efforts by hundreds.


I'd need to get my hands on a trial of 3dsmax 2013 to get OpenCOLLADA to work. This needs to be tried, since it is the best COLLADA exporter. I'm also looking at the Settlers files for a way to open the anim files, to see if I can find other export formats. The Settlers files contain dff's (openable by ZModeler, not ZBrush) and anm files. I can open the dff's, since they seem to match the GTA:SA format, but I can't open the anms or export them.

I need to test COLLADA import in Blender 2.70.

Export, too.

I wasn't able to find Max 2013, but Maya I can find. Do you know which version he fixed?

About the Kinect: I'll take a look at Lion's post.

I never tried to connect the Kinect to my PC :)

