NVIDIA’s New AI: Journey Into Virtual Reality!
- Published on Dec 13, 2021
- ❤️ Train a neural network and track your experiments with Weights & Biases here: wandb.me/paperintro
📝 The paper "Physics-based Human Motion Estimation and Synthesis from Videos" is available here:
nv-tlabs.github.io/physics-po...
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: www.patreon.com/TwoMinutePapers
Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
Wish to watch these videos in early access? Join us here: thexvid.com/channel/UCbfY...
Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: discordapp.com/invite/hbcTJu2
Károly Zsolnai-Fehér's links:
Instagram: twominutepa...
Twitter: twominutepapers
Web: cg.tuwien.ac.at/~zsolnai/
Your reactions make me happy
Whoa. Putting the VR use aside, this pose estimation looks good enough for most mid-budget animations. Can't wait till we can just act out a scene in front of a camera and effortlessly port it to a rigged character.
These new papers are getting more and more insane. Technology is moving so fast I can't even follow it anymore, and that's amazing.
I cannot help but notice how the final virtual animation does not coincide perfectly with the input.
I really don't want to pre-emptively call VR "the Metaverse" and hand a monopoly over that nebulous technology just yet. We cannot just accept that such a morally bankrupt company would have any integrity in the matter, and they shouldn't be trusted as the proprietor and administrator of our new synthetic realm. The tech in this looks amazing, however! Always rocking it, Two Minute Papers!!
This seems awesome for the future of VR. We are so behind with just head and hand tracking. Maybe this could aid in moving around realistically without needing a whole motion capture studio.
This truly is one of the most impressive tech developments I've seen recently. Great paper, great delivery as always.
I'm looking forward to seeing whether NVIDIA has any interest in turning this into part of their tensor-core tooling. It would be interesting to use this kind of high-level motion tracking on NVIDIA cards, and I'm pretty sure no game developer would be against trying to make a VR game with this type of control. It's an opportunity to take the lead in a potential new segment of the VR market.
I'm strongly against Meta's metaverse, but I'm really excited about bringing accurate animations to 3D characters with this technique!
Dear Two Minute Papers, would it be possible to summarize how progress in various domains has been underpinned by combining simulations (physics, Monte Carlo, etc.) with DNNs, or by combining DNNs with each other (GANs, cognitive systems)?
Another use for this is frame interpolation. When synthesizing "tween" frames, a physical understanding of people lets the interpolator account for occlusions between the legs and between the arms and body, producing a physically plausible result. Many purely image-based systems have bad artifacts in such situations.
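A minimal sketch of that idea in Python: blend the two frames in skeleton (joint) space rather than pixel space, so an occluded limb still moves as a coherent object. The 17-joint layout and simple linear blending here are illustrative assumptions, not something from the paper; a real system would interpolate joint rotations (e.g., with slerp) instead of positions.

```python
import numpy as np

def interpolate_pose(joints_a, joints_b, t):
    """Linearly blend two skeleton poses (N x 3 joint positions).

    Interpolating in joint space, instead of blending pixels,
    keeps limbs coherent: an arm passing in front of the torso
    stays an arm rather than smearing into the background.
    """
    return (1.0 - t) * joints_a + t * joints_b

# Hypothetical example: two frames of a 17-joint skeleton.
frame0 = np.random.rand(17, 3)  # joint positions at frame 0
frame1 = np.random.rand(17, 3)  # joint positions at frame 1
tween = interpolate_pose(frame0, frame1, 0.5)  # halfway "tween" frame
```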
This is actually amazing
This genuinely reminds me of when Dave Perry from Shiny Entertainment figured out not to bother rendering hidden objects. It's so simple and elegant. Now if only I could get my hands on a reasonably priced 3070.
I imagine pose estimation of this quality will have a huge impact for independent video game developers and animators! Now instead of renting an expensive mocap suite a dev could just take a quick video in their driveway of the animations they need, and convert them to usable animation data via this NN.
This is actually very cool: using physics constraints to fill in missing data. I tried tracking once with multiple Kinects from different angles, and it turned out to be rather tedious. A lot is gained from a simpler hardware setup and doing the rest in software.
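As a toy illustration of the "physics constraints fill in missing data" idea, here is one hand-rolled constraint: clamping estimated joints to a ground plane. This is not the paper's method, just a sketch of how even a single physical rule can clean up an image-based estimate; the ground height and joint layout are assumptions.

```python
import numpy as np

GROUND_Y = 0.0  # assumed ground-plane height in world units

def enforce_ground_contact(joints):
    """Project any joint that sank below the ground back onto it.

    A crude stand-in for physics-based corrections: this one
    constraint alone removes the classic "foot sliding through
    the floor" artifact of pure image-based estimators.
    """
    corrected = joints.copy()
    corrected[:, 1] = np.maximum(corrected[:, 1], GROUND_Y)  # y is up
    return corrected

noisy_pose = np.array([[0.1, -0.03, 0.2],   # left foot, below ground
                       [0.3,  0.95, 0.2]])  # head
print(enforce_ground_contact(noisy_pose))
```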
Great work! I love these little summaries because I would never know this existed without them.
This is incredible. Does it run in real time?
Kinect is finally evolving! When I played Kinect games as a kid, the tracking was horrible: it didn't detect anything correctly and was full of false positives. It had several 3D sensors, while nowadays Tesla only needs a camera, just like humans. It's amazing how things are progressing. I hope one day I can play Kinect-style games with perfect motion tracking from just a webcam, if TVs don't already come with a built-in camera, which could also be used to control the TV itself. Goodbye remote control!! Goodbye useless infrared sensors!!
The flickering is actually a problem that is trivial to solve with algorithmic approaches rather than neural nets: you can use a state machine to represent the objects in 3D space instead of having to "re-detect" them on every frame.
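A minimal sketch of the stateful tracking this comment suggests: keep the last confident pose across frames and blend new detections in, rather than trusting each frame independently. The confidence threshold and exponential-smoothing weight are assumptions chosen for illustration.

```python
import numpy as np

class PoseTracker:
    """Keep per-joint state across frames to suppress flicker."""

    def __init__(self, min_confidence=0.5, smoothing=0.7):
        self.min_confidence = min_confidence  # ignore weak detections
        self.smoothing = smoothing            # weight on previous state
        self.state = None                     # last accepted pose (N x 3)

    def update(self, detected_joints, confidences):
        if self.state is None:
            self.state = detected_joints.copy()
            return self.state
        # Blend in only the joints detected with enough confidence;
        # low-confidence joints keep their previous position instead
        # of flickering to a bad per-frame guess.
        accept = confidences >= self.min_confidence
        blended = (self.smoothing * self.state
                   + (1.0 - self.smoothing) * detected_joints)
        self.state = np.where(accept[:, None], blended, self.state)
        return self.state
```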
Kudos to you!! It cannot be overstated that the content and work you and your team are producing is a gift to humanity. Please keep sharing your milk of human kindness with the world.