Ubisoft's AI Learns To Compute Game Physics In Microseconds! ⚛️
- Published on Oct 28, 2019
- ❤️ Check out Weights & Biases here and sign up for a free demo: www.wandb.com/papers
Their blog post and their CodeSearchNet system are available here:
www.wandb.com/articles/codese...
app.wandb.ai/github/CodeSearc...
📝 The paper "Subspace Neural Physics: Fast Data-Driven Interactive Simulation" is available here:
montreal.ubisoft.com/fr/deep-...
theorangeduck.com/page/subspac...
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Alex Haro, Anastasia Marchenkova, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Bryan Learn, Christian Ahlin, Claudio Fernandes, Daniel Hasegan, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, James Watt, Javier Bustamante, John De Witt, Kaiesh Vohra, Kasia Hayden, Kjartan Olason, Levente Szabo, Lorin Atzberger, Lukas Biewald, Marcin Dukaczewski, Marten Rauschenberg, Matthias Jost, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Taras Bobrovytsky, Thomas Krcmar, Torsten Reil.
www.patreon.com/TwoMinutePapers
Splash screen/thumbnail design: Felícia Fehér - felicia.hu
Károly Zsolnai-Fehér's links:
Instagram: twominutepa...
Twitter: karoly_zsolnai
Web: cg.tuwien.ac.at/~zsolnai/
Ah, yes, I love the stroking bunnies with circular objects mechanic found in many games.
LMAO, now if only you saw the Hogwarts Legacy gameplay where it shows a person using magic to use a circular brush to brush an animal! Lmfaooo
@KiemPlant HAHAHA
"Ah, I see you are a man of culture as well 😌."
No bunnies were harmed during the experiment! 😅👍
LoverLab will love this ;)
can't wait until this new physics model gets into games and such, so we can all play more physics-based games on our potato PCs
@superninja252 It will make them more costly: when the range of possibilities expands, you need to create more simulations and mechanics to account for the general expectations of the customer base.
Mind you, I'm not complaining, I want better physics, but that's how it'll go.
Would be cool to use it to simulate realistic gunplay in real time.
Your potato PC; mine is pretty good, I have to say
UE5
@DanDeMan I'm talking about real-time cinematics, like God of War (2018), Devil May Cry 5, Final Fantasy VII Remake, the Resident Evil remakes, etc...
YES, this is what I've been waiting for: physics APPROXIMATIONS! It's like how you can draw an animation of some cloth flapping in the wind and approximate it without having to get out a calculator to compute what it would actually look like. Am I understanding this right?
My problem is that approximations typically come at the cost of determinism. Determinism is a fantastic tool for reducing the data sent over a network to just the inputs. However, you can distribute a trained AI that does deterministic physics approximations, and that is truly powerful.
@yvrelna well put
similar
except it's drawn by someone who has studied cloth physics interactions for many thousands of hours, and can now imagine, quite accurately, what it would do in any situation you ask
@yvrelna exactly
@Temm Physically accurate physics engines would do the simulation by solving continuous differential equations, not by the discrete integration that's common in real-time game engines. They might use discrete time steps so they have something to render on screen, but only if the results are equivalent to the continuous ones. I've written some simple physics simulators that do this in the past; it wasn't actually that difficult or compute-intensive (at least for basic physics).
Also, I'm not quite sure about your assertion that real life actually is continuous. We know that mass and energy come in particles and packets, not continuous quantities. There's also the Planck time, which may indicate that the real world is actually running on discrete steps rather than continuous ones.
So yeah, even those continuous equations are just statistical approximations of what really happens at the quantum level.
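To make the discrete-versus-continuous point concrete, here is a minimal sketch of the kind of discrete time stepping a real-time engine uses: a semi-implicit Euler update for a damped spring. The constants and step size are made up for illustration, not from the paper.

```python
# Minimal sketch of discrete time stepping as used in real-time engines.
# Semi-implicit (symplectic) Euler on a damped spring; constants are illustrative.
k, c, m = 50.0, 0.5, 1.0      # stiffness, damping, mass
dt = 1.0 / 60.0               # fixed 60 Hz time step
x, v = 1.0, 0.0               # initial displacement and velocity

for _ in range(600):          # simulate 10 seconds
    a = (-k * x - c * v) / m  # acceleration from spring and damping forces
    v += a * dt               # update velocity first (semi-implicit)
    x += v * dt               # then position, using the updated velocity
```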
You mean it takes real physics and uses educated guesswork to figure out a massively oversimplified approximation to what's going on? You're like 50% of the way to making all the engineers out there redundant. ;-)
@TheLegendDevil Look into the development around Woebot and psychology AIs and you will find that couldn't be farther from the truth
@Juho Joensuu We adapt in the same way; at the end of the day, an AI has to be programmed because it serves a purpose that fits our own vision. The difference here is the mechanism behind the technology and the structural state of what we work with. I get your viewpoint, that AI needs to be set up in the right circumstances so that its evolution is self-sufficient, but I see that more as a statistical quirk than anything else. We evolved our intelligence to fit our environment; that doesn't discredit the similarities between our intelligence and that of an AI, though.
@Quinten Cambridge I don't think you've ever had the displeasure of working with modern AI systems. Adaptation is precisely what most of them cannot do at all, and those that can do it only very poorly. You can certainly train them to an amazing degree and automate that process a lot, but those are two entirely different approaches to problem solving.
@Juho Joensuu AI is a technology built to adapt; I'd call that pretty smart. Check Two Minute Papers to see what AI already accomplishes nowadays. The domains may usually be narrow, but once we link it all together, we could get something truly powerful. Besides, it's not as though the human psyche isn't just a compilation of a multitude of narrow-task computations.
The AI neural networks being built today are not smart in any sense of the word, really. They're hyper-specialized task optimizers; the best AIs we have are utterly incapable of taking over tasks that require true adaptation. White-collar paper-pusher jobs are the ones on the chopping block next; industry got hit first only due to market demand, but for white-collar work it'll be oversupply soon, and it'll cut deep.
1:03 me and the boys after we just compute the game physics in microseconds
R4Fa3L :(
@Max | RoadToMadeira Go away spooky monster, go away...
Was laughing so hard at those models drunk dancing about
Anyone else not a scholar, and not doing anything with AI, but just fascinated with the field and watching because it's SO INTERESTING to see the unbelievable progress AI research is making?
YAS
@Stanislav K yep, me too.
I stopped going to college and started going to YouTube; I'm trying to get my YouTubeship degree now
Stanislav Kubernat yes, I followed this channel just to see how close we are to the merge.
0:55
Devs: Wear a cape and simulate its physics.
AI: This is cool and won't make me look ridiculous.
Devs: Also dance.
@FunkyPrince step up revolution
AI: OK this is not so bad
Devs: Now wear a skirt.
@FunkyPrince Yep, and they will hit us with a floating teapot (without textures or materials) for the rest of our lives!
@Synkronization They'll make us constantly dance for years :
It will have its payback soon... and we will wish we never made it dance so ridiculously, for hours on end, with millions of test simulations.
I can see this tech revolutionizing games and especially VR, since VR has very strict performance requirements, and the closer to real time it is, the more convincing it will be
The style immediately reminded me of a previous paper on character control with phase-functioned neural networks. Turns out it is indeed from the same author. Solid work :)
That's absolutely genius. Can you simulate destruction of things this way too? This could be groundbreaking for the games industry
/bless/ this channel, anything that makes a greater connection between the general public and current academia of any sort. Keep up the stellar work!
This is pretty amazing. I have always wondered how much could be done with very little processing. It seems the sky is very much the limit. I suppose now it's not so far-fetched for an AI to use normal computers to take over the world in one fell swoop.
I was literally thinking about this a week ago: "Could we solve physics collisions using neural networks?"
And now this gets recommended to me? Amazing.
Great work by the way, it's truly impressive!
@violet_flower huh, good to know, thanks! Understood. Definitely want to learn more. I think UMAP has something to do with topology and manifold space, wayyy over my head lol.
@violet_flower I wonder if other dimensionality reduction methods can be used, that better segregate categories? Like t-SNE or UMAP? I don't know how these things work though to be honest, and PCA is pretty mathematically simple in comparison, so maybe it's not possible or useful.
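For what it's worth, here is a rough sketch of the PCA route (PCA is what the paper uses to build its subspace). Swapping in t-SNE or UMAP as speculated above would be a nonlinear alternative, but those make it harder to map the low-dimensional state back to vertex positions. The data below is a random placeholder and scikit-learn is just an assumed tool choice, not what the authors used.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder "simulation data": 2000 frames of a mesh with 1000 vertices (x, y, z).
frames = np.random.rand(2000, 3 * 1000)

pca = PCA(n_components=128)        # keep a 128-dimensional subspace
z = pca.fit_transform(frames)      # subspace coordinates for each frame
approx = pca.inverse_transform(z)  # map back to approximate full vertex data

print("mean reconstruction error:", np.abs(frames - approx).mean())
```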
Thank you !
@violet_flower It's like the first guy that tried mixing peanut butter and jelly. Sounds like it's something people should have tried from the start but there was still a beginning to it.
"solve" ? no. change? yes, of course.
Wow this is so cool. I can think of so many games where this could be implemented and would improve the visuals so much!
Accurate with seemingly amazing performance. Seems too good to be true; I feel skeptical about anything like this being implemented in the near future, but oh boy would I like to be wrong.
I wanna see how this thing handles destruction types of simulations and games, such as a collapsing building or wall, because if this works well for those types of simulations, then I fully expect physics-based destruction games to explode using this technology.
Maybe we'll finally get a good Red Faction game after the failure Armageddon was.
Next step: 3D graphics optimization, to make the latest 3D games and supercomputer simulations work on very low-end PC configs and even low-end smartphones. That would be awesome.
Now to wait for a general purpose physics engine built by AI.
This seems like a pretty incredible breakthrough
@Rumford Chimpenstein Just like any tool, it is not meant to do everything. Actually, this kind of approach is consistent enough when used for the right application. However, from a scientific standpoint, we don't want to hide the limitations of our method.
It's not. Did you see the end? The reason game physics engines are so computation-heavy is that they need to cover edge cases like that. It's already possible to make physics much cheaper, but it always comes at the cost of edge cases. It could be handy for some simulators or animations with many, many colliding objects, but in games it will not be consistent enough.
It isn't.
@Ryan Denziloe I hope it will be, but I can't help but feel this is just another one of those things that sees little use in practice, for one reason or another.
@Ryan Denziloe +1 ...UE4 and Unity will be awesome ...in... i think ~2 years from now :0
this will be huge for games :0 because physics eats a lot of power in any game that has it... and with this there will be a huge leap :O damn..... can't even imagine how cool games can be with this :0
honestly this seems pretty revolutionary, i really hope game engines start embracing neural based solutions
This is actually amazing, one of the most creative AI applications I have ever seen.
This sounds like a pretty revolutionary practical advance. Will we be seeing this technology in games soon?
This would be brilliant for cloth simulation in crowd engines for 3D animation and VFX. If you have thousands of agents, simulating cloth on all of them can be prohibitively expensive, but with this you could pre-compute the cloth for all the different animations first, and then have the AI approximate it at render time.
We are going to see some serious leaps forward once they figure out how to implement this in games.
Finally, this is happening. You could also use a similar approach for the physical model auditing of a simulator.
Omg, I’m actually in tears of just pure awe over how impressive this technology is! If I showed this to myself 10 years ago, I would faint, I think.
So for actual game development, could you spend a few weeks with a bunch of workstations training almost every situation in the game and then have super fast, accurate physics on the user end with barely any performance hit?
@kin2naruto Oh that would be dope, I hate having to either uncomfortably wear my glasses under the VR headset or barely seeing anything at all.
@Nal Pair this with the "eyeball tracking prediction" software I've seen in prototype 3D goggles. You actually COULD keep ahead of the player and only render the foveal focus spot in full definition, with a vague blur for everything else. Would give the game the illusion of HD!! AND you could probably tweak it to work with ANY prescription so you don't need to wear your glasses underneath.
Or you could have a super-cluster computing new cases in real-time and then just stream the compressed data (e.g. trained algorithm) to the end-user. Which means that hardware will once again become the bottle-neck for immersive VR once this gets fleshed out. Wonder if consumers will still be buying GPUs in the future...
This seems to be a generalised computational compression method stumbled upon by messing about with neural nets. It's lossy compression: in the same manner as lossy image compression, you can use a very small amount of data to recreate something very close to the original data at least to the human eye. But doing it with code for a physics model instead of static data is quite something. Getting a result of two or three orders of magnitude faster just blows my mind.
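If it helps to picture the "lossy compression of a physics model" idea: roughly, a small network is trained to map the current compressed (subspace) state plus external conditions to the next compressed state, so at runtime you only pay one cheap forward pass per step. This is a hand-wavy PyTorch sketch with made-up sizes, not the paper's actual architecture or training setup.

```python
import torch
import torch.nn as nn

STATE, EXTERNAL = 128, 32  # dimensions are illustrative guesses

# Small network: (current subspace state, external forces / collider info) -> next state.
model = nn.Sequential(
    nn.Linear(STATE + EXTERNAL, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, STATE),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(state, external, next_state):
    """One supervised step: match what the offline (slow) simulator produced."""
    opt.zero_grad()
    pred = model(torch.cat([state, external], dim=-1))
    loss = loss_fn(pred, next_state)
    loss.backward()
    opt.step()
    return loss.item()
```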
Imagine realistic physics in gaming that bests high powered desktops but can run on low powered smartphones. It's not as perfect as a full fat simulation but for almost all purposes you'd never be able to tell. Wow.
I seriously can't wait for games to get there. I'm really excited. Especially if there's no dip in frame rate.
I don't want to game on smartphones. I want to game on a computer powerful enough to provide an orders-of-magnitude better experience. Why make the same games run on smartphones when you can make much more advanced games for high-spec PCs? I was a fan of the original Far Cry and Crysis.
So really you don't need to set up the self-learning AI every time; you just extract the final method after a million generations and use it as static code, as if it was originally written by a human.
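Pretty much; once training is done, you can freeze the network and ship only the inference artifact. A sketch of one common way to do that (torch.jit tracing; the model here is just a stand-in, and the file name and sizes are hypothetical):

```python
import torch
import torch.nn as nn

# Stand-in for an already-trained subspace step network (see the earlier sketch).
model = nn.Sequential(nn.Linear(160, 512), nn.ReLU(), nn.Linear(512, 128)).eval()

example = torch.zeros(1, 160)                # dummy input with the expected shape
traced = torch.jit.trace(model, example)     # bake the network into a static graph
traced.save("cloth_subspace_step.pt")        # hypothetical file shipped with the game

# At runtime, no training code is needed at all:
frozen = torch.jit.load("cloth_subspace_step.pt")
with torch.no_grad():
    next_state = frozen(example)
```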
This is going to be so useful for massively memory intensive physics simulations, this is so useful outside of movies and video games.
I can't believe it, this is revolutionary!!! It's a jump forward on the scale of 2D to 3D in video games technology!
I’ve honestly been saying AI is the future of gaming and this just keeps proving it
This is amazing, has this technology ever been applied in a game?
"Not all heroes wear capes" I like how not-stiff and academic this is, while not necessarily being not serious or direct about the information.
oh my god, I can just imagine: all physics costing ~0, so games can focus on looking truly, extremely good and at the same time work on older PCs [with lower settings]
what a time to be alive
Maybe a similar approach could be used to simulate other physical interactions like ray-tracing
This has the potential to eliminate CPU bottlenecks in games, WOW!
Imagine if this could be incorpotated into game engines like Unreal Engine and Unity!
I can't get enough of your channel. Your way of keeping it simple and interesting makes me watch More and More.
What a great time to be alive.
Greetings from Greece, where are you from ?
What a time to be alive!
But will there be another?
As soon as I read this, a tv in another room said “What a time to be alive!”
Yippie37 haha, with the way AI is advancing, not for long!
Forever.
I see what you did there!
Dude, this is actually game breaking (no pun intended). Game engines like Unity and Unreal are gonna have to closely watch the development in this area if they don't want to get trampled over by new game engines that make use of this technology.
Your channel is just amazing. So fun to hear about these things :)
This project is basically what I did for my undergrad capstone project, except that this has higher quality results, of course
Edit: even the use of PCA is the same! Wow, déjà vu much
Wow, this is a seriously mind-blowingly powerful breakthrough... it could be used in video games, right? If so, then it is gonna be so crazy to have video games with this level of realistic physics in real time!
It would be cool if it could detect when it runs into a situation it doesn't have significant training data for, and use a more reliable (but less efficient) secondary method to resolve that collision, and collect that data in a buffer to further train itself with later.
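Something along these lines could work. A toy sketch of the idea, where everything (`fast_net`, `full_solver`, the z-score threshold) is a hypothetical placeholder rather than anything from the paper:

```python
import numpy as np

# Hypothetical stand-ins for the learned step and the reliable-but-slow solver.
fast_net = lambda s: s * 0.99
full_solver = lambda s: s * 0.99 + 0.001

train_states = np.random.rand(10000, 128)  # states seen during training (placeholder)
mean, std = train_states.mean(0), train_states.std(0)
replay_buffer = []                         # hard cases collected for later retraining

def step(state, threshold=6.0):
    z = np.abs((state - mean) / (std + 1e-8)).max()
    if z > threshold:                      # input looks unlike the training data
        nxt = full_solver(state)           # fall back to the expensive, reliable method
        replay_buffer.append((state.copy(), nxt.copy()))
    else:
        nxt = fast_net(state)              # cheap learned approximation
    return nxt
```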
I keep seeing these amazing papers from you, how do I use the software and techniques?
Depends how lucky you are. Since they are research papers the authors might only have their own private research implementations of these things (which are probably pretty lacking in usability since they were programmed by a bunch of scientists). But people do sometimes make their code public. Have to go read the paper I guess. Chances are though that these things are a long way off from entering commercial tools like unreal engine or some such.
"That costs nothing, even compared to the previous nothing" 3:15
@Luminous Dragon 10 months later, I'm doing the same thing
@Xevion Bullshit ..
It's (nothing)²
Lol
This sounds like a Trump statement. " the cheapest computational cost in the history of computational costs"
oh my god that completely flew past my head haha
0 in comparison to 0 is 0, simple.
If I may provide some feedback, I would recommend avoiding putting text at the bottom of the screen. Some viewers like myself have subtitles enabled and, as such, we have to pause the video and disable them to see the text, or else ignore it.
@KrissVinZ Exactly, this is why rather than channels avoiding the bottom, the better solution would be if they could specify important areas to dynamically avoid.
You can move the subtitles on firefox too.
You can actually move the subtitles by clicking and dragging them. (at least on google chrome)
Yeah, when talking about framerates, I often could not see the highlighted fps counters because the CC overlaid it.
Though honestly, the best solution would be if youtube ever allowed channels to mark sections in their video that are important and shouldn't be obstructed, and then the CC's would have to adapt to try and avoid them if possible. Doesn't even have to do a good job to still be a massive improvement, particularly if the user can disable it when needed.
Great feedback, thank you!
I wonder if we can make AI do something really unique with an animated story, where if you play through a scene more than once, the characters can make different choices, changing the same scene by taking it down a different path. We can already do this manually with game engines using variables and switches.
This is the most impressive and useful one I have seen so far.
With the amount of time 3D programs take to simulate cloth physics and soft- and rigid body simulations today, this solution will revolutionize the entire animation industry!
I love how physics engines always let you stroke bunnies😂 no matter what it’s so cute and funny 😆
Which program do they use for the animations? They always seem like they do them in the same environment.
Thanks
What are the chances that you make these videos longer? I really love your content and the material presented here, but it's so short! I really would like a more in depth look.
So is this the result of the "good enough" approach to computation? I am hardly versed in this stuff, but this seems like a pretty impressive result.
I hope devs will implement this in their games
Could we do the same with fluids? It would be so great
I really wanna know how they did this. Did they use third-party apps or a physics engine, or did they make this from scratch themselves?
that's some brilliant work!
When they report the frames per second, they *only* mean the physics themselves, right? Not the additional overhead from actually generating images to display? - Because if the raw physics can run at like ~100FPS that sounds impressive, but it actually doesn't leave a whole lot of compute for the graphics. So it might not actually quite be real time then. (I'm actually not sure how much overhead is necessary strictly for just the graphics)
Of course the only example that had that (relatively) low a frame rate was the one with absurd amounts of absurdly high quality interactions on screen all at once, which is gonna be rare anyway, so either way this is amazing!
With modern big-budget games, the vast majority of each frame's time is spent on rendering. Each call to the GPU is extremely expensive, so lots of work is done on the CPU side gathering things up to reduce the number of calls.
Amazing. Training an AI on how a thing should behave in any scenario so it can approximate the physics. This can revolutionize games and any other field that doesn't require true-to-life physics accuracy.
Could you make a more detailed video on the theoretical basis of this?
This is so cool! A game dev could create a neural net for each type of physics interaction in a game that is typically expected, and then the game would be optimised.
Damn. The next-gen games will look so much better! And the more you play, the better they will look. Every update will probably improve the physics and looks of the game lol.
Cool, I've been predicting that soon coding will be unnecessary for basic tasks (we'll only need to code the programs that do the dirty work for us). Glad we are getting closer to that outcome, so that people can stick to what humans are best at: creativity, not menial and laborious tasks.
Hmm that looks pretty cool, can't wait to see this applied to problems like fluid simulation and particles, the soft-body and cloth simulations are already looking pretty darn good.
I believe that the simulation can be based on the real world, by having an actual cloth brushing against objects, with sensors attached to the cloth, removing the need for a simulation.
Then this data can be fed to an AI, and when combined with physics equations, the results should be accurate.
Curious what this kind of thing can do for rendering applications. As a product designer, some of my single ray-traced frames can take an hour or two. Would love to get a "good enough" result 40x faster.
Could I ask the comment section: what programming language would these sorts of things be made in? Is there a list of the top languages that these sorts of neural networks use in their code? Thanks
I know this video does not hit on this subject directly, but since it is AI, I suppose a lot of people may have a background in this.
For a while now, I have considered reaching my hand into Natural Language Processing and Computational Linguistics. In my mind, it seems reasonable that (language) syntax can be taught through neural networks, as it does tend to boil down to structured rules. However, I have always been skeptical of the semantic issue, the idea of a machine being able to truly "understand" what it is speaking or hearing, without hardcoded explanations. It just seems like "random" neural networks could not account for the complexity of semantic cognition - but, recently as I have seen AI learn ways to achieve goals beyond what it was taught, I am starting to believe anything is possible through AI.
What I am trying to ask is, does anyone believe linguistics in all of its aspects can be taught to an advanced enough AI, with little to no hardcoded data?
If you watch the video at 1.25 - 1.5 times the speed, the simulation seems even more realistic (even the part where the puppets dance). I don't understand how fast you programmers see the world. However, your work is considerable
The videos are very, very interesting and inspiring. It's conceivable that I'll try to establish myself in a similar field. My thesis might even be on some AI topic.
Yoshiboy is a good guy, and the stuff he does is pretty impressive. He presented this to a select audience, a part of which was yours truly, back in August... or was it July? Anyhow, it's a very impressive piece of research for sure.
Does anybody know if that kind of technology would be able to run in regular software, or would it need machine learning cores (like tensor cores) to work?
1:00 all my brain cell trying to keep it together in an exam
Wish I had more than one brain cell; then it would be more than one caped hero dancing.
humans not knowing what to do because all jobs are done by ai
Hmmm
As the new AI generated version loses so much detail, shouldn't it be compared to the "standard" version running at a similarly reduced detail factor?
It's like saying, "look how long it takes me to draw a hi-res orange, but by just drawing a circle I save so much time"
The thing about neural networks is that even after one trains to exactly replicate the realistic result, it will still have the same runtime cost.
@Cmon Meow Interesting. Now, do you have anything to add to the actual topic being discussed?
@SpinazFou Take a look at the video, it's nowhere near 10% let alone 99%
It's more like: I can draw a hyper-realistic, almost perfect orange in 2 seconds and a circle in 0.0000000001 ms
The problem I see is that this is all about two-body problems, though. The real limitation in modern games is mass item calculations: water physics, a tower breaking apart and falling, voxel-based interactions. Will this work with that as well?
"That costs nothing, even compared to the previous nothing" love it
I’d really like to see AI used to make a more convincing Fish Tank screensaver.
This is gonna be amazing in VR
one step closer to perfect simulated reality
we NEED this and some kind of dynamic animation engine all in one game.
Sounds like an excellent step for VR, where novel physical interactions are more prominently featured. I have to read the paper, but I'm curious how this might be able to be incorporated into a game development pipeline.
Could your pre-existing integration tests be used for training data? Maybe combine with a neural gameplay AI to discover novel scenarios for training? Or is the next step really to generalize to novel scenarios, running exhaustive training, and having a generalized physics engine with a real-time fallback?
Hi great work. Thanks a million. Can we remove all known rules of physics to create an alien intelligence?
Can this method be applied to things other than simulating physics? Like can it be generalized as a "save massive amounts of computing power by running this algorithm and, while accepting the drop in accuracy, be able to do much more with your processing power" method that's applicable to other things?
What engine or software did you use for the machine learning and deep learning? Actually, I want to make a neural-network-based AI for a game that will teach itself to move, walk, and do basically all human interactions, and not use real-world voice and motion-capture actors. So, by learning and teaching itself. Please suggest the best software and languages 🙏
I never realized how badly I needed an AI dance party in my life. Am fulfilled.
I know that often when it comes to scientific advances like these we overreact and overestimate how much of an impact these things will have, but this is ACTUALLY GAMECHANGING HOLY SHIT 300x PERFORMANCE INCREASE WTF
For one born in the early 50s, Two Minute Papers videos are magic to me. How do you do it? Is there software for sale? Or do you make your own code? How? From software or code to what I see, what did you do? What happened? Are you creating digital awareness with a very high IQ? Will I be able to do it? Can the software or code predict changes in the gene sequencing of the coronavirus?
Does this mean we might be able to finally have physics in our MMOs :d
Hey, I'm new to the whole thing. What physics engine are you using? It looks so cool
This would be fantastic if collision between 2 AI elements is handled.
Make a video about your research with guiding volumes!
Also, another great video!
this is so cool! do you think this could work for graphical rendering?
I sure do hope Bethesda does something about their physics engine.
This is really cool, but I don't understand where the speedups come from. How well does this generalise? Is it able to model only the exact situations it has been trained on? Would this work for speeding up complex 3D wave propagation calculations, for instance, which are typically done with finite-difference methods?
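For context on what those finite-difference methods look like, here is the textbook explicit update for the 1D wave equation u_tt = c² u_xx (grid size and constants are arbitrary). Whether a learned subspace model could replace this kind of solver for complex 3D propagation is exactly the open question.

```python
import numpy as np

n, c, dx = 200, 1.0, 0.01
dt = 0.5 * dx / c                                        # respects the CFL stability limit
x = np.arange(n) * dx
u = np.exp(-((x - x.mean()) / 0.05) ** 2)                # initial Gaussian pulse
u_prev = u.copy()

for _ in range(500):
    lap = np.zeros(n)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]             # discrete second spatial derivative
    u_next = 2 * u - u_prev + (c * dt / dx) ** 2 * lap   # leapfrog update in time
    u_prev, u = u, u_next
```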
finally i can play my games at 24000 fps
Only some filthy console peasant would be content with a measly 24000 fps. What, do you want my eyes to bleed from the chugging of the frames?
The true PC gaming master race requires 60000 fps!
While your monitor can work at maximum 120hz
the physics can run at that framerate, not the rendering
@Николай Марков Doesn't look like a joke
How does it work? I’m not looking for the code itself, but I’d like to understand this more than just the high level concept, or at the very least I’d like to see this stress tested.
Amazing! This has a good future ahead!
Could ray tracing be improved by machine learning in a similar manner? It's still basically physics, so I don't see why not.
Sounds like a good use for the tensor cores on NVidia's RTX series.
Devs: Stroking bunnies with circular objects is too performance intensive.
Two Minute Papers: Hold my papers.