This AI Learned Physics...But How Good Is It? ⚛
- Published on Oct 26, 2021
- ❤️ Check out Perceptilabs and sign up for a free demo here: www.perceptilabs.com/papers
📝 The paper "High-order Differentiable Autoencoder for Nonlinear Model Reduction" is available here:
arxiv.org/abs/2102.11026
🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
Aleksandr Mashrabov, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Haddad, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
If you wish to appear here or pick up other perks, click here: www.patreon.com/TwoMinutePapers
Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: discordapp.com/invite/hbcTJu2
Károly Zsolnai-Fehér's links:
Instagram: twominutepa...
Twitter: twominutepapers
Web: cg.tuwien.ac.at/~zsolnai/
It's going to be cool when some developers get something like this running fast enough for real time in a game engine across multiple objects. It'd be so nice to have more realistic interaction with a world, as it'll add more variation to the animations compared to how they do it now (preset animations with some kinematics mixed in).
I have been thinking about this too. What if, instead of trying to write an entire physics engine piece by piece, which will inevitably fall into the uncanny valley, we trained an AI on inputs of real-world collisions and interactions? For example, we feed a car racing game footage of crashes, skidding, and rolling, and it improvises in-game animations and even body deformations.
Actually, this kind of stuff was possible in 2016, and funnily enough it used the exact same models as this demo. It's an engine called FleX, and I believe it only ever got used in one demo and the Nvidia VR Funhouse game. Interesting tech, totally wasted opportunity.
@Deivison Carvalho The point of using AI-based physics simulators is to avoid actually computing the physics interactions and make simulation orders of magnitude lighter in terms of computing power, isn't it? So the time when a 'world' with many simulated objects interacting with each other becomes possible may not be far away.
@Sitter lol in 50 years you'll be able to run these simulations on your wristwatch
@Sitter Yeah and 640K is enough for everybody.
The AI essentially hallucinating an approximation of physics still blows my mind. Trading off calculation time for prior investment plus some slight simulation inaccuracy reminds me of how old games had to use pre-rendered cutscenes to create the illusion of presence, before cutscenes with real-time in-engine lighting became as common as they are today. I'm really looking forward to seeing how far we can push AI-based physics sims, and at what point they'll hit something in real physics they can't replicate.
The models are much slower than actually simulating some approximate physics... these papers are kind of pointless.
@Braxbro Addition and subtraction are as fast as bitwise operations; multiplication is slower and division is much slower, AFAIK. However, you shouldn't worry about those things: the compiler will most probably change multiplications/divisions to bit-shifts where it's wise to do so.
@Daniel Yeah, my bad. There's one bit you can flip to go from lower to upper. I'm not sure addition and bitwise ops are equally fast, but both are optimized to the point that any difference is probably negligible.
@Braxbro I think you are mistaking bit-shifts for bitwise operations. You can flip a bit using the XOR bitwise operation, and that can be used to go from upper to lowercase or vice versa.
It doesn't really matter, because IIRC XOR and addition are equally fast, and if not, the compiler would use the faster one for you. In fact, I have used both in my code and didn't notice a difference.
@Daniel IIRC you can, because that's how the ASCII number allocations are set up; there's one bit of difference between upper and lowercase, if I recall.
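For the record, the trick this thread is circling works because ASCII places each uppercase letter exactly 0x20 below its lowercase twin, so XORing bit 5 toggles case. A minimal sketch:

```python
def toggle_case(ch):
    """Toggle an ASCII letter's case by flipping bit 5 (0x20)."""
    return chr(ord(ch) ^ 0x20)

print(toggle_case('a'))  # -> 'A' (97 ^ 32 = 65)
print(toggle_case('Q'))  # -> 'q' (81 ^ 32 = 113)
```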
At some point one of these black box physics emulators ought to be used to simulate physics we observe but don't understand, and see if it's any better at predicting the ground truth than we are. That'll be the real advancement. I do like the idea of real time high-res physics emulation in games, even if I think we're well past the point where improving graphics quality actually improves the game experience anymore.
uh
Hmmm, why not feed it real data? Maybe it could figure out quantum gravity.
very impressive. could make animation a great deal faster and easier.
Yes, especially if you like abusing rabbits by throwing them down the stairs :D
Information content aside, Dr. Zsolnai-Feher's speech patterns are just so engaging and pleasant for some reason. Maybe an AI can tell us why. Maybe an AI already has....and it's him!
Plot twist: the AI controls him.
I wonder when AI will learn physics through interaction with the real world.
@Benjamin Rood Or you could just feed it some rallying or air display show footage and get a working car or a plane sim. But if you are into that...
@skierpage considering the way that we humans treat the world, it only seems just.
Profound!
@Benjamin Rood Impaled, torn to pieces, shoved down stairs, crash tested, forced to sumo wrestle and ride trains across wonky bridges.
Yes, I look forward to being twisted and distorted and pinned to a wall for AI to learn physics.
This is seriously one of my favorite channels on TheXvid. I'm not a consistent viewer for a lot of my subscriptions, but I try to be for this channel. Fascinating computer science distilled down into something that can be understood by the layman and presented by someone who really knows what they're talking about, all while being short and engaging enough that my ADHD brain doesn't lose interest.
Amazing. Imagine UE5 + great devs + a few years of coding. This is going to revolutionize movies, games, general content creation, VR, etc. I love technology.
The AI revolution is becoming wicked good at doing anything you can imagine with armadillos!
How couldn't that amaze you?
• 2:05 - Those other methods are simulating the dino-bones accurately, they're just simulating it trying to bend when it's older and not as flexible as it used to be. I can relate. 😕
• 4:44 - Two more papers down the line and… _The Matrix._
This is pretty impressive now - I am holding on to my papers to see what we will be seeing two papers down the line.
What a time to be alive
LOL. Learning-based algorithms are really like an extension of our subconscious mind, or employees with technical skills. As humans, we tend to use conscious effort to learn new things, and after repetitive practice our subconscious mind takes over, or we just hire people to do the repetitive stuff. The ultimate goal is that in the future our conscious mind will know no boundaries in the creative thinking process, with our subconscious mind spared and the hiring budget reduced.
@{Bread boi} Not really. It's just that a single person would be able to do multiple things at once, which means working time would be reduced.
@Dat On3 Everyone will be unemployed then; this is the consequence.
Now, imagine that but with a brain interface, where you have a program using AI to assist you. Being able to do multiple tasks solo, without the need for other people, is going to be fun when making creative stuff.
hmm
Now, imagine combining this and other previous works with a VR character... and we get real-time softbody flesh that does not clip through objects and can have visible muscles under the skin. No more clipping for clothes; objects can be picked up by virtual hands and deformed, stretched, or torn; cloth used to create actual clothes on the character that can be put on normally instead of rigid models pre-shaped for a character with spring bones. And imagine actually pouring water into a virtual cup in game!
Granted, it's not all yet ready for this, but two more papers down the line...
@Matthias Max A degree says nothing about your skills. Open source projects created from the work of volunteers are repeatedly outperforming corporate solutions as we speak. So don't be afraid to learn :)
why do I feel like you are describing an adult game XD
I think this paper is enough to do everything you described, since a complex VR Game world can be made with less than 10,000 physics vertices. The real models with over 100,000 vertices would just be used for rendering. In game engines, physics calculations are done using far less complex models than the rendering. Now, all that is needed is talented developers willing to code this paper into a game engine or game.
AI is absurdly powerful, it'd be interesting to learn what it figured out to make it no longer a black box of sorts
I like how the neural cubature @ 1:14 seems to display its sense of height perception by trying to place its right leg toward the ground in a seemingly non-physics based manner. Obviously that is why the traditional simulator exploded, it couldn't handle the emergence of conscious thought...
Could we train an AI to simulate quantum electrodynamics so we could predict things like chemical reactions in various solvents at various concentrations and temperatures, with catalysts even?
It's already being done, my friend; AI is the future.
@MusicDev Exciting! I always wanted to play with a chemical simulator, like I can now play with fluid simulations or electrical circuit simulations or optical simulations, etc.
For me this is very interesting because I always hated working with organic perchlorates in the lab. Designing a safe chemical process in a simulator before attempting it in reality would be a dream come true.
Yes. So we currently have an AI that can predict protein folding INCREDIBLY well. Take it from a biochemist, this is a huge deal in our field. The program in question is called AlphaFold. We're already using AI to simulate molecular interactions without having to create the molecules ourselves, and can even roughly approximate some of these interactions in fairly mild conditions. We're not quite on the level of what you're talking about, but we are quickly approaching it.
Yes (maybe), but that's too complex for now
It'd be interesting to see what adversarial networks could produce by interacting with these physics AIs.
If our reality is a simulation.. was it programmed by a “being” or an automated AI?
I find it funny that as NN's become more advanced, we're getting closer and closer to making GLADoS from Portal a reality - an AI that really, REALLY loves to test :)
Two minute paper has become five minute paper. What a time to be alive!!!!
These DL physics simulations MUST be brought to video game engines. Boosts in performance are going to bring beautiful mechanics and VFX.
we need all this condensed in a game engine
I wonder if physics researchers have super accurate physics in their dreams because they know how physics work.
Impressive! It'll eventually challenge physicists like me to work much harder on modeling and interpretation (always the hardest part).
@Adam Sobol you are probably right.. only time will tell though :)
@serloinz Simulating models isn't equivalent of actually understanding them with logical tools such as mathematics. So if anything those simulations will open a whole new frontier for physicists :)
I think you're on a lost cause there, considering how fast AI is progressing in this field... it's almost doing it all now in near real-time. Maybe become a chemist or biologist ;p
People that put effort in making creativity possible in any medium ❤
Every time I see it's made by y'all, I get more invested.
If you watch the release of Unreal Engine 5, the paradigm shifts are toward a significant increase in graphical capability and speed. They even mentioned some physics engine stuff and predictive foot placement, but look at details like how the character's hair interacts with the body, or how their clothing moves in the wind: it seems a bit off. I think the paradigm shift in Unreal Engine 6 will be an increase in simulation and physics. What do you think?
Considering they're doubling down on getting into movie level VFX, I've little doubt they'll be exploring into that.
@Étienne Vanier Agreed.
Just need to teach it General Relativity and Quantum Mechanics, then tell it 'hey, unify this will ya?'
I'm getting closer to these results in real time in Unreal Engine. The white papers definitely help with progression.
i’ve been wanting to do ai physics for a long time
Always optimistic for what will be achieved two more papers down the line. Wish more of this could make it into Blender.
We are basically left with one more dimension to exclude and we can teach AI to predict the universe
I wonder if you fed a very large neural network random physical simulations of any type and task it to find a common feature it would derive relativity and quantum mechanics.
@VanillaAbstract AI was involved in the folding process to identify the proteins we target with vaccines, so in a way, yes... It's up to each of us to actually do our part and get vaccinated.
@VanillaAbstract -- I don't see how that relates to my comment at all, or even who you're referring to with the word "they".
I think that would require extensive, detailed training data from situations that are very extreme and beyond the ability of humans to currently observe. (e.g. what happens right near a black hole's event horizon) Given that scientists themselves are (in many cases) being held back by a lack of data, I don't think that the AI would accomplish much that humans haven't already.
Furthermore, because neural networks don't care about elegance or understanding of the thing they're approximating, only the accuracy of the results, we wouldn't be able to derive any new physics equations from the output, even if the output was highly accurate.
Maybe if we gave it random data it would create a model of physics that has nothing in common with ours but is consistent enough to be explored and have fun with. Like that 4D Toys game.
Amazing. I wish joint systems could be solved by ML in real time!
Now we can have super accurate physics in video games
It seems like AI-based acceleration is applicable to almost anything, as long as you just need the result to be "good enough".
No doubt we will see computer hardware focus more and more on AI-acceleration
could this be used in multiplayer? would the AI behave the same on multiple computers?
Are the previous techniques he is comparing them to other AI implementations? Is this AI-based physics faster than a non-AI implementation?
I hope that people will learn how such networks work and make better simulation algorithms instead of relying on a black box.
I have the impression they use the same examples for physics-prediction AI as people would use for classic physics calculations. Wouldn't it be interesting to use examples that are especially hard for classic physics calculations but achievable for AI-based technology? I'm no programmer, but maybe some kind of scenery with large quantities of individual simulations, like large wheat or corn fields swaying in powerful squalls.
This is great, especially for the enhancement of virtual training environments (e.g. Tesla's Dojo), it will make for a much closer approximation of real-world dynamics
@skierpage I agree with you. However, there are all these examples where you add some noise to a picture or just change a few pixels, and your ML model cannot cope with that. To you it looks indistinguishable; to the model it becomes a totally different thing. Maybe these small perturbations are what make it complex. What comes to my mind is that the use of mainly synthetic data might make your model more prone to willful manipulation.
I'm not a scholar in machine learning, I just try to puzzle together all the bits I've heard and remember.
@Harald Töpfer I watch a lot of Lex, but I disagree that we will never get to a 'real world footage' simulation. Surely it's inevitable, regardless of how slowly we progress?
The question is whether your simulation is sufficient to catch the edge cases.
If I remember correctly, some researcher on the Lex Fridman podcast mentioned that he doesn't believe simulation will ever replace real-world footage.
I love that these physics sims keep using the Soccadillo action figure from Mighty Morphin Power Rangers.
Truly amazing!
AI is absolutely amazing and scary if used by the wrong people.
I can't wait for this to hit Blender and Unity.
Is there a channel that explains what's actually going on in these papers and simulations? Like, in detail, how the AI works, computes, and is trained?
All that in a way a common person like me can understand. I really want to understand.
@P E Thank you, it makes a bit more sense now.
As I understand it, each particle in the sim takes its own and its neighboring particles' data (velocity, mass, acceleration, gravity, other stuff) and sends it to the neural network, getting back a result that tells it how to act.
Basically, while normal sims calculate long-form, this algorithm reads a multiplication table: a really, really big and elaborate multiplication table.
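That "big lookup table" intuition can be sketched roughly like this. Everything below is hypothetical: a tiny random, untrained network stands in for the learned physics, and this is not the paper's actual architecture (which reduces whole meshes, not individual particles):

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny random MLP standing in for a trained physics network.
W1 = rng.standard_normal((16, 9)) * 0.1  # input: pos(3) + vel(3) + neighbor summary(3)
b1 = np.zeros(16)
W2 = rng.standard_normal((3, 16)) * 0.1  # output: predicted acceleration
b2 = np.zeros(3)

def predicted_accel(pos, vel, neighbor_pos):
    """One network query: particle state in, acceleration out."""
    features = np.concatenate([pos, vel, neighbor_pos.mean(axis=0) - pos])
    hidden = np.tanh(W1 @ features + b1)
    return W2 @ hidden + b2

def step(positions, velocities, dt=0.01):
    """Advance every particle one frame using the learned update."""
    new_vel = velocities.copy()
    for i in range(len(positions)):
        neighbors = np.delete(positions, i, axis=0)  # crude: all others are neighbors
        new_vel[i] += dt * predicted_accel(positions[i], velocities[i], neighbors)
    return positions + dt * new_vel, new_vel

positions, velocities = step(rng.standard_normal((5, 3)), np.zeros((5, 3)))
```

The appeal is that each query costs a fixed handful of matrix multiplies, no matter how stiff or nasty the underlying equations would be to integrate.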
@Yo Él Thank you man, I have watched that. What I meant was how the tech in this paper works.
Like a channel that explains each paper in detail.
This video quickly explains how neural networks work.
I often see these AI simulations on the channel. But how likely is it that something like this will be implemented in 3D software like Blender for example?
We are heading toward this, but it will take a shift in people's perception of computers. These AIs can do incredible things, but they always have a chance to blow up too. And when they do, it's often catastrophic, like the dino bones turning into a Santa Claus by mistake.
Those AIs need human supervisors. We need people to accept that their Blender module might go awry and spit out gibberish, and that's quite different from what we are used to from computers.
1:05 I'm not technical nor informed enough to know if this is even close to an accurate description... but seeing the demonstration where the traditional simulation exploded while the neural net sim continued without a problem really makes it seem almost... human. A human would easily recognize that the explosion isn't even close to a valid outcome. The neural net sim is like, "yo dude what are you doing, obviously that's never going to happen... it would probably look something like this (continues with actually sane simulation)".
@Get Sideways That is basically exactly what I'm saying. The algorithm hits a point in the simulation that it just can't handle mathematically. The example I gave was from observation of where the failure most often occurs, and that is when the mesh ends up clipping through either itself or the underlying object during the simulation.
@Dilligff Are you sure that's not just a result of division by zero or a similar showcase of Mathematics being but a crutch?
Not too far off. In my limited use of simulation (primarily dForce in Daz Studio), the explosions generally happen when the verts clip through the mesh. The algorithm doesn't understand the 'negative space' calculation, and it basically throws everything off. I'd equate a pure simulation to flying by instrumentation alone, where you set a course and it follows through as best it can, whereas the neural net is like having a pilot on standby to make course corrections based on 'knowing' where certain landmarks are supposed to be.
Now they know physics, teach them the periodic table and ask them to get from one planet to another and speed up time for them.
Seems like AI can make any slow simulation faster. Another Two Minute Papers video showed that AI also brought moving the camera through still images down to real time by simulating far fewer light rays and predicting the rest. This probably holds true for all intensive graphics. AI could probably reduce high-end video games to running on a phone, even with all settings turned to max.
@Lysander Dusseljee I'm afraid it's not as easy as you put it, and all these Two Minute Papers videos are enough proof. Apparently, neural network calculations take a lot of overhead themselves. And that's with the specifically designed "Tensor" units on the RTX branded GPUs. The videos on this channel also show that the resulting 'emulations' are still far from being real-time (still, 1 to 2 frames per second is vastly better than one frame in one hour).
As I understand this, to make the best use of NNs we need high qubit count personal or easily accessible quantum computers, so that's still relatively far off in the future. At least the "simulating the entire universe" task.
@Get Sideways Thanks for putting a name to it. This allowed me to google it and see that RTX is used in 150 apps. If I understand it correctly, RTX is slightly different from this physics simulation, but both use the same principle; the principle of running a complex simulation and training an AI to replicate the results with minimal processing.
If they added a temporal component, where the AI predicts process states at time x without going through the intermediate steps, it should be able to run off-screen simulation using microscopic amounts of processing. I think this technique could probably simulate an entire universe in real time. The larger the object, the less details matter (until you zoom in). You'd just need to estimate a plausible end-state, not necessarily the precise outcome.
RTX is basically ray-tracing approximation using neural networks and "far fewer light rays".
Can the AI also do Quantum Field Theory? Such as calculating the energy levels of a gold atom?
@NukeMarine interesting
They're already using AI to predict protein folding. That's a BIG field with enormous medical applications.
I'm not sure an AI simulation can even blow up. It would not make sense for the AI to have values suddenly explode, unless it learned from faulty data that included such a situation.
Of course it shouldn't since it learns "visually". A mathematical model - however good it is - has tolerance zones where it goes haywire (like dividing by zero), while a neural network simply goes, "ok, with those limbs being there and with that much bending the whole system should look like this..."
Basically, you could do the same thing. The human brain is also a neural network, and having watched a whole lot of physical interactions, you can also try to predict how a thing is going to behave.
When you watch a car or a flight sim doing its thing, you do have the episodes of "well... that doesn't seem realistic" or "whoa, it flies exactly as I expected it to!" from time to time, right?
I squeezed the paper real hard on this one!
I mean, for games accuracy doesn't really matter; as long as it doesn't look out of place, it works.
I would like to apply machine learning to a physics simulation that I am running in Unreal Engine 4, but does anyone know if Unreal supports Python machine learning?
I can almost guarantee UE has plugins for that sort of thing
I can't wait till I will be able to play with transparent stretchy jelly bunnies in VR.
That explosion caught me off guard!! Spooky
reminds me of spinning a ragdoll too fast
Never seen cars launch themselves into the sky in a racing sim before?
Crazy 😝 the AI will simulate itself
how should I learn AI like this? What books should I start with?
Do a vid about what neural networks even are and the hardware involved pls
What a time to be papers!
this is extremely cool, but for some reason the elastic cactus is kind of upsetting
How do you even train this kind of neural network? What does the data look like? I really have no clue.
Is the source code public?
1:15 traditional simulator didn't hold on to its paper tight enough
Impressive. One thing: everything acts like slow motion rubber. Or is it just me sensing this?
I believe it is just a slow framerate, to display more precision.
What a time to be alive!
What do I do with my squeezed paper? How do I repair it?
What does "blow up" mean in this context? What happened in the simulation?
Division by zero? Or maybe sqrt(-1)
Can we just teach an AI all the subjects?
Yes; take a look at DALL-E to see what one trained on the whole of Google Images can achieve.
really cool!
What about an AI simulating light?
@SomeRandomAutisticGuy Personally, I'm waiting for neural networks to start emulating physics in simulations (like racing sims or flight sims).
Yeah, ray tracing is really promising; it's much cheaper than simulating every single photon.
If I ever see an AI doing ray tracing or simulating photons, it will be a sight to behold.
@SomeRandomAutisticGuy As far as I'm aware, it's either for improving low ray count ray-tracing or indeed for upscaling, but it's still ray-tracing specific (trained on the traced data).
The AI in RTX is for upscaling low-res ray tracing.
@Get Sideways But RTX is just an AI upscaling a low-resolution ray tracing.
1:08 had me dying, that caught me so off guard
The Matrix is becoming reality at some point I swear
1:15 my last brain cells on a math exam
Please correct me if I'm wrong: a neural network-based physics simulator is trained with a lot of data from a mesh deformed in different contexts. Then, when it comes to putting the deformable mesh in a new context, it just *infers* how the mesh should be deformed from its knowledge of how it is usually deformed. On the other hand, a traditional physics simulator uses physical data (material properties, forces, etc.) and operations to compute how the mesh is deformed. Right?
Another question: this neural network-based physics simulator runs in real time for certain meshes. How much memory does it need for the same mesh? The neural network's configuration data must be pretty large...
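A back-of-the-envelope stab at the memory question, with entirely made-up layer sizes (the paper's autoencoder is more elaborate than this): a dense network's footprint is just its parameter count times four bytes for float32.

```python
def mlp_param_count(layer_sizes):
    """Parameters in a fully connected net: one weight matrix plus one
    bias vector per pair of adjacent layers."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical sizes: 10k vertices * 3 coords squeezed to a 64-dim latent code.
params = mlp_param_count([30_000, 512, 64, 512, 30_000])
print(params, "parameters,", params * 4 / 1e6, "MB as float32")
```

So even a naive dense autoencoder at these sizes lands on the order of 120 MB in float32: large, but comfortably within GPU memory.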
We need to put all of this into gta 6.
I just came for the "what a time to be alive"
1:15 the most horrifying *jump scare* i've ever seen
IKR? Go back and watch it frame-by-frame. It's even more unsettling.
I love everything about your voice, your accent, the way you express yourself, the subjects you cover... Marry me :D
Replace the triangles with Unreal Engine's Nanite and this would blow your mind.
I get that this is about computer graphics, but man... there is some great philosophical stuff here, too. A physical computer in 2005 ran a host of software to simulate physical movements... those DNA guys should really think hard about that. And now this! These are just man-made things, too. Those proteins and all of their folding happen without computer simulation, and in real time... We don't yet understand how or why.
Keep going 👏
@skierpagePS - I’m no scientist… I’ve been looking at this for a while, and it’s just a feeling, that this is an interface thing. The object in space. The stuff in between…like what’s going on with the Fourth Phase of Water. I am just a guy who reads stuff… take it for what ever you will. Pull the POV back a little. Look at more than the focus. Keep at it. Never stop. We are all behind you.
@skierpage Science has done a tremendous job with DNA. I know that they’ll get to it sooner or later… but nature sure has you biting your tails on this. There is a answer, but it’s going to be a revelation… I think the Electric Universe model has a lot of insight.
Reminds me of the title screen in Super Mario 64.
I don't understand everything he said, but for some reason I'm pumped anyway!
The voice in that video is also an AI.
The new method doesn't bend the dinosaur tail as much. Isn't that (as) relevant?
(I mean, I guess not so much given it's a lot faster, but still.) xD
Did that AI not blow up because it did not want to blow up?
Can't wait for anime physics
One more paper down the line, it will be the second paper down the line
Finally we can simulate dinosaurs playing limbo
Nvidia is calling; they'd like to replace their PhysX engine and use all the Tensor cores on their new cards.
They are already using them in a similar fashion, but for the graphics with RTX...
Amazing
What exactly does "ground truth" mean here? I always thought it was footage from real life... obviously not.
Footage from real life would work better, but you can't always see what the input values (or output coordinates for all the vertices in question) were in that case. Not to mention it could simply be prohibitive to present ALL the needed situations to the network in real life.
@mvmlego1212 Oh right, that totally makes sense! Thanks
From my understanding, the ground truth is a simulation in which the laws of physics have been explicitly programmed, and then applied to a virtual object. The positions of vertices are determined by spring equations in the same way that a physics student would determine them with a paper and pencil.
The neural network is trained on data from those simulations--specifically, the positions of the vertices over time, but _not_ the equations that govern those positions. From that data, it attempts to derive physical laws that would reproduce that output. The laws won't match the actual laws of nature exactly, so it's not a rigorous simulation, but I think the advantage is that it has the potential to be faster and more versatile.
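A minimal sketch of what generating that ground truth could look like, with a toy damped spring standing in for the paper's full elasticity setup (all constants here are made up): the simulator integrates explicit equations, and only the resulting positions become training pairs.

```python
import numpy as np

def spring_ground_truth(steps=200, dt=0.01, k=50.0, mass=1.0, rest=1.0):
    """Explicitly programmed physics: two vertices joined by a damped spring.
    Returns vertex positions over time -- the network later sees only these
    trajectories, never the equations that produced them."""
    x = np.array([0.0, 1.5])  # start stretched past the rest length
    v = np.zeros(2)
    trajectory = [x.copy()]
    for _ in range(steps):
        stretch = (x[1] - x[0]) - rest
        f = k * stretch                          # Hooke's law
        a = np.array([f, -f]) / mass - 0.5 * v   # plus crude damping
        v = v + dt * a                           # semi-implicit Euler
        x = x + dt * v
        trajectory.append(x.copy())
    return np.array(trajectory)

data = spring_ground_truth()
# Supervised pairs: state at time t -> state at time t+1.
inputs, targets = data[:-1], data[1:]
```

A network fit to these (inputs, targets) pairs would reproduce the spring's behavior without ever being told Hooke's law.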
Probably simulated with the most accurate traditional simulation method, without much in the way of acceleration.
2 more papers down the line and I'll be able to pronounce your name correctly!
Not gonna happen...
Next paper? MATRIX
We are living through a new Renaissance
amazing
Lex Fridman should have you on his podcast.
Can you do something about your audio quality? Seriously, it's so muffled; it's pretty bad.
thanks doc