Microsoft’s AI Understands Humans…But It Had Never Seen One! 👩‍💼

  • Published on Dec 26, 2021
  • ❤️ Check out Lambda here and sign up for their GPU Cloud: lambdalabs.com/papers
    📝 The paper "Fake It Till You Make It - Face analysis in the wild using synthetic data alone" is available here:
    microsoft.github.io/FaceSynth...
    ❤️ Watch these videos in early access on our Patreon page or join us here on TheXvid:
    - www.patreon.com/TwoMinutePapers
    - thexvid.com/channel/UCbfY...
    🙏 We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    Aleksandr Mashrabov, Alex Balfanz, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Benji Rabhan, Bryan Learn, Christian Ahlin, Eric Martel, Gordon Child, Ivo Galic, Jace O'Brien, Javier Bustamante, John Le, Jonas, Kenneth Davis, Klaus Busse, Lorin Atzberger, Lukas Biewald, Matthew Allen Fisher, Mark Oates, Michael Albrecht, Michael Tedder, Nikhil Velpanur, Owen Campbell-Moore, Owen Skarpness, Peter Edwards, Rajarshi Nigam, Ramsey Elbasheer, Steef, Taras Bobrovytsky, Thomas Krcmar, Timothy Sum Hon Mun, Torsten Reil, Tybie Fitzhugh, Ueli Gallizzi.
    If you wish to appear here or pick up other perks, click here: www.patreon.com/TwoMinutePapers
    Thumbnail background design: Felícia Zsolnai-Fehér - felicia.hu
    Wish to watch these videos in early access? Join us here: thexvid.com/channel/UCbfY...
    Meet and discuss your ideas with other Fellow Scholars on the Two Minute Papers Discord: discordapp.com/invite/hbcTJu2
    Károly Zsolnai-Fehér's links:
    Instagram: twominutepa...
    Twitter: twominutepapers
    Web: cg.tuwien.ac.at/~zsolnai/
  • Science & Technology

Comments • 378

  • Alex
    Alex 4 months ago +726

    Finally, AI is teaching AI, and things are about to get a whole lot faster!

    • Anonym
      Anonym 3 months ago

      Imagine we teach the AI to teach the AI how to teach an AI, then teach the AI with that AI, and after that we teach the AI with the taught AI's data.

    • Γιώργος Παπαδούλης
      Γιώργος Παπαδούλης 3 months ago

      Your comment bears no relation to the video.

    • KrinoDaGamer
      KrinoDaGamer 4 months ago

      yep robots building robots

    • chpsilva
      chpsilva 4 months ago +1

      Faster, and even more out of control. Yay.

    • Sean Crawford
      Sean Crawford 4 months ago +1

      hothouse speeds

  • Leonardo SA
    Leonardo SA 4 months ago +250

    Open-world games could use these faces for NPCs, every single one unique; that is quite something.

    • 원태정
      원태정 Month ago

      @DavidVercettiMovies I really hope so!! And I visualize that situation often; after that, it will be our future in 7 years.

    • DavidVercettiMovies
      DavidVercettiMovies 4 months ago +2

      In the Metaverse, as avatars. We are going to be like Ready Player One... I can't wait tbh

    • Xavier Magnus
      Xavier Magnus 4 months ago

      But people aren't all unique.

    • GraveUypo
      GraveUypo 4 months ago

      Uh, MetaHuman in Unreal Engine 5 is this, and there have been a bunch of other character creators in games forever that are essentially the same thing.

    • Alkis05
      Alkis05 4 months ago +8

      That is pointless if they are clearly soulless mannequins. We already have decent NPC generators. What we need is interactive AI: simulated agents that can at least give the impression of being autonomous.

  • apostasis
    apostasis 4 months ago +232

    The synthetic faces look fairly stylised and "cartoony" to my eye - does that have any impact, e.g. does it introduce any unexpected new biases? Or are the advantages from greater diversity (both in terms of demographics, angles, lighting etc.) in the dataset so overwhelming that the fact that they aren't exactly "photorealistic" just doesn't really matter?

    • Kenny Hill
      Kenny Hill 4 months ago

      @Jaface "I'm not an expert but I think the artifacts that come from low quality cameras and recordings would make the cartoony look"
      You were more correct at the start, it's just that they are not using high quality light simulation in the renders - just enough to convey the topology correctly.
      They probably determined that full light simulation wasn't necessary to their goals anyway.

    • Superkobster
      Superkobster 4 months ago

      Reminds me of Borderlands, don't know why.

    • itsEthan
      itsEthan 4 months ago

      The main thing is that it does not seem "uncanny": the rendering may be stylized, but proportionally speaking it looks real, and very impressive.

    • Craig Baker
      Craig Baker 4 months ago

      @Jaface Nah, you can have a "cartoony" look under totally unusual lighting just as you can take real video footage of a person in weird lighting but still know that it looks realistic.

    • Faux 2D
      Faux 2D 4 months ago

      @apostasis That's what I thought as well at first (and got really, really excited) but this whole paper boils down to "if you train your AI with 3D faces it will work better".

  • Matthew Nohrden
    Matthew Nohrden 4 months ago +174

    AI in general is about to get a whole lot faster. It was just a matter of time until AI was learning from an AI generated dataset.

    • espace1
      espace1 4 months ago +1

      But weren't they already doing this with fake X-rays created by AI to teach the AI about fractures and the like?

    • Anko Painting
      Anko Painting 4 months ago +1

      It’s just like dreaming

    • Amphicorp47
      Amphicorp47 4 months ago +6

      This has been going on for about a decade, alongside adversarial networks.

    • Mr. Al Mezeini
      Mr. Al Mezeini 4 months ago +2

      We are doomed

  • Hakan Dances
    Hakan Dances 5 months ago +202

    This looks like a great way to get around bias in data collections.

    • Son of a Beech
      Son of a Beech 4 months ago

      @SJK agreed

    • SJK
      SJK 4 months ago +1

      @Son of a Beech my point is just that this could help eliminate certain biases. If others become problems we can work towards those too, but for example a situation where there is discrimination against race is a much more serious issue than discrimination against facial markings, mainly because it compounds with other ways they’re discriminated against. I see this as a good step.

    • Son of a Beech
      Son of a Beech 4 months ago

      @SJK Yes, in the long term. but not necessarily in the short term

    • SJK
      SJK 4 months ago

      @Son of a Beech sure but none of those things are too hard to add

    • algc19
      algc19 4 months ago +2

      The problem is reduced to getting rid of bias on face generation!

  • juliandarley
    juliandarley 5 months ago +63

    This seems to be nearly perfect facial performance capture - without expensive depth cameras and awkward head cams. When matched with ever-improving pose estimation (ideally with hands and fingers - not so easy), it will mean we can have full markerless human mocap without any expensive equipment (like body suits or dozens of cameras + markers).

    • Mike Marynowski
      Mike Marynowski 4 months ago

      @Lucas Ryan Stereoscopic image/video data could be generated the same way then used to train an AI, which could then use real stereoscopic video to get some pretty accurate 3d mocap I bet.

    • juliandarley
      juliandarley 4 months ago

      @Lucas Ryan You may be right, but 'never' is not a word to be used lightly when talking about AI. I have already seen astonishing progress in the last couple of years. I do think that some kind of hybrid (between sensors and video) may yet be a good way forward, but it also brings limitations (as pointed out in the paper in the next TMP after this one).

    • Lucas Ryan
      Lucas Ryan 4 months ago

      Annotation will never be an exact replacement for markered tracking. At least not in 2d. It's not exactly easy to annotate things like a jaw moving back and forth. With two cameras however that would be possible.

    • zakuro
      zakuro 4 months ago +1

      holy grail of hololive

  • Greg K
    Greg K 4 months ago +49

    We're still in the "let's just feed the AI more data" phase, but hey, it works.

    • Electron826
      Electron826 4 months ago

      @Johnson Johnson Yes, this is the annoying thing. When I do ML projects, 90% of the time is writing code and only 10% is spent toying with the network.

    • edwardinchina
      edwardinchina 4 months ago

      Not exactly. Look at how powerful the first GANs were. Look how powerful AlphaGo zero is. Simulated and AI generated data is orders of magnitude faster than hand labeled or even captured data. Once AI agents can start teaching and testing each other in a flexible way, crazy things will happen.

    • Johnson Johnson
      Johnson Johnson 4 months ago +16

      So much of machine learning isn't really structured yet. Preparing data is actually 80-90% of the work, which isn't really taught properly because every type of data is handled differently.
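
The "synthetic data with free labels" idea behind the paper sidesteps exactly that preparation cost: because you generate each sample yourself, the label is exact by construction. A toy sketch of the principle (pure Python; `generate_sample` is a hypothetical stand-in for a face renderer, not anything from the paper):

```python
import random

# Toy stand-in for a synthetic-face renderer: instead of an image it
# emits one feature (a noisy reading of head yaw) plus a label that is
# exact by construction -- we chose the yaw ourselves, so no hand
# annotation is needed.
def generate_sample(rng):
    yaw = rng.uniform(-45.0, 45.0)           # ground-truth parameter
    feature = 2.0 * yaw + rng.gauss(0, 1.0)  # noisy observation of it
    return feature, yaw

def fit_linear(samples):
    # Ordinary least squares for y = a*x + b, in pure Python.
    n = len(samples)
    mx = sum(x for x, _ in samples) / n
    my = sum(y for _, y in samples) / n
    sxx = sum((x - mx) ** 2 for x, _ in samples)
    sxy = sum((x - mx) * (y - my) for x, y in samples)
    a = sxy / sxx
    return a, my - a * mx

rng = random.Random(0)
train = [generate_sample(rng) for _ in range(10_000)]
a, b = fit_linear(train)  # a should land near 0.5, since feature ≈ 2*yaw
```

The model recovers the generating relationship without anyone labeling a single sample, which is the whole appeal of training on rendered faces.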

  • Viktortheslickster
    Viktortheslickster 4 months ago +10

    Wow the potential and scale of AI models are so impressive! I enjoyed this one.

  • David Guffey
    David Guffey 4 months ago +58

    You can tell what AI thinks of us because it gives everybody a clown nose.

  • MilesBellas
    MilesBellas 4 months ago +7

    "Hold on to your papers, because our simple wonder and fascination with coding, inadvertently empowered a predatory Police and surveillance state."

  • Brandon MacKendrick
    Brandon MacKendrick 4 months ago +5

    This reminds me a lot of the simulations used for training self-driving cars. Very cool!

  • Aron Septianto
    Aron Septianto 4 months ago +7

    Man, I remember how hard face tracking was when I tried to do it the first time (back when you couldn't really use AI for it).

  • Major Johnson
    Major Johnson 4 months ago +3

    This is the AI that will make face-detection-avoidance makeup/facewear ineffective. It would be trivial to add those objects to the training set.
    Which makes the conversation about the ethical use of face ID even more important.

  • Fictional Character
    Fictional Character 4 months ago +4

    I think the "cartoony" effect is more due to the lighting diffusion than the algorithm itself

  • AVAIDN
    AVAIDN 4 months ago +18

    Hello Two Minute Papers, I want to thank you for the knowledge you share; it is mind-blowing. Thank you very much for your efforts. ♥

  • Exilum
    Exilum 4 months ago +1

    Over the past year, we went from detecting human movements with *some* level of precision to having such precise parsing. Impressive.

  • TonyGamer
    TonyGamer 4 months ago

    I'd be interested to see this same AI, but combining the training step with StyleGAN like in one of the previous papers, to see how that affects its ability to track and recognize human faces.

  • Citizen Of Earth
    Citizen Of Earth 4 months ago +22

    Damn. I never thought we could use virtual generated faces instead of real ones for A.I. training. That's something so novel and beautiful. A.I. is progressing at an insane rate. A.G.I./A.S.I. would be an actual living God among humans.

    • Estimate Snatch
      Estimate Snatch 4 months ago

      Things in general can go wrong.
      When you make something new, it is expected that you find problems.
      AI is unlikely to suddenly develop too fast for us to do something about it.
      That's at least what I think.
      We'll have to wait a while before there is a good working A.G.I., and unless people manage to properly integrate the narrowly focused AIs and computer capabilities into it, it definitely won't be an actual living god among humans.
      (AI has a lot of potential, but it's not magic; you can't do more than your limitations allow, given your state and the possibilities of the situation you are in.)
      At some point AI will (very likely) be better at improving AI than humans. It should still take a while before human assistance isn't required, as AI needs to learn to recognize what to optimize for in which situations, and if you want to preserve accuracy and make it work for problems in general, you need more hardware or differently connected hardware (which I'm sure an AI could take care of, but that will take time and training). Even though many people are talking about the singularity, we might find out at some point (maybe quickly, or already?) that it gets more difficult to optimize faster than there is improvement, and we may recognize something new that slows progress down (this could balance with progress improving progress for a while). Maybe systems reach their limits in improving other AI, and it turns out you need a lot of hardware for it, or it's impossible to make AI better without a lot more hardware.
      I wonder how much hardware is required for an AI-making AI to work as well as what humans are creating together now, or whether AI-making AI still has much room to develop even without extra hardware, so that it can outgrow humans at the task easily. I mean, humans are not made to make AI; we do a lot of things.
      How much of an AI expert's brain is actually centered around AI?
      And even if you knew how much of our brain is centered around it, maybe even that part isn't close to properly optimized for AI-making.
      Or will it turn out that quite a bit of a human's brain is involved in the process, because you need general problem-solving parts to be able to do something like that?
      Blah blah.
      (I'm just saying some stuff that I've thought about just now, so obviously I could be wrong. It's just that I dislike it when people assume sci-fi movie plotlines are realistic without proper arguments.
      Although I do understand people saying that humans will use it in harmful ways.
      But people saying the AI just decides to do something else from what it was programmed to do makes me feel a little disappointed.
      Doing something else from what you planned to do is different from doing something else from what you were programmed to do.
      Although there are many possibilities, and a lot has happened that wasn't expected in the past, that doesn't mean everything futuristic has to happen. Many futuristic ideas are not very realistic, or are inefficient.
      Not saying it's impossible for stuff like what I described to happen, just that it's bold to assume anything like that as of yet.)
      You could consider evolution, paired with what is on Earth and the way physics works, to be a method for creating something that can create AI, if you think of it as something that made humans exist, who then made hardware that works like AI.
      It's probably a very, very inefficient method, though, and not at all focused; it also took a lot of time. There are problems that can arise with organisms giving birth quickly, too; it's not like organisms replicating very quickly don't bring a lot of problems.
      This stuff makes me want to know the limit of development speed. Especially starting from a very basic system.
      How much can a problem-solving / task-doing system improve when it's entirely focused on getting better at tasks / problems in general?
      What would the graph of the development rate look like? What would even be a good measure of development? The number of problems solved / tasks completed? How to prevent the problems or tasks from being repetitive or too similar, making it look like the system is developing a lot when it's actually just getting better at specific things? Maybe it's even throwing out its capability to do something else in order to get better at a specific thing, because that thing repeats a lot in the "progress measurements", which makes it look like it's developing.
      Is it even possible to make something that creates problems or tasks that are actually very different from each other and without bias? How to define the similarity between problems / tasks?
      You are right about A.I. progressing quickly, now let's see whether

    • Citizen Of Earth
      Citizen Of Earth 4 months ago

      @Brian Harrison Everything.

    • Brian Harrison
      Brian Harrison 4 months ago +1

      Yeah, and what could possibly go wrong with that?

  • Andy McCullough
    Andy McCullough 4 months ago

    One of my favourite channels. I find your output fascinating, it really lets me know that there is lots I don't know!

  • 内田ガネーシュ
    内田ガネーシュ 4 months ago

    This would be a great way to fill up a game world with distinct crowds with one template. So much time saved making crowd models.

  • Truth Matters!
    Truth Matters! 4 months ago +1

    You are amazing! Please always work for the people. Never work for the evil ones. The things you could do could change the world in very powerful ways, and in the hands of the wrong people that could be very sketchy.
    I love your work; as always, you are amazing. It's so exciting to watch your channel 💟

  • Chain Ch0mp
    Chain Ch0mp 4 months ago +5

    This seems extremely useful for VR applications

    • Johnson Johnson
      Johnson Johnson 4 months ago

      just add a couple of cameras inside and outside to recognize expressions.

  • JoinUsInVR Steam Group [ FishTail ]

    Why so harsh on the AI? I've seen humans since my day 1, and still don't understand them :p

  • Kyle Bowles
    Kyle Bowles 4 months ago +1

    All the big companies have been using this strategy (not exactly this method) for a while. That's how PTC and Tesla do CAD/object tracking for example

  • Paul Cooper
    Paul Cooper 4 months ago

    This approach is now being taken by Tesla to train the FSD driving software on edge cases by generating many fully labelled variations of difficult situations. It’s a blend of real world and synthetic production to greatly accelerate edge case learning

  • Carlo Carnevali
    Carlo Carnevali 4 months ago

    Is there an AI that can analyze and recognize things in an image, like a city skyline, a museum, or an image with a lot of different objects in it, even small details?
    Think Google Lens but on steroids.

  • Tailslol
    Tailslol 4 months ago +1

    Perfect face tracking and eye tracking for VR, FaceRig and VSeeFace.
    Combining it with MetaHuman could be an idea.

  • KnowledgeCollective
    KnowledgeCollective 4 months ago +1

    Thank you for your informative knowledge!

  • StephenRansom
    StephenRansom 4 months ago

    Jinkies, it's been a while since I got the creeps from this channel. A Merry Christmas 🎄 and a Happy New Year 🎊.
    Continue to Hold those Papers as tight as you can… loving this content.

  • Natã Henrique
    Natã Henrique 4 months ago

    This AI is gonna be so useful for creating metaverse characters.

  • a dude on the interweb
    a dude on the interweb 4 months ago +6

    AI learning from AI, the only thing stopping us from the singularity is the AI not knowing how to learn from itself unless we teach it

    • Damon Irvine
      Damon Irvine 4 months ago +1

      As Nikola Tesla once said, “uh oh spaghetti-o”

  • Rebel
    Rebel 4 months ago

    I'd say, in about 5 years, we'll be faced with game characters animated entirely by AI in real time.

    • N2 Pizza
      N2 Pizza 4 months ago

      Probably in a year already

  • Mehmet Emin Mumcuoğlu
    Mehmet Emin Mumcuoğlu 4 months ago

    I really wonder where they build their photorealistic faces (which program?) and whether we can access it.
    Does anyone know or have a guess?

  • S Q W O R M
    S Q W O R M 4 months ago +6

    This is what we thought Kinect was when it first came out

  • Shahar Har-Shuv
    Shahar Har-Shuv 4 months ago

    All of this will be useful when we interface with the metaverse.

  • Rei
    Rei 4 months ago +1

    Even though they said none of those virtual people are real, I believe that throughout history there may have been at least one real person who looks very similar to the virtual person.

    • Drake
      Drake 3 months ago

      That's pretty likely, yeah. If I remember rightly, on average, each person has approximately six other unrelated people in the world who look near-identical to them - though obviously it's incredibly rare that they'd ever interact. Humans are very unique, but there's a LOT of us, so even among the living population it's relatively common to have multiple people who have some set of near-identical traits, and that'll only become more and more common if you include people throughout history, too.

  • Shikhar Gupta
    Shikhar Gupta 4 months ago

    This is very cool! Please recommend more videos like this to me TheXvid!

  • Rory Chivers
    Rory Chivers 4 months ago +3

    Finally Microsoft will be able to simulate images of real humans enjoying their Zunes

  • AnthonyAnalog
    AnthonyAnalog 4 months ago +1

    I love how people think they're mysteriously unique and not 100% already quantified in marketing datasets lmao.

  • Nan Yay
    Nan Yay 3 months ago

    Someone should make a virtual world for the AI to learn and grow in, like The Sims, kinda... it'll be interesting to see.

  • Arthur Brock
    Arthur Brock 4 months ago

    I wonder if someone could scan their face and then an AI outputs what makeup would be necessary to look as similar as possible to a reference, perhaps a character that person wants to cosplay as?

  • Taurus
    Taurus 4 months ago

    This technique is specifically for human heads, which will be quite awesome for games and virtual worlds in the short term.
    I wonder when we can generalise the creation of a digital version of any real-world entity. Human heads, like any objects, consist of parts: skin, bones, individual pores, hair strands, eyelids, eyes, nose, nostrils etc. I hope that researchers are soon able to generalise the kind of object recognition that a human child has. The neural network would recognise something as an object even when it doesn't know its name. Given more data about the world it could, for example, learn that the object it saw earlier is called "a tooth", and a set of multiple tooths is called "teeth" - just like a child would learn.

  • RDB
    RDB 4 months ago

    This is the only channel I sub to where I watch every video. Amazing stuff every time.

  • Skeptical Caveman
    Skeptical Caveman 4 months ago +4

    Removing complexity/noise by using virtual reality instead of "real reality" is similar to what Tesla is doing with FSD beta.

  • 3D Print Timelapse
    3D Print Timelapse 4 months ago

    @Two Minute Papers They should use an extra texture layer that states, per pixel, the distance between the surface of the skin and the muscle, and the thickness. And another texture layer (or combined with the muscle texture) to describe how far from the surface the bone structure is. Because when you look at the skin, it feels like it's from a doll that's hollow on the inside. With two extra texture layers, the subsurface scattering could use the texture pixel data of the muscle, and then the bone, to calculate how the light bounces under the skin.
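
The layered-depth idea above can be mocked up with simple Beer-Lambert attenuation: each color channel has its own mean free path in skin, so the farther light travels down to the muscle layer and back, the dimmer and redder it returns, while shallow bone bounces extra light out. A rough sketch (all depths and constants here are invented for illustration, not from the paper):

```python
import math

def subsurface_tint(depth_to_muscle_mm, depth_to_bone_mm):
    """Crude per-channel Beer-Lambert falloff; red scatters deepest,
    so thick tissue shifts the returned light toward red."""
    mfp = {"r": 3.7, "g": 1.4, "b": 0.7}  # made-up mean free paths (mm)
    path = 2.0 * depth_to_muscle_mm       # down to the muscle and back
    # Bone near the surface bounces extra light back (e.g. the forehead).
    bounce = 1.0 + 0.5 * math.exp(-depth_to_bone_mm / 5.0)
    return {c: min(1.0, bounce * math.exp(-path / m)) for c, m in mfp.items()}

thin = subsurface_tint(0.5, 2.0)    # thin skin over bone, like the forehead
thick = subsurface_tint(3.0, 25.0)  # thick tissue, like the cheek
```

The thick-tissue result keeps far more red than green or blue, which is the reddish glow that makes skin read as solid flesh rather than a hollow doll.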

  • MidWalker
    MidWalker 4 months ago

    That facial landmark detection is very nice for low-cost facial capture - that is, if they decide to release the project and the dataset as an OSA,
    or just the binary builds.

  • Xyu
    Xyu 4 months ago +33

    Beautiful and TERRIFYING at the same time because we know 10% of this tech will be used to entertain us while the other 90% to control and oppress.

    • Olaxan4
      Olaxan4 4 months ago +19

      I must say, no matter how cool it is -- it's hard to get excited about advances in facial recognition...

  • Ser Ta
    Ser Ta 4 months ago

    Just wow! And this is also possible in all kinds of different domains, like climate change, chemistry, and other physics.

  • Matias Wainsten
    Matias Wainsten 4 months ago

    Could this be used to recognize individuals based on a dataset of IDs + a photo, video or feed of people?

    • TMCicuurd12b42
      TMCicuurd12b42 4 months ago

      That question is 30 years too late... and the answer, of course, is Of Course; we did that in the 90s with basic neural nets, airport security systems, and federal databases... Took about 30 seconds per facial grab. Pretty sure they can do it for everyone now...

  • NewzToday
    NewzToday 4 months ago

    Microsoft will use this technology in their metaverse platform in the near future. Now they use 2D/3D avatars. Then they will use AI-made realistic avatars.

    • Nova Verse
      Nova Verse 4 months ago

      Hopefully they will give these tools to their game developers as well, as their games could definitely use random NPCs with actually good A.I. modeling.

  • 66ThankYou99
    66ThankYou99 4 months ago

    I thought every computer-generated realistic face was always meshed around generic 3D models through steady tween fitting.

    • Machina
      Machina 4 months ago +2

      Not every single one, but yes, many are made from a base model; however, not everyone uses the same base model.

  • Kaleid-Will-Cope
    Kaleid-Will-Cope 4 months ago +7

    What. A. Time. To. Be. Alive.
    Festive Greetings and thank you for all these wonderful papers.

    • Two Minute Papers
      Two Minute Papers  4 months ago +3

      You are very kind, thank you! Happy Holidays to you too! 🙏

  • Anirudh Pratap Singh
    Anirudh Pratap Singh 4 months ago

    I have a lot of interest in machine learning. I don't know where to start to make this fruitful. I really love your videos and I am subscribed to your channel. By profession I am a designer, and if anyone would respond back, I would love to connect with them and discuss what's in my head.

  • Gamer Tayhong
    Gamer Tayhong 4 months ago

    In the past few days I was commenting on how Microsoft is incapable of understanding respect for privacy, or at least faking it. Lol! They don't really understand humans, but their AI might.

  • Supreme Lobster
    Supreme Lobster 4 months ago

    Maybe the purpose of the simulation we live in is to generate labelled data automatically lol

  • Erik Žiak
    Erik Žiak 4 months ago

    Why am I not impressed? I guess all this "wizardry" became second nature. We truly live in the future.

  • dick butt
    dick butt 4 months ago +1

    Maybe my viewing resolution is too small, but I honestly wasn't sure if the first real human was a real human.

  • Reyna Singh
    Reyna Singh 4 months ago +5

    Incredible!

  • Unktheunk
    Unktheunk 4 months ago +10

    Am I the only one who kinda gets a chill when thinking about how this will be used?

  • The OtterBon
    The OtterBon 4 months ago

    They developed this amazing tech but couldn't find 3D modelers other than the guy who made the faces in Morrowind?

  • NCK Gaming
    NCK Gaming 4 months ago

    Brilliant, now if we do get some apocalyptic rogue murder AI at some point, it'll be trained to go after fake humans instead of real ones xD

  • TheUntalentedRat
    TheUntalentedRat 4 months ago +2

    I only hope digital humans won't replace real humans.😕

  • MrVipitis
    MrVipitis Month ago

    I recently asked a researcher why they were modifying test data from other parts to show how syntax and semantics are learned in the encoder structures of a language model, not in the static embeddings. Didn't get the answer I was looking for.

  • Sébastien Pautot
    Sébastien Pautot 4 months ago

    We could use it for synthetic eyes (for robots and stuff like FNAF cosplays) to be able to "watch" people in the eyes when they speak nearly in real time, it'd be great ngl!

  • Jakub z Neba
    Jakub z Neba 4 months ago

    Finally, you don't need toxic actors to create anything story-based… the future is here.

  • Judoball
    Judoball 4 months ago

    Reminds me of how Tesla trains their FSD AI.

  • Nova Verse
    Nova Verse 4 months ago

    I wonder if Microsoft will use it for their gaming division.
    As of right now, Epic Games is really at the forefront of digital customizable MetaHumans in their UE5 engine; I'd like other companies to do the same.
    A bit of good ole competition wouldn't hurt, right?

    • Nova Verse
      Nova Verse 4 months ago

      @IceFire I'm sure someone is making an open-source version of it, potentially through crowdfunding.
      I would recommend that person work at the Blender Foundation.

    • IceFire
      IceFire 4 months ago +1

      It'd be really cool to see it become open source to help development across the board rather than just one company. Imagine the crazy things we could make if anyone who needed access to this type of technology had access to it.

  • Meinbher Pieg
    Meinbher Pieg 3 months ago

    To be fair, to an AI there is no difference between a "real human" and a "generated human character". An AI has only ever seen pictures that represent humans anyway. It never "sees real humans", so it's not really that impressive.

  • HarryBallsOnYa
    HarryBallsOnYa 4 months ago

    This, and deepfakes in general, has terrifying ethical ramifications. I fear we may have opened a box that cannot be closed.

  • pikachufan25
    pikachufan25 4 months ago

    Imagine this used for VRChat-type games.
    Imagine the possibilities.

  • i wanted to save the world

    AI in cahoots with AI, what could go wrong! 🤓

  • Adriane
    Adriane 4 months ago

    Definitely can be used to populate open world games in the future

  • VodShod
    VodShod 4 months ago

    how can you get a hold of the software for modeling humans?

  • John Ingham
    John Ingham 4 months ago +1

    Is this what our brains are doing: imagining possibilities based on experience, processing these scenarios, and learning from them?

    • any wallsocket
      any wallsocket 4 months ago

      not that we know what the brain is doing, but it seems likely that yes, the brain naturally will explore probabilistically viable latent spaces for new objects, if that's what you're suggesting.

    • LAPABA 123
      LAPABA 123 4 months ago

      Pretty much

  • John Maxwell
    John Maxwell 4 months ago

    Aiming at the correct part of the head will make the killer robots a lot more efficient.

  • Erobus Black
    Erobus Black 4 months ago

    Ok, but full functionality is the point, and it's going to require this.

  • Jeremy Hofmann
    Jeremy Hofmann 4 months ago

    Meanwhile I’m over here trying to figure out which images have a streetlight so I can prove I’m not a robot.

  • Ero
    Ero 4 months ago +2

    There is a requirement for front-facing cameras in the new Windows 11 OS. This, mixed with that, makes me understand that our privacy is, or could be, nonexistent if you use Microsoft Windows. Whether it's used for selling to advertisers or for a national/global tracking system of some kind, it scares me for the future of privacy. I just learned that everyone is opted in to a keylogger in Windows 10. Everything you do on their system is tracked. Just call me crazy if you didn't want to hear this.

    • Spencer C
      Spencer C 4 months ago

      @Ekao Advanced knowledge and tools, you mean Google? Everyone acts like Linux is unusable if you're not a tech guru; most people with more than two braincells to rub together can Google most issues.

    • Ekao
      Ekao 4 months ago

      Best get used to linux while win10 is still supported, at least, that's my train of thought. There's still some major issues, but it's come a long way towards being reliably usable. Still not entirely there, as troubleshooting still requires advanced knowledge and tools, and is more common than it should be imo. I have hope that the user experience will continue to improve in the coming years.

  • Stanley Tang
    Stanley Tang 4 months ago

    AI based no tracker facial mocap, wow

  • Takkie Jakkie
    Takkie Jakkie 4 months ago

    Isn't using AI-generated images to generate more AI-generated images too circular? It's like building on top of something that may not be perfect and thereby amplifying the imperfections with each iteration. I might not have understood it 100%.

  • FleurBird
    FleurBird 4 months ago +1

    Sadly, Microsoft is able to make an AI, but not something as simple as a working store.

    • Leonardo SA
      Leonardo SA 4 months ago

      That just shows where the effort is going.

  • Ʌᴛᴍᴏꜱ ɢʟɪᴛᴄʜ

    This is pretty cool and all until you realise that this could be exploited for nft generation :(

  • Brian Harrison
    Brian Harrison 4 months ago

    This will be very useful for governments who want to track their citizens.

  • Rizuken Nekuzir
    Rizuken Nekuzir 4 months ago

    I bet the only reason it does better on the A.I. versions instead of the real ones is just that there's less data to go through. The real faces are more complex, so it takes longer to parse the same info.

  • Novelty p6
    Novelty p6 4 months ago

    But has this particular AI not used a dataset originating from another AI that was fed actual footage of humans? Just asking for a friend...

  • Jupiter
    Jupiter 4 months ago

    Amazing! ❤️

  • M2-x-N2
    M2-x-N2 4 months ago

    They were already super fast in this field; now they'll get faster.

  • praestvs
    praestvs 4 months ago

    While technology is a part of the future, so should be thought and human interaction - the development of our interpretation of reality.

  • Roverson Melo
    Roverson Melo 4 months ago

    OK, but I'm interested in an AI that can distinguish between a real human and a fake (AI-made) one. Can we get one?

    • EBTS-3
      EBTS-3 4 months ago

      There's an AI that can detect deepfakes, but that's probably detecting alterations made to the video; not sure about a clip that's fully CG though...

  • Scoria Desert
    Scoria Desert 4 months ago +1

    Finally, clown detection AI

  • jackbauer322
    jackbauer322 4 months ago

    For me it's the opposite: I've only seen humans, but I don't understand them xD

  • Pond Fish
    Pond Fish 4 months ago

    To be honest... the Reface app does a great job with the right pic.

  • SAIIIURAI
    SAIIIURAI 4 months ago

    Next, a human reaction and behavior dataset, and there you go: digital waifu! ^^

  • Franco Moyano
    Franco Moyano 4 months ago +1

    Imagine in the future how cool VR / Metaverses can be when they can mimic our expressions with 100% accuracy. You'll be able to tell if a player/user is nervous, excited, angry, stressed. Really adds a whole other dimension and depth to virtual worlds and games.

  • Jaxon Pham
    Jaxon Pham 4 months ago

    What a time to be alive!

  • Cabo Obake
    Cabo Obake 4 months ago

    What a time to be alive!

  • ȘoimulX
    ȘoimulX 4 months ago

    Ah yes. An AI which makes 3D people.

  • Faisal Niazi
    Faisal Niazi 4 months ago

    What a great time to be alive☺️

  • Magnus Winther
    Magnus Winther 4 months ago +2

    This could be a good way to counteract biased data sets. We've seen a lot of this happen in the case of racial bias in some scenarios.

    • edwardinchina
      edwardinchina 4 months ago

      Sadly, it's more likely to go the other way unless the designer is actually trying to balance the dataset. Looking at the examples, it looks like mixed-race followed by white are the most common.
      What would make it balanced is if the AI generated new models to train on areas of weakness - sort of how a GAN works.
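
That "generate new data on areas of weakness" idea is essentially error-driven allocation of the rendering budget. A toy sketch (the group names and error rates below are invented for the demo, not from the paper):

```python
# Allocate the next batch of synthetic renders in proportion to each
# group's current validation error, so the weakest groups receive the
# most new training data.
errors = {"group_a": 0.02, "group_b": 0.09, "group_c": 0.04}
budget = 10_000  # number of faces to render in the next round
total = sum(errors.values())
plan = {group: round(budget * err / total) for group, err in errors.items()}
# group_b has the highest error, so it gets the largest share of renders.
```

Iterating this loop (train, measure per-group error, re-render) is the GAN-flavored feedback the comment describes, without needing a discriminator network at all.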

  • Joshua K
    Joshua K 4 months ago

    Microsoft's AI understands humans... but it has never seen one!
    Just saying.