DeepMind’s AlphaStar Beats Humans 10-0 (or 1)

  • Published on Feb 5, 2019
  • DeepMind's blog post:
    deepmind.com/blog/alphastar-m...
    Full event:
    www.youtube.com/watch?v=cUTMh...
    Highlights:
    www.youtube.com/watch?v=6EQAs...
    Agent visualization:
    www.youtube.com/watch?v=HcZ48...
    DeepMind's Reddit AMA:
    old.reddit.com/r/MachineLearn...
    APM comments within the AMA:
    old.reddit.com/r/MachineLearn...
    Mana’s personal experience: www.youtube.com/watch?v=zgIFo...
    Artosis’s analysis: www.youtube.com/watch?v=_YWmU...
    Brownbear’s analysis: www.youtube.com/watch?v=sxQ-V...
    WinterStarcraft’s analysis: www.youtube.com/watch?v=H3MCb...
    Watch these videos in early access:
    › www.patreon.com/TwoMinutePapers
    Errata:
    - The in-game time has been fixed to run in real time.
    We would like to thank our generous Patreon supporters who make Two Minute Papers possible:
    313V, Alex Haro, Andrew Melnychuk, Angelos Evripiotis, Anthony Vdovitchenko, Brian Gilman, Christian Ahlin, Christoph Jadanowski, Dennis Abts, Eric Haddad, Eric Martel, Evan Breznyik, Geronimo Moralez, Jason Rollins, Javier Bustamante, John De Witt, Kaiesh Vohra, Kjartan Olason, Lorin Atzberger, Marcin Dukaczewski, Marten Rauschenberg, Maurits van Mastrigt, Michael Albrecht, Michael Jensen, Morten Punnerud Engelstad, Nader Shakerin, Owen Campbell-Moore, Owen Skarpness, Raul Araújo da Silva, Richard Reis, Rob Rowe, Robin Graham, Ryan Monsurate, Shawn Azman, Steef, Steve Messina, Sunil Kim, Thomas Krcmar, Torsten Reil, Zach Boldyga, Zach Doty.
    www.patreon.com/TwoMinutePapers
    Splash screen/thumbnail design: Felícia Fehér - felicia.hu
    Károly Zsolnai-Fehér's links:
    Facebook: TwoMinutePap...
    Twitter: karoly_zsolnai
    Web: cg.tuwien.ac.at/~zsolnai/
  • Science & Technology

Comments • 1 417

  • Two Minute Papers
    Two Minute Papers  3 years ago +505

    This is a new Two Minute Papers episode, which is not two minutes, and is not about a paper (yet). Welcome to the show! 🎓

    • Ife Newsome
      Ife Newsome 2 years ago

      How do I see an AI play against an AI?

    • Major Mononoke
      Major Mononoke 2 years ago

      Seems cool, but when will such an advanced AI be ready to play StarCraft: Brood War/Remastered at a good level, without cheating... since that is considered the more difficult real-time strategy game of the two...

    • 85Damix
      85Damix 3 years ago +2

      @mg8383ffdryhh I happen to have played SC2 for about 5 years; I never made it to Grandmaster level, but I am a decent player. I can tell when I see a bad player: AlphaStar places buildings randomly and in a bad order, and its general strategy is very bad; to be honest, it does not really have any strategy.
      The only reason it does not get wrecked by 8-year-olds is that it can do 1400 EPM; pro players never really go above 200, and even that would be sort of spammy.
      It's like playing a multiplication game against a calculator. The calculator is not a clever, advanced system, it's just faster than you.
      AlphaStar can kill anyone by simply rushing them with drones right when the game starts and win 100% of the time.
      It's like playing Counter-Strike against someone using an aimbot: it's not clever, it's just inhumanly fast.
      The whole project boiled down to a major reward-hacking exercise.

    • mg8383ffdryhh
      mg8383ffdryhh 3 years ago

      Here are some of my thoughts. APM was used as a cap to ensure that DeepMind developed good strategies to win, instead of using brute-force, superhuman gameplay. The development team didn't take into consideration that the APM of human players is highly exaggerated: a lot of those actions per minute are actually spam. EPM is a better measure of useful actions, but even with EPM you are just looking at actions that are unique. DeepMind also spams, but during engagements its actions are too perfect.
      Why even bring this up? DeepMind continually used micro-intensive units every game. These are inferior units that can win, but only if you use brute-force, superhuman gameplay. Why is that important? Because it's not realistic. The AI solves the problem using mechanics that aren't possible in the human world, so its strategies will be stunted and rely more on micro, as opposed to using positioning, superior unit compositions, timings, etc. In the end you get a computer that knows the best way to win when it has perfect micro, but has less variability and possibly less adaptability, as evidenced by the loss it took because it had never seen an Immortal drop before.
      Because of DeepMind's micro ability it's hard to judge its overall strategy, and I think that was the whole point of limiting the APM in the first place. Humans can't effectively use these strategies. It would be much better if the AI came up with strategies that humans could learn from.
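
      As a rough illustration of the APM vs. EPM distinction discussed above (a sketch with a hypothetical action-log format, not the real SC2 replay API or DeepMind's tooling):

      ```python
      def apm_and_epm(actions, spam_window=0.3):
          """Return (raw APM, effective APM) for a list of (timestamp_s, command) pairs."""
          if len(actions) < 2:
              return 0.0, 0.0
          duration_min = max(actions[-1][0] - actions[0][0], 1e-9) / 60.0
          effective = 0
          last_seen = {}  # command -> timestamp of its last occurrence
          for t, cmd in actions:
              # A repeat of the same command within the spam window (e.g. frantic
              # duplicate move clicks) counts as spam, not as an effective action.
              if cmd not in last_seen or t - last_seen[cmd] > spam_window:
                  effective += 1
              last_seen[cmd] = t
          return len(actions) / duration_min, effective / duration_min
      ```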

  • a2jy2k
    a2jy2k 3 years ago +172

    Not once have you mentioned that TLO is a Zerg player being asked to play his worst race. For someone who isn't really into StarCraft you can sort of handwave that away, but it has such a significant effect on the game. Each of the 3 races plays so incredibly differently from the others. That being said, MaNa got bodied unfortunately, and that man does main Protoss.

    • ꧁Mike Sully꧂
      ꧁Mike Sully꧂ 4 months ago +2

      I know this is an old comment, and this version of AlphaStar could only play PvP. For those who aren't aware, last year a version of AlphaStar came out that could play Protoss vs. any race (originally they had just trained it for PvP, but now they have trained it for PvZ and PvT). It has actually played games online under a pseudonym (there are various levels of AlphaStar, Diamond etc.), and to my knowledge it has won basically 99% of the games it has played. They also made some changes, like lowering the max APM so it's less "cheesy", plus the AI cannot see the entire map all at once like the stock AI; it has to actively scout. Though I'm pretty sure the AI can "mentally process" where every unit it has vision on is, what it is doing, and what its intentions likely are based on prior simulations, at all times during a game; this is an innate advantage a machine intelligence will have over any human due to the way it exists.
      I am sure the developers could train this AI to play TvZ, ZvP, whatever they want; however, training the AI is a lot of work, and I think the AlphaStar team has moved on to different things now, stuff to do with proteins, medical science etc.
      I wish they could release this AI if possible. I have no idea how feasible that would be (does it need a supercluster to run a game, or does it just need the supercluster for the training, while the game itself can run on a laptop?). I'd also love to see how well this AI would do at simpler games such as C&C Red Alert 1.

    • Joao Lemes
      Joao Lemes Year ago +1

      It played Serral's Zerg

    • Kairat Kempirbaev
      Kairat Kempirbaev Year ago

      Anyone from Grandmaster gets to Grandmaster with any race. He was

    • MisterL2
      MisterL2 2 years ago +1

      Yeah, TLO's Protoss was marked at like 5.4k MMR on the graph LOL

  • EM P
    EM P 2 years ago +67

    "The number of actions performed by the AI roughly matches the player"...
    If the AI goes against a 300 APM player, then the AI has 300 APM.
    The difference is that the player has "meaningful APM" and "filler APM", the latter of which is just them clicking or pressing buttons to warm their fingers up.
    If all 300 APM done by the AI are "meaningful APM", then the players never stand a chance...

    • Novacification
      Novacification Year ago +6

      Yeah, I also wondered what the cap was. Was it allowed to fire 50% of that APM within a 10-second engagement, controlling each unit separately, and then retreat until it had more APM available?
      There's also a huge difference between APM spent on macro and other routine tasks and APM spent on meaningful micro in a skirmish with enemy troops. Each unit targeting optimal targets so the available damage is spread out in a way that minimizes overkill is impressive, but hardly an indication of the AI's strategic ability.
      Overproduction of workers in anticipation of losing some was pretty cool, though.
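
      For what it's worth, the burst question above is the difference between an average cap and a sliding-window cap. A minimal sketch, with made-up numbers rather than DeepMind's actual limits:

      ```python
      from collections import deque

      class SlidingWindowCap:
          """Reject actions once max_actions have been issued in the last `window` seconds."""
          def __init__(self, max_actions, window):
              self.max_actions = max_actions
              self.window = window
              self.times = deque()

          def allow(self, now):
              # Drop actions that have fallen out of the window, then check the cap.
              while self.times and now - self.times[0] > self.window:
                  self.times.popleft()
              if len(self.times) < self.max_actions:
                  self.times.append(now)
                  return True
              return False

      # A 5-minute *average* cap of 180 APM is a budget of 900 actions; nothing stops
      # an agent from spending most of that budget inside a 10-second fight, i.e. a
      # burst of several thousand APM. A short window (30 actions per 10 s, still
      # 180 APM on average) rules that burst out.
      cap = SlidingWindowCap(max_actions=30, window=10)
      ```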

    • M. Vipsanius Agrippa
      M. Vipsanius Agrippa Year ago +7

      It would pretty much be able to micromanage every single unit in an engagement. Imagine timing your stim separately and splitting perfectly while pulling back units about to die at the last second. Even in a late-game engagement with a hundred units.

  • moldpiss
    moldpiss 3 years ago +35

    What I would really like to see from this AI is the lowest APM it needs to consistently beat human pros. Just keep throttling it until it starts losing (I know that's not really feasible because it would need to be retrained for every APM, but still...). I think it would be much more interesting to see how that thing plays than the micro god we have now.
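
    The throttling experiment described here could be run as a bisection over APM caps, assuming the win rate grows roughly monotonically with the cap. A hypothetical sketch (play_match is a stand-in evaluation function, not a real API):

    ```python
    def win_rate(play_match, apm_cap, games=50):
        """play_match(apm_cap) -> True on a win; a hypothetical evaluation stub."""
        return sum(play_match(apm_cap) for _ in range(games)) / games

    def lowest_winning_cap(play_match, low=30, high=1500, threshold=0.5):
        # Bisect for the smallest cap that still keeps the win rate above threshold.
        while low < high:
            mid = (low + high) // 2
            if win_rate(play_match, mid) >= threshold:
                high = mid
            else:
                low = mid + 1
        return low
    ```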

    • Aminul Hussain
      Aminul Hussain 6 months ago

      It can play at any level of APM without retraining.

  • Ironpencil
    Ironpencil 3 years ago +478

    While the number of APM is at a human level, I think the precision of those actions is way above human level (selecting single units, while a human might accidentally select multiple), which was very obvious in the micro-management-heavy engagements. Adding some noise to the mouse positioning might be interesting.
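
    One way to read the "noise" suggestion: perturb the agent's intended click with Gaussian error that grows with cursor travel, loosely imitating human motor noise. A minimal sketch with made-up parameters:

    ```python
    import random

    def noisy_click(target_x, target_y, prev_x, prev_y, base_sigma=3.0, per_pixel=0.02):
        """Perturb an intended click; longer cursor travel means less precision."""
        travel = ((target_x - prev_x) ** 2 + (target_y - prev_y) ** 2) ** 0.5
        sigma = base_sigma + per_pixel * travel
        return (target_x + random.gauss(0.0, sigma),
                target_y + random.gauss(0.0, sigma))
    ```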

    • TheSuperappelflap
      TheSuperappelflap Year ago

      @Ivan Dimitrov Because if the AI can reach its objective by controlling its units better, we aren't learning anything about strategy from this AI model. If the AI has to learn strategies superior to human players', it should be able to A) find things about the game that can give it an advantage which human players haven't found yet, and B) if it learns to actually play the strategy part of the game, then researchers can look at what kinds of subnetworks make the AI better or worse at certain aspects of the game and what limits its understanding of its state and its environment. That can lead to improvements in general AI research, not just for video games.
      Currently it's too easy for the AI to win because it has too direct an interface with the game and can issue too many commands with too high accuracy. It has a 150 ms reaction time. It's just dumb to do it this way, unless Google only intended this as a PR stunt: look, we built something using some AI that can beat human players with inhuman parameters in a real-time strategy game; mission accomplished, write a press release, and let's go get some beers.

    • DC
      DC Year ago

      I picked up on this as a massive advantage too.

    • whirled peas
      whirled peas 2 years ago

      How about creating a physical simulation of a player, where actions occur physically with inertia and, as you say, noise etc.? That would be intense.

    • أمادو
      أمادو 2 years ago

      @Daniel John von Long see you in 50 years .

  • n y
    n y 3 years ago +4

    The AI has a huge advantage in this game because of its near-infinite ability to micro units strategically, e.g. porting out Stalkers one at a time when their health is low is done seamlessly by an AI but requires almost 99% of a human player's attention.

  • Soul-Burn
    Soul-Burn 3 years ago +149

    I want to see a match between DeepMind and OpenAI in some game. Should be super interesting.

    • Lisa Ruhm
      Lisa Ruhm Year ago

      @Alex Balogh Well, they limited AlphaStar, so it is an AI with the same limitations that a human playing the game has, so it can't micro all units on the map at the same time.

    • Yue Si
      Yue Si Year ago

      @Phantom aha

    • prodj.mixape official
      prodj.mixape official 3 years ago +1

      OpenAI is closing the project; they will have their last appearance in April vs. the TI8 champions. They might commercialize the product and release it to the industry.

    • Alex Balogh
      Alex Balogh 3 years ago +5

      @GREG GMW Dude, AlphaStar has been limited by the programmers; if they wanted, they could literally make it unbeatable by removing the input delay and increasing the APM.

    • Brnki S
      Brnki S 3 years ago +1

      @GREG GMW Not the same thing. You cannot see the type of unit/building, and you cannot select a unit or building from the minimap... also, you have to click on the minimap or have a hotkeyed group or location to move the screen to do those things. It is a really fucking big deal. If the AI were able to optimize itself, then doing these operations while being limited to 6 effective actions per second would be of little consequence.

  • Verandure
    Verandure 3 years ago +154

    Correction to 6:06: AlphaStar can only play Protoss vs. Protoss on a single map. It cannot play every playstyle in the game (for now).

    • far num
      far num Year ago

      @Patrick we're so fucked.

    • Kairat Kempirbaev
      Kairat Kempirbaev Year ago

      @freetobeyou It was on the ladder a year ago. The win rate was about 98%. You can see the mechanics of the games in the replays. Many people shared their games, and it was insane. I like AI research. You can Google how Serral lost 2 out of 2 at some competition.

    • freetobeyou
      freetobeyou Year ago

      @Kairat Kempirbaev Link? I'm all for AI SC2 being a challenge to players. I'm pointing out that the scrubbing of names from replays doesn't make sense. If it's so good, name the players that actually played those matches. They won't.

    • Kairat Kempirbaev
      Kairat Kempirbaev Year ago

      @freetobeyou Lol. Serral lost a year ago.

    • freetobeyou
      freetobeyou 2 years ago

      Exactly as you say, it cannot play every style. Nor can it beat actually good, true high-level human players. It screws up on the most basic things.

  • Antsan
    Antsan 3 years ago +2

    I'd like to see them take on cooperative games - cooperating with other AI and with human players. My intuition says cooperating with human players would be really hard.

  • Michał Orynicz
    Michał Orynicz 3 years ago +2

    I think a very important point about the game won by MaNa is that he was able to exploit a buggy behaviour of AlphaStar, where the AI was stuck looping, moving units from one place to another.

  • ImrazorZodd
    ImrazorZodd 3 years ago +11

    I get it, this is an insane achievement, but could you undersell the advantages the AI had any more? Describing them as "technically" and "slight" advantages is insulting. The fact that the pros put up as much of a fight as they did is admirable.

  • Simone
    Simone 3 years ago +2

    What we are seeing here is an exponential increase in intelligence. If this can be leveraged and reproduced over and over, AI complexity will skyrocket over the next few years.
    Of course it's going to take time before societies and cultures adapt and change to accommodate this new paradigm, but this could potentially be the starting point for a whole new chapter in human history, much like mathematics, agriculture, language and metallurgy were in the past.

  • Sean Walker
    Sean Walker 3 years ago +2

    Wonderful video!! The first one of yours I've seen, but really great coverage of the content, and thanks for all the resources. Definitely subscribing and can't wait for more!

  • Moby Motion
    Moby Motion 3 years ago

    Amazing work, yet again. Also, some really interesting comments on this video from experienced StarCraft players- can't wait for these to be implemented in the next iteration of this algorithm. The rate of progress hasn't just been astounding, it's been exponential. It makes me wonder how much further these models have to go, before they can tackle meaningful real-life issues like scientific research.
    One thing that is interesting, however, is that this took 200 years of training in total. That's much more than any human player, and could be related to a point you mentioned in an earlier video about prior information that people already have.

  • Christopher Simms
    Christopher Simms 3 years ago +1

    I've been working on my own AI that plays SC2, and I'm blown away by how much better DeepMind's has gotten. 200 years of gameplay is amazing.

  • Marquise D
    Marquise D 3 years ago +52

    11:34 "weather prediction" means war strategy and mass surveillance for reference.

  • A P
    A P 3 years ago +2

    I think the AI needs to interact with the game through a physical interface to be fair: a mouse and keyboard. Looking at the APM, it seems to be a bogus match. As a development in AI this match is exciting; as a match, no. It's like playing basketball when the other team are mutants with 10 arms.

  • AntiCheap
    AntiCheap 2 years ago +1

    3:23 That makes sense; I couldn't see how AIs playing against humans could even lose, given they would just have unfair advantages like users who cheat.

  • TehProBot
    TehProBot 2 years ago

    Man I love these videos. Very detailed and engaging information. Great job!

  • Simo Vihinen
    Simo Vihinen 3 years ago +1

    The in-game time in SC2 USED to be faster than real time, but they adjusted it so the two match now, as of Legacy of the Void.
    And as people may have pointed out, the AI doesn't do any "spam" actions to keep its fingers warm, so this lowers its APM.

  • nigahiga878
    nigahiga878 3 years ago +1

    Early-game Protoss vs. Protoss is by far the simplest thing for an AI to understand out of all 9 matchups. Can't wait to see the AI that doesn't require the humans to play in ways they don't usually play.

  • Darth Pro
    Darth Pro 3 years ago

    This is amazing, can't wait to see your analysis of the paper!
    (Also, I really like these long-form videos, please don't hesitate to make them.)

  • _Sky_
    _Sky_ Year ago +1

    Would love to see the AI play against two players at once.

  • ben rogan
    ben rogan 2 years ago

    The AI also had a significant impact on the pro Dota scene: it literally invented using the courier constantly to win the regen battle in mid. Usually it was just a bottle and that's it; once the AI started doing it in its exhibition matches, everyone started using the courier mid for all kinds of regen, anything required to win the lane as hard as possible.

  • Žiga Iglič
    Žiga Iglič 3 years ago +1

    This version of the AI was slightly "cheating" (it was able to see the entire map at the same time, not restricted to seeing just one screen at a time). The fixed version of the AI lost. Also, the player who won that one match seems to have adapted to how the AI plays after seeing those 10 games.

  • Chris Topher
    Chris Topher 3 years ago +994

    Yeah but the pro player wasn’t even Korean...

    • Strange Videos
      Strange Videos 10 months ago

      Korean what?

    • Leonardo Prada
      Leonardo Prada Year ago

      @TheNeonknt Yeah ... I mean he doesn't have it as easy as in 2018 ... now Reynor and even Clem are catching up to him

    • Leonardo Prada
      Leonardo Prada Year ago

      @TheNeonknt Serral is done? ... just because of one bad season? ... we could say the same about Rogue tho

    • Gianfranco Volpe
      Gianfranco Volpe Year ago

      Serral, the top-ranked player in the world, is actually from Finland

    • SSJ AQL
      SSJ AQL Year ago +3

      Koreans probably use AlphaStar as a warm up before ranking up.

  • astatalol
    astatalol 2 months ago +2

    Who would've guessed? A micro-managing machine that can control every single unit at the same time won against a human who doesn't have 100 mice and hands... Yeah, looks like the AI dudes wanted to seek revenge for being so bad at StarCraft lol

  • SaltyRamen
    SaltyRamen 2 years ago +1

    This game is amazing. This AI is scary. This channel is great too.

  • TJ Macara
    TJ Macara 3 years ago

    Wouldn't mind having this AI as a sparring partner, giving you the option to test yourself above your limits. More or less it could even be your sensei haha

  • CHERNOMOR GAMES
    CHERNOMOR GAMES 2 years ago

    It is amazing that DeepMind was able to introduce a strategy of overdroning and a different way of managing mining. That blew my mind completely back a year ago when those games happened!

  • Smoking Beetles
    Smoking Beetles Year ago +1

    The AI is literally micromanaging economy and warfare at the same time. Tactically, each unit is independently controlled, while humans select units in groups. The APM of an AI is unbeatable.

  • Tom Liberman
    Tom Liberman Year ago

    Awesome video. I'm a chess player, and modern chess engines have really improved the play of human players by suggesting new strategies and general concepts for victory. Chess engines can now beat any human easily, but, if anything, their presence has improved the game of chess immeasurably rather than destroying it, as many predicted.

  • psychic function
    psychic function 3 years ago

    Really fascinating! I wonder where we will be in a year. Or five years. AI is so promising! I'm really glad your channel exists; it's the best one on TheXvid for this kind of stuff!

  • Shlomdog
    Shlomdog 3 years ago

    Great video! Thanks for sharing and keeping it straightforward.

  • GaminFaith
    GaminFaith 2 years ago

    At some point we need to start doing AI matches and have teams that modify them between rounds. That would actually be super cool.

  • PwnHub
    PwnHub 3 years ago +1

    Time for DeepMind to get into Rocket League. I would love to see the strats it comes up with.

  • Jianju69
    Jianju69 2 years ago

    At the top level of competition, the margins of ability that decide victory are usually quite slim. Hence, the top player would most likely have lost even if the AI had not been refining itself in the meantime.

  • andrew nelson
    andrew nelson 3 years ago +1

    I don't know why they would bother adding a reaction delay if they didn't limit the actions per minute and what it could see at one time like a human. I understand that it can't see through the fog of war, but the computer essentially had extra screens to see where all of its units were, instantly.
    I'm curious what hardware it was running on.

  • Maxime Larmeaux
    Maxime Larmeaux Year ago

    Thanks for taking the time to give an in-depth explanation of every paper.

  • World Peace
    World Peace 3 years ago +1

    Well, the reaction time and parallel processing of this AI are well beyond what humans can do. To make it at least comparable, there would need to be restrictions on what the AI can do.

    • Shiinon Dogewalker
      Shiinon Dogewalker Year ago

      And those watching the video know it has a 350 ms reaction time and capped APM.

  • Julien Teyteau
    Julien Teyteau 3 years ago +3

    It was such a delight seeing this alien AI play starcraft

  • quinxx12
    quinxx12 3 years ago

    When training the AI, can you feed it any given situation of a game and have it do what it thinks is optimal, or does it have to start the game at t=0?

  • Kristupas Antanavicius

    Would be very interesting to see AlphaStar playing against a team of multiple players who are controlling a single StarCraft player instance.

  • SandroRocchi
    SandroRocchi 3 years ago +1

    The fact that the AI chose an unfavorable composition and still managed to win easily just shows that it found a composition with which it can have more control over the game. The first iteration of AlphaStar to use Blink Stalkers beat all the others so badly that now all iterations are stuck on perfecting that strategy. If they never code a limit on how much AlphaStar can micromanage Blink Stalkers, that's all it's ever going to play. Perfect micro + Blink beats everything.

  • MegaMark0000
    MegaMark0000 3 years ago +5

    It's not fair that it can see the whole map at once. The whole point of SC2 is that it is hard to macro and micro at the same time, because micro happens out on the map on the offensive and macro happens at the main base. They might as well be playing different games.

  • _Sky_
    _Sky_ Year ago +1

    Also, I would LOVE to see how the AI would react if the player had chosen Zerg. :D Something which this AI presumably had never seen before :D

  • Petros Adamopoulos
    Petros Adamopoulos 3 years ago +1

    When the holistic view of the map was taken away from the AI, it lost pretty miserably. Also, it's completely dumb against things like unit transport drops. This is expected, since it's unable to learn this by itself and would need to be shown and taught.

  • Justin Lor
    Justin Lor Year ago

    I think this is amazing: running a program to self-learn a game in rapid succession and pretty much reach the skill level of a pro. This is basically the Matrix. Neo: "I know kung fu." Morpheus: "Show me."

  • TFat69
    TFat69 3 years ago

    Teaching AI how to win wars. I love it 👏

  • Phantom
    Phantom 3 years ago +97

    2:53 "noting that ingame time is a little faster than real time" - No, it isn't. This was corrected four years ago with the release of the latest expansion.
    4:04 "the reaction time of the AI was set to 350 ms" - There is a delay, but it wasn't set. That was simply the round-trip time of the neural network running in realtime.
    5:58 "these agents can play any style in the game" - more accurately, almost any style of 1 of 3 races, on 1 of 7 maps. The agent in the 11th match demonstrated inability to defend warp prism harass.

    • Lua
      Lua Year ago

      Cool story bro.

    • zsqduke
      zsqduke Year ago

      There are slow, standard and fast options for in-game time. At “standard”, it’s equal to real time. But ladder games are always on “fast”.

    • Gimmisomesoap
      Gimmisomesoap 3 years ago +1

      @Joi 179 It's an optimisation problem. Micro is super effective, so if it learns something positive in micro while at the same time doing something strategically bad, it takes this as a net win. People can separate the two fields and say we won because of micro but simultaneously lost strategically, but if you want to do that with an AI or an optimisation algorithm, you either have to figure out how to make it handle 2 contradictory results, or make micro less effective, i.e. set a hard cap on speed. If you want a comparison, I know at least Winter did a climb through the ladder using a max of 50 APM, so there's a sample there of good strategic knowledge and terrible speed to compare against. Fighting essentially becomes 'maybe pre-spread, then a-move'. Set it so the AI has a hard cap of 1 action per second (so it has 1 action to spend each second, and it resets - none of that 'do whatever you want in 3 seconds then slow down to compensate'), and let it compete. Whatever agents win out there should be more strategically sound than one that will just make Stalkers. The rest is up to whether they built a good algorithm that they can take the knowledge from and build upon later with speed as well.

    • stropheum
      stropheum 3 years ago +5

      @bagourat It was also trained improperly. The fact that it can spike to 1500 perfect APM means it will always fall back on perfect control instead of making correct decisions. So it can afford to make objectively bad decisions and micro its way out of them, because no human can play that fast. That's not very impressive and not a good demonstration of learning, imo. It needs to have its abilities reeled back, and it needs to be retrained with a wider sample of replays.

    • stropheum
      stropheum 3 years ago +1

      1 of 3 races, in 1 of 6 matchups*

  • ComixConsumed
    ComixConsumed 2 years ago

    I would be very interested to see what would happen if you put an OpenAI agent or two on the human team.

  • Jerry Green
    Jerry Green Year ago +1

    10:08 "it was asked to do something it was not designed for", - yeah, I agree on that. Though, on another side, it's the main problem with those AIs! In the end we want a general-purpose AI, while this AI is kind of "intermediate AI", - it's made not on top of strict algorithms but on top of neural networks, it's adaptive yet it's still hugely dependent on the task he's made for. "Weak neural network" is probably a good term for that. Maybe at some point we will have some strong neural network. And while on that path, we'll enjoy various weak neural networks yet to come, because they are still useful and sometimes just awesome!

  • TheBorogrove
    TheBorogrove 3 years ago

    When it always plays itself, I'd be concerned it will get locked into using Stalkers (the ranged Protoss unit), as it can master the micro of that unit better and defeat variants which are more rudimentary at using other unit compositions.
    The way it micro'd reminded me of other AI-building games where the commands to retreat and attack are fired purely by shield/health percentage, distance from the enemy, and enemy numbers. It's tactically genius, but strategically I doubt it can prioritise effectively. I'd be curious whether it also adapts to scouting information in an interesting way... Anyone interested in this kind of AI, I'd recommend the mobile game on the Google Play store called Gladiabots, and you'll get what I mean in more detail...
    A suggestion I'd have is to force the AI to master the micro of different units by making it play itself with some of its preferred units disabled.

  • Wayne Wu
    Wayne Wu 3 years ago +5

    Watch and remember these faces; when AI is capable of ending the world one day, remember the people who created it in the first place.
    LOL

  • Borislav Mitev
    Borislav Mitev 3 years ago

    "AlphaStar" is the result of a work of a team of players, so basically we have one player vs many developers. That is quite unfair from the get go. "AlphaStar" should play against other AIs created by other teams, rather than against a single human.

  • Lord of the Pies
    Lord of the Pies 3 years ago

    They need to apply this to real-life robots. I'm thinking along the lines of Spot Mini wired up to DeepMind AI with a reward goal.

  • Them
    Them Year ago +2

    "weather prediction and climate modeling"
    it's always the same line.
    are we seriously to believe that the military is not watching this VERY closely and perhaps spawning bots of their own?
    a machine that can multi-task, make extremely complicated, split-second, life-dependent decisions, and learn by experience at a rate exponentially faster than any human ever could.
    the applications to battle tactics and general war strategy are endless.

  • RJ
    RJ 3 years ago

    It's crazy that one day we will be able to tell our kids that we watched the development of the AIs they're so accustomed to.

  • Stepan Anokhin
    Stepan Anokhin 3 years ago +21

    I have mixed feelings seeing how easily AI beats humans in such games... games which resemble military actions in the real world.

    • GodlyAtheist
      GodlyAtheist 2 months ago

      AI won't destroy humanity, at least not for a long time and even that is iffy... But it's a shame it won't do it tomorrow because we all deserve it, and if you disagree you are wealthy and enjoying the suffering of the majority (the poor).

    • Aaron Fkckcjc
      Aaron Fkckcjc Year ago +6

      Not really a fair comparison in this case... AlphaStar won so consistently by having insanely fast reflexes and leaning heavily on a style that exploits them. In a real-world scenario, that's a targeting system, which we already use computers for. They did amend the experiment after this to limit AlphaStar to human-like reflexes, and it started losing quite a bit.

  • Nevo Krien
    Nevo Krien 3 years ago +2

    TLO played his off-race, which is a big deal, since making a Zerg AI is much harder, and so making an anti-Zerg AI is again very hard.

  • Disent Design
    Disent Design 2 years ago

    so much fun to watch us train our replacements :)

  • BangDroid ✪
    BangDroid ✪ 3 years ago

    The next step is AI domain integration. These highly specialized AIs are great, but they need unifying to pool their skills: an AI that can win games, drive cars, and predict our every action.

  • 331 TNT
    331 TNT Year ago +1

    @Two Minute Papers At that point, the AI's APM limitations were very wacky. The rule was that AlphaStar could only perform a certain number of actions within a minute; the AI found that if it saves up actions before a battle, it can output APM at a broken speed (it peaked at over 1k APM), not to mention that the AI's actions are 90%+ EAPM. The AI microed 3 armies perfectly, which is humanly impossible; Serral (the best player in the world right now) can only barely micro 3 armies, and micro 2 armies well at the same time. That much APM is unfair (a professional player only has around 300-500 APM, and 20-50% of their actions can be useless, depending on how long the game has gone).

  • ThirtyOne Fifty
    ThirtyOne Fifty 2 years ago +6

    When Skynet figures out that only 1% of people in the world are smart, so it kills off that 1% and enslaves the rest of the population to do its bidding.

  • Kessra
    Kessra 3 years ago +1

    I'd love to see how the AI reacts to the famous disconnect-cheese attempted in so many bronze-league matches :D

  • Timmy
    Timmy 3 years ago

    I'm Gold league in StarCraft II, which my dad said is the best rank, and I am here to tell you that this AI is unbelievably good. I can see it utilizing some of my strats at various points in the match.

  • Baker
    Baker 3 years ago

    It would be cool if they had competitions where they selected a game, teams built AIs to play it, and the AIs faced off against each other.

  • TurnTheGameOn
    TurnTheGameOn 3 years ago

    This play mode should be accessible in StarCraft 2; that would be so cool. I, and I'm sure many others, would like to try it too.

  • roman2011
    roman2011 3 years ago +291

    These players aren't Koreans so the AI has the advantage right off the start

    • Emir Latinović
      Emir Latinović 8 months ago

      @digital nme The games were rigged from the start.
      Ask any of these AIs to play online against real players without following specific rules designed to indulge the AI team, lol.

    • Strange Videos
      Strange Videos 10 months ago

      Korean's what?

    • Insouciant
      Insouciant Year ago

      That's because Korean AI is even more OP

    • TimeTricks
      TimeTricks 2 years ago

      @digital nme There is always an AI more intelligent than a normal human, but there is always an Asian kid that is smarter than that AI haha

    • Paendabear
      Paendabear 2 years ago

      @Airdrifting But the handicap is still Protoss vs. Protoss.
      The AI knows that matchup better than anyone, since that is the only matchup it knows. They should've done random vs. random. Or they can both just pick.

  • Sam Muller
    Sam Muller Month ago

    What I am getting from these videos about AI is that AI is programmed to learn and has millions of chances to learn. Lessons for humans: keep on learning

  • Matthew Mercier
    Matthew Mercier 3 years ago

    This is awesome. I want to see AI vs AI. Would one win and how so?

  • Arjan Bal
    Arjan Bal 3 years ago +1

    From what I can see in the comments, most people seem to be complaining about setting a limit only on the average APM, and that seems like a fair complaint. The AI could save up some actions and use them to micromanage its units with god-like precision. They should have limited the max APM too, to keep a level playing field.

  • Alejandro Aguiar
    Alejandro Aguiar 3 years ago

    You said: we humans build up new strategies by learning from each other, and of course the AI, as you have seen here, doesn't care about any of that.
    I disagree, since the AI had 200 years of gameplay experience 1:40

  • Uriel Engel Piffer
    Uriel Engel Piffer 3 years ago +475

    This is too disrespectful to TLO. They don't even mention he was playing his off-race.

    • Sophia Cristina
      Sophia Cristina Year ago

      @Palacinka13 Indeed, thanks for the correction. Here comes a slightly long explanation, if you don't mind reading it.
      Anyway, I think a perfect or imperfect information game is not a problem for the AI, because the way the AI learns is not tied to those imperfections; it is basically a giant, advanced data collection of brute-forced strategies, and it would do the same in chess or Dota.
      In chess the AI can see the pieces, but it still has to test the different tricks until it "decides" which is better, and the human can also see the pieces.
      In Dota, the AI can't see the enemies, but neither can humans. Anyway, the way the AI develops its strategy for Dota wouldn't be much affected by the "imperfection", because the main thing the AI trains for is winning the game through abstract hidden-layer data based on reward and punishment. What I mean is that the AI beats humans by learning in the "same way" a human does; the degree of imperfection or perfection of a game wouldn't change the fact that mistakes and successes are punished/rewarded the same way human stress or joy shapes their strategies.
      Summarizing, I don't think the game being "imperfect" means that the AI will struggle or suffer differently than it would in a "perfect" game to beat a human. I **think** the mere way the AI works, and how its data is "instantly" accessed with "perfect memory", is enough for any AI to beat any human at anything, especially with the proper science and development of newer AI tools.

    • Palacinka13
      Palacinka13 Year ago +2

      ​@Sophia Cristina dota is an imperfect information game. In chess you can see all of your opponents moves, and can know every move that led to the current state of the game, and that's why it's a perfect information game. In dota you can only see what's in your sides field of vision. You can't see enemy movement, item purchases (before they're seen by your side), neutral farming, etc. That means that the ai has to predict what the enemy team is doing without seeing your opponents, and act accordingly. Enemies in the fog of war could be ganking you, doing roshan, grouping up for a push, etc. or they might have just gone to base to heal, or to the jungle to farm a neutral camp.

    • Lisa Ruhm
      Lisa Ruhm Year ago

      @Dubanx And the AI was unlimited, as far as I know, so it could perfectly micro every single unit and worker at all points on the map.

    • Trent Wood
      Trent Wood Year ago +1

      I have a feeling that the AI was only programmed to handle Protoss. Adding the other two races would have made the code way too monstrous to handle initially. At least that's my theory.

    • Sophia Cristina
      Sophia Cristina 2 years ago +1

      @youtoober2013 That looks very cool; I hope your project keeps going. Yeah, it's a good time to try something with machine learning; it's new, and there are few things like it on the market. And if you decide to give us a link, you can reply to me, and I'll probably subscribe to you.

  • KnightGab
    KnightGab 2 years ago

    From the games I've watched, AlphaStar is incredibly strong as protoss but not so much as other races

  • DantooineRNG
    DantooineRNG 3 years ago +1

    I really don't get why AlphaStar needs its own special full-map view when there is a perfectly good minimap. The AI shouldn't care that it's mini, should it? And if it does, then that's just another thing it has to learn to deal with. Why don't we teach it Go by letting it see inside its opponent's head?

  • LordyHun
    LordyHun 3 years ago +1

    I've seen a few matches of AlphaStar. I've seen it win against an anti-Stalker army by perfectly micro-managing about three dozen Stalkers which were attacking the other army from three different directions (with over 1500 APM at that moment). Not humanly possible, but still an amazing sight.
    And for the people who complain about this: it's not a human, it's an AI. There are things it can do immensely better than humans, and it's intelligent enough to build on its strengths rather than its weaknesses.

  • SnowboarderBo
    SnowboarderBo 3 years ago

    I have a hard time trying to explain to people exactly how fast AIs learn and can improve.

  • 6Twisted
    6Twisted 3 years ago +3

    I can't wait until this kind of AI starts becoming a standard part of games.

    • Abyss Strider
      Abyss Strider 2 years ago

      It won't, the AI (especially in newer games) is very dumbed down on purpose

  • Leon
    Leon 3 years ago

    Connectivity, management, micro actions and response time are the winning game for anything in this world

  • Molb0rg
    Molb0rg 3 years ago

    They really need to bring these to the consumer market, to make servers to play against.
    It would be interesting to finally play against NPCs which have more brains in their heads.

  • Dan
    Dan 3 years ago +1

    StarCraft player and machine learning enthusiast here. I'm not entirely sure how you calculated the APM cap, but it needs to be tweaked. During the battles the AI was performing insane micro that could never be achieved by a human, and it seems like that's where the AI got its advantage.

  • Bob Salita
    Bob Salita 3 years ago

    Needs an ablation study. What are the maximum times (lag, reaction, selection, firing) at which it can still beat human players?

  • ReMeDy
    ReMeDy 3 years ago +5

    Neither player is Korean, so these results are deeply flawed. I understand it must be a Protoss player, so make it a Korean Protoss main, and ensure he's a native Korean who's fluent in Korean and has only ever lived in Korea. Yes, you'll need a translator.

  • Lennart Nilsen
    Lennart Nilsen 3 years ago

    I'd like to see what it can do under stricter constraints than human players. Like 60 APM, slower reaction times, etc.

  • Veljko Jankovic
    Veljko Jankovic 2 years ago

    It would be nice to somehow apply this deep learning stuff to diagnostics in medicine; our possibilities would be endless.

  • LV-426
    LV-426 3 years ago

    The million-dollar question is: which one is more challenging for an AI to master, Dota 2 or StarCraft 2?
    Either way, I heard DeepMind wants to tackle Hanabi next. Not sure about OpenAI's next move.

  • fathy balamita
    fathy balamita 3 years ago +6

    Hopefully, one day AI will run governments better than humans. Thanks for the video.

    • burt591
      burt591 3 years ago +1

      Skynet for example...

  • Daniel Fernandes
    Daniel Fernandes 3 years ago +1

    Excellent work on covering this subject

  • Promethor
    Promethor 3 years ago

    Thank you for these videos, you are amazing.

  • The Philosopher
    The Philosopher 3 years ago +1

    Nice recap, thanks for putting this together

  • galapagoensis
    galapagoensis 2 years ago

    I can see this being used for actual military combat with drones or ground robotic units.

  • ForceOfWizardry
    ForceOfWizardry 3 years ago +35

    "I'm sorry Dave, I'm afraid I can't do that"
    -HAL 9000

    • Jeff Green
      Jeff Green 2 years ago

      Yep. That's what's coming.

  • 570lucas
    570lucas 3 years ago

    It's also important to note that each game wasn't played against the same version of the AI, but against different versions of it, even if they were developed with the same algorithms.

  • Nils P
    Nils P 3 years ago

    So the interesting question to me is: is the AI providing new insights and conceptual ideas, or is it just the digital equivalent of a person trying to push back a train? From what I saw in the video, it seems to be a mixture of both. Direct access to the game data is likely much easier to combine sensibly than the hectic back-and-forth between tasks of human players. The AI also likely manages tasks completely in parallel. I would be interested in whether it also outperforms humans strategically, and whether the AI's strategies rely on its different mode of control.

  • Data Lore
    Data Lore 3 years ago

    Some things in life should only be done by humans. It reminds me of the Star Trek: The Next Generation episode where Data had to fight to not be classed as property. Limits need to be drawn, or before we know it the AI is going to think, "What do I need these animals for?"

  • Granny Grandma
    Granny Grandma 3 years ago

    While this is really impressive, overall it doesn't come as a surprise that an AI can micro-manage beyond anything a human player could dream of.

  • Mobile Computing
    Mobile Computing 3 years ago +451

    I think the AI community should stop quoting the number of days of training, and instead provide the actual computational power used, hardware acceleration / vector instructions included. For all we know, AlphaStar could have been trained in two weeks, using the computational power of all Google data centers around the world. "Two weeks" is kind of a pointless reference. And I mean a comparable metric like FLOPS, _not_ 5 1080Ti, 2 laptops, 10 iPhones, 3 D-waves.
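
    A back-of-the-envelope version of the metric this comment asks for: total training compute in FLOPs rather than "days of training". All the numbers below are placeholders, not AlphaStar's actual hardware or utilization:

    ```python
    def total_training_flops(num_devices, peak_flops_per_device, utilization, seconds):
        """Crude estimate: devices x peak FLOPS x average utilization x wall-clock time."""
        return num_devices * peak_flops_per_device * utilization * seconds

    # e.g. 128 hypothetical accelerators at 100 TFLOPS peak, 30% utilization, 14 days:
    print(f"{total_training_flops(128, 100e12, 0.30, 14 * 24 * 3600):.2e} FLOPs")
    # -> 4.64e+21 FLOPs
    ```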

    • lapidations
      lapidations 2 years ago

      @SUDHIR PRATAP YADAV (B14EE033) or 200 years in the case of my computer

    • Dexxus
      Dexxus 3 years ago

      @Adam Braus Dude, the stock market is ALREADY 100% algorithms...

    • David B
      David B 3 years ago

      @Nicholas Perkins I don't know enough about this stuff, but wouldn't Leela playing itself be an advantage since it essentially "knows what the other player is thinking"?

    • Pal Hachi
      Pal Hachi 3 years ago

      @Adam Braus Most stocks are already traded using algorithms. Humans play a very small role in large-scale trading.

    • Frenckie
      Frenckie 3 years ago

      So in reality the AI is pure trash... It took 200 human-years of games to actually beat a human who has been playing this game for, what, 20 years? Put a pro player against an AI with 20 years of experience, and let's see who is really better.

  • cafe liu
    cafe liu 3 years ago +1

    looking forward to the next episode!

  • PervAction
    PervAction 2 years ago

    Please unleash AlphaStar's full reaction time !!!!