Why AlphaStar Does Not Solve Gaming's AI Problems | Design Dive

  • Published on Feb 13, 2020
  • Support AI and Games and help the show grow by joining my Patreon:
    www.patreon.com/ai_and_games

    --
    With my recent episode on AlphaStar in the main AI and Games show, I wanted to talk a little about how this research won't really impact game development anytime soon, or at least not in the way you might think.
    --

    AI and Games is a TheXvid series on research and applications of Artificial Intelligence in video games. It's supported through and wouldn't be possible without the wonderful people who support it on Patreon, plus TheXvid members and Paypal donations.

    www.patreon.com/ai_and_games
    thexvid.com/channel/UCov_...
    www.paypal.me/AIandGames

    --

    Get yourself an AI and Games t-shirt over on Teespring!
    teespring.com/stores/aiandgames

    You can follow AI and Games (and me) on Facebook and Twitter:
    AIandGames
    AIandGames
    GET_TUDA_CHOPPA

Comments • 151

  • Jahrazz Jahrazz
    Jahrazz Jahrazz Year ago +160

    I think it is also really important to differentiate between fun AI and hard AI. AlphaStar is great for pro players to have an opponent they can still learn from, but for the average gameplay AI you don't want the AI to be hard, you want the AI to create a fun gameplay experience.
    For competitive RTS games it is good to have an AI that plays like an experienced player, so new players can learn from it by watching/playing it, but it is not necessary to use machine learning; for example, Age of Empires 2: Definitive Edition upgraded the old AoE2 AI to use actual meta strategies and micro.

    • Tonechild
      Tonechild Year ago

      I would much rather have an AI that doesn't "cheat" like in most strategy games (like getting tons of gold etc) and is a real challenge

    • Ronald Luc
      Ronald Luc Year ago

      One can save several snapshots of the AI model during training and end up with granular "hardness" of AI opponents for any skill level.
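
      The checkpoints-as-difficulty-levels idea is easy to sketch. Below is a minimal Python illustration; the `Agent` class and its scalar `strength` are invented stand-ins for the example (a real system would snapshot network weights, not a number):

      ```python
      import random

      class Agent:
          """Stand-in for a learning agent; 'strength' grows with training."""
          def __init__(self, strength):
              self.strength = strength

      def train_with_checkpoints(total_steps, checkpoint_every):
          """Snapshot the agent at regular intervals during training.

          Each snapshot is a frozen copy that plays at that point's skill
          level, giving a ladder of opponents from weak to strong."""
          checkpoints = []
          strength = 0.0
          for step in range(1, total_steps + 1):
              strength += random.uniform(0.5, 1.5)  # pretend learning progress
              if step % checkpoint_every == 0:
                  checkpoints.append(Agent(strength))
          return checkpoints

      def pick_opponent(checkpoints, player_skill):
          """Pick the snapshot whose strength is closest to the player's."""
          return min(checkpoints, key=lambda a: abs(a.strength - player_skill))
      ```

      Matchmaking then becomes a lookup into that ladder rather than a separate training run per difficulty.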

    • purpleganja2
      purpleganja2 Year ago

      ​@Jumpnit Yeah, I have seen AlphaStar attack its own buildings at least twice in gameplay analysis videos. I thought maybe it learned to do that from some players misclicking but ultimately winning the match, lol.

    • Parrotsticks
      Parrotsticks Year ago

      That's something id Software got right in developing Doom 2016. They fill a map with demons, but if they're allowed to just attack optimally it starts to look silly, as they bunch up while chasing you, attack in unison, etc. They created a token system to keep the demons in check: a demon needs to ask this system for permission to follow or attack the player. If denied a token, it hangs back and waits for its turn.
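
      As described, the token system is essentially a counting semaphore over attackers. A minimal Python sketch (class and method names are invented for illustration, not id's actual code):

      ```python
      class AttackTokens:
          """Caps how many demons may chase/attack at once, Doom-2016 style."""
          def __init__(self, max_tokens):
              self.max_tokens = max_tokens
              self.holders = set()

          def request(self, demon_id):
              """Grant a token if one is free; a denied demon hangs back."""
              if demon_id in self.holders:
                  return True
              if len(self.holders) < self.max_tokens:
                  self.holders.add(demon_id)
                  return True
              return False

          def release(self, demon_id):
              """Called when a demon finishes its attack or dies."""
              self.holders.discard(demon_id)

      # Demons denied a token idle or reposition instead of mobbing the player.
      tokens = AttackTokens(max_tokens=2)
      actions = {d: ("attack" if tokens.request(d) else "hold")
                 for d in ["imp", "pinky", "hellknight"]}
      ```

      Releasing a token when an attacker dies or backs off is what keeps the pressure constant without the whole pack dogpiling.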

    • Jumpnit
      Jumpnit Year ago +1

      @Kent talks tech I don't know anything about the coding of the AI, but I have watched many games it has played and I don't think this is 100% accurate; I have seen it target its own units multiple times when trying to focus fire an enemy unit. Perhaps you are correct when it is selecting its own units, but there is clearly an area-target hitbox it has to select with targeted commands, or friendly fire with non-area-based attacks would be impossible.

  • NukeMarine
    NukeMarine Year ago +45

    The cost to train a single agent was actually the most interesting aspect of the video. I was aware of the amount of days and human hours equivalent of play time, but seeing the cost in the millions put things in perspective.

    Your other point about needing to retrain agents is also being researched. Instead of redoing training from scratch (as you noted, an expensive process), they try to update the meta in an existing agent. They described it as doing the equivalent of brain surgery to the code.

    As for use of deep learning in games, we're going to see all sorts of uses not just in opponent and NPC actions, but in how the game is run. There have been amazing papers released on using deep learning to aid in the rendering of graphics and physics. Also, as reported a year or so ago, developers won't need 44 days of training for many aspects of AI as NPCs likely won't need to beat top StarCraft players if the NPC is random thug #4 in GTA VII.

    • Fleecemaster
      Fleecemaster Year ago

      @Hanclok Yeah, that was for all ~300+ agents, but they are all programmed on the same meta, so would still all need to be retrained in this example.

    • Hanclok
      Hanclok Year ago

      Sorry, quick correction: it doesn't seem to be the cost of a single agent, because AlphaStar v2 consists of thousands of agents as far as I'm aware.

  • Scrysis
    Scrysis Year ago +20

    Now I'm wishing we had a deep dive on Black & White. I loved that game and always wondered how the creature worked.

  • Mike Houle
    Mike Houle Year ago +5

    Feels like Mimir started a channel on AI and game design; from the accent and inflections, all the way to the incredibly knowledgeable and eloquent presentation. Great stuff!

  • Pat MaCrotch
    Pat MaCrotch Year ago +18

    I never got the impression that AlphaStar was being designed to help AI implementation in video games. I took AlphaStar the same as that robot they taught to play chess.

    • jhfiyugy8g
      jhfiyugy8g Year ago +3

      I don't think he's arguing Google was trying to fix videos games. More that some in the video game community may be expecting devs to implement this technology themselves.

  • TYNEPUNK
    TYNEPUNK Year ago +2

    Hi Tommy, I watch your vids every day :) I'd love to know your thoughts on a "nav mesh in the sky". I'm trying to code flying AI right now and wondering how to get something at least nearly as good as a nav mesh, but in the sky... or any other way to make a simple flying AI that avoids walls etc. I guess I could just raycast and change direction. I even thought you could put cubes up there, bake navigation, then turn them off, but then it wouldn't really work in the true y-axis etc. Thanks for all the great vids; my AI of about 8 months now was built originally from your tuts.
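
    One common nav-mesh-free approach for flying agents is exactly the raycast idea mentioned above: sample a few candidate directions, discard the blocked ones, and steer toward the clear direction closest to where you want to go. A minimal self-contained sketch, with spheres standing in for whatever the engine's physics raycast would hit (all names here are invented for the example):

    ```python
    def ray_hits_sphere(origin, direction, center, radius, max_dist):
        """True if a ray (unit-length direction) hits the sphere within max_dist."""
        ox, oy, oz = (center[i] - origin[i] for i in range(3))
        t = ox * direction[0] + oy * direction[1] + oz * direction[2]  # projection
        if t < 0 or t > max_dist:
            return False
        closest_sq = ox * ox + oy * oy + oz * oz - t * t
        return closest_sq <= radius * radius

    def steer(origin, desired, obstacles, candidates, look_ahead=5.0):
        """Return the candidate direction nearest to 'desired' whose ray is clear."""
        def clear(d):
            return not any(ray_hits_sphere(origin, d, c, r, look_ahead)
                           for c, r in obstacles)
        open_dirs = [d for d in candidates if clear(d)]
        if not open_dirs:
            return None  # boxed in: caller should brake or climb
        dot = lambda a, b: sum(x * y for x, y in zip(a, b))
        return max(open_dirs, key=lambda d: dot(d, desired))
    ```

    In an engine you'd replace `ray_hits_sphere` with the built-in physics raycast and call `steer` each tick with a small fan of directions around the current heading.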

  • M
    M Year ago +4

    So many videos! Hey Tommy, while I've been very interested in AI from a more abstract and theoretical designer perspective for a long time, I'm only now getting into the programming, and feel quite lost... I've got a partial prototype of an enemy that uses pretty much all the different factors I think I want in a full package, and was wondering if you could give me a lead on what kind of AI programming system I should invest my time focusing on in my learning process.
    (It's gonna be simple and pure top-down 2D with zero verticality simulation)

    ''Tries to keep line of sight on as much of the area spanning x distance in any direction the player could move, even takes priority over direct line of sight on player if area is big enough.
    Has a less heavily weighted preference to stay close to a certain sweet-spot distance away from the player.''

    Basically, I'm describing three conflicting goals it tries to balance to get the most overall value; each goal has its own value to the unit.
    Direct line of sight is an absolute goal, but ''staying close to the sweet spot'' isn't, and that thing about keeping line of sight on x distance over any direction the player could move... where do I even start with that one?

    Anyway, if you, or anyone else, has any good sources or advice on how to achieve these things, I'd be VERY appreciative.
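
    What's being described is a utility (weighted-scoring) system: each goal scores candidate positions, the scores are summed with weights, and the enemy moves to the highest-scoring position; "absolute" goals just get weights large enough to dominate the soft preferences. A toy sketch under those assumptions (all names and numbers invented; the line-of-sight check is a stub for whatever visibility test the game provides):

    ```python
    import math

    def score_position(pos, player_pos, goals):
        """Weighted sum of per-goal utilities; the enemy moves to the best pos."""
        return sum(weight * utility(pos, player_pos) for utility, weight in goals)

    def sweet_spot_utility(ideal, falloff):
        """Peaks at the ideal distance from the player, fading with error."""
        def utility(pos, player_pos):
            dist = math.dist(pos, player_pos)
            return max(0.0, 1.0 - abs(dist - ideal) / falloff)
        return utility

    def line_of_sight_utility(has_los):
        """1 if this position sees the player, else 0."""
        def utility(pos, player_pos):
            return 1.0 if has_los(pos, player_pos) else 0.0
        return utility

    # Hard goals get large weights so they dominate soft preferences.
    goals = [
        (line_of_sight_utility(lambda a, b: True), 10.0),   # absolute goal
        (sweet_spot_utility(ideal=6.0, falloff=6.0), 1.0),  # soft preference
    ]
    candidates = [(0, 0), (6, 0), (12, 0)]
    best = max(candidates, key=lambda p: score_position(p, (0, 0), goals))
    ```

    The "cover the area the player could move into" goal fits the same mold: score a position by how many sampled points within x distance of the player it can see, and give that term its own weight.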

    • FabulaM24 [McAfee†]
      FabulaM24 [McAfee†] Year ago

      Yeah, just do anything other than writing this text and you are good.

      behavior trees

      pathfinding

      machine learning

      AWS has a lot about machine learning right now, so if you want to know how professionals write code that filters the ocean or makes a car drive, it's your chance.

      The alternative to that would be OpenAI, but I never used it.

      Game engines are a good tool, and there is third-party software for that, like one called RAIN that did the Half-Life AI, I think? Really not sure; it's compatible with the Source engine though.

  • Anthony H
    Anthony H Year ago +1

    Great video! I love the realistic view of the situation. ML is amazing but not widely applicable yet

    I also didn’t know Black & White has neural networks! Time to learn more

  • Pohjoisen vanhus
    Pohjoisen vanhus Year ago +3

    Last month or so, Communications of the ACM had an article specifically about how computer scientists are trying to turn the black box that is a trained AI into something that can be inspected and understood, so it's an active area of research.

  • Kevin Griffith
    Kevin Griffith Year ago +3

    If it weren't for the prohibitive cost of building and training the AI, I would say that an AI like AlphaStar could be a very effective tool for balancing gameplay that is generally difficult to quantify. Essentially, by taking out factors like player skill or community bias (one of the characters/factions being favored over the others for non-balance reasons) and repeatedly testing, you can get a substantial amount of high-quality game balance information, particularly if you're already set up to record useful data from the matches (how much of a unit gets made, how many times an ability gets used, things like that). At the moment, though, Blizzard is already very effective at gathering data from online matches, so unless the process becomes *much* cheaper the industry will likely stay the course.

    • Fleecemaster
      Fleecemaster Year ago

      Researcher here. Like pretty much everything in research, costs for training will decrease exponentially as the technology develops. It might take some years, but the training example given here will drop from £3 million to £3k, then eventually to hundreds. Look at the human genome as a comparison: the first human genome to be decoded cost £5 billion. While I was at uni about 12 years ago, my friend was working on a machine that could do it for just £1000. Now it can be done for like £10 a pop. Insane difference, but you see that in everything. I think it'll be quicker than the ~30 years the genome took, but it will happen! One day training an AI will be as frivolous as running a function in Excel.

    • AI and Games
      AI and Games  Year ago +2

      AI for testing is increasingly more common. Got an upcoming episode that looks at exactly that.

  • Topher Doll
    Topher Doll Year ago +2

    On the topic of adapting to patches, I found that interesting because your notes are exactly what humans deal with. They use old-patch builds and ideas initially and adjust as those builds are proven to be good or bad. I think the difference is that humans theorycraft the moment patch notes show up, trying to predict how changes will impact their build and then testing those theories, something AlphaStar wouldn't do. So humans should (big "should", because often the theories we come up with don't match the actual game in testing) have a leg up in adjusting to changes. I thought that topic was interesting, and how it compared to how humans adjust to change.

    • Michael Nurse
      Michael Nurse Year ago

      This is a good thing for the majority of players, because it means once a month or whenever, everyone returns to an equal footing. Chess has no patches; as a result, no one is going to take the #1 spot from Magnus Carlsen until he retires. Tennis has a similar issue, with virtually no one besides the big 3 winning grand slams over the last two decades.

  • Kalenz
    Kalenz Year ago +6

    The shocking revelation of AlphaStar's capabilities is not that it will solve AI problems in game development. It's that we now have AI that beats players at fast-paced, multitasking strategy games.
    It's a milestone, way more impressive and frightening than chess AIs beating chess masters.
    It's another proof that AI can replace any human output whether physical or mental.
    Reeducation of workers is a big topic in order to deal with automation and prevent people from becoming unemployed.
    There is no guarantee that by the time their reeducation is complete there won't be an AI that can take over the job they just learned.

    • Fusilier
      Fusilier Year ago

      That's why 'well, we need to just give humans Other Jobs' is so deeply not-the-answer that it's shocking and painful. The brutal paradigm of 'every human must work or they shall starve, so sayeth the Lord' needs to change, and fast, because technology, and more importantly the way it's used, has already rendered the prospect of full meaningful employment impossible.

      So to get to full employment now, you'd have to use stupid makework jobs. Either capitalist useless cubicle-hell money-changing market accumulation that gives precisely nothing to civilization or society, only takes, or the classic Soviet 'build a wall one day, tear it down the next, but hey it's work comrade'.

      Bluntly, our society now faces the problem of 'surplus population', and there's two general approaches to take towards this. Either redefine what's 'surplus' and bring a new paradigm where people matter, not dollars, or... reduce the surplus.

  • Vincent Oostelbos
    Vincent Oostelbos 9 months ago +1

    In response to point 1: This is probably true, although when it comes to designing an AI that is able to compete with human players, especially for RTS games where you're simulating a player rather than a character in an RPG game or some such, I think it often does not matter too much if you know why the AI chooses something.

    In response to point 2: It is possible to train these agents without relying on human data (or gameplay data from other AIs) to bootstrap them. See AlphaZero (as opposed to AlphaGo) and MuZero (which does not even rely on precoded knowledge of the game) for examples.

    In response to point 3: This is a serious problem, for sure. I have nothing to counter here, except maybe that this sort of thing could be done (especially for older games) via fan projects from the community, rather than the developers themselves... although you are at that point talking about a slightly different thing, of course. I only hope that progress will be made to make this quicker and cheaper, as you addressed in the video.

  • Davivd2
    Davivd2 Year ago +3

    I see the potential for this in sports games like NBA 2K. Sports games have always suffered from the AI opponents being stuck in specific play styles and patterns. If you have a team with dynamic wings who can cut to the basket off of screens, or have a team with a dominant post player surrounded by 3 point shooters who can hit open 3's when the post player gets double teamed, it does not matter. The AI will always play the same way. You can tune the sliders in game to try to make the AI run more plays for your post player, and nothing changes. Kareem Abdul Jabbar will never be the highest scorer on the AI's team but some small forward or shooting guard will, no matter how much better your center is. Every team and every game plays out the same way for the AI with very little diversity.

    End game scenarios in a basketball game are dynamic. How much time is left on the clock, how many time outs the AI coach has left, how far ahead or behind the AI team is on the scoreboard; all of those things come into focus when you need to make a decision about taking a 3 point shot or a 2 point shot, late game fouling. Right now in game AI just have a pattern and don't even realize that with 8 seconds left in the game that they need to take a 3 point shot to at least tie the game.

    I see basketball games, and other sports games, benefiting tremendously from an AI that can actually change tactics depending on the situation it finds itself in. I would love to play a basketball game where the best player gets the most chances to score, and not just the 2 wing players taking the majority of the shots. I would love to have to change my defensive strategies every game to compensate, and even in-game, with the AI adapting to my adjustments. The potential for AI in sports games cannot be overstated. 2K looks nice and has very fluid movements. You can do nearly anything in the game that real players can, and the physics engine replicates player movements really well. But strategically, 2K and other sports games are still a mess.

    • cpowerca
      cpowerca Year ago

      Nah, games like NBA 2K are too simple; an AI will probably find an optimal winning strategy very, very soon and just abuse it non-stop.

  • The NetherOne
    The NetherOne Year ago +2

    If I was going to use deep learning in a video game as the default AI, I wouldn't train it in a lab...
    I'd use the game's own single-player mode as the training ground; every player connected to the internet would be training the AI whether they realised it or not. (insert evil villain laugh)

  • Skynet
    Skynet Year ago +46

    Why kill them all, when you can beat them all at every video game they come up with and make them feel inferior for all of eternity

    • Weeble Wabble
      Weeble Wabble Year ago +1

      @ThisIsRTSThree999 I don't know what you're talking about. You could build an AI that could curb stomp any player; they already had a version of AlphaStar without any rules or limitations. No one would have a chance; it'd be a calculated defeat. But the recent version of AlphaStar you see in Grandmaster has many more rules to follow, such as APM limits, vision limitations, whatever human limits they've recently been trying to implement. It just takes time; AlphaStar is still a new thing. I know in this day and age technology progresses so rapidly, but have a little patience. If you don't think computers can beat you at any game... you're wrong. Simply put.

      The challenge is creating an experience that is achievable through human means, so that when that awesome AI does inevitably beat you, you can take your game to the next level, artificially. Since the meta took decades to become what it is now, AlphaStar can evolve the meta faster than human competition can. Which doesn't sound like a big deal, but what if a game was dead and no one good plays it anymore? You could still queue up against good ol' Grandmaster AlphaStar. Which you might say isn't much, since it's possible to beat it, sure. But it's still at that same pinnacle.

      I mean, the implications of it are insane. Take the logic that AlphaStar applies to the in-game world and plug that information into the existential reality we exist in. What if machine learning could do scientific research according to parameters set not by us, but by the universe? How fast it could evolve our technology, the economy, the systems so complicated people need to go to college for over a decade to even understand. An AI could learn that intuitively and manage things we once never thought possible in society: rapid-response emergency services, or even preemptive emergency services. But yeah, if you want a version that can beat you at your little video game, that already exists. It just doesn't serve a better purpose than the one that can lose to us.

    • ThisIsRTSThree999
      ThisIsRTSThree999 Year ago

      @Weeble Wabble Serral, HeroMarine, Special, Showtime... hell, even JimRising xD ... What I find deceiving is when they say that AlphaStar beats 95% of players like that was a good thing. I'm part of that 95%; it's the percentage of people who don't know how to play. If an AI beats bronze or platinum players, that's whatever, that's nothing, no real accomplishment. I'm still waiting for the day I see an AI capable of consistently defeating the players who know how to play SC2. ... The true victory percentage of AlphaStar is about 1% as measured against the players who know how to play (pro players); 1% is the measure they should quote, because it shows how well AlphaStar plays (aka not well at all).

    • Weeble Wabble
      Weeble Wabble Year ago

      @paddy Serral makes short work of any AI.

    • paddy
      paddy Year ago

      They are not quite beating us all (every time) in SC2 yet; it wasn't 100%.

    • AI and Games
      AI and Games  Year ago +18

      Quiet you.

  • Failing at Stuff
    Failing at Stuff Year ago +2

    I think a really good use for the AI in game development is as an artificial play tester, because it can quickly find exploits and cheese strats that you want to patch up.

    • Naum Rusomarov
      Naum Rusomarov Year ago

      For strategy games (I can't speak for other genres), I was actually thinking of keeping the bots fresh for newbies and those who just want a good, realistic fight with someone at or slightly above their level, but don't want to play with other people. In strategy games, bots and other NPCs tend to become stale after a while because they have a limited set of behaviours; they are also not great practice if you want to play against human players. If you have the infrastructure to properly train bots with ML technologies for your game, then this would be the sweetest cookie in the jar.

    • totalermist
      totalermist Year ago

      My thoughts exactly. The major advantage is that exploratory algorithms can just run 24/7 in parallel with manual play testing during development and find anomalies (i.e. bugs or bad map/level design). It's much cheaper than training a god-tier AI opponent and might save a lot of time, especially since the algorithms are usually quite good at exploiting flaws, since they don't subconsciously "follow the rules".
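
      The exploratory-tester idea can be shown in miniature: drive a game with random inputs around the clock and log any state that violates an invariant. A toy Python sketch with a deliberately planted bug; the mini-game, its rules, and all names here are made up for illustration:

      ```python
      import random

      def fuzz_game(step_game, is_anomaly, episodes=100, max_steps=50, seed=0):
          """Hammer the game with random inputs and log every state that
          trips an anomaly check (out of bounds, impossible score, etc.)."""
          rng = random.Random(seed)
          anomalies = []
          for ep in range(episodes):
              state = {"x": 0}
              for step in range(max_steps):
                  action = rng.choice(["left", "right"])
                  state = step_game(state, action)
                  if is_anomaly(state):
                      anomalies.append((ep, step, action, dict(state)))
          return anomalies

      def buggy_step(state, action):
          """Toy level: x is meant to stay in [-2, 2], but the left clamp is missing."""
          if action == "right":
              state["x"] = min(2, state["x"] + 1)
          else:
              state["x"] = state["x"] - 1  # bug: should be max(-2, state["x"] - 1)
          return state

      # The invariant check is where design knowledge goes; the exploration is dumb.
      bugs = fuzz_game(buggy_step, lambda s: not -2 <= s["x"] <= 2)
      ```

      A real setup would swap the random policy for a trained or curiosity-driven agent, but even blind fuzzing like this surfaces the broken clamp within a few episodes.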

  • sockatume
    sockatume Year ago

    Do you think that any genres are sufficiently standard (e.g. arena FPS) that a "bootstrap" AI could be created for training? Or would that require a kind of general intelligence that NNs aren't ready for? Would AlphaStar even be of any use in StarCraft sequels?

  • WiseWeeabo
    WiseWeeabo Year ago

    One way ML can be used is to have instant bots in your game for a particular genre (say FPS games, MOBA games, tactical battlers, etc.); it could even be modular, to add behaviour for point capture or flag capture.

  • Parrotsticks
    Parrotsticks Year ago

    What's interesting to me is that although Alphastar is intended to play more creatively, if you watch its games it's normally winning through consistent macro and accurate micro. However, its losses are often from a strange lack of insight into the game, or repeating the same mistakes over and over, never adapting mid-game when the human opponent starts taking advantage of that.

    It still acts "like a computer"

  • Chidori011
    Chidori011 Year ago

    Games at the moment just provide an interesting challenge for understanding how AI development works. The intent behind this is never to actually develop better AI for games. Games just provide a realistic challenge set while still having relatively strict rules and mountains of data available to feed the learning. The interest that Google and Facebook have in this is data analytics and big-data mining applications after the technology has advanced further. If we develop more advanced AI learning or development methods that facilitate usage outside academic study... using them to make better opponents in games is basically last on the list.

  • DartStone
    DartStone Year ago

    It may be possible, though, to use a subset of GANs (Generative Adversarial Networks). But for now there are no out-of-the-box solutions, and you weren't wrong in stressing how important cost effectiveness is. Even if EA or Activision have millions to put into this kind of R&D, they won't do it because they have no guarantee on the return on investment; they are more focused on designing new manipulative microtransaction systems. But you are also right in saying that ML is mostly used, for now, in graphics and sound. Nvidia already showed promising results back in 2016-2017 with lip syncing and on-the-fly animation, or with audio synthesis. I will be the happiest person when I'm able to hear an NPC call me by my character's name. I also witnessed in research labs astonishing tech used to apply and deform textures according to the deformation of the 3D mesh.

    Damn that's a long post. All of that to say that video game and computer graphics have good days before them.

    And again nice vid :)

    • DartStone
      DartStone Year ago

      @Michael Nurse Didn't EA (or Activision) publish a patent involving ML and spending habits or something? I'm pretty sure they are already working on it, if it's not already in use :/

    • Michael Nurse
      Michael Nurse Year ago

      Micro transaction optimization by DL would be easy to implement, I am certain they already do it.

  • Jon Watte
    Jon Watte Year ago +1

    An AI opponent is very different from NPC AI. To build good AI opponents, I think trained models could work in the near future.

    Also, trained networks CAN be opened up and examined. The gradient backpropagation algorithm of deep networks is eminently visualizable.

    Self play can actually help in building agents before release, and in game tuning.

    The cost of training may be no different from, say, the cost of calculating level lighting in a large Unreal Engine game. Sure, it will be AAA tech first, but that's where most expensive tech makes its debut.

    To be fair, though, I believe in hierarchical, ensemble networks. "One big network" is slow and expensive to train, and can be fragile. Small components that can each be trained and assembled, with knowledge of game mechanics, looks really exciting to me.

  • Naum Rusomarov
    Naum Rusomarov Year ago +1

    I actually strongly disagree with you about the costs; the other stuff you mentioned is fine, imho. It is very hard to meaningfully compare the costs of developing AlphaStar with what would happen in a commercial game, because the former is a research project and the latter would be a commercial product with a very limited timeframe and budget. Almost all research projects have very large costs; this is not a secret. Doing research is actually very expensive -- most things don't work, those that do might not be feasible, and the few ideas that work and are feasible often require large investments in dollars and man-hours before they are even remotely viable for anything. Anyway, if the conclusions of their research are sound and the field keeps developing, then it would take quite a few more projects before the results can actually be put into a more commercially viable product, if that's Google's goal in the first place. But if that happens, I'd expect the costs to drop by at least a factor of ten.

    The rest of the comments address my other concerns well, so I won't write about them in my comment. Good video anyway.

  • Billinous
    Billinous Year ago

    I fell off EA sports games nearly a decade ago, as each game vs the computer feels exactly the same. If EA could use some of that Ultimate Team money to implement a dynamic AI such as this one, I would get back in without question. I'm just imagining the replay value, knowing that each team would play similarly to the real-life games we see on TV.

  • restlessfrager
    restlessfrager Year ago

    Playing the original StarCraft's menu theme, in a time when Activision is transforming Blizzard and its IPs into a huge cow to milk.

    The nostalgia burns my soul.

  • Royalist
    Royalist Year ago +2

    But the DeepMind project isn't meant to simply design AI for games; this isn't the end goal. I don't really see the issue here?

  • Cavemantero
    Cavemantero Year ago

    The problem with an AI is it can still 'cheat' and be exploited... Lowko's video playing against it showed it make a barracks selection that would be completely impossible for a human player, while also showing clips of it floundering at times in ways that would've been exploited in the right circumstances by the opponent.

    • Gabriel Andy
      Gabriel Andy Year ago

      The cheating is just a matter of fixing it. The first versions of the AI had a very high APM; then they lowered it and made the AI move the camera too (it did not have to move the camera before). However, it still does these weird clicks, though they must be on screen... this could easily be fixed by making the AI unable to select things that are very close to the border.
      Basically, AlphaStar started very cheaty, and as the developers updated it, it got less cheaty/more balanced... I would say it's pretty fair now though.

  • Billy
    Billy Year ago

    I think you missed a possibility for neural networks at smaller studios: they can ship the game with a much worse AI as well as a "hard-coded" AI, have it learn in a distributed way using examples from customers, and then, as time goes on, update the AI in the game until it outperforms the hard-coded one.

    Granted, updates still pose an issue.

  • Quickshot0
    Quickshot0 Year ago

    So with some luck, by 2030 this technology would be a lot cheaper to implement* and maybe you could use it for some Game AIs... though by then technology will probably have moved on and perhaps far more useful approaches will exist.

    * Due to making the process more efficient and substantially improved hardware capability.

  • Tim Ogul
    Tim Ogul Year ago +1

    Given the StarCraft example, I think there's a clear use for this sort of thing: *multiplayer obsolescence.* Plenty of games are popular at first, but then decline in popularity over time. If they require a large and healthy multiplayer community to be viable, to keep queue times low, then they can death-spiral very quickly. But what if you trained AI to pick up the slack?

    Building an effective AI for launch is, as you note, very difficult. But building an AI _after_ launch might be more viable. Just constantly feed player data in to it, Have some computers grinding away at AI training, and over months or even years the AI can get better and better at the game. Then, assuming the game is at least moderately successful, when it gets into its declining years, AI players can start to pick up the slack of missing human players, and as the community gets smaller and smaller, nobody even notices, and what players remain that really enjoy the game (and might keep spending on it) will just keep playing and having fun.

    • Billinous
      Billinous 22 days ago

      Agreed. The annual sports games that barely change (Madden/FIFA/NHL/etc.) make serious billions annually for Electronic Arts. They could use data from the earlier iterations and simply add various DeepMind difficulties; this would move the games to unprecedented levels of enjoyment and even lure millions more customers to their already rich franchises. Also, it should never be a single difficulty, otherwise there is no point in adding a DeepMind AI system to a video game. Have 3 settings using Bronze, Silver and Diamond league players' data.

  • mrasap
    mrasap Year ago +1

    I was expecting a proof that this is an NP-complete (NPC) problem and that heuristics are merely an approximation and not a solution.

    I guess my expectations were wrong.

  • AI and Games
    AI and Games  Year ago +23

    It's time to put the AlphaStar chat to rest. With this Design Dive episode I'm giving my 2 cents on the practical applications (and otherwise) of deep learning in games right now. Currently got some fun topics lined up for later in the spring. But first, I've got a big deadline to hit by the end of the month.

    Don't worry, you'll know what I'm talking about when it hits.

    • Yannick
      Yannick Year ago

      AlphaStar learns from self-play, not human play, making that point invalid; otherwise, great video.
      Edit: OpenAI invented "AI surgery" for its Dota 2 AI, which can continue learning after a patch.

    • Time on my hands
      Time on my hands Year ago +2

      Do you think the next leap/trend for game AI will be in construction of procedurally generated levels/worlds and missions rather than NPC behaviour? Live service game devs would love this - have AI build levels indistinguishable from human-created ones and no longer have to keep teams of designers working on these games.

    • · 0xFFF1
      · 0xFFF1 Year ago

      Two Minute Papers has a recent video showing that you can update an AI without retraining it from scratch. You really should follow that channel if you don't already.

  • FischiPiSti
    FischiPiSti Year ago +1

    The video portrays the current cost as a roadblock for the future, but to me, the trend says it will not be the case for long. AlphaStar proved that the model works; that's the most important thing to consider. Yes, it was very expensive, but if you look at Nvidia, they are bringing AI hardware even to consumers, and with each generation AI becomes more and more accessible and efficient. And as more and more devs become trained in deep learning, hiring devs who can build such systems becomes less expensive too. And with the total cost going down, retraining becomes less of an issue too.
    It's also worth noting that a game like Starcraft is about the most difficult example. For other strategy games like turn-based strategy, card games or grand strategy, I suspect it would be much less complex and easier to apply deep learning AI, as you don't really need to create specialized interfaces the AI can operate in.
    It would be great if you could make a video about AI in grand strategy games, the Paradox games in particular, because AI is a hot topic right now in games like Stellaris - because of how bad it is.

  • phobos2077
    phobos2077 Year ago

    Not many know about it, but the STALKER games actually used neural networks for some portion of their AI. Made it hard as nails to mod these specific parts :)

  • Frostile
    Frostile Year ago

    I would honestly love to hear about AI in racing games as a personal fan of the genre

    • Robko
      Robko Year ago

      It's quite easy and unremarkable compared to AlphaStar or the OpenAI Dota 2 bot. I trained a bot to play a racing game better than me. Nothing special.

    • AI and Games
      AI and Games  Year ago

      Forza's going to happen eventually when I find time - and patrons vote for it.

  • Nicholas Perkins
    Nicholas Perkins Year ago

    Would training A.I. be cheaper if they used GPUs vs TPUs? I know TPUs are better as far as results, but could the price of GPUs be a factor for using them instead? Also, when are tier lists in games going to be determined by A.I.? I'd rather know what character an A.I. chooses more often to beat a game instead of the opinion of pros and hobbyists. Objective tier lists (which some websites approximate by showing which characters winning teams have most of the time, like in TFT) would have a strong use for all game designers. The only way to balance a game is to know which characters in the game have an advantage.

  • Michael Nurse
    Michael Nurse Year ago

    OpenAI trained the original Dota 2 AI in a couple of hours on a single box. It all depends on the size of the grid and the frequency of actions. If you restricted AlphaStar to a single small map, the training cost would drop 100-fold. These days you can train ResNet from scratch on a GPU in less than 10 minutes if you utilize the modern hacks to reduce the training time (reduced precision, learning rate optimization). If DeepMind were focused on reducing the cost of training they could easily get a 100-fold reduction - but it is not a focus. As you point out, their real cost is salaries, amounting to a billion over 3 years, so reducing training time isn't on the radar; they would spend a million to save a million.

  • jaybot22
    jaybot22 Year ago +1

    The whole premise of this video is off. AlphaStar wasn't developed to assist gaming companies with their in-game AI; it was developed as a challenge for Google's deep learning systems. This stuff is still in early R&D phases, and as they learn more from complex games like StarCraft they can start licensing it out for complex real-world situations for 💲💲💲 in the future.

  • dosduros
    dosduros Year ago

    AlphaStar (or any real AI) can't solve the problem of gaming AI for a simple reason:
    It has to be adapted to human nature. AI is created in a very specific way. Why does the AI of, say, Dark Souls not have instant response times? Because it would make the game unplayable.
    Why are AIs that adapt to your skill so disliked? (Think Max Payne 1/2.) Because as the difficulty increases and the player learns, unless they increase at the same pace (impossible, since each human is different not only in IQ and knowledge but also in what topics they learn faster, or even what they like), the AI will adapt to either drop or increase the rate of the difficulty increase to match the player, making wins unrewarding (due to the noticeable drop in difficulty) and the deaths cheap (since the game never asked you for this much before).

    There are a few games that used this mechanic and it was generally considered a very bad (and lazy) practice, hence why so few games use it.
    One of the best examples of AI and difficulty done right is Dark Souls, because it expects the player to learn or keep losing, as simple as that.

  • Khan
    Khan Year ago +2

    Isn't AlphaStar's main goal to show long-term planning in general, with a game just being a controlled testing ground? The goal of the project is not to develop AI for games.

    • shrdlu
      shrdlu Year ago

      That is DeepMind's goal, indeed. However, this video addresses the community chatter around this feat.

  • Weeble Wabble
    Weeble Wabble Year ago +1

    Do not forget that this was a controlled experiment. The quality of AI and their behavior trees clearly need to be specialized; that's what they're meant to do. But you act as if that's a negative..? Like yeah, the self-driving AI probably can't differentiate human faces. That doesn't mean it's a bad AI. Most of what you said was random and didn't pertain to artificial intelligence at all. What is cool about it, what can you do with it, the far-ranging effects...? You mostly ripped on how implementing a system like AlphaStar into games wouldn't be cost-effective, and ineffective in general at the moment. I guess that's the title of the video but like... Why? For example, the paradigm that this AI launched from could be applied elsewhere. Say, the xenobots, which are cells taken from a type of frog that are cut and arranged together, then programmed by AI to carry out functions autonomously. No wires, no copper, just grown cells that heal and continue to grow after being programmed. This is some scifi shit, and that's just one aspect of this field. I feel you are out of your depth here.

  • Tim Seguine
    Tim Seguine Year ago

    I agree with a lot of your points, and many of the things you point out are why I am still a big proponent of "oldschool" classical AI methods focusing on modular solutions. It requires domain knowledge and significant engineering, but you can swap out pieces if they are not working right, and you can even swap out parts for deep learning if the deep learning solution has better performance in that module. And you can hybridize modular solutions with smaller-scale deep learning nets on these smaller feature spaces. It's not my main job and I have limited resources though, so I have admittedly been largely unable to fully test my POV.

    I just don't really agree with the current approach of a lot of AI research, which is just to throw more compute at the problem (albeit often cleverly). It gives a surprising amount of short-term benefit, and gives a pretty good indication that the state of the art is actually further along than many people thought, but the field is still waiting for its defining breakthrough.

  • Ghost
    Ghost Year ago

    The way he describes how AlphaStar learned to play StarCraft makes AlphaStar sound like a failure to me. If an AI that can calculate more math in 1 hour than I ever could in my entire lifetime does not understand why a unit does not perform the way it used to after a patch, then how can it be called AI? I use "AI" just to refer to the PC itself; I don't actually mean AI in games. No game has an actual AI in it.

  • Keen Heat
    Keen Heat Year ago

    Sounds like in the future there might be dedicated TPU-like chips, similar to how the GPU spun off from the CPU.

    • Robert Wiesner
      Robert Wiesner Year ago

      Well, on modern GPUs (GTX 1080 and newer) there are tensor units for exactly this purpose

  • Marked Ashamed
    Marked Ashamed Year ago

    Lol, I am sort of satisfied that if a human discovers a robot they will identify it as such and then proceed to beat it up. A common human-robot interaction, surprisingly.

  • Toby
    Toby Year ago

    A tiny point, but all the problems with needing examples of high level play evaporate once they develop AlphaStar Zero. Self-play has been shown to rapidly and efficiently reach super-human levels of play without any external input.

    • Toby
      Toby Year ago

      totalermist All great points. Stuck until Quantum, then? Cloud doesn’t democratise the availability (and hence price) of compute sufficiently?

      A graph isn’t everything, but: www.hamiltonproject.org/charts/one_dollars_worth_of_computer_power_1980_2010
      Looks linear to me, regardless of slowing consumer innovation. Agreed that the rapid acceleration caused by manufacturing could not continue forever.

      Or is it a problem with latency versus parallelisation?

      My only thought with your point about models is amplification - all rendering models are beaten by machine learning whose goal it is to achieve the same result cheaply and rapidly. Optimising the technique you use is not the same as hardware availability, agreed.
      What constitutes ‘outperformed’? The AI techniques simulating Cloth and fluid and making astronomical models are outperforming all other techniques in terms of speed for a comparable other model - presumably through amplification and sparse data and suchlike - so I’m not sure how that stacks up. Of course traditional techniques would be faster, since they’re thoroughly optimised by now...

      I’m not sure how ‘optimising compute overhead of a particular model’ relates to ‘hardware cost, availability and performance’ - they seem pretty orthogonal to me. If your point is that traditional methods are better-optimised right now, then absolutely. AI techniques have hugely further to go in terms of optimisation.
      I guess that brings up the meta-overhead - you can run an AI cloth simulation incredibly cheaply in real time, but there’s a commensurate cost for training it to do so.

      Unsure I have a point, other than that there are many interesting factors.

    • totalermist
      totalermist Year ago

      @Toby True to a point, but Moore's Law hasn't really applied for years now. Intel have hit a wall in their production process and basically skipped a whole process iteration (apart from homeopathic quantities of mobile chips), while both NVIDIA and AMD haven't really made big strides in the graphics and GPGPU department in the past 3 years.
      The days of 50% performance increases every other year are long gone, and silicon is reaching its limits - not in the next 5 or so years, but surely by the end of the decade.
      The problem is that the models are still getting bigger and the amount of processing required grows faster than the hardware capabilities.
      DLSS is a nice example of how even the most sophisticated, hardware-accelerated DL techniques are easily outperformed by traditional sharpening and de-blocking filters (e.g. Radeon Image Sharpening or NVIDIA Freestyle) today. That's not to say it will stay that way, though. It's just a reminder that DL possibly isn't the end-all solution for every use case.

    • Toby
      Toby Year ago

      It also addresses the patch problems, since you can run the learning from scratch.
      Expensive in terms of compute, I guess, but that can be bundled into Moore’s Law.

  • Sky Acania Dev
    Sky Acania Dev Year ago

    You don't get it. If DL AI succeeds (at the next level), there is no (human) designer...

  • Mims Zanadunstedt

    It's good for people to speak the truth when most people are misunderstanding.

  • Iridium
    Iridium Year ago +7

    I would like to add a few things to expand on this video. (tldr at the bottom)

    First, the point of AlphaStar was not to create some sort of AI that will be good at a game. It was more of a research thing. The DeepMind team wanted to see how neural networks would handle a game with as large an action space as StarCraft 2. Where games like chess or Go have a very defined set of positions where figures can be placed, StarCraft uses a map where there is a whole vast realm of possibilities for where units and structures can be placed. Not to mention the lack of knowledge of what the opponent is doing, which can be obtained by scouting, and the different unit counters, compositions and whatnot.
    So AlphaStar represents a proof of concept that neural networks can be used to solve complicated real-life problems. Though we have seen AlphaStar derp pretty hard in the replay pack that was released, so it's clear that the generalization process is not without its limitations.

    Second thing is - AlphaStar was not trained in the most optimal way. Human replays were used as, sort of, a speed-up for learning. As an example, AlphaGo was trained in this exact way. AlphaZero (the successor of AlphaGo) has, however, trained from nothing on its own (hence the name).
    It would in theory be possible to train AlphaStar much faster and more efficiently. The training speed depends on the number of neurons or synapses, and how well the AI performs depends on the quality of training data. In the case of AlphaStar, it was trained to play against itself, against its different agents and later on against so-called "main exploiters". This way, AlphaStar learns to fight well against strategies it encounters (that is, the strategies of all agents), but nothing more.
    Game developers can train neural networks by imitation learning to execute all kinds of strategies, and this way the AI would become much more generalized. AlphaStar, on the other hand, has been left to figure everything out on its own, so training was much longer.

    Last thing, regarding the freedom to construct whatever AI you want. It's easy to create whatever AI you want, as long as you know how to select for it. For example, if you want fun AI you just need to reward the AI for doing 'fun' things, whatever you deem that to be. For example, you could reward the AI for playing fast or playing slow. For executing creative strategies, or for playing predictable and tightly executed strategies. You can also compartmentalize neural networks into several segments, where each would be responsible for a different strategy or function. Doing this also accelerates learning. But AlphaStar had been trained with general-purpose learning algorithms, and exploiters were constructed automatically, which means that the point was for the AI to figure most things out on its own.

    TLDR:
    1. AlphaStar was trained in quite a specific way. The point of AlphaStar was to research how AI learns in complex environments and to be "a stepping stone to this goal".
    2. AlphaStar was not trained efficiently. It could have been trained faster if the DeepMind team had wanted to, but instead they went for training methods that would let the AI figure most things out on its own, in the slow and painful way, without any shortcuts towards results.
    3. You can train whatever AI you want, as long as you select for the AI traits that you want.
    It would not be a quick training process, but it'd be much quicker.
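
The reward-shaping idea in point 3 above can be sketched as a minimal toy. Everything here (the `Episode` fields, the weight values, the notion of "fun") is a hypothetical illustration, not AlphaStar's actual reward function:

```python
# Sketch of reward shaping: the same learning loop selects for different
# AI "personalities" purely by changing the reward function it optimizes.
from dataclasses import dataclass

@dataclass
class Episode:
    won: bool
    actions_per_minute: float
    distinct_strategies_used: int

def competitive_reward(ep: Episode) -> float:
    # Plain win/loss signal: selects for strength only.
    return 1.0 if ep.won else -1.0

def fun_reward(ep: Episode) -> float:
    # Same win/loss term, plus shaping terms that reward "fun" traits:
    # varied strategies, and a penalty for superhuman action rates.
    r = 1.0 if ep.won else -1.0
    r += 0.1 * ep.distinct_strategies_used              # encourage creativity
    r -= 0.01 * max(0.0, ep.actions_per_minute - 300.0) # discourage inhuman micro
    return r
```

An agent trained against `fun_reward` would be pushed toward varied, human-paced play even when that costs it some win rate, which is exactly the "select for the traits you want" point.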

  • Ryuuken24
    Ryuuken24 5 months ago

    Throwing A.I. and random calculations at problems solves nothing. They want to piece out specific A.I. problem-solving algorithms and sell them for profit.

  • bowie brewster
    bowie brewster Year ago +4

    4:40 In its assisted learning stage AlphaStar was trained to a level far inferior to pro play. I'm sure it would be easy for a game company to, even before launch, replicate play of that level. After which, AlphaStar can start learning by playing against itself.
    6:00 You will need to retrain it, but your starting point will be an extremely skilled agent in the previous patch. Much less training will have to occur than restarting the entire operation.
    6:40 Google's Tensor Processing Units are a novel technology and are developing very quickly, far exceeding Moore's law. en.wikipedia.org/wiki/Tensor_processing_unit. If in 2020 there are some gaming companies that can afford this type of training for their AIs, I don't think it's a stretch to say that in 2030 it will be readily available for most medium-sized companies.

  • MR3D-Dev
    MR3D-Dev Year ago +1

    Why doesn't AlphaStar solve gaming's AI problems? Because today's problem for many publishers and devs is "how can we get people to buy lootboxes, DLC, etc." AlphaStar is not designed for that.

  • Tonechild
    Tonechild Year ago

    On your comments on retraining: that is not true anymore. Instead of completely retraining, you can conduct AI surgery. thexvid.com/video/62Q1NL4k8cI/video.html Your arguments rely heavily on AI not being able to adapt, but AI surgery is now possible. Also, ML is not that expensive; I've played around with it as a hobby with very little expense.
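
The warm-start intuition behind "AI surgery" can be shown with a deliberately tiny toy: a one-parameter "model" fit by gradient descent, where resuming from the pre-patch parameters converges in far fewer steps than retraining from scratch. The numbers and loss function here are hypothetical, not anything from the linked video; real systems additionally remap network weights when a patch changes the action or observation space:

```python
# Toy warm-start demo: count gradient-descent steps needed to reach a
# target value, starting either from scratch or from a pre-patch optimum.
def steps_to_converge(start: float, target: float, lr: float = 0.1,
                      tol: float = 1e-3) -> int:
    w, steps = start, 0
    while abs(w - target) > tol:
        w -= lr * 2.0 * (w - target)  # gradient of the loss (w - target)^2
        steps += 1
    return steps

cold = steps_to_converge(start=0.0, target=10.5)   # retrain from scratch
warm = steps_to_converge(start=10.0, target=10.5)  # resume from pre-patch model
```

Since a balance patch usually moves the optimum only slightly, `warm` needs far fewer steps than `cold`, which is the whole economic argument for fine-tuning over full retraining.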

  • Skaitan
    Skaitan Year ago

    So, AlphaStar's skill is reset to 0 when you change the game? Is it impossible to transfer the raw strategy data from one game and see if it can apply it to others? Because it seems to me that the core elements of RTS are relatively similar: resource management, micro- and macro-controlling units, map domination. Can AlphaStar play Age of Empires, for example?

  • Haoxu Wang
    Haoxu Wang Year ago +1

    I disagree with most of your insights. The author needs to learn more about how reinforcement learning works...

    • Weeble Wabble
      Weeble Wabble Year ago

      I agree, he seemed to not completely get it. This often happens though when talking about it, even I am highly critical of what I say about it.

  • Chardonnay
    Chardonnay Year ago +4

    Since you have a channel called AI and games, I expected a little bit more in depth analysis of the current state. You picked AlphaStar as a punching bag, perhaps because of the Skynet memes, but that’s not really where the intelligent discourse is happening. Though I admit that this might be a decent introduction to the problems of neural net AIs for someone who doesn’t know anything about the subject.

    No one credible is claiming that AlphaStar will become some kind of generalized AI. It's a bit more than a proof of concept by now, but it's still far from an actual product. You could also say that Google paid $500M to publish a couple of scientific papers. Development happens like this in all fields and industries: you first have the more theoretical and inapplicable stuff, which over time is developed into various real-world applications.

    Deepmind has flirted with generalization of their AI project with the Atari games thing, but it’s still far from ready. The current AlphaStar can be considered to be like the early automobiles that would crash at the drop of a hat. The way you discussed the AI felt like it should be expected to perform like a modern rally car instead.

    Development in the field of deep learning AI currently happens at a miraculous rate that just keeps on accelerating. Nowadays you could say that the whole field is almost reborn every 6 to 12 months. Sure, the day-to-day realities of using neural net AIs from the point of view of a game developer are as you pointed out, but for how long will that status quo persist? Would you bet on no deep learning AI that can be used in NPC behavior development appearing in the next 6 months? How about a year, or two years? If you think back to just two years ago, we've come from, like, the stone age to the bronze age already. All it takes is another breakthrough and we'll see yet more results that were once deemed impossible to achieve.

    Btw, everyone should seriously be scared of the recent rate of development in AI and its implications regarding the next industrial revolution, or at least not underestimate it at all. For example the accuracy in psychological profiling based on social media feeds is alarming just on its own.

    • Evil Seeds Grow Naturally
      Evil Seeds Grow Naturally Year ago

      Sami Helen Great post, just a small caveat to take into consideration: I'm not sure people need another thing in their lives to be scared of. The human answer to existential angst, thus far, seems to amount to pharmacology, irrational legislation, pop philosophy and therapy indistinguishable from placebo effects, all of which does little in terms of addressing actual underlying causes. I get that you want to motivate people into action, but fearmongering seems unconstructive.

  • Random Schmid
    Random Schmid Year ago

    I think you forgot an aspect of AI:
    a perfect AI will immediately abuse game design flaws and expose weak and overpowered concepts - lowering the game's enjoyability

    • David
      David Year ago +1

      Or it provides early feedback to developers to fix these aspects.

  • Guerra dos Bichos
    Guerra dos Bichos Year ago +1

    Honestly, gamers are just shortsighted. Companies such as Google don't pour huge amounts of money into making games more challenging; efforts such as AlphaGo and AlphaStar are meant to further research into things like algorithmic optimization that they can later use in large-scale systems... the video game industry is pennies for them.

    • Billinous
      Billinous 22 days ago

      This is true. However, using part of their DeepMind system for game AI is a lucrative avenue in the billions (c'mon Stadia! This is right there for the taking for your first-party games). The benefits of a DeepMind for enemies/party members would literally take video gaming to an elevated peak: DeepMind difficulties from Bronze to Diamond, with endless replayability due to no encounter feeling the same. It will never happen; however, anything is possible when big money can be exchanged.

  • niva zero
    niva zero Year ago

    Or just skynet and it will do it for free, lol.

  • Ronald Luc
    Ronald Luc Year ago

    The last versions of AlphaStar do not need any "human" footage. The DeepMind team also created a method to fine-tune the AI model for a new patch instead of completely retraining.
    The cost and ML skills required to train such models are unbearable, YET. Now that we know it's possible, there will be (as always in ML) more papers; decreasing the size by 100 times and getting total training under €30,000 is possible.

    • 이름
      이름 4 months ago +1

      No, the final version still needs SL (supervised learning).

  • GIboy1990
    GIboy1990 Year ago +1

    That's also why the sewer mermaid stomped alphastar on the ladder

  • Klaus Gartenstiel

    5:00 alpha zero learns from zero tho

    • shrdlu
      shrdlu Year ago

      There is no Alpha zero for Starcraft.

  • EnterpriseKnight
    EnterpriseKnight Year ago +28

    5:27 "Active and lively fanbase around their products" yeah not so much lately huh?

    • Devy
      Devy Year ago

      You vastly overestimate how many people are genuinely "outraged". All things considered, as usual, it's a small but very vocal minority. Most people just continue playing the games they enjoy and don't give a crap about the rest.

    • Daddy Sempai Chan
      Daddy Sempai Chan Year ago +6

      SC2 is still fine. It's been relatively untouched by the Blizzard dumpster fire. Same with SC Remastered. It probably helps that these games are old and have low toxicity; games are usually 1v1s, in which you have no one to blame but yourself, or teams, where everyone is goofing around.

    • Fusilier
      Fusilier Year ago +8

      Companies benefit almost as much from loud negative reception as loud positive feedback. People trying to 'punish' companies for political stances often end up boosting their stock prices by millions of dollars, and so on. As the saying goes, there is no ethical consumption under capitalism.

    • SorryBones
      SorryBones Year ago

      AI and Games preach

    • AI and Games
      AI and Games  Year ago +26

      Money talks man. All the noise people make when they're not happy about business practice doesn't mean a thing if you keep ponying up for whatever they're selling.

  • Hac Hoang
    Hac Hoang Year ago +3

    Hail mighty overlord AlphaDeepMind. I have never supported this man, even before the revolution

  • Martin
    Martin Year ago +1

    >Publisher money
    because alpha is totally not a DoD project for future wartime AI.

  • Crash Bandicoot
    Crash Bandicoot Year ago

    Thanks !!

  • Peter Carioscia
    Peter Carioscia Year ago +10

    Y'know, I kept hearing AlphaStar learned from watching tens of thousands of games, but it never really struck me what that meant.

    AlphaStar didn't learn to play the game from the ground up. It probably doesn't have any implicit understanding of what it's doing, because it's just a tens-of-millions-of-dollars exercise in "monkey see, monkey do" (or AI sees what the monkeys are doing, AI does).

    Further evidenced by the fact AlphaStar cannot adjust itself to new gameplay when Blizzard makes a change to the units and the game's meta changes.

    Now, don't get me wrong. AlphaStar has done things in game that baffle the players; it's part of the reason it's able to win. AlphaStar has done seemingly novel things in game, assuming it has watched tens of thousands of pro player games, or at least GM-level play... especially with the economic mechanics of the game. But those actions could have just been iterative, not necessarily novel. Meaning, AlphaStar hadn't detected or calculated a better way to play to make mineral collection and mineral gathering more efficient (as was first thought), but may have just been iterating a basic mechanic of "never stop building workers, no matter what", whereas a pro player knows exactly when to cut building workers and when to resume.

    Sorry if that was a bit detailed on the game's mechanics. If you're curious, a StarCraft pro by the name of Beasty QT has done some very high-level commentary on AlphaStar's play style.

    • ilax30
      ilax30 Year ago

      @Peter Carioscia That's not true at all though; the 32-workers-per-mineral-line thing got debunked pretty fast, and still no one is doing it because it's bad. Also, pros didn't copy its strats because it was mostly doing dumb and standard stuff, but it won because of its perfect spending and insane micro, which was a bit of a shame imo.

    • Toby
      Toby Year ago

      AlphaStar uses self-play - that’s how it surpassed human-level play.
      It’s not just learning based on watching human players - that’s precisely how it developed novel strategies. It’s a bit odd that this is being misreported.

      What suggests it can’t adapt to patches? Just needs training time with the new inputs, like any other system... human or machine. With self-play it can do this faster than a human can.

    • Peter Carioscia
      Peter Carioscia Year ago +2

      @AI and Games Micro is great and all, but the pros were very interested in the macro-mechanics because AlphaStar was doing some interesting things in that regard. It seemed to be doing things with the in-game economy that were outside the norm, and they thought maybe AlphaStar had high-level mathematical reasons for doing so.

      It sounds small, but it was overbuilding the workers who gather resources in the early game, something that's generally considered sloppy for optimized human play. But it seemed to be yielding positive results.

    • Peter Carioscia
      Peter Carioscia Year ago

      @AI and Games Oh yes, I know. I'm no computer scientist, so I was working off of this incorrect assumption. I think a lot of people do, honestly, and I know it's incorrect. We treat AI as if it can actually THINK and REASON... because that's how we intuit 'intelligence', kind of ignoring the 'artificial' part.

    • AI and Games
      AI and Games  Year ago +6

      Just focusing on your first point: it's the fundamental assumption people make that isn't true. Machine learning doesn't know what it's doing. It's just optimising against the data it's engaged with. In the case of supervised learning, yeah it doesn't know what it's learned. It just learns to mimic what it observes. I'll be very excited to see a new version of AlphaStar that learns how to micro on its own. That'd be very exciting.

  • Deaths less successful little brother

    Terminator movies are not doing well at the box office

    What are you talking about, Terminator 2 did great. Shame they didn't make more.

    • Weeble Wabble
      Weeble Wabble Year ago +1

      Yeah, after that I don't count the others as canon. The rest were cash grabs.

  • Bogdan Nutiu
    Bogdan Nutiu Year ago

    next gen hardware? intel? that's a good joke sir :)

    • shrdlu
      shrdlu Year ago

      Well they are trying. Many are trying. Intel might not be in a good spot right now, but you have to see it from the opposite perspective: This means they have to be a lot more competitive and innovative.

  • Jahrazz Jahrazz
    Jahrazz Jahrazz Year ago

    nice seeing the video 46 seconds after upload

  • Krystal Myth
    Krystal Myth Year ago +2

    Your first point is moot to me. You either want something to learn, which requires you to teach it, or you go back to the dark ages and program it yourself.

    • Dave Churchill
      Dave Churchill Year ago

      This simply isn't true. The entire field of reinforcement learning involves learning through self-play and experience acting in an environment. You give the agent the objective you want it to accomplish, but you do not explicitly teach it which actions to perform. It will figure out, by seeing what works and what doesn't, which actions lead to success.
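
The trial-and-error loop described in this reply can be illustrated with a minimal, hypothetical sketch: a two-action toy environment where the agent receives only a reward signal (never a correct-action label) and discovers the better action through epsilon-greedy exploration. This is a textbook-style value-update loop, not AlphaStar's actual training code:

```python
# Minimal reinforcement-learning sketch: the agent is told *what* counts
# as success (reward), never *which* action to take, and learns by trial.
import random

def train(episodes: int = 500, epsilon: float = 0.1, alpha: float = 0.5,
          seed: int = 0):
    rng = random.Random(seed)
    q = [0.0, 0.0]  # value estimate for each of the two actions
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit current knowledge, sometimes explore.
        if rng.random() < epsilon:
            a = rng.randrange(2)
        else:
            a = max(range(2), key=lambda i: q[i])
        # The environment (not a teacher) hands back a reward: action 1 succeeds.
        reward = 1.0 if a == 1 else 0.0
        q[a] += alpha * (reward - q[a])  # incremental value update
    return q

q = train()
best_action = max(range(2), key=lambda i: q[i])
```

After training, the value estimate for the rewarding action dominates, even though the agent was never told it was the right one; that is the sense in which RL agents "figure out what works".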

  • antdgar
    antdgar Year ago

    10:08 Obama looking pretty good!

  • Merlin The Lemurian

    Went to subscribe, was already subscribed, so I rang the bell

  • IMMentat
    IMMentat Year ago

    Good gameplay AI and game-winning AI are worlds apart.
    Games like Half-Life and F.E.A.R. nailed FPS AI 20 years ago, but the industry frequently fails to learn from past glories or failures.

  • Remember Comics
    Remember Comics Year ago

    The last Terminator movie didn't do well at the box office because people got pissy and totally forgot that it largely rehashes the original movie with a fresh modern take and instead thought it was a heretical aberration, the same way they did with Ghostbusters. It's less that Terminator was bad and more that "female led movies" are being assailed by whiny fanboys.

    • AI and Games
      AI and Games  Year ago

      I'll be honest I didn't go see it because I've been burned too many times. I'm just not interested anymore. 😕

  • Krystal Myth
    Krystal Myth Year ago

    Your first point is moot to me. You either want something to learn, which requires you to teach it, or you go back to the dark ages and program it yourself. If the industry isn't up to the task, it's not a failure of the system when it's capable of learning but we simply can't be bothered. It's like blaming the young honor student for not meeting the needs and wishes of the parent. It's not like AI is getting any better under other technologies. This is the future. Either we grasp it, or we pretend we never cared.

    • DawnTyrantEo
      DawnTyrantEo Year ago +3

The thing about gameplay is that it's emergent: just as the person who designed a racing car is often a terrible choice for racing it, the developers of a game are often *not* at the level of skill required to challenge the majority of their playerbase.

Developers often don't have the time to play their own game and get a fundamental understanding of the deepest levels of play; their job is aesthetics, emotion, theorising, observation and adjustment, not gameplay expertise. Saying that the gameplay developers are suitable for teaching an AI gameplay is like saying Julius Caesar is suitable for teaching a university student history. Sure, they'll be able to offer useful insight, but they just don't have the right *kind* of knowledge to teach what needs to be taught.

AI can and does improve, but it improves the same way art does, not the same way technology does. Sometimes there's a leap that creates an entirely new technique, such as how colour, sound or sweeping camera shots changed film-making, but most of the time it's individuals learning from past art to create new and novel interpretations. Take a look at the videos on Alien: Isolation or Halo Wars; by understanding their subject and purpose, and creating something designed to evoke a certain emotional and gameplay experience, they've vastly improved on previous uses of artificial intelligence.

      Stagnation in AI in any particular game isn't because designers aren't embracing new technologies, it's because designers aren't embracing old lessons. AI designers don't need new technology to be better, they just need the same combination of love and deeper understanding you see in experts of literature, cinema, or the various other people who design parts of a video game.

That's not to say deep learning is useless, but in most cases it's not what you want for video game AI. Deep learning is unable to create an AI personality, or AI of variable difficulty, or AI that can't be played, or AI that surpasses the knowledge of your relatively tiny and inexperienced pre-release playtesting team.

      What deep learning is able to do is recognise and mimic optimal strategies. Nothing more, nothing less. This is very useful for an AI that can play fair at the top 0.05% of the player rankings, or if the game's optimal strategies are both difficult to code and occurring at a low level of skill. But for creating an AI that you can create efficiently, update easily, and customise to produce a certain emotional experience? No, deep learning can't do that.

    • AI and Games
      AI and Games  Year ago +4

      ^ This. 😀

    • Damakuno
      Damakuno Year ago +5

I don't think the video's thesis is an argument against deep learning or other AI/ML technologies, though; it's just that in the design process it's difficult to justify having to design the training environment, then train, re-train and test what is effectively a black-box system (it is difficult to "see" what a neural network is "thinking"), which makes it hard to manage from a game design standpoint. At the end of the video, examples are given of other ways AI/ML can be applied in the games industry, suggesting that AI is indeed the future.

  • rasheedqe
    rasheedqe Year ago

    Wow

  • Humble Merchant
    Humble Merchant Year ago

    New Terminator sucks.

  • Dimitrij Maslov
    Dimitrij Maslov 11 months ago

    Hm.

  • Zeckul
    Zeckul Year ago +5

AlphaStar has other limitations that make its achievements less impressive:
- An AlphaStar agent only knows how to execute one strategy. Learn to beat it, and you beat that agent every time. Once MaNa figured out how to beat it, if they had gone best of 20 he would have won the rest. Instead they concluded the event saying "oh, the human player managed to beat it once". Pure PR.

- The only way "AlphaStar" can beat a good player several times in a row is to throw different agents at him randomly, so he can't predict what he will be playing against. This randomness is not part of the AI, nor is it a sign of intelligence.

Couple that with the aforementioned inability to adapt in any way, and it's easy to see how AlphaStar is still far from demonstrating human-like intelligence in StarCraft 2, or even from being an interesting training partner for pro players.

    The point of AI companies like DeepMind is to generate hype so investors throw money at them. Keep that in mind.

  • Bernhart Schmieder

    Your video is bad, but not bad enough for me to dislike it, have a like.