A lot of people here seem to be underestimating the difficulty of this problem. There are several incorrect comments saying that in SC1 AIs have already been able to beat professionals - right now they are nowhere near that level.
Go is a discrete game where the game state is 100% known at all times. Starcraft is a continuous game and the game state is not 100% known at any given time.
This alone makes it a much harder problem than Go. Not to mention that the game itself is more complex, in the sense that Go, despite being a very hard game for humans to master, is composed of a few very simple and well defined rules. Starcraft is much more open-ended, has many more rules, and as a result it's much harder to build a representation of game state that is conducive to effective deep learning.
I do think that eventually we will get an AI that can beat humans, but it will be a non-trivial problem to solve, and it may take some time to get there. I think a big component is not really machine learning but more related to how to represent state at any given time, which will necessarily involve a lot of human tweaking to distill down what really are the important things that influence winning.
> I think a big component is not really machine learning but more related to how to represent state at any given time, which will necessarily involve a lot of human tweaking to distill down what really are the important things that influence winning.
I agreed with everything you said until here. Developing good representations of state is precisely what today's machine learning is so good at. This is the key contribution of deep learning.
You seem to be supposing that a human expert is going to be carefully designing a set of variables to track, and in doing so conveying what features of the input to pay attention to and what can be ignored. Presumably the ML can then handle figuring out the optimal action to take in response to those variables.
I think it's much more likely to be the other way around. ML is really good at taking high dimensional input with lots of noise and figuring out how to map that to meaningful (to it, if not to us) high-level variables. In other words, modern AI is good at perception.
What it's significantly less good at compared to humans is what might formally be called the policy problem. Given high level variables that describe the situation, what's the best course of action? This involves planning. We think of it in terms of breaking the problem into sub-objectives, considering possible courses of action, decomposing a high level plan into a sequence of directly executable actions, etc. AIs might "think" of this problem in different terms than these, but it seems like it still has to do this kind of work if it is going to have a chance to succeed.
We don't have obvious ways to model this part of the problem. For the perception/representation building problem, I can almost guarantee the solution is going to be a ConvNet to process individual frames combined with a recurrent layer to track state over time. On the other hand, I'm seeing some plausible solutions to the policy problem emerging in the literature, but it's still very much an open question what will emerge as the go-to. In AlphaGo, this part of the problem is where they brought in non-ML algorithmic solutions like Monte Carlo tree search, and one of the reasons StarCraft is interesting compared to Go is that those algorithmic solutions are harder to apply.
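To make the perception half concrete, here's a toy sketch of that "ConvNet to process individual frames plus a recurrent layer to track state over time" idea, in plain NumPy. Everything here (the 8x8 "screen", the random weights, the pooling scheme) is invented for illustration; a real agent would use a deep learning framework with learned weights and far larger inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: an 8x8 single-channel "screen", 4 conv filters, 16-dim memory.
H = W = 8
N_FILTERS, HIDDEN = 4, 16

conv_filters = rng.standard_normal((N_FILTERS, 3, 3)) * 0.1   # 3x3 kernels
W_in  = rng.standard_normal((HIDDEN, N_FILTERS)) * 0.1        # features -> hidden
W_rec = rng.standard_normal((HIDDEN, HIDDEN)) * 0.1           # hidden -> hidden

def conv_features(frame):
    """Valid 3x3 convolution + ReLU + global average pool, one value per filter."""
    feats = np.zeros(N_FILTERS)
    for f in range(N_FILTERS):
        acc = np.zeros((H - 2, W - 2))
        for i in range(H - 2):
            for j in range(W - 2):
                acc[i, j] = np.sum(frame[i:i+3, j:j+3] * conv_filters[f])
        feats[f] = np.maximum(acc, 0).mean()   # ReLU, then pool
    return feats

def step(hidden, frame):
    """Recurrent update: fold this frame's features into the running state."""
    return np.tanh(W_in @ conv_features(frame) + W_rec @ hidden)

# Process a short "game" of 5 random frames; `hidden` is the tracked state.
hidden = np.zeros(HIDDEN)
for _ in range(5):
    hidden = step(hidden, rng.standard_normal((H, W)))

print(hidden.shape)  # (16,)
```

The point of the sketch is just the shape of the computation: the convolution compresses each frame into features, and the recurrent update carries information across frames, which is what lets the agent cope with the game state not being fully visible at any one moment.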
I wonder if we will see any advanced cheese strats come out of this. I'm assuming some implementations will eventually develop micro control that is far beyond any human player's capabilities, which would make things like all-in probe rushing much more viable. Instead of playing the normal meta in a computer-vs-human, I imagine an advanced AI would simply send all of its workers off the mineral line as soon as the game starts, and attempt to out micro the human opponent before they can build an army-producing building.
As a long-time StarCraft fan I don't share your point of view:
People usually refer to StarCraft as a strategy game but there's actually really little strategy involved: during the first weeks after a new map pool is released, the pro players explore different build orders that are strong on it. After this period, when the meta-game has settled, the winner of a match (best of 3 or 5) is almost always the one who has the best mechanics (including scouting, unit micro-management and multi-tasking), and SC1 AIs are already way better than humans in that field.
Unless you add some artificial limitation to the AI (for instance, a hard limit on APM (actions per minute), at an arbitrary level) I don't really think the challenge will be exciting. IMHO it will look like a race between a cyclist and a motorcycle: from a mechanics point of view, the machine wins easily without any need for intelligence.
Minor nitpick: video games running on digital computers are by definition still discrete, even if they feel continuous. Networked multiplayer wouldn't be possible in RTS games if that weren't the case. The granularity of unit positions and turns in Starcraft obviously leads to a much larger state space, so I get what you're saying: for AI it's effectively continuous.
I don't know if I would label SC2 as continuous. I don't think anything happens to the game state at a finer granularity than tick level. So to me it seems that it's also discrete (but with the state changing 44.8 times a second at default speed). I agree though that this looks more challenging for ML methods.
I haven't looked at whether they limit the rate of commands that the AI can issue; otherwise this will be something that can be a very big advantage to the AI once it learns to micro...
> Starcraft is a continuous game and the game state is not 100% known at any given time.
It seems to me that multiplayer games may feel continuous to a human player but are still designed around a series of discrete states called ticks where each tick is determined from the previous state plus inputs.
Why is this distinction made in the context of how difficult it is to develop an AI?
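The tick model described above is easy to state in code. This is a minimal sketch (the unit names and the move-command format are made up): each tick's state is a pure function of the previous state plus that tick's inputs, which is exactly why lockstep networking only needs to ship inputs between machines.

```python
def advance(state, inputs):
    """One tick: the new state is a pure function of (old state, inputs)."""
    positions = dict(state["positions"])
    for unit_id, (dx, dy) in inputs.items():
        x, y = positions[unit_id]
        positions[unit_id] = (x + dx, y + dy)
    return {"tick": state["tick"] + 1, "positions": positions}

initial = {"tick": 0, "positions": {"scv_1": (0, 0), "zergling_1": (10, 10)}}
inputs_per_tick = [
    {"scv_1": (1, 0)},                          # tick 1: only player A acts
    {"zergling_1": (-1, -1)},                   # tick 2: only player B acts
    {"scv_1": (1, 0), "zergling_1": (-1, -1)},  # tick 3: both act
]

# Two machines replaying the same inputs from the same seed state stay in
# lockstep -- which is why networked RTS games exchange inputs, not full states.
a = b = initial
for inp in inputs_per_tick:
    a, b = advance(a, inp), advance(b, inp)

print(a == b, a["positions"]["scv_1"])  # True (2, 0)
```

For AI difficulty, the distinction matters less than it sounds: the state space per tick is so large, and the tick rate so high, that the game is continuous for all practical planning purposes even though it is formally discrete.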
From what I saw in the API, the AI will potentially have some key advantages like more accurate micromanagement, and that can make a significant difference in a combat setting. They can try to compensate for this by throttling the number of actions per minute, but that won't compensate for extremely well-planned pixel-perfect clicks. This is a very powerful tactical advantage that can offset strategic deficiencies, if any.
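An APM throttle like the one mentioned could be as simple as a sliding-window budget. The sketch below is purely hypothetical (the class name, the 60-second window, and the 300 APM figure are my own choices, not anything from the API), but it illustrates the point in the comment: a cap constrains how often the bot acts, not how precisely, so pixel-perfect clicks survive the throttle.

```python
from collections import deque

class ApmLimiter:
    """Hypothetical sliding-window throttle: allow at most `apm` actions
    in any trailing 60-second window. Purely illustrative."""

    def __init__(self, apm):
        self.apm = apm
        self.stamps = deque()   # timestamps of recently allowed actions

    def try_act(self, now):
        # Drop timestamps that have fallen out of the 60-second window.
        while self.stamps and now - self.stamps[0] >= 60.0:
            self.stamps.popleft()
        if len(self.stamps) < self.apm:
            self.stamps.append(now)
            return True   # action allowed
        return False      # over budget: the bot must prioritize

limiter = ApmLimiter(apm=300)   # roughly pro-level APM
# A 10-second burst of 1000 attempted actions still can't exceed the budget.
allowed = sum(limiter.try_act(t * 0.01) for t in range(1000))
print(allowed)  # 300
```

Note that the limiter only decides *whether* an action fires; the bot still chooses *which* action, so under a cap the interesting problem becomes prioritization, exactly the micro-vs-macro tradeoff humans face.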
Now, I would not compare SC1 bots to whatever DeepMind is going to create. SC1 bots were in their majority just rule-based bots with hand-coded strategies. DeepMind will create machine learning based bot, train it with data based on thousands if not millions of replays, and test it privately, maybe hiring a professional in the process (same they did with Fan Hui 5p), and make it play itself millions of times. It's a matter of time until they get it right and they get to pick when that time is. They will not organize a match until they feel their probability of winning is significant.
This. Somehow I was expecting the implementation of mechanics to be the easy bit, compared to high-level strategic planning and tactics. Curious to see whether these will emerge by themselves, or if they will need to provide some heuristics (use drops, harass, all-in, etc.).
I know nothing about what they are trying to solve, but it would be interesting if their goal was not just to beat humans but to make a game AI that was actually fun to play.
>A lot of people here seem to be underestimating the difficulty of this problem. There are several incorrect comments saying that in SC1 AIs have already been able to beat professionals - right now they are nowhere near that level.
Mostly because no one cared enough about solving this to spend 1/100th of the resources Google will undoubtedly throw at it.
As a long-time high level SC2 player, one additional thing that makes SC2 so difficult is that the game has multiple layers of tactics and strategy that require specialized logic, but those layers also interact and synergize in a deep way.
- There is the overall strategic game of 'Who is ahead economically? Given that, should I be expanding, attacking, or defending?', with the implicit understanding that the player with the current economic advantage puts pressure on its opponent to attack
- There is a resource management and build-order system where you need to plan and optimize building as big and as effective a unit composition as quickly as possible, except there are a lot of tradeoffs: you can build a weaker army sooner, as opposed to a stronger army later
- There is a tactical micromanagement battle where small groups of units are pitted against one another, and where small tactical movements can gain very large materiel advantages. Units are relatively short ranged, so to damage or defend effectively requires effective positioning. Most armies fight better as a cohesive group ('ball'), except there are units that specifically punish and do splash damage that need individual micromanagement. Battles can take place over a short period and be over quickly, or can be long-running positional skirmishes that last for half the game, where each player is constantly probing for weakness before one finally goes for the throat.
- The economy fundamentally depends on worker units that are vulnerable to harassment, so the tactical battle requires a choice between putting everything into one large army and pushing, or splitting units into smaller groups and harassing in multiple places, or various mixes (small group to harass, bulk of army to defend, etc.)
- If keyboard and mouse action rates are capped, then at every moment in time, the player must decide whether it is more profitable to devote actions to managing the army (micro) or managing the overall economy (macro). Choosing wrongly usually results in a loss
- There is an implicit rock-paper-scissors tradeoff at the highest levels of the game: a 'greedy' strategy that cuts corners and favors economy over military will generally beat a 'safe' balanced strategy. Very aggressive strategies win against greed and generally lose against safe
- There is the ability to scout your opponent to see whether they are going greedy, safe, or aggressive, but scouting requires an early investment in units and making subtle inferences about the opponent's build order, so the choice of whether to scout and how is not a trivial one
- There can be bluffs where your opponent purposefully allows a scout of a key building, kills your scout, then cancels that building and chooses an entirely different technology instead
And all these layers interact:
- For example, if you go for an aggressive strategy, then you must commit blindly at the beginning of the game and often try to deny enemy attempts to scout you
- If you scout that your opponent's army consists of units that are faster than yours, then they generally have much higher harassment potential, which pushes you towards a defensive posture. On the flip side, your opponent can use this threat to improve their economic position instead of attacking.
There is long-term planning at the strategic, informational, and also tactical levels. Effective high-level play requires an accurate model of what your opponent is doing in an environment where it's easy for your opponent to deny acquiring that information.
I'd wager that if you took two evenly matched professional level players, and then revealed the entire map to one player but not the other, you would go from a 50% to a 95%+ win rate.
Related: Today I learned that a group of AI researchers has released a paper called STARDATA: A StarCraft AI Research Dataset. According to one of the authors: "We're releasing a dataset of 65k StarCraft: Brood War games, 1.5b frames, 500m actions, 400GB of data. Check it out!"
> Article: https://arxiv.org/abs/1708.02139
> Github: https://github.com/TorchCraft/StarData
The API Blizzard is exposing is really nice. Sadly most of the advantages AI had in SC1 were just due to the fact that an automated process could micro-manage the tasks the game didn't automate for you (a lot of boring, repetitive work). SC2 got rid of a lot of that while still allowing room for innovative and overpowered tactics to be discovered (MarineKing's insane marine micro, SlayerS killing everyone with blue flame hellions, some more recent stuff I'm sure from the newest expansions). Hopefully the API lets AIs converge on optimal resource management and get to exploring new and innovative timings, transitions, army makeups, etc.
This seems all in good fun but I wonder if it's come too late.
Starcraft 2 is at its twilight.
The biggest leagues of South Korea have disbanded. [1]
The prolific progamers who transitioned to Starcraft 2 have gone back to Broodwar. [2]
Blizzard itself has scrubbed all references to Starcraft 2 on the very home page of Starcraft. [3] Except for the twitter embed, it has only one "2" character... in the copyright statement.
My take is that the future for the Starcraft franchise will be through Remastered and potential expansion packs following it.
Starcraft 2 had a good run but, with the entire RTS genre stagnating [4], I don't think Blizzard wants to bet on anything less than the top horse.
[1] https://www.kotaku.com.au/2016/10/the-end-of-an-era-for-star...
[2] http://www.espn.com/esports/story/_/id/18935988/starcraft-br...
[3] http://starcraft.com
[4] http://www.pcgamer.com/the-decline-evolution-and-future-of-t... (aside from MOBAs)
SC2 does seem to be at its twilight in Korea, and I agree progamers and fans there are super interested in Remastered.
But I don't think Remastered will be very popular outside KR. The SC2 "war chest" promo appears to have made more money than expected, as measured by hitting its funding ceiling within a few days.
So I don't think it's "Remastered replaces SC2", I think it's a divergence into KR playing Remastered and non-KR playing SC2, and the number of progamers and players doesn't have to be zero-sum: it could enlarge the population playing either game, too.
Personally, I think focusing on BW would have been more interesting (as long as the APM limit still stands), but I guess SC2 is alright too. The fact that they're even doing this though makes me happy.
The reason I say BW would be especially interesting is simply because the game has remained basically unchanged balance-wise since v1.08 which came out in 2001. Despite that, the pro scene never left, and we're still seeing some shifts in the meta even today. It would be cool to see a strong AI flip the script completely for such an established and "well understood" game. Opportunities like that are kind of rare, at least when it comes to video games.
I disagree. I got into Starcraft recently and find it very much vibrant, both in the pro scene and casually. But that's irrelevant. The point is it's still a great AI challenge.
People were still very excited about Go even if people in the US likely didn't really play a lot of Go before AlphaGo.
It will be super good PR for DeepMind and Facebook AI Research (who are doing Broodwar).
It will probably not reanimate the pro scenes in any lasting manner, however.
It's a bit too bad they're having to move towards supervised learning and imitation learning.
I totally understand why they need to do that given the insane decision trees, but I was really hoping to see what the AI would learn to do without any human example, simply because it would be inhuman and interesting.
I'm really interested in particular if an unsupervised AI would use very strange building placements and permanently moving ungrouped units.
One thing that struck me in the video was the really weird mining techniques in one clip, and then another clip where it blocked its mineral line with 3 raised depots...
I also want to see the algorithm win on unorthodox maps. Perhaps a map they have never seen before, or one where the map is the same as before but the resources have moved.
Don't tell the player or the algorithm this, and see how both react, and adapt. This tells us a great deal about the resiliency of abilities.
When Watson won at Jeopardy, one of its prime advantages was the faster reaction time at pushing the buzzer. The fairness of that has already been hashed out elsewhere, but...
We already know that computers can have superior micro and beat humans at Starcraft through that(1). Is DeepMind going to win by giving themselves a micro advantage that is beyond what reasonable humans can do?
My understanding is that in a full match, AIs still have no hope against humans, since even though they can crush humans at micro, their macro is still abysmal [1]. I'm not aware of a match where any AI has beat a pro human player at Starcraft -- I'd be interested in learning otherwise!
That example might be misleading because I assume the AI has perfect information: I don't know how it could know which zergling was targeted before the tank fire landed without knowledge of the game's internal state.
In any case I saw in the comments above they are planning on limiting the APM. But right now they're not at the stage where they can compete with the in-game rules based AI, so it may be a little while.
Thanks for that video. That's exactly what I hope to see. AI vs. AI with insane micro capabilities. I want to see SC2 played as close to a "perfect" game as possible.
I know that, as a player, the heavy mechanical demands of Starcraft are part of why it's such a difficult, high-skill-ceiling game. But... I've tried to enjoy watching SC2 on Twitch, and while it's kinda fun, it's just so disappointing when a complicated strategic game is thrown away because a player doesn't react fast enough to workers being sniped or a drop being shot down.
I wish the individual units had some automatic behavior -- for example, marines could run in spread-out formations near tanks or banelings; workers would flee from hazards; flying units would avoid turrets unless specifically directed to fly over them. It would require a lot of rebalancing, of course, but it would make the game so much more tactical and strategic and (imo) enjoyable to watch.
That would be quite interesting, having humans handle the macro while the AI focuses on the micro. I'm reminded of "Advanced Chess": https://en.wikipedia.org/wiki/Advanced_Chess
> Advanced Chess is a relatively new form of chess, wherein each human player uses a computer chess program to help him explore the possible results of candidate moves. The human players, despite this computer assistance, are still fully in control of what moves their "team" (of one human and one computer) makes.
Is this how we are going to accidentally let AGI loose into the world!? /s
On a more realistic note I think this will degenerate into a game of who can fuzz test for the best game breaking glitch. Think of all the programming bugs that turned into game mechanics in BW that we haven't discovered for SC2 yet: http://www.codeofhonor.com/blog/the-starcraft-path-finding-h...
The SCAI bots I've seen are more hardcoded tactics engines than machine learning models. They're still impressive, but their logic isn't 'learned', it's hand-coded, which is a crucial difference.
SC1 really doesn't make sense for this, 80% of the skill is just keeping on top of the mindless but mechanically intensive stuff, which is trivial beyond trivial for an AI.
SC2 has automated away most of this (pretty much everything but production cycles), which makes it a better measure for AI vs human.
I thought this was already happening. Right after AlphaGo beat Lee, I remember hearing about it. Did they give up on having their AI play SC2? I wondered if the AlphaGo approach would even work here: in Go it seemed to take its turns at the same speed as a normal player, presumably computing the most likely winning move each turn along with its late-game implications. In a fast-paced game, how would it deal with the speed? It would obviously need to develop a set of pre-baked strategies that win it the game. Would it play the same build every round, or would it realize that changing things up each match wins it more games?
There's something funny about a company that is actively developing bleeding edge AI technology, but who can't design a webpage that works on mobile without crashing.
When I used to play a lot of StarCraft, and then later with Total Annihilation, I wished for the ability to customize the AI.
So then BWAPI came along ... and ... AI is hard. The best SCBW bots are still pretty pathetic compared to a human player, never mind an expert human player.
I'd be really interested in how differently tiered data sets (ladder rank) would work as sources for teaching.
Is it possible that training on diamond players is less effective than training on, say, silver? Is that actually even an interesting thing to look at?
[+] [-] cjbprime|8 years ago|reply
SC2 does seem to be at its twilight in Korea, and I agree progamers and fans there are super interested in Remastered.
But I don't think Remastered will be very popular outside KR. The SC2 "war chest" promo appears to have made more money than expected, as measured by hitting its funding ceiling within a few days.
So I don't think it's "Remastered replaces SC2", I think it's a divergence into KR playing Remastered and non-KR playing SC2, and the number of progamers and players doesn't have to be zero-sum: it could enlarge the population playing either game, too.
[+] [-] solicode|8 years ago|reply
The reason I say BW would be especially interesting is simply because the game has remained basically unchanged balance-wise since v1.08 which came out in 2001. Despite that, the pro scene never left, and we're still seeing some shifts in the meta even today. It would be cool to see a strong AI flip the script completely for such an established and "well understood" game. Opportunities like that are kind of rare, at least when it comes to video games.
[+] [-] gcp|8 years ago|reply
Couldn't it be the opposite? Blizzard was willing to do this release exactly because SC2 is dead?
[+] [-] Synaesthesia|8 years ago|reply
[+] [-] make3|8 years ago|reply
[+] [-] aerovistae|8 years ago|reply
Using SC2 as a starting point isn't really of much consequence. "Too late"? It's not as if the algorithms developed will die alongside the game.
[+] [-] lardo|8 years ago|reply
[+] [-] SiempreZeus|8 years ago|reply
I totally understand why they need to do that given the insane decision trees, but I was really hoping to see what the AI would learn to do without any human example, simply because it would be inhuman and interesting.
I'm really interested in particular if an unsupervised AI would use very strange building placements and permanently moving ungrouped units.
One thing that struck me in the video was the really weird mining techniques in one clip, and then another clip where it blocked its own mineral line with 3 raised depots...
[+] [-] dontreact|8 years ago|reply
[+] [-] Synaesthesia|8 years ago|reply
[+] [-] arcanus|8 years ago|reply
Don't tell the player or the algorithm this, and see how both react, and adapt. This tells us a great deal about the resiliency of abilities.
[+] [-] jmcmahon443|8 years ago|reply
[+] [-] ktRolster|8 years ago|reply
We already know that computers can have superior micro and beat humans at Starcraft through that (1). Is DeepMind going to win by giving themselves a micro advantage that is beyond what reasonable humans can do?
(1) https://www.youtube.com/watch?v=IKVFZ28ybQs as one example
[+] [-] obastani|8 years ago|reply
[1] http://spectrum.ieee.org/automaton/robotics/artificial-intel...
[+] [-] mattnewton|8 years ago|reply
In any case, I saw in the comments above that they are planning on limiting the APM. But right now they're not at the stage where they can compete with the in-game rules-based AI, so it may be a little while.
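An APM cap like that is easy to enforce with a sliding-window counter over issued actions. This is just a toy sketch of the idea; the class name, interface, and the 300 APM figure are all made up for illustration, not anything DeepMind or Blizzard have published:

```python
from collections import deque

class APMLimiter:
    """Reject actions once the agent has issued more than `max_apm`
    actions within the trailing 60 seconds of game time."""

    def __init__(self, max_apm=300):
        self.max_apm = max_apm
        self.timestamps = deque()  # game times of recently allowed actions

    def allow(self, game_time_s):
        # Evict actions that fell outside the trailing one-minute window.
        while self.timestamps and game_time_s - self.timestamps[0] >= 60.0:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_apm:
            return False  # over the cap: the action is dropped
        self.timestamps.append(game_time_s)
        return True
```

The interesting design question is what "one action" means here: a human click often maps to several API-level commands, so where the counter sits in the pipeline changes how restrictive a given cap really is.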
[+] [-] Waterluvian|8 years ago|reply
[+] [-] sidusknight|8 years ago|reply
[+] [-] jahabrewer|8 years ago|reply
[+] [-] daemonk|8 years ago|reply
[+] [-] ajkjk|8 years ago|reply
I wish the individual units had some automatic behavior -- for example, marines could run in spread-out formations near tanks or banelings; workers would flee from hazards; flying units would avoid turrets unless specifically directed to fly over them. It would require a lot of rebalancing, of course, but it would make the game so much more tactical and strategic and (imo) enjoyable to watch.
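Behaviors like these are essentially local steering rules. Here's a toy sketch of the "spread out near splash threats" and "workers flee hazards" rules, where every name, radius, and threshold is hypothetical and just for illustration:

```python
import math
from dataclasses import dataclass

@dataclass
class Unit:
    x: float
    y: float
    kind: str  # e.g. "marine", "worker", "tank", "baneling"

def dist(a, b):
    return math.hypot(a.x - b.x, a.y - b.y)

SPLASH_THREATS = {"tank", "baneling"}

def spread_vector(unit, allies, enemies, spread_radius=2.0, threat_radius=8.0):
    """If a splash threat is nearby, push away from the nearest crowded ally."""
    if not any(e.kind in SPLASH_THREATS and dist(unit, e) < threat_radius
               for e in enemies):
        return (0.0, 0.0)  # no splash danger: hold formation
    close = [a for a in allies if a is not unit and dist(unit, a) < spread_radius]
    if not close:
        return (0.0, 0.0)  # already spread out
    nearest = min(close, key=lambda a: dist(unit, a))
    d = dist(unit, nearest) or 1e-6
    return ((unit.x - nearest.x) / d, (unit.y - nearest.y) / d)

def worker_flee_vector(worker, hazards, danger_radius=6.0):
    """Workers run directly away from the nearest hazard in range."""
    near = [h for h in hazards if dist(worker, h) < danger_radius]
    if not near:
        return (0.0, 0.0)  # safe: keep mining
    nearest = min(near, key=lambda h: dist(worker, h))
    d = dist(worker, nearest) or 1e-6
    return ((worker.x - nearest.x) / d, (worker.y - nearest.y) / d)
```

The rebalancing point is real, though: rules this simple are also exploitable (e.g. a single baneling could scatter a marine ball for free), which is exactly the kind of interaction that would need tuning.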
[+] [-] neuronexmachina|8 years ago|reply
> Advanced Chess is a relatively new form of chess, wherein each human player uses a computer chess program to help him explore the possible results of candidate moves. The human players, despite this computer assistance, are still fully in control of what moves their "team" (of one human and one computer) makes.
[+] [-] arnioxux|8 years ago|reply
https://www.reddit.com/r/programming/comments/1v5mqg/using_b...
https://bulbapedia.bulbagarden.net/wiki/Arbitrary_code_execu...
Is this how we are going to accidentally let AGI loose into the world!? /s
On a more realistic note I think this will degenerate into a game of who can fuzz test for the best game breaking glitch. Think of all the programming bugs that turned into game mechanics in BW that we haven't discovered for SC2 yet: http://www.codeofhonor.com/blog/the-starcraft-path-finding-h...
[+] [-] krasi0|8 years ago|reply
[+] [-] krasi0|8 years ago|reply
[+] [-] siliconc0w|8 years ago|reply
[+] [-] Havoc|8 years ago|reply
[+] [-] yflu|8 years ago|reply
SC2 has automated away most of this (pretty much everything but production cycles), which makes it a better measure of AI vs. human.
[+] [-] convefefe|8 years ago|reply
[+] [-] Companion|8 years ago|reply
[+] [-] hacker_9|8 years ago|reply
[+] [-] unknown|8 years ago|reply
[deleted]
[+] [-] JabavuAdams|8 years ago|reply
So then BWAPI came along ... and ... AI is hard. The best SCBW bots are still pretty pathetic compared to a human player, never mind an expert human player.
[+] [-] Ntrails|8 years ago|reply
Is it possible that training on diamond players is less effective than training on, say, silver? Is that actually even an interesting thing to look at?
[+] [-] ipnon|8 years ago|reply
[+] [-] naveen99|8 years ago|reply
Then why not release the code for the built-in AI and improve on it? Or is the built-in AI cheating?