Player of Games
364 points by vatueil 3 years ago | 231 comments
- captn3m0 3 years ago If you are interested in this, I maintain a list of board-game-solving research at https://github.com/captn3m0/boardgame-research, with sections for specific games.
This looks really interesting. A good follow-up project would be to hook this up to a general card-playing framework, making it easy to evaluate on a variety of imperfect-information games based on playing cards.
- fho 3 years agoI tried my hand once or twice at (re-)implementing board games [0], so that I could run some common "AI" algorithms on the game trees.
What tripped me up every time is that most board games have a lot of "if this happens, there is this specific rule that applies". Even relatively simple games (like Homeworlds) are pretty hard to nail down perfectly due to all the special cases.
Do you, or somebody else, have any recommendations on how to handle this?
[0] Dominion, Homeworlds and the battle part of Eclipse iirc.
- anonymoushn 3 years agoDominion and Homeworlds are pretty complicated! Maybe you can start with a simpler game like Splendor.
In my 2-player Splendor rules engine, the following actions are possible:
1. Purchase a holding. (90 possible actions, one for each holding)
2. If you do not have 3 reserved cards, reserve a card and take a gold chip if possible. (93 possible actions, one for each holding and one for each deck of facedown cards)
3. If there are 4 chips of the same color in a pile, take 2 chips of that color. (5 possible actions)
4. Take 3 chips of different colors, or 2 chips of different colors if only 2 are available, or 1 chip if only 1 is available. (25 possible actions)
5. If after any action you have at least 11 chips, return 1 chip. (6 possible actions which are never legal at the same time as any other actions)
This still doesn't correctly implement the rules, though. In the actual game you'd be allowed to spend gold chips even when you don't need to, so purchasing a holding would involve extra decisions after you pick which holding to buy: namely, which chips you'd like to keep.
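For illustration, here's a minimal Python sketch of how that enumeration might look; the state dict and its keys are hypothetical, and the return-a-chip action (5) is handled as a separate decision point:

```python
from itertools import combinations

COLORS = ["white", "blue", "green", "red", "black"]

def legal_actions(state):
    """Enumerate legal actions for a Splendor-like state (a plain dict)."""
    actions = []
    # 1. Purchase a holding the current player can afford.
    actions += [("purchase", h) for h in state["affordable_holdings"]]
    # 2. Reserve a faceup card or a facedown deck, if fewer than 3 reserved.
    if state["reserved_count"] < 3:
        actions += [("reserve", t) for t in state["reserve_targets"]]
    # 3. Take 2 chips of one color, only if its pile holds at least 4.
    actions += [("take_two", c) for c in COLORS if state["piles"][c] >= 4]
    # 4. Take chips of distinct colors: 3 if possible, else as many as exist.
    available = [c for c in COLORS if state["piles"][c] > 0]
    if available:
        for combo in combinations(available, min(3, len(available))):
            actions.append(("take", combo))
    # 5. "Return a chip when over 10" triggers as a separate decision point.
    return actions
```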
- fho 3 years ago I actually played Splendor for the first time (well, three times) a while ago and honestly didn't really like it. It's a very simple game, true, but I feel there are not many decision points for me as a player, and therefore not much strategy involved. Maybe that is just my view after very few games, though.
(At the same time that probably makes it a good choice for a game implementation)
Thing is that for all my examples above I had a "good" reason to implement that specific game:
1. Dominion (shortly after it came out): to evaluate strategies to best my friends (obviously).
2. Eclipse: has a nice rock-paper-scissors type of ship combat, where you can counter every enemy build (if you have enough time and resources). Calculating the odds of winning would be interesting.
3. Homeworlds: seems to be a very fascinating game, but without any players to compete with [0] ... AI to the rescue ;-)
[0] I am aware of SDG, where I could play online, but that site is in decay mode. Getting an account involved emailing the maintainer, and the times I tried to start a game, no players showed up.
- captn3m0 3 years ago+1 to boardgame.io. It provides very good abstractions for turns, phases, players, and partial information. I’ve implemented small games with a few hours of effort, and that includes a UI.
- penteract 3 years ago It's a good set of abstractions, but I've found that the system it uses for immutability (immerjs) carries noticeable performance costs (a factor of more than 2), to the point that it was faster to make a mutable copy of almost all the game state at the start of the 'apply move' code.
- iwd 3 years agoIf you’re doing it for fun, one option is to start with a simplified version of the game. It’s faster to implement and faster to run. And you’ll get insights you can apply to the full game.
That’s what I did when I applied RL to Dominion, because the complexity of the game depends heavily on the cards you include! See part 3 of https://ianwdavis.com/dominion.html
- LeifCarrotson 3 years ago> What tripped me up every time is that most board games have a lot of "if this happens, there is this specific rule that applies". Even relatively simple games (like Homeworlds) are pretty hard to nail down perfectly due to all the special cases.
The key is to build a data-driven state machine, rather than writing logic with a bunch of 'if' statements.
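For example, a minimal sketch of rules-as-data (the rule names and state keys are made up): the special cases become table entries, and the engine is a single loop rather than nested branches.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]   # trigger predicate over the game state
    effect: Callable[[dict], dict]    # state transformation when triggered

# Special cases live in data, not in nested ifs; adding a rule is one entry.
RULES = [
    Rule("catastrophe",               # e.g. Homeworlds: 4+ same-color pieces
         applies=lambda s: s.get("same_color_pieces", 0) >= 4,
         effect=lambda s: {**s, "same_color_pieces": 0}),
    Rule("hand_limit",                # e.g. discard down to the hand limit
         applies=lambda s: s.get("hand", 0) > s.get("hand_limit", 10),
         effect=lambda s: {**s, "hand": s["hand_limit"]}),
]

def apply_rules(state: dict) -> dict:
    """Scan the rule table and apply every triggered rule in order."""
    for rule in RULES:
        if rule.applies(state):
            state = rule.effect(state)
    return state
```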
- mathgladiator 3 years agoYou are correct, but some games can yield exceptionally complicated state machines.
I designed a language for solving this: http://www.adama-lang.org/
I get all the benefits of a data driven state machine with the simplicity of a language that supports asynchronously asking users for updates.
- fho 3 years agoI am "camp Haskell", so my approach was pretty much data-driven. But what is a state machine if not a big nest of if-else statements? :-)
- nicolodavis 3 years agoYou could consider using a library like boardgame.io for this.
- fho 3 years agoI'll look into that.
- mathgladiator 3 years agoI'd appreciate you checking out my language and providing feedback. An element that helps is building a stateful server and using streams where the people behave like servers:
- JoeDaDude 3 years ago Thank you for posting! Maybe you can include the game of Arimaa [1]. Arimaa was designed to be hard(er) for computers and to level the playing field for humans. Algorithms were eventually developed, though I haven't kept up with where that stands today.
- captn3m0 3 years ago Arimaa has enough research that it's covered in the Wikipedia section[0] as well as the Chess Programming Wiki[1], which is linked in the README. I'm specifically trying to collect research on contemporary games, which is not so easily available. Chess, Go, and similar games are already well covered; research on imperfect-information games, for example, is much rarer.
- mathgladiator 3 years agoThanks for this! I'm currently designing a language for complex board games like Battlestar Galactica: http://www.adama-lang.org/
Something I found amazing: inverting the flow control, so that the server asks players questions along with a list of possible choices, simplifies the agent design tremendously. As I'm looking to retire to work on this project, I can generate the agent code and then hand-craft an AI. However, some AIs are soooo hard to even conceptualize.
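A rough Python sketch of that inversion (this is not adama-lang; the `engine` object and its methods are hypothetical): the engine drives the game, and agents only ever answer questions.

```python
import random

def random_agent(question, choices):
    """The simplest possible agent: it only ever picks from offered choices."""
    return random.choice(choices)

def run_game(engine, state, agents):
    # The engine owns the flow: it decides whose turn it is, enumerates the
    # legal choices, and asks. Agents never need to know the rules.
    while not engine.is_over(state):
        player = engine.to_move(state)
        question, choices = engine.next_decision(state)
        pick = agents[player](question, choices)
        state = engine.apply(state, pick)
    return engine.result(state)
```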
- majani 3 years agoImperfect information games will always have a luck element that gives casual players an edge. That's basically the appeal of card games over board games.
- ketzo 3 years agoAnd why so many board games incorporate decks/hands of cards.
- mathgladiator 3 years agoNot just luck but deception as well which takes some games to new levels.
- alper111 3 years agoThis looks very good, thanks.
- sdenton4 3 years agoThis is clearly part of DeepMind's long-game plan to achieve world domination through board game mastery. Naming the new algorithm after the book is a real tip of their hand...
- sillysaurusx 3 years agoThe abbreviation is PoG too. I bet that was totally on purpose. At least one person in Brain is a dota player, so you better believe they watch twitch.
Funny that most of the comments are about the name. What an excellent choice.
- chrisweekly 3 years agoPSA: The "Culture" novels by Iain M Banks are fantastic and can be read in any order. "Player of Games" was the 1st one I read and still probably my favorite.
- bduerst 3 years agoPlayer of Games is the second book, and the one I recommend people start The Culture series with.
The first book Consider Phlebas isn't bad, but it isn't as well developed as the rest of the series IMO.
- hesperiidae 3 years agoIt's a great starting point, since not only is the story both fun and interesting, but it also shows what the Culture's values and methods are in a very satisfying way by juxtaposing them against the Empire through the tournaments of the latter's own game.
- bewaretheirs 3 years ago I keep hearing recommendations for the Culture books, so I tried reading one recently and it just didn't work for me -- I gave up on it halfway through, which is rare for me.
- Jordanpomeroy 3 years ago They are a slow burn, but the payoff always justifies it with these novels. If you really did make it halfway, I'd encourage you to go back, finish it, and reserve judgment until then.
- mrslave 3 years agoMe too with Consider Phlebas. Then I hit the Alastair Reynolds novels pretty hard and now I'm stuck for new material. Dune is en vogue so perhaps that's the right read next? I really enjoyed Vernor Vinge's A Deepness in the Sky but couldn't quite get into A Fire Upon the Deep but it still sits on my shelf taunting me.
- pault 3 years agoWhich one? They each have a unique feel and setting.
- kmtrowbr 3 years agoYes! I love this one. It's my favorite too.
- 7thaccount 3 years agoPretty amazing book. I wish I could play a board game like that as well.
- automatic6131 3 years ago I always imagine the board game as essentially being Sid Meier's Civilization, but really, really good in an indescribable way - with some card games in between.
- 0_gravitas 3 years agoI believe Banks himself said that he used to play Civ and took some inspiration from it
- stavros 3 years agoI second this, it was excellent. I've only read a few Banks books, but this was my favorite.
- arvinsim 3 years agoI started with Consider Phlebas but stopped because it seems too slow for me.
Does it get better in the later chapters?
- WithinReason 3 years ago"In 2015, two SpaceX autonomous spaceport drone ships—Just Read the Instructions and Of Course I Still Love You—were named after ships in the book, as a posthumous tribute to Banks by Elon Musk"
- omnicognate 3 years agoShame they didn't go with Pure Big Mad Boat Man.
- 6510 3 years agoThe end game is pinball and we are the balls.
- zeristor 3 years agoWe are the pins.
- sfkgtbor 3 years agoI really like seeing references to the Culture series when naming things:
- CobrastanJorji 3 years agoAllusions are fun and all, but I disagree. These are important problems that a lot of people have put their whole careers into researching. Silly names like these lack gravitas.
- sjg1729 3 years agoAlways sad to see these projects suffer from A Shortfall of Gravitas
- auggierose 3 years agoI see what you did there :-)
- NoGravitas 3 years agoGravitas? What Gravitas?
- 314 3 years agoIndeed they would do well To Consider The Lack Of Gravitas.
- moritonal 3 years ago Sorry, to explain the joke: the ships name themselves, and when they pick jokey names they're often mocked by the humans (who are in every way essentially ants next to the spaceships) for not having enough gravitas. So the ships start naming themselves things like the "Death-ray 9000 super-killer deluxe", essentially to take the piss.
Funnily enough, you can see the exact same effect among principal game engineers or in computer-hacking circles.
- robbie-c 3 years agoI believe the user you are replying to was also joking, given that many of Banks' ship names reference the g-word
Edit: if not that's even more amusing
- ZeroGravitas 3 years agoVery little Gravitas Indeed.
- 0_gravitas 3 years ago Ah, so it's __you__ who took that one
- 0_gravitas 3 years agoindeed
- gremloni 3 years agoIf anything the caliber and lore of the series gives the project an incredible amount of gravitas. Plus the scheme is just plain beautiful in my opinion.
- lacker 3 years agoYou may find this Iain Banks interview enjoyable. TLDR search for "gravitas" ;-)
https://www.theguardian.com/books/2000/sep/11/iainbanks-scie...
- doctor_eval 3 years agoI suppose it's better than "Use of Weapons".
- OneTimePetes 3 years agoWhy not have a seat, take that chair over there.
- _0ffh 3 years ago One of the best, and executed to perfection! You can sort of see the point coming for a long, long time in the book, as he gradually builds the suspicion by dropping the occasional hint here and there, but always in such a way that it remains highly uncertain speculation until he drops the reveal. Just the right balance between "How should I have suspected that?" and "Those hints were too much on the nose!".
- dane-pgp 3 years agoI think it is also a reference to "PogChamp", although it's disappointing that PoG apparently wasn't evaluated against the Arcade Learning Environment (ALE) corpus of Atari 2600 games.
- abledon 3 years agomuch more refined to think a spam of "POG!" stands for Player of Games when reading twitch chat
- Borrible 3 years agoBanks should have named one of Culture's General System Vehicles 'Don't be Evil'.
- hoseja 3 years agoKinda ironic since in the novel, a human player is better than the strong AI (albeit a little inexplicably).
- pharmakom 3 years agoNo he is not, but AIs are not allowed in the competition the story centers around.
- hoseja 3 years agoNear the end of the competition, as he is deep in his analysis, the light craft AI gives up on helping him since it gets overwhelmed. Granted it's not a full Culture Mind (kinda hazy, been a while) but still a point for the meatbag.
- 7thaccount 3 years ago I thought the protagonist wasn't nearly as talented as the Culture AIs (even the ones that are not all that powerful)?
- thom 3 years agoIs that clear from the text? Gurgeh supposedly perceives the result of the last game before the AIs so we’re led to believe he’s seeing deeper. Obviously he could have been wrong and still won. The AIs lied to and manipulated him the entire time so it’s hard to know, but it would seem a very odd weakness for an AI to have. I think Banks pretty quickly recanted on the subject of the Culture’s ‘referrers’ but I don’t think he plays a full Mind, so it’s not a clear cut conversation.
- hoseja 3 years agoI don't think a full Culture Mind is present but he outstrips his spacecraft's ability to help him with preparation in later stages of the competition. I clearly remember this.
- fxtentacle 3 years agoThis is a great result, but you can see that it's more of a theoretical case because of this: "converging to perfect play as available computation time and approximation capacity increases." That is true for pretty much all current deep reinforcement learning algorithms.
The practical question is: how much computation do you need to get useful results? AlphaGo Zero is impressive mathematics, but who is willing to spend $1M daily for months to train it? IMPALA (another Google one) can learn almost all Atari games, but you need a head node with 256 TPU cores and 1000+ evaluation workers to replicate the timings from the paper.
- sillysaurusx 3 years agoYou often don't need anywhere near the amount of compute in these papers to get similar performance.
Suppose you're a business that needs to play games. Most people seem to think that it's a matter of plugging in the settings from the paper, buying the same hardware, then clicking a button and waiting.
It's not. The specific settings matter a lot.
But my main point is that you'll get most of your performance pretty rapidly. The only reason to leave it running for so long is to get that last N%, which is nice for benchmarks but not for business.
DeepMind overspends. Actually, they don't; they're not paying anywhere close to the price of a 256 core TPU. (Many external companies aren't, either, and you can get a good deal by negotiating with the Cloud TPU team.)
But you don't need a 256 core TPU. Lots of times, these algorithms simply do not require the amount of compute that people throw at the problem.
On the other hand, you can also usually get access to that kind of compute. A 256 core TPU isn't beyond reach. I'm pretty sure I could create one right now. It's free, thanks to TFRC, and you yourself can apply (and be approved). I was. https://sites.research.google/trc/
It kills me that it's so hard to replicate these papers, which is most of the motivation for my comment here. Ultimately, you're right: "How much compute?" is a big unknown. But the lower bound is much lower than most people (including most researchers) realize.
- fxtentacle 3 years agoMy personal experience was the opposite. I'm currently trying different approaches for building a Bomberman AI for the Bomberland competition that was discussed here on HN a few weeks ago.
"IMPALA with 1 learner takes only around 10 hours to reach the same performance that A3C approaches after 7.5 days." says the paper, but I can run A3C on a cheap CPU-only server but to get that IMPALA timing, I need to spend a lot of money. But my biggest roadblock so far is that I need compute far exceeding what the papers claim.
The diagrams for IMPALA show good performance starting at 1e8 environment frames and excellent performance at 1e9 frames. By now, I'm at 2.5e9 frames and performance is still bad. In my opinion, the reason is that the required action sequences in Bomberland are quite long. To clear a path, you place a bomb, wait 5 ticks for it to become detonatable, then detonate it, then wait 10 ticks for the fire to clear. With 7 possible actions per tick, the chance of randomly executing this 17-tick sequence becomes (1/7)^17 = 4e-15. If I optimistically allow 5 of the 7 actions to count as valid during the waiting ticks, I can get up to (1/7) * (5/7)^5 * (1/7) * (5/7)^10 = 1e-4. But that still means that at 1e8 env steps, I only have about 1000 successful executions to learn from.
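A quick sanity check of that arithmetic, under the same assumptions (7 actions per tick, 5 of them acceptable while waiting):

```python
# 17-tick sequence: place bomb (1 tick), wait 5, detonate (1), wait 10.
exact = (1 / 7) ** 17
print(f"{exact:.1e}")        # ~4.3e-15: every tick must pick the one right action

# Optimistic variant: 5 of the 7 actions are acceptable during the waiting ticks.
optimistic = (1 / 7) * (5 / 7) ** 5 * (1 / 7) * (5 / 7) ** 10
print(f"{optimistic:.1e}")   # ~1.3e-4
```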
- Javantea_ 3 years ago I don't have a lot of experience with IMPALA, but the sequence of events you describe should be very easy for an end-to-end system. Assuming you don't have an end-to-end system, just getting a gradient would result in rapid learning of that sequence. I'm surprised that at 2.5e9 frames you're not done. Perhaps there is a hyperparameter issue. Sorry I can't help, but it sounds like you are in the same place I am with my ML project. Good luck.
- iwd 3 years agoNot an expert, but I believe many papers on other video games make a single decision for the next X frames at once, possibly including a delay factor that governs exactly when to act. I think OpenAI’s Dota2 agent does this.
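A minimal sketch of that decide-once-per-X-frames idea, assuming the Gymnasium wrapper API (real agents, e.g. OpenAI's Dota 2 one, use more elaborate schemes with learned delays):

```python
import gymnasium as gym

class ActionRepeat(gym.Wrapper):
    """Commit to each chosen action for `k` consecutive environment steps."""

    def __init__(self, env, k=4):
        super().__init__(env)
        self.k = k

    def step(self, action):
        total = 0.0
        for _ in range(self.k):
            obs, reward, terminated, truncated, info = self.env.step(action)
            total += reward                    # accumulate reward over the window
            if terminated or truncated:
                break
        return obs, total, terminated, truncated, info
```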
- ericd 3 years ago Hm, not an expert in this, but would something with a world model help, rather than depending on random action exploration? It seems like it should be possible to learn that a frame sequence where you've been next to a bomb for 6 ticks is rapidly decreasing your expected score, and that your score would be significantly better if you got out of line with the bomb soon.
- loxias 3 years ago My thoughts, not being in the field, are parallel to the parent post: "It's nice and all that we're achieving better and better computer performance at things that used to require the human brain, but it seems we're doing so by building larger and larger computers." Not to detract from that achievement - I love large computers in their own right!
I'm a dabbler in Go, and "somewhere below professional" at the game of poker. I've followed the advances in the latter for more than a decade, eagerly reading every paper the CPRG publishes. They use a LOT of compute power!
I know from experience that "The specific settings matter a lot.". For several years, I made my living "implementing papers for hire". It's real work, no argument there. Sometimes the settings are the solution, and heck, sometimes the published algorithm is outright wrong, and you only discover so when trying to implement it.
But the second part of your point, that it's not simply achieving more performance by throwing more transistors at it, I don't have experience with, and I sorta don't believe you. :)
Your comment is quite well written, making me (irrationally?) predisposed to suspect you're correct on factual matters, or at least more of a domain expert than I. Can you cite sources, or simply elaborate more?
- fault1 3 years ago> "The specific settings matter a lot.".
Yes, and in the case of deep RL, the ability to to get "lucky" random initialization seems to (still) matter a lot.
I work in real time control systems, which are roughly decision making under uncertainty problems. A lot of the RL research has become noise buoyed with large marketing budgets.
- gwern 3 years ago> That is true for pretty much all current deep reinforcement learning algorithms.
Is that true? I was unaware that PPO, SAC, DQN, Impala, MuZero/AlphaZero etc would all automatically Just Work™ for hidden information games. Straight MCTS-inspired algorithms seem like they'd fail for reasons discussed in the paper, and while PPO/Impala work reasonably well in DoTA2/SC2, it's not obvious they'd converge to perfect play.
- fxtentacle 3 years agoYou can mathematically prove for a lot of different algorithms (including PPO, DQN, IMPALA) that given enough experience with the game world, they will eventually converge to the optimal policy. It's just that the "enough experience" part might be so large that it's practically useless.
If I remember correctly, the DeepMind x UCL RL Lecture Series proves the underlying Bellman equation in this video: https://www.youtube.com/watch?v=zSOMeug_i_M
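For reference, the Bellman optimality equation those convergence arguments build on, in its standard form:

```latex
v_*(s) = \max_{a} \sum_{s',\, r} p(s', r \mid s, a)\,\bigl[\, r + \gamma\, v_*(s') \,\bigr]
```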
As for "hidden information" games, I thought the trick was to concatenate the current state with all past states and treat that as the new state, thereby making it an MDP.
- gwern 3 years agoI don't think you can prove that (forgive me if I don't sit through a 2h video). Those all are susceptible to the deadly triad, and AFAIK there are no convergence proofs of any kind for the big model-free DL algs, and it would've been big news if someone had proved that a real-world version of PPO/DQN/IMPALA does in fact converge in the limit. Sutton's book and earlier proofs only cover cases where you drop the nonlinear approximator or something.
(History stacking may turn POMDPs into MDPs, but I don't know if they handle the specially adversarial nature of games like poker. That's quite different from stacking ALE frames.)
- tsbinz 3 years ago Comparing against Stockfish 8 in a paper released today and labeling it as "Stockfish" borders on dishonest. The current Stockfish version (14) would make AlphaZero look bad, so they don't include it ...
- dontreact 3 years agoThe name of the game here is generality. For a really general agent, they are looking to have superhuman performance, not get state of the art on every individual task. Beating stockfish 8 convinces me that it would be superhuman at chess.
- remram 3 years agoThey could still be honest that it's Stockfish 8, not the Stockfish everyone has. Your product having genuine value does not excuse lying about that value.
- Skyy93 3 years ago I've observed this kind of behavior in many papers nowadays. It's extremely painful for research, because better candidates can be overlooked, and FAANG publishes the majority of ML papers. It's a mess.
- ShamelessC 3 years agoThey were? They say they use Stockfish 8 the very first time they mention it.
- ShamelessC 3 years agoThe first mention says "Stockfish 8, level 20" in the paper. This isn't a blog post that you can skim, you need to read the whole thing before critiquing.
- karpierz 3 years ago That's actually the second mention; the first is when they introduce the games in section 4:
> Today, computer-playing programs remain consistently super-human, and one of the strongest and most widely-used programs is Stockfish.
They also go back to referring to it as Stockfish for the rest of the paper.
An analogous situation in my mind would be if AMD released a new CPU and benchmarked it against an Intel CPU, only mentioning once, somewhere in the middle of the paper, that it was a Pentium 4.
- Vetch 3 years ago This sort of evasiveness about method limitations (downplaying or de-emphasizing related work while boosting the senior authors' previous work) is standard academic fare. It's partly a strategy against novelty nitpickers, and it results in a net negative for all.
I also suspect part of the reason they chose Stockfish 8 was as a basis of comparison with AlphaZero. Their baselines for Go and poker are also pretty weak so their emphasis is clearly on displaying generality and reduced domain specialized input, not supremacy.
A single algorithm that plays both perfect- and imperfect-information games is difficult to achieve. Standard depth-limited solvers and self-play RL result in highly exploitable agents. PoG appears to be very strong at chess, decently strong at Go, and decent at poker (Facebook AI's ReBeL, the strongest prior work in this area, performed better against Slumbot). What's unique about PoG is its ability to also play an imperfect-information game (Scotland Yard) with many rounds and a relatively long horizon (although it still has scaling issues).
- ska 3 years ago> An analogous situation
It really isn't though. Technical papers have conventions, and they following them reasonably. You expect the methods description to be specific, the abstract not to be hyperbolic, and conclusions to be balanced. The general discussion parts are just that, general.
In the methods area they discuss the exact versions and parameters used, and how they compared them.
In the conclusions:
> In the perfect information games of chess and Go, PoG performs at the level of human experts or professionals, but can be significantly weaker than specialized algorithms for this class of games, like AlphaZero, when given the same resources.
It would perhaps have been interesting to include a more recent Stockfish, but it wouldn't really impact the paper.
- ShamelessC 3 years ago> Today, computer- playing programs remain consistently super-human, and one of the strongest and most widely-used programs is Stockfish.
This is just a general effort to describe the present state of things. When they explicitly describe their evaluation process, they are sure to use the version number. They then _immediately_ drop the version number in subsequent usage, which is culturally standard in research papers, so they don't have to concern themselves with minute details of every single thing they redescribe. Believe me, you don't want to read the verbose version of this paragraph.
> In chess, we evaluated PoG against Stockfish 8, level 20 [81] and AlphaZero. PoG(800, 1) was run in training for 3M training steps. During evaluation, Stockfish uses various search controls: number of threads, and time per search. We evaluate AlphaZero and PoG up to 60000 simulations. A tournament between all of the agents was played at 200 games per pair of agents (100 games as white, 100 games as black). Table 1a shows the relative Elo comparison obtained by this tournament, where a baseline of 0 is chosen for Stockfish(threads=1, time=0.1s).
- ahefner 3 years agoI'd be interested to see that benchmark. A ~3 GHz Pentium 4 sounds like a good reference point for single threaded performance since it's a reasonably modern OoO microarchitecture and reflects the moment that clock scaling stopped.
- tsbinz 3 years ago I obviously read it, otherwise I wouldn't have known which version they are using. They are banking on others, who just skim the figures and tables, not noticing their use of outdated baselines.
- dontreact 3 years agoI honestly don’t care what version of stockfish they used and neither does most of their intended audience, for the reasons I stated.
- david_draco 3 years ago Isn't the point comparing traditional heuristic techniques against DNN-learned techniques? I understand the latest Stockfish is inching quite close to AlphaZero techniques, but maybe I am wrong.
- tsbinz 3 years ago It does have the option to use a neural network (NNUE) in its evaluation, but that is very different from what AlphaZero/Lc0 do. You can choose not to use it, so you could still have a "traditional" evaluation (which would still blow Stockfish 8 out of the water). Also, Stockfish 8 isn't even the last version before they merged NNUE ...
- moondistance 3 years agoThe abstract clearly states that the best chess and Go bots are not beaten: "Player of Games reaches strong performance in chess and Go, beats the strongest openly available agent in heads-up no-limit Texas hold’em poker (Slumbot)..."
- nixed 3 years ago The same goes for Slumbot in poker: it's super old (2013), the game is played completely differently now, and current bots would destroy it.
- bluecalm 3 years agoThe problem with poker is that there is money to be made from having a strong AI so there is 0 incentive to release it. What's publicly available are solvers (which solve game abstractions similar to the full game but don't play themselves) and shitty bots.
- scrozart 3 years agoAs a commenter above noted, this work is about generality, being able to play every game, and not being the best at every game.
- seoaeu 3 years ago The abstract claims they beat the "strongest openly available agent in heads-up no-limit Texas hold'em poker". To a non-expert, that certainly sounds like they're claiming to be the best.
- Skyy93 3 years ago As noted before, the reason for including old tech is to look better. Why not mention the current state of the art and show that a general player can come close to those results?
This is just benchmark cherry picking and does not reflect real performance or comparison.
- hervature 3 years ago I think this is a good step forward that generalizes one algorithm to play both perfect- and imperfect-information games. However, Table 9 shows (I believe it does; it's not in the most intuitive form) that other AIs (DeepStack, ReBeL, and Supremus) eat its lunch at poker. It also performs worse than AlphaZero at perfect information games. So, while a nice generalizing framework, it probably will not be what you use in practice.
- SuoDuanDao 3 years agoI didn't even know about the book until I read the comments here, I thought it was a reference to the Grimes song. Funny coincidence the song and the engine would appear so close in time to one another.
- Severian 3 years ago The Grimes song is a reference to the book too. She also has Marain subtitles in her video for "Idoru"; Marain is the language used in The Culture. A weird mix of two authors' works (Idoru being William Gibson's), to be sure.
- ArtWomb 3 years agoThis seems like a significant milestone in AI. I mean what can't an agent with mastery of "guided search, learning, and game-theoretic reasoning" accomplish?
- ausbah 3 years agomodeling every task as a game seems like a big hurdle, or even just getting a working "environment"
- WilliamDampier 3 years agoso this is what Grimes latest song is about?
- 323 3 years ago> All the lyrical evidence that Grimes’ new song ‘Player of Games’ is about ex Elon Musk
> Grimes seemingly makes multiple, thinly veiled references to Musk in the song
https://www.independent.co.uk/arts-entertainment/music/news/...
- cwkoss 3 years agoSpaceX's landing pad barges are also named after Culture series starships
- junon 3 years agoYeah wtf, was my first thought. This is mind blowing if true.
- junon 3 years agoActually she probably got the name from the sci-fi novel this is named after:
- pixelpoet 3 years ago Anyone else surprised to see that Demis Hassabis didn't have a hand in this research, given his background as a player of many games and his involvement in a lot of their research?
- thomasahle 3 years ago I'm more surprised David Silver isn't on it, since his background is in imperfect information games, with papers such as https://arxiv.org/abs/1603.01121 . He did multiple poker papers before he was the main author of AlphaZero.
- BeenChilling 3 years ago I want to see DeepMind make a bot that plays team-based first-person shooters like CS:GO and Rainbow Six Siege, and stack five of them up against a team of professional players.
- fho 3 years ago Honestly, that probably won't be too interesting, as (a) one AI could perfectly control several agents (i.e., perfect coordination of global strategies) and (b) an AI has little to no reaction time and perfect aim (aimbots already have that), so I would expect it would quickly turn into a slaughterfest.
- arlort 3 years ago What would be interesting would be 5 independent AIs (even just different instances of the same AI, of course) using the same interface as human players: the same controls and the same video output.
I am pretty sure aimbots access the internals of the game rather than reading the video output to identify the silhouette of the enemy.
- ausbah 3 years ago IIRC multi-agent domains are in their own category specifically because a single agent posing as "multiple agents" usually can't solve such environments; you need multiple agents with varying degrees of dependence.
- gverrilla 3 years ago The same applies to Dota 2, and what they did there was very interesting. But yeah, first they would need to simulate how human players react and aim, or it would be impossible to play against.
- LudwigNagasena 3 years ago(a) make them independent (b) add 100-200ms delay
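A sketch of (b), assuming a tick-based interface: buffer observations so the agent always sees the world a fixed number of ticks late.

```python
from collections import deque

class ReactionDelay:
    """Show the agent observations from `delay_ticks` ago.
    On a 64-tick server, delay_ticks=10 is roughly 150 ms."""

    def __init__(self, delay_ticks, initial_obs):
        self.buf = deque([initial_obs] * delay_ticks)

    def observe(self, obs):
        self.buf.append(obs)         # newest frame goes in ...
        return self.buf.popleft()    # ... the agent sees the stale one
```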
- arethuza 3 years ago"...such consummate skill, such ability, such adaptability, such numbing ruthlessness, such a use of weapons when anything could become weapon..."
- ausbah 3 years ago That's what OpenAI did a couple of years ago with Dota 2.
- mensetmanusman 3 years agoThey probably won’t for publicity reasons.
- skinner_ 3 years agoIt would be awesome to have two interacting communities: AI experts building open source general game playing engines, and gaming fans writing pluggable rule specifications and UIs for popular games.
A bit of googling shows that there is a General Game Playing AI community with their own Game Description Language. I never really encountered them before, and the DeepMind paper does not cite them, either.
- dpflug 3 years agoLast I looked, the GGP community is focused on perfect information games currently. I had the same thought, though.
- cab404 3 years agoSCP-like name for SCP-like neural network.
"SCP-29123 Player Of Games"
- wiz21c 3 years agoCouldn't resist :
- antonpuz 3 years ago Does anyone know whether the agent is publicly available?
- simonebrunozzi 3 years agoCan this be realistically used by game companies to provide a much better AI experience for strategy games?
- bkartal 3 years ago Impressive work! Most authors, if not all, are from DeepMind's Edmonton office.
- crhutchins 3 years ago I'll try to take a closer look at this one.
- RivieraKid 3 years agoWow, it can beat a good poker bot, that is impressive.
- loxias 3 years agoPsh, wake me when it can play Mao. ;)
- wly_cdgr 3 years agoThe future is so depressing
- wetpaws 3 years ago Fun fact: the consensus among professional Go and chess players is that the new AI systems (AlphaGo, etc.) have really revitalised the games and introduced an incredible amount of new strategies and depth.
- loxias 3 years ago I wish AlphaGo were more "democratized" -- that is to say, I have many questions and experiments I'd love to run on it (a friend of mine and I have frequently pondered Go played in various topological spaces, and I'd love to see an AI's results, for example).
- kadoban 3 years ago Look into KataGo. It's an open source AI in the same general style as AlphaGo, with an emphasis on training speed. On 9x9 you can get up to superhuman really quickly on just a decent home machine (I think hours/days; I can't remember exactly, and it's probably improved since I looked).
- franknstein 3 years agoFun idea. Did you reach any interesting conclusions?
- jart 3 years agoSad fact: Lee Sedol retired after AlphaGo defeated him.
- wly_cdgr 3 years agoYeah, whatever. As someone who grew up playing chess and is almost certainly much better at it than you, this future sucks
- mudlus 3 years ago Yawn, show me a computer that can make fun games.
- TaupeRanger 3 years agoYou're getting downvotes but honestly I agree. Who cares about board games? We should've moved on from this once we "solved" chess and Go. There are more important things and it's not remotely surprising that a computer can beat a human when there's a simple, abstract optimization problem to throw computing power at. Make it creative...now that's a challenge worthy of the top AI talent.
- newswasboring 3 years agoI agree. I have always wondered if I can feed GPT-3 a bunch of rule books and ask it to generate game rules.
- kadoban 3 years ago You haven't seen AlphaGo play Go, then; it plays creatively as hell at points.
- TaupeRanger 3 years agoIt might play creatively, but it doesn't create any useful knowledge by doing so, making it kind of amusing but not the kind of creativity anyone is really interested in.
- Buttons840 3 years ago Solving the game comes before solving for fun. If we create an AI that can win, then we can hamper the AI in fun ways, or give it an altered objective function that maximizes the player's fun.
- mbrodersen 3 years agoYes indeed. AI research will only take a real step forward when it learns how to be creative instead of just very good at optimising simple formal systems like board games.
- baq 3 years agoif making games is a game...