Saturday, March 12, 2016

AI Triumphs As Google's Go Program Beats Human World Champion

It was nearly 20 years ago that IBM's Deep Blue chess-playing program defeated world chess champion Garry Kasparov. Now AlphaGo, Google DeepMind's Go-playing program, has defeated Lee Se-dol of Korea in the first game of a five-game match. But the reports have it that this first game involved a pretty decisive outcome, so probably this is it and AI has triumphed over humans yet again.

It is a sign of how much more complicated Go really is than chess that it has taken this long for this outcome. By most accounts the program relies on deep neural network learning systems far beyond what has been done before. In a way it is curious, in that the rules of Go are much simpler than those of chess: black and white stones are placed on the intersections of a 19 by 19 board in an effort to securely surround territory, in contrast with chess, where the pieces have different powers and functions, although on a smaller board. But there are far more possible moves and games in Go, making it much harder to use the brute-force search methods used in other game-playing programs.
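To give a rough sense of the scale involved, here is a back-of-envelope comparison. The figures are commonly cited estimates, not numbers from this post: an average branching factor of about 35 for chess versus about 250 for Go, and typical game lengths of roughly 80 and 150 moves.

```python
import math

# Commonly cited rough figures (assumptions for illustration):
# average branching factor and typical game length for each game.
GAMES = {
    "chess": {"branching": 35, "moves": 80},
    "go": {"branching": 250, "moves": 150},
}

for name, g in GAMES.items():
    # Game-tree size is roughly branching^moves;
    # report its order of magnitude via the base-10 logarithm.
    log10_size = g["moves"] * math.log10(g["branching"])
    print(f"{name}: roughly 10^{log10_size:.0f} possible games")
```

The chess figure comes out near the well-known Shannon estimate of about 10^120; the Go figure is hundreds of orders of magnitude larger, which is why brute-force search alone was never going to work.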

I do not claim any great expertise in the game, although relative to most people I am probably better at it than I am at chess, where I have now been beaten by two of my grandsons (they have not yet been able to beat me in Go). I never could beat my old man at it, although I once gave him a serious run for his money. In any case, when I played against a relatively crude program some years ago, it slaughtered me. So I have my limits.

Nevertheless, I thought I might give a picture of how subtle the strategies in the game are by recounting a story my late father told me from his days as a grad student in math at Princeton in the early 1930s, a time when the place was crawling with some truly brilliant people. A Go master from Japan came and played against a group of the math grad students (Go has long been popular among brilliant math types like John Nash). At first the grad students were ahead, but as the game progressed the Go master gradually caught up, finally winning by precisely 8 stones (not a lot). Later someone found out that if a player is far superior to another, he will win by exactly that amount, which represents the Eightfold Path of Buddhism, supposedly so as not to humiliate the opponent, although clearly it does so all the more thoroughly. In any case, that a computer program can now beat someone who is probably capable of pulling that off is quite an achievement.

No, I am not going to go off on some Luddite rambling about the fall of humanity or whatever. Yeah, maybe we are facing some long-term issue of ever more capable robots replacing humans in the work force, but I do not know if this is the final straw in that or not. I am not prepared to follow Robin Hanson or others into believing in The Singularity, when computers simply take over everything and totally replace us, but who knows? I do not. Maybe I need to install a deep learning neural network system in my brain...

Barkley Rosser

6 comments:

Sandwichman said...

"Now, Google's Go playing program has defeated Lee Se-dol of Korea in the first of several rounds of playing."

Correction: the programmers who wrote the code defeated the champ.

rosserjb@jmu.edu said...

They were not playing. It was the program that was playing, and it was able to defeat him only after a long period of learning how to play.

Heck, S-man, I thought you were going to get on my case over my remarks about robots. :-)

rosserjb@jmu.edu said...

Whew! I was afraid I was going to get excoriated for implying incorrect things about the lump of labor fallacy!

In terms of the substantive issue: while the original programmers set it up, there appears to be a dramatically new system of self-learning built on top of the basic program, with the system partly learning how to play by playing against versions of itself. What Lee is playing against is something that has developed very far beyond what was written in the original code, and far beyond anything in the minds of those who wrote that code, except as a possibility. It is not like people playing a piece of music.
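To make the self-play idea concrete, here is a toy sketch of my own, not AlphaGo's actual method (which combines deep neural networks with Monte Carlo tree search): a tabular value function learns the simple game of Nim purely by playing games against itself and nudging its position estimates toward the observed outcomes.

```python
import random

# Toy illustration of learning by self-play, on the game of Nim:
# one pile of stones, each player takes 1-3 per turn, and whoever
# takes the last stone wins. All names and parameters here are
# invented for the demo.

PILE = 21        # starting pile size (arbitrary choice)
ACTIONS = (1, 2, 3)
value = {}       # value[pile] = estimated win chance for the player to move

def choose(pile, epsilon=0.1):
    """Mostly pick the move leaving the opponent the worst position."""
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < epsilon:
        return random.choice(legal)   # occasional exploration
    return min(legal, key=lambda a: value.get(pile - a, 0.5))

def self_play_game(alpha=0.2):
    """The program plays both sides of one game, then updates its values."""
    pile = PILE
    history = []                      # positions faced by the player to move
    while pile > 0:
        history.append(pile)
        pile -= choose(pile)
    # The side that took the last stone won; walking backwards, the
    # positions alternate between the winner's turns and the loser's.
    for i, p in enumerate(reversed(history)):
        outcome = 1.0 if i % 2 == 0 else 0.0
        v = value.get(p, 0.5)
        value[p] = v + alpha * (outcome - v)

random.seed(0)
for _ in range(20000):
    self_play_game()

# Piles that are multiples of 4 are theoretical losses for the player
# to move; after self-play the learned values come to reflect that.
print({p: round(value[p], 2) for p in (4, 5, 8, 9)})
```

The point of the sketch is only that nothing beyond the rules and its own games is fed to the learner; AlphaGo adds human game records, deep networks, and tree search on top of this basic idea.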

mike shupp said...

I am waiting for Google to enter the Tour de France.

Algosome said...

Although the software incorporates significant advances, there are no fundamental breakthroughs here. It is more like the development of James Watt's steam engine compared to previous steam engines. Playing against itself has been a standard method for developing game-engine skill since the beginning. Deep neural nets have been considered since neural nets were invented, but they were infeasible because they required too much data and too much computing power. Deep Blue, the first superhuman chess computer, had 32 CPUs and 512 special-purpose cores. AlphaGo's competition configuration had 1,202 CPUs, each more than 10X as powerful as Deep Blue's, and 176 GPUs, with the number of compute cores unspecified, but 300 cores/GPU is two-year-old technology, nowhere near state of the art.

It's important to recall the panics when steam-powered railroads first appeared. I learned the legend of John Henry so long ago that I can't recall the context any more. This is simply another in a long line of unequivocal demonstrations that humans are not the best at absolutely everything.

We did not evolve to solve board games. I'll start worrying (just a bit) when the first robot comes back from a year-long solo "walkabout" in the Australian outback. Mars might do, as well. When it goes out with a band of friends and they come back with more solar cells than they started with, then things will get interesting.

rosserjb@jmu.edu said...

For those not following: the human, Lee Sedol, beat the program in the fourth game of the five-game match. He reportedly made an astoundingly brilliant 78th move that blew the poor computer away.

We shall see.