New York, March 9 (IANS) In a new feat, Google-run artificial intelligence (AI) programme “AlphaGo” has defeated legendary player Lee Se-dol in Go — a complex Chinese board game that is considered the “quintessential unsolved problem” for machine intelligence.
The win came in the first game of the five-game series currently being held in Seoul, South Korea. The tournament, the “Google DeepMind Challenge Match”, started on March 8 and will conclude on March 15.
“Lee resigned after about three and a half hours, with 28 minutes and 28 seconds remaining on his clock. The series is the first time a professional 9-dan Go player has taken on a computer, and Lee is competing for a $1 million prize,” The Verge reported on Wednesday.
Go, a game of profound complexity, is played by more than 40 million people worldwide. The number of possible positions in the game is said to exceed the number of atoms in the universe.
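The comparison can be made concrete with a common back-of-envelope bound (the figure of 3^361 is the standard upper bound used in such comparisons, not taken from the article):

```python
# Back-of-envelope upper bound on Go board configurations.
# Each of the 19 x 19 = 361 board points can be empty, black, or white,
# giving at most 3**361 configurations (many are illegal under the
# rules, so the true count of legal positions is somewhat smaller).
upper_bound = 3 ** 361

# Number of decimal digits in the bound.
print(len(str(upper_bound)))  # prints 173

# The number of atoms in the observable universe is usually estimated
# at around 10**80, so even this loose bound exceeds it by roughly
# 90 orders of magnitude.
```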
Lee is a South Korean professional Go player. As of February 2016, he ranked second in international titles, behind only Lee Chang-ho.
“I have heard that Google DeepMind’s AI is surprisingly strong and getting stronger, but I am confident that I can win at least this time,” he said earlier this year.
Google’s artificial intelligence arm DeepMind said that its programme AlphaGo combines an advanced tree search with deep neural networks.
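To give a rough flavour of the search half of that recipe, here is a minimal Monte Carlo search with an upper-confidence rule on a toy take-away game. This is a sketch under my own assumptions, not AlphaGo’s actual system: the game, names, and parameters are all illustrative, and AlphaGo replaces the random playouts below with evaluations from trained policy and value networks.

```python
import math
import random

# Toy take-away game: a pile of n stones, players alternately remove
# 1 or 2 stones, and whoever takes the last stone wins.

def legal_moves(n):
    return [m for m in (1, 2) if m <= n]

def random_playout(n):
    """Play uniformly random moves from a pile of n; return True if
    the player to move now ends up winning. (AlphaGo swaps such
    random playouts for a value network's evaluation.)"""
    turn = 0  # 0 = the player whose turn it is now
    while n > 0:
        n -= random.choice(legal_moves(n))
        if n == 0:
            return turn == 0  # the player who just moved took the last stone
        turn ^= 1
    return False

def ucb_score(wins, visits, total_visits, c=1.4):
    # Upper Confidence Bound: trade off the observed win rate
    # (exploitation) against uncertainty (exploration).
    if visits == 0:
        return float("inf")
    return wins / visits + c * math.sqrt(math.log(total_visits) / visits)

def best_move(n, iterations=3000):
    stats = {m: [0, 0] for m in legal_moves(n)}  # move -> [wins, visits]
    for i in range(1, iterations + 1):
        # Selection: pick the root move with the best UCB score.
        move = max(stats, key=lambda m: ucb_score(*stats[m], i))
        # Simulation: we win outright if our move empties the pile;
        # otherwise we win iff the opponent loses the playout.
        won = (n - move == 0) or not random_playout(n - move)
        # Backpropagation: update the statistics for that move.
        stats[move][0] += won
        stats[move][1] += 1
    return max(stats, key=lambda m: stats[m][1])  # most-visited move
```

From a pile of 4 stones the search settles on removing 1, leaving the opponent the losing pile of 3; the full AlphaGo system applies the same select–simulate–backpropagate loop over a deep game tree rather than a single level of moves.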
In January, DeepMind announced that “AlphaGo” had defeated Fan Hui, the European champion of the game, which was developed in China 2,500 years ago.
Fan Hui, who has devoted his life to the game since the age of 12, lost the match five games to nil; it was played in October last year in London.
“After all the training, we put AlphaGo to the test and held a tournament between AlphaGo and the other top programmes at the forefront of computer Go,” a Google DeepMind executive said.
Reportedly, Facebook is also working on beating Go, and its CEO Mark Zuckerberg said that his AI scientists are “getting close”.
“The ancient Chinese game of Go is one of the last games where the best human players can still beat the best artificial intelligence players. Scientists have been trying to teach computers to win at Go for 20 years,” he wrote in a post in January.
The first game mastered by a computer was noughts and crosses (also known as tic-tac-toe) in 1952. In 1997, IBM’s Deep Blue computer famously beat Garry Kasparov at chess.
The game involves players taking turns to place black or white stones on a board, trying to capture the opponent’s stones or surround empty space to make points of territory.
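The capture rule described above can be illustrated in a few lines (a minimal sketch, not a full Go implementation: no ko rule, suicide check, or scoring, and all names here are my own). A connected group of stones is captured when it has no liberties, i.e. no adjacent empty points:

```python
def group_and_liberties(board, r, c):
    """Flood-fill the connected group of same-coloured stones at
    (r, c); return the group's points and its liberties (the empty
    points adjacent to the group). A group with zero liberties is
    captured and removed from the board."""
    colour = board[r][c]
    group, liberties, stack = set(), set(), [(r, c)]
    while stack:
        y, x = stack.pop()
        if (y, x) in group:
            continue
        group.add((y, x))
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < len(board) and 0 <= nx < len(board[0]):
                if board[ny][nx] == ".":          # empty point: a liberty
                    liberties.add((ny, nx))
                elif board[ny][nx] == colour:     # same colour: same group
                    stack.append((ny, nx))
    return group, liberties

# A tiny 3x3 position: the lone black stone 'B' is surrounded by white.
board = [list(".W."),
         list("WBW"),
         list(".W.")]
group, libs = group_and_liberties(board, 1, 1)
print(len(libs))  # prints 0 -> the black stone has no liberties: captured
```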