Google held a global conference call yesterday at which Demis Hassabis, founder of DeepMind, announced a major advance in artificial intelligence: AlphaGo, a program that beats professional Go players and that learned its playing skill through machine learning.
Contests between computers and humans over the board are nothing new: at tic-tac-toe, checkers and chess, computers have already beaten human challengers. But at Go, a game with a history of more than 2,500 years, computers had never beaten humans. Go looks simple: the board is plain and the rules are not difficult. Nineteen equally spaced horizontal lines cross nineteen vertical lines, forming 19 × 19 = 361 intersections. The two players place stones alternately, each trying to occupy as much of the board as possible.
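For concreteness, here is a minimal sketch (my own illustration, not Google's code, and ignoring capture and scoring rules) of how such a board can be represented: a 19 × 19 grid of intersections on which the two players place stones in turn.

    import numpy as np

    SIZE = 19
    EMPTY, BLACK, WHITE = 0, 1, 2

    # The board is a 19 x 19 grid, i.e. 361 intersections.
    board = np.zeros((SIZE, SIZE), dtype=np.int8)
    print(board.size)  # 361

    def place_stone(board, row, col, color):
        """Place a stone on an empty intersection (captures and ko are omitted)."""
        assert board[row, col] == EMPTY, "intersection already occupied"
        board[row, col] = color

    place_stone(board, 3, 3, BLACK)     # Black plays first...
    place_stone(board, 15, 15, WHITE)   # ...then White, and so on, alternating.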
Beneath this minimalist gameplay lie incredible depth and subtlety. On an empty board, the first player has 361 choices, and throughout the game there are far more options than in chess, which is why researchers in artificial intelligence and machine learning have long hoped to make a breakthrough here.
From a computational perspective, Go has at most 3^361 board configurations, on the order of 10^170, whereas the observable universe contains only about 10^80 atoms. Chess, by comparison, has at most about 2^155 positions, roughly 10^47.
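These orders of magnitude are easy to check with exact integer arithmetic; the short script below (my own illustration, not from the article) prints them. Note that 3^361 simply counts every assignment of empty/black/white to the 361 points and comes out near 10^172; the commonly cited figure of roughly 10^170 counts only legal positions.

    from math import log10

    go_states = 3 ** 361       # each of the 361 points is empty, black or white
    chess_states = 2 ** 155

    print(f"3^361 is about 10^{log10(go_states):.0f}")    # ~10^172
    print(f"2^155 is about 10^{log10(chess_states):.0f}") # ~10^47
    print("atoms in the observable universe: about 10^80")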
The traditional artificial-intelligence approach is to build every possible move into a search tree, but that method does not work for Go. AlphaGo, unveiled by Google, instead combines an advanced tree search with deep neural networks. These networks pass a description of the board through 12 processing layers containing millions of neuron-like connections.
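To see why a brute-force search tree fails, consider how quickly the tree grows with depth. The sketch below is illustrative only; the branching factors (roughly 35 candidate moves in a typical chess position, roughly 250 in a typical Go position) are commonly cited approximations, not figures from the article.

    # A full game tree of depth d with branching factor b has about b^d leaves.
    def leaves(branching_factor: int, depth: int) -> int:
        return branching_factor ** depth

    for depth in (2, 4, 8):
        chess = leaves(35, depth)   # ~35 candidate moves per chess position
        go = leaves(250, depth)     # ~250 candidate moves per Go position
        print(f"depth {depth}: chess ~{chess:.1e} leaves, Go ~{go:.1e} leaves")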
One neural network, the "policy network", selects the next move; the other, the "value network", predicts the winner of the game. Google trained AlphaGo on 30 million moves from games played by expert Go players. AlphaGo then discovered new strategies on its own by playing thousands of games between its neural networks and adjusting the connections by trial and error, a process known as reinforcement learning. Much of this work made heavy use of the Google Cloud Platform.
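The following toy example shows the trial-and-error idea behind reinforcement learning in its simplest form: a softmax "policy" over three candidate moves is nudged toward moves that led to wins and away from moves that led to losses (a REINFORCE-style policy-gradient update). It is a minimal sketch of the principle only; AlphaGo's actual training uses deep convolutional networks and full self-play games.

    import numpy as np

    rng = np.random.default_rng(0)

    weights = np.zeros(3)                        # toy "policy network" weights
    TRUE_WIN_RATE = np.array([0.2, 0.5, 0.8])    # hidden quality of each move (toy)

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    learning_rate = 0.1
    for game in range(5000):
        probs = softmax(weights)
        move = rng.choice(3, p=probs)            # sample a move from the policy
        reward = 1.0 if rng.random() < TRUE_WIN_RATE[move] else -1.0  # win or loss

        # Reinforce moves that led to a win, suppress moves that led to a loss.
        grad = -probs
        grad[move] += 1.0
        weights += learning_rate * reward * grad

    print(softmax(weights))  # probability mass has shifted toward the best move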
Conquering Go matters to Google. AlphaGo is not an "expert" system that follows hand-crafted rules; it learns how to win at Go through machine learning. Google hopes to apply these techniques to serious and pressing real-world problems, from climate modeling to complex disaster analysis.
In concrete terms, the policy network was trained by feeding it games played by human Go experts until the system could predict 57% of human moves; the previous best result was 44%. From there, AlphaGo began to explore new Go strategies by playing games between its neural networks (in effect, playing against itself). AlphaGo's policy network can now beat most state-of-the-art Go programs built on huge search trees.
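That 57% figure is simply top-1 accuracy: the fraction of held-out expert positions where the network's most probable move matches the move the human actually played. The sketch below shows how such a number can be measured; policy_probs is a hypothetical stand-in for the trained network, and the data are placeholders.

    import numpy as np

    def top1_accuracy(policy_probs, positions, expert_moves):
        """Fraction of positions where the network's most likely move
        matches the move actually played by the human expert."""
        correct = 0
        for position, expert_move in zip(positions, expert_moves):
            predicted = int(np.argmax(policy_probs(position)))
            correct += predicted == expert_move
        return correct / len(positions)

    # Dummy usage with a random "network" over 361 candidate moves.
    rng = np.random.default_rng(0)
    fake_net = lambda position: rng.random(361)       # placeholder policy network
    positions = [None] * 1000                         # placeholder board states
    expert_moves = rng.integers(0, 361, size=1000)    # placeholder expert moves
    print(top1_accuracy(fake_net, positions, expert_moves))  # ~1/361 for random guessing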
The value network was likewise trained through self-play. It can now assess the winning odds of each candidate move, something previously considered impossible.
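One way such an estimate can be used, sketched below under my own simplifying assumptions, is to score every legal move by the predicted win probability of the position it leads to and pick the best one; value_net and play are hypothetical placeholders, not AlphaGo's API.

    import numpy as np

    rng = np.random.default_rng(0)

    def value_net(board):
        """Placeholder for a trained value network: estimated probability
        that the player to move from `board` eventually wins."""
        return rng.random()   # a real network would be a learned function of the board

    def best_move(board, legal_moves, play):
        """Score each candidate by our winning chance in the resulting position.
        After our move the opponent is to move, so our chance is 1 - theirs."""
        scores = {m: 1.0 - value_net(play(board, m)) for m in legal_moves}
        return max(scores, key=scores.get), scores

    # Dummy usage on an abstract position with three legal moves.
    move, scores = best_move(board="current position",
                             legal_moves=["A", "B", "C"],
                             play=lambda board, m: (board, m))
    print(move, scores)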
AlphaGo has in fact become the strongest AI Go program. In matches against other programs, AlphaGo won 500 games running on a single machine, and even has a record of winning while giving its opponent a four-move handicap. From October 5 to October 9 last year, Google arranged a match between AlphaGo and the European Go champion Fan Hui (head coach of the French national Go team), which AlphaGo won 5-0.
A public match will follow in March this year, when AlphaGo faces the South Korean Go player Lee Sedol in Seoul. Lee Sedol has won more titles than any other player in the world over the past 10 years, and Google has put up a $1 million prize for the match. Lee Sedol says he is very much looking forward to the contest and is confident of winning.
It is worth noting that the last famous man-machine match dates back to 1997, when IBM's supercomputer Deep Blue defeated chess champion Garry Kasparov. But chess is algorithmically much simpler than Go: winning at chess only requires capturing the king, whereas in Go the outcome is decided by counting who controls more of the board, not simply by killing the opponent's stones. In 2007, one of Deep Blue's designers published an article predicting that a supercomputer could defeat humans at Go within ten years.
The release of AlphaGo is also DeepMind's first major announcement since Google acquired it in January 2014. Before the acquisition, the London-based artificial-intelligence company had received investment from Tesla and SpaceX founder Elon Musk.