Not long ago, Sebastian Thrun, co-founder and president of Udacity, spoke with Chinese netizens in a live online Q&A session.
AI and the future world in the eyes of Sebastian Thrun
AI, or artificial intelligence, is one of the hottest topics in Silicon Valley. The goal of artificial intelligence is to make machines as smart as humans, and even smarter than humans. About 300 years ago, humans invented steam engines and agricultural equipment that surpassed human capabilities physically.
What AI sets out to do is let machines surpass us at the intellectual level. Machines can become very smart: they can play games, drive, fly, and do many other things. The most interesting area of artificial intelligence at present is machine learning.
Machine learning refers to the ability of a machine to learn from experience. Let's make a comparison. When you program a computer the traditional way, you tell it exactly how to react in every possible situation. Today's computer programs often run to tens of thousands of lines of code because there are thousands of situations that need to be handled.
And a programmer needs to be smart enough to anticipate every possible situation so the program doesn't crash. This is why software engineers are paid so much. In the age of machine learning, machines no longer need to be instructed case by case; they can be "educated", just like human children. When we teach children, we don't give them instructions for every possible behavior; we let them keep trying, fall down, stand back up, and learn from the experience of falling. Machine learning lets a machine learn from experience or data and grow the way a child does, as the sketch below illustrates.
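To make the contrast concrete, here is a minimal Python sketch of the two approaches. The braking scenario, the data, and the names are all hypothetical, chosen only for illustration; this is not code from Thrun or Udacity.

```python
# Traditional programming vs. machine learning, as a toy "should the car brake?" example.
# The scenario and the training data are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Traditional programming: the programmer anticipates every case in advance.
def should_brake_rules(distance_m: float, speed_kmh: float) -> bool:
    if distance_m < 10:
        return True
    if speed_kmh > 80 and distance_m < 50:
        return True
    return False

# Machine learning: the program is "educated" with examples instead of rules.
X = [[5, 30], [8, 100], [40, 90], [120, 60], [200, 100]]  # (distance_m, speed_kmh)
y = [True, True, True, False, False]                      # did an expert driver brake?
model = DecisionTreeClassifier().fit(X, y)

print(should_brake_rules(35, 95))      # behavior fixed by hand-written rules
print(model.predict([[35, 95]])[0])    # behavior learned from experience
```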
I have been exploring the field of artificial intelligence for a long time. You may not believe it, but I wrote a master's thesis on machine learning and robotics back in 1993. Since then I have been fascinated by one question: can a machine learn the same way humans do?
In 2005, more than a decade ago, I took part in the DARPA Grand Challenge organized by the US government, a driverless-car race. 196 teams competed for the $2 million prize, and my team at Stanford won. I was the director of the Stanford Artificial Intelligence Lab and later played the same role in Google's artificial intelligence project. Machine learning was at the core of my work. Stanley, our robotic car, could learn. It learned from data, sometimes from its own experiences and mistakes, and more often from the behavior of human drivers, which allowed it to drive the way humans do. In the end, Stanley's learning module made it stand out from the 196 teams and win the 2005 DARPA Grand Challenge by a decisive margin.
Google's driverless cars also learn, and we faced the same problem: there are too many rare situations to consider while driving, and a driverless car must be able to handle any of them. So Google's driverless cars ultimately drove millions of kilometers on the road to train the software how to drive. One difference between humans and computers is the speed of learning. If a human driver makes a mistake, he will learn from it and perhaps not make it again.
But other drivers gain nothing from it. When a driverless car makes a mistake, not only does that car learn from it; all other driverless cars, and even all future driverless cars, gain the new experience. This means one car's mistake can train every driverless car in the world, so driverless cars learn far faster than humans. This difference means that one day driverless driving will be much safer than human driving. It is a very important difference between artificial intelligence and human beings, and it applies to many other fields as well. The sketch below makes the idea concrete.
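Here is a minimal Python sketch of that fleet-learning idea; the class, the data, and the "icy off-ramp" example are hypothetical stand-ins for whatever Google's real system does.

```python
# Toy illustration of fleet learning: one car's mistake becomes every car's experience.
# The data structures and the example situation are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class SharedDrivingModel:
    # Pooled (situation, correction) pairs contributed by the whole fleet.
    experiences: list = field(default_factory=list)

    def learn(self, situation: str, correction: str) -> None:
        self.experiences.append((situation, correction))

shared_model = SharedDrivingModel()

# Car A makes a mistake on an icy off-ramp and records the human driver's correction.
shared_model.learn(situation="icy off-ramp", correction="brake earlier, steer gently")

# Cars B, C, ... query the same shared model, so they benefit immediately;
# a human driver, by contrast, learns only from the mistakes he makes himself.
print(shared_model.experiences)
```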
Machine learning is being used in many, many areas. In medical diagnosis, for example, machine learning can diagnose cancer more accurately than the best human doctors. In the legal field, even the most senior lawyers lose to machine learning at searching for information and drafting contracts. And of course there is the Internet: Google and Baidu, powered by machine learning, can search for information with accuracy beyond human imagination. There are many other areas as well, such as accounting, flying aircraft, and playing games.
Some time ago, Google's AlphaGo defeated the world champion at Go. All of these artificial intelligence applications have one thing in common: they use machine learning to learn from vast amounts of data. If you look at a system like AlphaGo, you'll find it can learn from millions of games, while no human expert can live long enough to watch millions of games. This difference allows AlphaGo to draw on far more experience than any human, and its Go skill eventually surpassed every human on Earth.
In the future, artificial intelligence will change human life the way the agricultural and industrial revolutions did: it will make us stronger. It will free us from repetitive work that doesn't require much thinking, such as many of the things you do every day in the office. Lawyers will spend less time searching for information and more time on creative thinking; doctors will misdiagnose far less often, diagnose diseases better, and spend more time talking with patients rather than staring at skin tissue samples.
Q&A highlights
Q: How do you view the future of computer vision?
Thrun: Computer vision is one of the most exciting areas of artificial intelligence. Until a few years ago, we couldn't even identify the most basic elements of a camera image, such as your face, the chair you are sitting on, or clouds drifting by. But thanks to deep learning, we can now analyze very complex scenes.
For example, a car parked in a parking lot, a computer sitting on a table, or even soft, irregularly shaped objects such as food at the edge of a refrigerator. And this is just the beginning. Computer vision enables a new way to control a car. In the past, driverless cars used radar and laser to sense the environment; now there is a new approach that determines the direction of travel from computer vision and cameras, and it is making rapid progress. The sketch below shows the kind of recognition that has become routine.
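As a rough illustration of what deep-learning vision can now do, here is a minimal Python sketch that labels an image with a pretrained convolutional network; the choice of ResNet-50 and the file name are my assumptions, not the models Thrun's teams used.

```python
# Labeling an everyday photo with a pretrained ImageNet classifier.
# ResNet-50 and the file name are illustrative choices, not Thrun's pipeline.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()                 # resize, crop, normalize

image = Image.open("parking_lot.jpg")             # hypothetical photo
batch = preprocess(image).unsqueeze(0)            # shape: (1, 3, 224, 224)

with torch.no_grad():
    probs = model(batch).softmax(dim=1)
top_prob, top_class = probs.max(dim=1)
print(weights.meta["categories"][int(top_class)], float(top_prob))
```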
Q: Is there a big gap between companies in autopilot technology and how far they have implemented it, for example between Tesla and Geohot?
Thrun: You asked about the market for driverless cars and the difference between Tesla and Geohot. If you follow the news about Geohot, you will know that he has decided to terminate his driverless-car project and move on to other things. That decision was a response to the US government questioning whether his product was really safe for users. I have to admit I was very sad to see this news, because I had hoped driverless-car technology would reach the market in the short term. Tesla, by contrast, has built an automated driving technology called Autopilot.
It's not perfect: you can't sleep while driving, you have to stay focused. But if you stay focused, it works very well on the road. I have a Tesla and I use Autopilot every day. Tesla's technology was originally based on Mobileye, an Israeli company, and has now been replaced by technology from Nvidia, a US company. So you can see driverless performance improving as the underlying technology improves. Most of what is on the market today is not self-driving but driver-assistance technology, where a car has only a single such function. Tesla is the most advanced, because it is a great company and it invented Autopilot.
Still, Autopilot is not a perfect driverless system. But the industry is making rapid progress, and in the next two or three years I expect more mainstream auto companies to pursue autonomous driving and develop similar driverless technologies.
Q: How far is current artificial intelligence from strong artificial intelligence? When will robots really understand what people say, understand what they see, and understand human emotions? Can (deep) neural networks do this?
Thrun: Current artificial intelligence is far from strong artificial intelligence; what we have now is specialized artificial intelligence. Each AI system is proficient at one task, but if you want it to do a different task it has to learn from scratch. In practice this is fine, because we don't need a driverless car to play chess, and we don't need an airplane that can shoot. These are different domains.
However, the question is whether robots can understand human emotions. I think that still requires a lot of progress. Even training artificial intelligence to recognize whether a person is happy or depressed, uncomfortable or exhausted is already a great achievement. The AI system on your mobile phone can identify your current mood; even a very basic AI system can do this just by observing how often you text, how often you use WeChat, and the way you walk.
I want to say that I am very interested in human emotions, but I am not interested in machine emotions. I don't need my artificial intelligence to have emotions. I don't want it to be angry or happy, and I don't want to walk into the kitchen and find that my fridge has fallen in love with my dishwasher, or that it refuses to work for me because I lost my temper with it last night. I want machines to work reliably, so I don't think AI machines need emotion.
Q: What is the fundamental reason machine learning is based on learning? How does it relate to more traditional AI methods?
Thrun: The fundamental reason is that it has to work the way human learning works. If you want to teach your child to do the right thing, you can't sit down and write out the rules of life; it is impossible to teach him that way. Instead, you let the child learn from his own experience.
Machine learning does not learn from rules we have written into code; it learns from experience, which is something conventional programming cannot achieve in many areas. For example, Google learns every day from hundreds of billions of web pages. Nobody has written rules for this data, but the data has regularities. So today's machine learning uses the human way of learning to draw conclusions from data, which is more powerful than conventional programming.
Q: What new requirements does the AI trend place on future hardware technologies?
Thrun: A major enabler of artificial intelligence today is the scale of computers, which was impossible 10 years ago. Ten years ago, our largest computer systems were at best on the scale of a mouse brain; now they exceed the scale of the human brain. That makes all the difference. Much of the AI field has found that the best algorithms appeared 30 years ago; the reason for the rise now is that computers keep getting faster.
Looking ahead, computers must become faster, cheaper, more focused on floating-point operations, and able to be connected together in very large numbers. As all of this happens, what separates computing companies such as Google and Amazon is floating-point throughput, the capability that matters most for AI. GPUs (graphics processing units) and floating-point computing power are therefore both a necessity and a driving force for artificial intelligence, as the sketch below suggests.
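To show why floating-point throughput is the bottleneck, here is a minimal Python sketch that times the same dense matrix multiplication on a CPU and, if available, on a GPU; the matrix size and the single-run timing are illustrative assumptions, not a rigorous benchmark.

```python
# Rough comparison of CPU vs. GPU floating-point throughput with PyTorch.
# Matrix size and single-run timing are illustrative, not a proper benchmark.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()          # make sure setup has finished
    start = time.perf_counter()
    _ = a @ b                             # ~2 * n**3 floating-point operations
    if device == "cuda":
        torch.cuda.synchronize()          # wait for the GPU kernel to complete
    return time.perf_counter() - start

print("cpu :", time_matmul("cpu"))
if torch.cuda.is_available():             # deep learning training is dominated by
    print("cuda:", time_matmul("cuda"))   # exactly this kind of dense float math
```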