We explored neural networks and deep learning techniques in a previous article; now it's time to discuss another major component of deep learning: data, i.e., images, video, email, driving patterns, phrases, objects, and so on.
Surprisingly, although our world is practically overflowing with data, a large part of it is unlabeled and unorganized, which means it is unusable for most current supervised learning.
Deep learning in particular relies on large amounts of good, structured, labeled data. In this second part of our "Non-Mathematics Guide to Neural Networks," we will examine why high-quality labeled data is so important, where it comes from, how it is used, and what solutions the near future may hold for the eager-to-learn machines we build.
Supervised learning: let me hold your hand
In our article on neural networks, we explained how data is fed into a carefully crafted "sausage press" of a machine, which can rapidly analyze it, draw inferences, and even refine itself.
This process is considered supervised learning because a large amount of data is fed into the machine and that data has been painstakingly labeled. For example, to train a neural network to recognize images of apples or oranges, each image must be labeled. The machine makes sense of the data by working out what all the pictures labeled "apple," or all the pictures labeled "orange," have in common, and it can then use what it has learned to predict more accurately what appears in a new image. The more labeled data the machines see, and the larger the datasets, the more accurate their predictions become.
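To make this concrete, here is a minimal sketch of supervised learning in Python using scikit-learn. Everything in it is invented for illustration: each fruit is reduced to two made-up measurements (redness and diameter) standing in for the image features a real network would extract, and every training example comes with a label.

```python
# A minimal supervised-learning sketch. The feature values and names
# are invented for illustration; a real system would learn features
# from raw pixels rather than take two hand-made numbers.
from sklearn.linear_model import LogisticRegression

# Labeled training data: [redness, diameter_cm] -> fruit name.
X_train = [
    [0.90, 7.5], [0.80, 7.0], [0.85, 8.0],   # apples: redder, smaller
    [0.30, 8.5], [0.20, 9.0], [0.25, 8.8],   # oranges: less red, larger
]
y_train = ["apple", "apple", "apple", "orange", "orange", "orange"]

# Fit a simple classifier on the labeled examples.
model = LogisticRegression()
model.fit(X_train, y_train)

# Ask it to label a new, unseen fruit.
print(model.predict([[0.70, 7.2]]))  # expected: ['apple']
```

The essential ingredient is the label list y_train: without it, fit() would have nothing to supervise the learning.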
This method is useful for teaching machines to learn from visual data, and it can teach them to recognize everything from photos and videos to graphics and writing. One telling sign of its power is that in many applications, machines already outperform humans at image recognition.
For example, Facebook's deep learning software can tell whether two photos show the same stranger at roughly human-level accuracy (about 97 percent), and earlier this year Google introduced a neural network that can detect tumors in medical images with accuracy even higher than that of physicians.
Unsupervised learning: conclusions without a guardian's guidance
As you might expect, unsupervised learning is the counterpart of supervised learning. It means loosening the machine's leash and letting it wander through the data on its own, discovering and exploring, finding patterns and connections, and drawing conclusions without a guardian's guidance. The approach has long drawn criticism from some artificial intelligence scientists, but in 2012 Google demonstrated a deep learning network that could pick out cats, faces, and other objects from a large set of unlabeled images. The technique is impressive and has produced some very interesting and useful results, but so far unsupervised learning has not matched the accuracy and effectiveness of supervised learning in any domain.
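For contrast, here is the unsupervised version of the same toy data with the labels removed. k-means clustering is used purely as a familiar stand-in (Google's 2012 system was a vastly larger neural network): it groups similar points on its own, but it can never tell you that one of the groups means "apple."

```python
# An unsupervised-learning sketch: the same invented fruit features,
# but with no labels. k-means finds groups by similarity alone.
from sklearn.cluster import KMeans

X = [
    [0.90, 7.5], [0.80, 7.0], [0.85, 8.0],
    [0.30, 8.5], [0.20, 9.0], [0.25, 8.8],
]

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0)
clusters = kmeans.fit_predict(X)
print(clusters)  # e.g. [0 0 0 1 1 1]: two clear groups, but nameless
```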
Ubiquitous data
The difference between these two approaches brings us to a larger, thornier topic. It is useful to compare these machines to human babies. We know that if we leave children to their own devices, they will learn without any guidance, but what they learn is not necessarily what we want them to learn, and how they learn it is unpredictable.
That is why we also teach children deliberately, exposing them to an enormous range of objects and concepts across an effectively infinite set of topics: directions, animals and plants, gravity and other physical properties, reading and language, kinds of food, and so on; in effect, everything that exists. Over time, nearly all of it can be conveyed by showing and telling, and by answering the endless questions that any curious young person asks.
This is a huge project, but ordinary parents and children carry it out every day. Neural networks have the same needs, but their focus is usually much narrower and we do not socialize with them, so their labels must be far more precise.
Currently, artificial intelligence researchers and scientists obtain training data for their machines in several ways. The first is simply to go out and collect a large amount of labeled data yourself, which is what companies like Google, Amazon, Baidu, Apple, Microsoft, and Facebook do. Interestingly, these companies hold staggering amounts of data, most of it supplied free of charge by their users. It would be futile to try to list it all, but consider the billions of labeled images uploaded into these companies' storage.
Then think about all the documents; the search queries made by voice, text, photo, and optical character recognition; the location data and maps; the ratings, likes, and shares; the shopping histories, shipping addresses, phone numbers and contacts, address books, and social networks. Companies with these resources, and indeed any large company, often have a unique advantage in machine learning because they sit on rich stores of specific kinds of data.
Data difficulties
If you do not happen to own a data-rich Fortune 100 company, you will have to get your data elsewhere. Obtaining a large amount of diverse data is a key part of artificial intelligence research. Fortunately, there are now many free, open labeled datasets covering a huge variety of categories. As you might imagine, you can find datasets covering everything from human facial expressions and sign language to face shapes and skin tones.
You can also find millions of images of people, forests, and pets of every kind, and you can mine large collections of user and customer reviews. There are also datasets of spam, multilingual tweets, blog posts, and legal case reports.
Newer kinds of data come from the world's increasingly ubiquitous sensors: medical sensors, motion sensors, the gyroscopes in smart devices, thermal sensors, and more. And then there are all the photos people take of food, wine labels, and satirical slogans.
Where is the problem?
Despite this sheer volume of data, much of it turns out not to be all that useful. Datasets may be too small, too low in quality, only partially labeled, or labeled in ways that do not fit your task. In short, they fail to meet your needs. For example, if you want to teach a machine to identify the Starbucks logo in images, the only training images you can find may be labeled "drinks," "beverages," "coffee," "containers," or just "joe." Without the correct label, they are useless.
A typical law firm or established company may have millions of contracts or other instruments in its database, yet the data cannot be used because the documents are simply stored as unlabeled PDFs. Another challenge in obtaining optimal data is ensuring that the training dataset is both large enough and diverse.
In addition, when training a complex model such as a deep neural network, using a small dataset can lead to so-called overfitting, a common problem in machine learning. Overfitting arises when the model has a large number of learnable parameters relative to the number of training samples. These parameters act as the "neurons" we previously adjusted through backpropagation. The result can be a model that memorizes its training data rather than one that learns general concepts from the data.
Think back to our apple-orange network. Because there are very few apple images in the training data and the neural network is very large, the network is likely to fixate on specific details, say redness, a brown stem, and roundness, that happen to distinguish the particular training images from one another. Those tiny details may describe the training apples perfectly, but when the machine is asked to identify a new apple at test time, they may prove irrelevant or even misleading, because the test may present an apple the machine has never seen before.
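The effect is easy to demonstrate. Below is a small sketch using invented noisy data and a deliberately high-capacity model: it scores perfectly on the handful of examples it memorized and much worse on fresh ones, and the gap between those two scores is overfitting made visible.

```python
# An overfitting sketch: a high-capacity model fit to very few samples.
# It memorizes the training set and generalizes poorly.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_fruit(n):
    """Invented noisy features for class 0 ('apple') vs 1 ('orange')."""
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None], scale=1.5, size=(n, 2))  # heavy overlap
    return X, y

X_train, y_train = make_fruit(10)    # a tiny training set
X_test, y_test = make_fruit(1000)    # plenty of unseen data

model = DecisionTreeClassifier()     # unlimited depth: memorizes easily
model.fit(X_train, y_train)

print("train accuracy:", model.score(X_train, y_train))  # 1.0: memorized
print("test accuracy: ", model.score(X_test, y_test))    # far lower
```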
Another important principle is data diversity. Statistically, the more unique examples you accumulate, the more diverse your data is likely to be.
In the "Apple-Orange" example, we want the machine to have a reasonable generalization so that it can identify images of all apples and oranges, whether or not they appear in the training set.
After all, not all apples are red. If we train our network only on pictures of red apples, it is very likely to miss a green apple at test time. This problem arises whenever the data used in training is not varied enough to cover the possibilities encountered in testing. The issue of bias has already begun to surface across many areas of artificial intelligence: neural networks, and the datasets used to train them, reflect the biases of the people who build them. Once again, if we use only red apples to train our apple-orange network, we bias the machine into believing that apples can only be red.
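One practical safeguard, sketched below with a hypothetical set of labels, is to audit the makeup of a training set before using it, so that underrepresented variants (our green apples) are caught before they harden into a biased model.

```python
# A diversity-audit sketch. The dataset and the 30% threshold are
# hypothetical; the point is to count before you train.
from collections import Counter

training_labels = [
    {"fruit": "apple",  "color": "red"},
    {"fruit": "apple",  "color": "red"},
    {"fruit": "apple",  "color": "red"},
    {"fruit": "apple",  "color": "green"},   # badly underrepresented
    {"fruit": "orange", "color": "orange"},
    {"fruit": "orange", "color": "orange"},
]

apple_colors = Counter(ex["color"] for ex in training_labels
                       if ex["fruit"] == "apple")
total = sum(apple_colors.values())
for color, count in apple_colors.items():
    share = count / total
    flag = "  <- collect more of these" if share < 0.3 else ""
    print(f"{color}: {count}/{total} ({share:.0%}){flag}")
```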
Push this to other applications, such as facial recognition, and the impact of unrepresentative data becomes very obvious. As the old saying goes: "garbage in, garbage out."
Building a mousetrap that can think for itself
Labeling data takes manpower, and manpower is expensive. Short of every company in the world suddenly opening up its data resources and willingly handing them to scientists everywhere, the shortage of good training data is not going to disappear.
Rather than working toward the goal of amassing as much labeled data as possible, the future of deep learning may therefore move in the direction of unsupervised learning techniques.
That makes sense if we think about how we teach infants and young children about the world; after all, although we teach children a great deal, the most important human learning comes through experience, and that is unsupervised.
[Source: TechCrunch. Compiled by NetEase's intelligent compilation platform; reviewed by Ecale.]