When cloud computing, big data, and artificial intelligence finally come together, the story of their meeting, acquaintance, and mutual understanding is complete.

Today I want to talk about cloud computing, big data, and artificial intelligence. Why these three? Because they are extremely popular right now and seem to be tangled together: whenever cloud computing comes up, big data follows, and whenever artificial intelligence is mentioned, cloud computing appears as well. The three seem to complement one another and be inseparable. For non-technical readers, though, their relationship can be hard to grasp, so it is worth explaining.

First, the initial goal of cloud computing

Let's start with cloud computing. Its original purpose was to manage resources, mainly three kinds: computing resources, network resources, and storage resources.

1. A data center is like a computer

What are computing, network, and storage resources? Take buying a laptop: you care about the CPU and memory, and those are the computing resources. To get online, the laptop needs a network port or a wireless card to connect to your router, and your home needs a line from a provider such as China Unicom, China Mobile, or China Telecom, say 100 Mbps of bandwidth. A technician then runs the cable, configures the router, and sets up the connection so that all your computers, phones, and tablets can reach the internet through that router. That is the network resource. You will also ask how big the hard drive is. Hard drives used to be small, around 10 GB; later 500 GB, 1 TB, and even 2 TB drives became common (1 TB is 1000 GB). That is the storage resource.

What holds for one computer also holds for a data center. Imagine a huge server room full of servers, which likewise have CPUs, memory, and hard disks, and reach the internet through router-like devices. The question becomes: how do the people who run the data center manage all of these devices in a unified way?

2. Flexibility: whenever you want it, however much you want

The goal of managing them is flexibility in two senses. What are the two? Suppose someone needs a tiny computer with one CPU, 1 GB of memory, a 10 GB disk, and 1 Mbps of bandwidth. Such a machine is weaker than any ordinary laptop today, yet on a cloud platform you can get it whenever you need it. That illustrates the two kinds of flexibility:

Time flexibility: when you want it, you get it right away.
Space flexibility: however much you want, you can have it. If you need a tiny computer, fine; if you need huge space, such as a cloud disk, the space allocated to each user is enormous and you can keep uploading without worrying about running out.

Space flexibility plus time flexibility is what we usually call the elasticity of cloud computing. Achieving that elasticity took a long period of development.

3. Physical equipment is not flexible

The first phase was the era of physical devices. In this period, when a customer needed a computer, we bought one and put it in the data center. Physical devices kept getting more powerful: servers with hundreds of gigabytes of memory, network devices with tens or even hundreds of gigabits of bandwidth, and storage with petabytes of capacity (1 PB is 1000 TB, and 1 TB is 1000 GB). Yet physical devices cannot offer much flexibility. First, their time flexibility is poor: if you want a server, you have to wait for the purchase.
If a user suddenly wants to spin up a machine and use a physical server, buying one on the spot is difficult. With a good supplier relationship the purchase may take a week; with an ordinary one it may take a month. The user waits a long time before the machine arrives and then slowly deploys the application on it, so time flexibility is poor. Second, space flexibility is also poor. Perhaps the user only needs a very small computer, but no such model exists, so they must buy a bigger one and pay for far more than they actually need.

4. Virtualization is much more flexible

Someone found a solution: virtualization. The user only wants a small computer, while the physical devices in the data center are very powerful, so we carve a small slice out of the physical CPUs, memory, and disks and virtualize it for that customer, then carve other small slices for other customers. Each customer sees only their own slice, while in reality everyone is using part of the same large machine. Virtualization makes different customers' computers appear isolated: I see my part and you see yours, even though my 10 GB and your 10 GB may sit on the same large storage system. And as long as the physical devices are prepared in advance, virtualization software can conjure up a virtual computer very quickly, at the level of minutes. That is why creating a machine on any cloud today takes only minutes. With this, space flexibility and time flexibility were basically solved.

5. Making money, and idealism, in the virtual world

In the virtualization era the most successful company was VMware. It implemented virtualization early, could virtualize computing, networking, and storage, performed very well, sold a great deal of software, made a lot of money, and was later acquired by EMC (a Fortune 500 company and the leading storage vendor).

But the world is also full of people with ideals, especially programmers. What do idealistic people like to do? Open source. Much software in the world is closed source; the "source" is the source code. If a piece of software works well and everyone loves to use it, but its code is kept secret inside one company and anyone who wants to use it must pay, that is closed source. There are always master programmers, however, who cannot stand watching a single company earn all the money. They think: whatever technology you can build, I can build too, and I am not building it to charge for it. They publish the code and share it with everyone, so that anyone in the world can use it and everyone benefits. That is open source.

Tim Berners-Lee, for example, is such an idealist. In 2017 he received the 2016 Turing Award "for inventing the World Wide Web, the first web browser, and the fundamental protocols and algorithms allowing the Web to scale." The Turing Award is the Nobel Prize of the computing world. What is most admirable about him, though, is that he contributed the World Wide Web, the WWW technology we all use, to the world for free. Everything we do online today owes something to him.
Had he charged for the technology, he would probably be nearly as rich as Bill Gates.

There are many such pairs of closed source and open source. In the closed-source world there is Windows, and everyone using Windows pays Microsoft; in the open-source world there is Linux. Bill Gates became the world's richest man on closed-source software such as Windows and Office, while master programmers developed another operating system, Linux. Many people may never have heard of Linux, yet most back-end server programs run on it. For example, the systems behind the Double Eleven shopping rush, whether Taobao, JD, or Kaola, all run on Linux. Likewise, where there is Apple there is Android. Apple's market value is enormous, but its system code is something we cannot see, so experts wrote the Android operating system, and nearly every other phone manufacturer ships Android, precisely because Apple's system is closed and Android is open to everyone. Virtualization software is the same story: VMware is very expensive, so experts wrote two open-source virtualization packages, one called Xen and the other called KVM. If you are not in technology you can ignore the two names, but they will come up again later.

6. After virtualization: semi-automatic and fully automatic cloud computing

To say that virtualization software solved the flexibility problem would not be entirely accurate. With virtualization software alone, creating a virtual computer generally requires a person to specify which physical machine it should be placed on, possibly along with fairly complex manual configuration. That is why VMware administrators sit difficult certification exams and earn high salaries: the work is genuinely complex. As a result, the cluster of physical machines that virtualization software alone can manage is not very large, typically a dozen, a few dozen, or at most a few hundred machines.

This limits time flexibility: although a single virtual machine can be created quickly, as the cluster grows the manual configuration becomes ever more complex and time-consuming. It also limits space flexibility: when the number of users is large, such a small cluster falls far short of demand, the resources are quickly exhausted, and more hardware has to be purchased. So once the cluster becomes truly large, it starts at thousands of machines and readily reaches tens of thousands or even hundreds of thousands; look at BAT, NetEase, Google, or Amazon and the server counts are frightening. With that many machines it is essentially impossible for humans to pick a location for each virtual computer and configure it by hand. A machine has to do it, and people invented a family of algorithms for the job, called the Scheduler. Broadly, there is a dispatch center with thousands of machines in a single pool; however much CPU, memory, and disk the user asks for, the dispatch center automatically finds a spot in the big pool that satisfies the request, starts the virtual computer, configures it, and hands it to the user ready to use. This stage is called pooling, or cloudification, and only at this stage can it really be called cloud computing; before that, it is merely virtualization.
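To make the scheduling idea concrete, here is a minimal sketch in Python. It is not any vendor's actual scheduler; the host names and sizes are made up, and the placement rule is the simplest possible one (first host with enough spare CPU and memory), but it shows how a dispatch center can place a request without a human choosing the machine.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_cpus: int
    free_mem_gb: int

def schedule(pool, cpus, mem_gb):
    """Return the name of the first host that can hold the requested VM."""
    for host in pool:
        if host.free_cpus >= cpus and host.free_mem_gb >= mem_gb:
            host.free_cpus -= cpus      # reserve the resources on that host
            host.free_mem_gb -= mem_gb
            return host.name
    return None  # pool exhausted: time to buy more servers

pool = [Host("host-001", 64, 256), Host("host-002", 64, 256)]
print(schedule(pool, cpus=1, mem_gb=1))      # the tiny 1-CPU / 1 GB machine
print(schedule(pool, cpus=32, mem_gb=128))   # a much larger request
```

Real schedulers weigh far more factors (load balancing, failure domains, affinity), but the division of labor is the same: the user states what they want, and the pool decides where it lives.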
7. Private cloud and public cloud

Cloud computing comes in two broad forms: private cloud and public cloud, and some people connect a private cloud to a public cloud to form a hybrid cloud.

Private cloud: the virtualization and cloudification software is deployed inside the customer's own data center. Private cloud users tend to have plenty of money; they buy the land, build the machine room, buy the servers, and then have a cloud vendor deploy the software for them. VMware, beyond virtualization, later launched cloud computing products as well and made a great deal of money in the private cloud market.

Public cloud: the virtualization and cloudification software is deployed in the cloud vendor's own data center. Users need no big up-front investment; they simply register an account and can create a virtual computer with a few clicks on a web page. AWS is Amazon's public cloud, and in China there are Alibaba Cloud, Tencent Cloud, NetEase Cloud, and others.

Why did Amazon build a public cloud? Amazon began as a large overseas e-commerce company, and e-commerce inevitably runs into scenarios like the Double Eleven, where at a certain moment everyone rushes to buy at once. That is exactly when the time flexibility and space flexibility of the cloud are needed most: Amazon cannot keep all of the peak resources standing by all year round (far too wasteful), yet it cannot prepare nothing and simply watch that flood of eager buyers go unserved. So when the rush arrives it creates a large number of virtual computers to carry the e-commerce application, and once the rush is over it releases those resources. Amazon therefore needed a cloud platform of its own.

Commercial virtualization software, however, was far too expensive; Amazon could hardly hand everything it earned from e-commerce to virtualization vendors. So it built its own cloud software on top of the open-source virtualization technologies mentioned above, Xen and KVM. Unexpectedly, Amazon's e-commerce kept getting stronger, and so did its cloud platform. Because the platform had to support Amazon's own e-commerce application, whereas traditional cloud vendors were mostly IT vendors with almost no applications of their own, Amazon's cloud was far friendlier to applications and quickly grew into the number-one brand in cloud computing, earning a great deal of money along the way. Before Amazon broke out its cloud results, people speculated: Amazon makes money on e-commerce, but does the cloud make money too? When the figures were finally published, it turned out to be anything but ordinary: in the last year alone, Amazon AWS had revenue of $12.2 billion and an operating profit of $3.1 billion.

8. Cloud computing: making money versus idealism

The number one in public cloud, Amazon, is doing extremely well; the number two, Rackspace, is doing merely all right. Such is the cruelty of the internet industry: the winner takes nearly everything. If the second place were not in cloud computing, many people might never have heard its name at all. So what can the number two do when it cannot beat the boss? Open source. As noted above, although Amazon uses open-source virtualization technology, its cloud-management code is closed source, and many companies that wanted to build cloud platforms but could not manage it on their own could only watch Amazon rake in the money. Rackspace, by contrast, opened up its source code so the whole industry could improve the platform together: all the brothers joining forces to take on the boss.

So Rackspace and NASA co-founded the open-source project OpenStack. You do not need to study its architecture unless you work in cloud computing, but three keywords stand out: Compute, Networking, and Storage. OpenStack, too, is a cloud management platform for computing, network, and storage resources, and the number two's technology is genuinely good. Things then went just as Rackspace hoped: every big company that wanted to do cloud went wild for it. All the large IT companies you can think of, IBM, Hewlett-Packard, Dell, Huawei, Lenovo, and so on, had watched Amazon and VMware make so much money and were itching to join, yet found it very hard to build an equivalent platform themselves. With an open-source cloud platform like OpenStack available, they all joined the community, contributed to the platform, packaged it into their own products, and sold it alongside their own hardware. Some built private clouds, some built public clouds, and OpenStack became the de facto standard for open-source cloud platforms.
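For readers curious what driving such a platform looks like, here is a short sketch using the openstacksdk Python library, roughly following its documented usage pattern. It assumes a clouds.yaml entry named "my-cloud" and that an image, flavor, and network with the placeholder names below already exist; none of these names come from the article itself.

```python
import openstack

# Connect using a "my-cloud" entry in clouds.yaml (placeholder name).
conn = openstack.connect(cloud="my-cloud")

# Look up a pre-existing image, flavor, and network by placeholder names.
image = conn.compute.find_image("ubuntu-20.04")
flavor = conn.compute.find_flavor("m1.small")
network = conn.network.find_network("private")

# Ask the platform for a new virtual machine and wait until it is running.
server = conn.compute.create_server(
    name="demo-server",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.status)
```

The point is not the specific calls but the shape of the request: the user names a size and an image, and the platform's scheduler decides where the machine actually lives.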
9. IaaS: flexibility at the resource level

As OpenStack technology matured, the scale it could manage grew larger and larger, and multiple OpenStack clusters could be deployed: one set in Beijing, say, two in Hangzhou, and one in Guangzhou, all under unified management, making the overall scale bigger still. At this scale, as far as ordinary users can perceive, the cloud really can deliver whatever they want, whenever they want it. Take the cloud disk again: each user is allocated 5 TB or even more, and if there are 100 million users, imagine how much space that would add up to. The mechanism behind it is this: the space allocated to you is only what you see, not what is truly set aside for you. If you have been assigned 5 TB but have only used 50 GB, then only 50 GB is really consumed. As you keep uploading files, more and more real space is allocated to you, and when everyone's uploads bring the platform close to capacity (say 70% full), the operator buys more servers and expands the resources behind the scenes. All of this is transparent to users; they never see it. In terms of what users feel, the elasticity of cloud computing is thereby achieved. It is rather like a bank: depositors feel they can withdraw their money whenever they like, and as long as they do not all run on the bank at once, the bank is never embarrassed.
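A minimal sketch of that over-commitment idea, with made-up numbers: every user is promised a large quota, physical space is only consumed as files are actually uploaded, and the operator expands the pool when real usage crosses a threshold.

```python
class StoragePool:
    def __init__(self, physical_tb):
        self.physical_tb = physical_tb    # capacity actually installed
        self.used_tb = 0.0                # space really consumed by uploads
        self.promised_tb = 0.0            # sum of all user quotas

    def open_account(self, quota_tb=5):
        self.promised_tb += quota_tb      # what the user sees on their page

    def upload(self, tb):
        self.used_tb += tb                # what is really written to disk
        if self.used_tb / self.physical_tb > 0.7:
            self.physical_tb *= 2         # "buy more servers" behind the scenes

pool = StoragePool(physical_tb=100)
for _ in range(100_000):                  # promise 100,000 users 5 TB each
    pool.open_account()
print(pool.promised_tb, pool.used_tb)     # 500000.0 TB promised, 0.0 TB used
```

The promised total vastly exceeds the installed capacity, and that is fine so long as actual usage is monitored and the pool is grown before it fills, exactly like the bank in the analogy.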
10. Summary

At this stage, cloud computing has basically achieved time flexibility and space flexibility, that is, the elasticity of computing, network, and storage resources. Computing, networking, and storage are often referred to as infrastructure, so elasticity at this stage is called resource-level elasticity, and a cloud platform that manages resources in this way is called infrastructure as a service, the IaaS (Infrastructure As A Service) we so often hear about.

Second, cloud computing manages not only resources, but also applications

With IaaS, is resource-level elasticity enough? Obviously not; there is also elasticity at the application level. Here is an example: suppose an e-commerce application normally runs on ten machines but needs a hundred for the Double Eleven. You might think that is easy: with IaaS you just create ninety new machines. But the ninety machines come up empty, with no e-commerce application on them, and the company's operations staff have to take them one by one and spend a long time installing everything. The resource level is elastic, but the application level is not, so the elasticity is still insufficient.

Is there a way around this? People added a layer on top of the IaaS platform to manage the elasticity of the applications running on those resources. This layer is usually called PaaS (Platform As A Service). It is often hard to grasp, but it divides roughly into two parts: what I would call "automatic installation of your own applications" and "general-purpose applications that need no installation."

Automatic installation of your own application: the e-commerce application, for instance, was developed by you, and nobody but you knows how to install it. Installing it means configuring your own Alipay or WeChat account, so that when someone buys something on your site the money lands in your account, and nobody knows those details except you. The platform therefore cannot do the installation for you, but it can help you automate it; you do a little work to fold your configuration information into the automated installation process. In the example above, if a tool can automatically install the e-commerce application on the ninety newly created machines, real elasticity at the application level is achieved. Tools such as Puppet, Chef, Ansible, and Cloud Foundry can do this, and the latest container technology, Docker, does it even better.
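A minimal sketch of the "install my own application onto the ninety new machines" idea, assuming nothing beyond the Python standard library. The install_shop function is a hypothetical placeholder: in reality a tool like Ansible, Puppet, Chef, or Docker would push the code and configuration (such as the payment account) to each host; here it only pretends to, so the sketch stays self-contained.

```python
from concurrent.futures import ThreadPoolExecutor

new_hosts = [f"10.0.0.{i}" for i in range(1, 91)]   # the 90 new machines (placeholder IPs)

def install_shop(host, pay_account="placeholder-account"):
    # A real tool would copy code, write the Alipay/WeChat configuration,
    # and start services on the host; this stub only reports what it would do.
    return f"{host}: shop installed, payments go to {pay_account}"

# Fan the installation out across all new machines instead of doing them one by one.
with ThreadPoolExecutor(max_workers=30) as pool:
    for result in pool.map(install_shop, new_hosts):
        print(result)
```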
General-purpose applications that need no installation: general-purpose applications are things that are complex yet used by everyone, databases being the classic example. Almost every application uses a database, and database software is standard; installing and maintaining it is complicated, but it is the same job no matter who does it. Such applications can be turned into standard PaaS-layer offerings exposed on the cloud platform's interface: when a user needs a database, one click and there it is, ready to use. Someone might say: since installation is the same for everyone, I will do it myself and save the money. It is not that simple. A database is a genuinely hard thing; Oracle makes an enormous amount of money from databases alone, and buying Oracle is very expensive. Most cloud platforms instead offer an open-source database such as MySQL, which costs far less, but maintaining it yourself means hiring a sizable team, and tuning a database until it can survive a Double Eleven is not something achieved in a year or two. If you are, say, a bike-sharing company, there is no need to recruit a large database team at great cost; hand the job to the cloud platform and let professionals do professional work. The platform dedicates hundreds of people to maintaining the system, and you need only focus on your bike-sharing application.

In short, your own applications get deployed automatically and general-purpose applications need no deployment at all; either way, you worry much less about the application layer. That is the important role of the PaaS layer.

Scripting tools can solve the deployment of your own application, but environments differ greatly: a script that runs correctly in one environment often fails in another, and containers solve this problem much better. The English word "container" also means shipping container, and that is exactly the idea: a container is the shipping container of software delivery. Containers have two key characteristics: packaging and standardization. In the days before shipping containers, suppose goods had to travel from A to B, passing through three ports and changing ships three times. At every stop the cargo had to be unloaded, scattered on the dock, and then carried aboard and re-stowed neatly on the next ship, so every change of ship kept the crew ashore for days. With shipping containers, all the goods are packed together and every container is the same size, so at each change of ship the whole box is simply lifted across; it can be done in hours, and the crew no longer waits ashore for long. That is the everyday meaning of the container's two characteristics, "package" and "standard."

So how does a container package an application? Again, learn from the shipping container. First there must be a closed environment that seals the goods in, so that different loads do not interfere with each other and are isolated from one another, which also makes loading and unloading easy. Fortunately, the LXC technology in Ubuntu could do this long ago. The closed environment relies mainly on two techniques. One provides isolation in what you see, called Namespace: applications in different namespaces each see their own IP addresses, user space, process numbers, and so on. The other provides isolation in what you may use, called Cgroups: the whole machine may have plenty of CPU and memory, but a given application is only allowed to consume its allotted share. The so-called image is the moment you weld the container shut: the container's state is frozen at that instant, like Sun Wukong casting his "freeze" spell, and the state of that moment is saved as a set of files whose format is standard. Anyone who has the files can restore the frozen moment, and restoring an image to a running state, that is, reading the image files and recreating that moment, is exactly the process of running a container. With containers, the PaaS layer's automatic deployment of users' own applications becomes fast and elegant.
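As a small illustration of "package once, run the same box anywhere", here is a sketch using Docker's Python SDK (the docker package). It assumes the SDK is installed, a local Docker daemon is running, and the machine can pull the public python:3.11-slim image; these are assumptions of the sketch, not details from the article.

```python
import docker

client = docker.from_env()                      # talk to the local Docker daemon
output = client.containers.run(
    "python:3.11-slim",                         # a standard, pre-built image
    ["python", "-c", "print('hello from inside the container')"],
    remove=True,                                # clean the container up after it exits
)
print(output.decode())
```

Whatever host this runs on, the same image produces the same environment inside, which is precisely why containers make automated application deployment fast and repeatable.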
Third, big data embraces cloud computing

A complex yet common application in the PaaS layer is the big data platform. How did big data come to rest on cloud computing, step by step?

1. Even small data contains wisdom

In the beginning, big data was not big at all. How much data was there, really? Today everyone reads e-books and news online, but for those of us born in the 1980s, the amount of information when we were young was nothing like today's: we just read books and newspapers. How many words are there in a week of newspapers? If you were not in a big city, the library of an ordinary school held only a few shelves of books. Later, as informatization arrived, there was more and more information.

First, let's look at the data in big data. It comes in three types: structured data, unstructured data, and semi-structured data. Structured data has a fixed format and a bounded length; a completed form is structured data, for example nationality: People's Republic of China, ethnicity: Han, gender: male. Unstructured data, of which there is more and more, has variable length and no fixed format: web pages, for example, are sometimes very long and sometimes just a few words, and voice and video are also unstructured. Semi-structured data comes in formats such as XML or HTML; if you are not in technology you may not have run into it, and that does not matter here.

Data by itself is not useful; it has to be processed. The readings collected when you run with a fitness band every day are data, and so are the countless web pages on the internet. We call this Data. Data as such has no use, but it contains something very important called Information. Data is messy; only after it is combed and cleaned does it become information. Information in turn contains many patterns, and the patterns we manage to distill from information are called Knowledge, and knowledge changes destiny. Information is abundant, but some people look at it and are none the wiser, while others see in it the future of e-commerce or the future of livestreaming, and those people become formidable. If you do not extract knowledge from information, you are just a bystander scrolling through the internet's feeds. With knowledge, some people then apply it in practice and do very well, and that is called Intelligence, or wisdom. Having knowledge does not guarantee wisdom: many scholars are immensely knowledgeable and can analyze what has already happened from every angle, yet when it comes to acting they cannot turn it into wisdom, whereas many entrepreneurs are great precisely because they apply the knowledge they have gained to practice and end up building large businesses.

So the use of data moves through four steps: data, information, knowledge, and wisdom. The final step is what many businesses want: look how much data I have collected, can I use it to guide my next decision and improve my product? For example, while a user is watching a video, an advertisement pops up beside it showing exactly what he wants to buy; while a user is listening to music, other music she genuinely wants to hear is recommended. Users clicking around and typing on my application or website are, to me, data; I want to extract something from it, use it to guide practice, turn it into wisdom, and keep users hooked on my application, never wanting to leave once they arrive, buying and buying. Plenty of people say they want to cut off the internet at home on Double Eleven because the wife keeps buying and buying: she buys A, the site recommends B, and she says, "Oh, B is just what I like, honey, let's get it." How does this program come to be so clever that it knows my wife better than I do? How is that done?

Fourth, artificial intelligence embraces big data

1. When can a machine understand the human heart?

Even with big data in hand, human desires are never satisfied. True, the big data platform has a search engine, and when you want something you can search for it.
But there are times when you do not know what you want and cannot put it into words, and then what search returns is not what you want either. For example, a music app recommends a song you have never heard of, so you could never have searched for it, yet once it is recommended you find you really like it. That is something search cannot do. When people use an application like this, they feel that the machine knows what they want, rather than having to go and search whenever a want arises. The machine becomes like a friend who understands you, and that starts to resemble artificial intelligence.

People have been thinking about this for a very long time. The early imagining was this: there is a wall, and behind the wall a machine; I talk to it and it answers; if I cannot tell whether it is a person or a machine, then it is truly artificial intelligence.

2. Let the machine learn to reason

How could this be achieved? People first thought: I should give the computer the human ability to reason. What matters about humans, what separates humans from animals, is the ability to reason. If I can hand this reasoning ability to the machine, so that it reasons from my question and gives the corresponding answer, that would be wonderful. Machines were in fact gradually made to do some reasoning, such as proving mathematical theorems. A machine proving a mathematical formula was astonishing at first, but the result later turned out to be less surprising than it seemed, because people noticed something: mathematical formulas are extremely rigorous, the reasoning over them is rigorous, and both are easy to express to a machine; a program handles them relatively well. Human language is nothing like that. Suppose you have a date with your girlfriend tonight and she says, "If you get there and I haven't arrived, you wait; if I get there and you haven't arrived, you just wait!" The same words mean "wait for me" the first time and "you'll be sorry" the second. That is hard for a machine to understand, yet people understand it perfectly well, which is why you dare not be late for the date.

3. Teach the machine knowledge

So telling the machine how to reason rigorously is not enough; we also need to teach it knowledge. But teaching a machine knowledge is something most people cannot do; only experts can, experts in language, say, or in finance. Can knowledge of language or finance be written down somewhat rigorously, the way mathematical formulas are? A linguist might summarize subject-verb-object structure, adjectives, nouns, and verbs, with rules such as a subject must be followed by a predicate and a predicate by an object. Surely those can be expressed rigorously? It turned out not to work: the rules are simply too hard to pin down, because linguistic expression varies without end. Take subject-verb-object: in spoken language the predicate is often dropped entirely. Someone asks, "Who are you?" and the answer is just "Liu Chao." You cannot demand that people speak standard written language to a speech recognizer; that is still not intelligence. As Luo Yonghao once joked in a speech, having to say "Please place a call to so-and-so" to your phone every time is deeply awkward.

Artificial intelligence at this stage was called the expert system. Expert systems did not find success easily, partly because knowledge is hard to summarize, and partly because the knowledge, once summarized, is hard to teach to a computer.
Because you yourself are fuzzy about it: you feel there is a pattern but cannot state it, so how could you teach it to a computer through programming?

4. Then don't teach it; let it learn by itself

So people thought: the machine is a completely different species from us, so let it learn on its own. How does a machine learn? Its capacity for counting is enormous, so through statistical learning it can dig certain patterns out of huge quantities of numbers.

There is a nice example from the entertainment world. A netizen tallied the lyrics of 117 songs across 9 albums by a famous mainland singer, counting each word at most once per song, and ranked the top ten adjectives, nouns, and verbs by frequency. Now, what happens if we write down an arbitrary string of digits and use each digit to pick one word, cycling through the adjective, noun, and verb lists in turn? Take π, 3.1415926: the words come out as strong, road, fly, freedom, rain, bury, confusion. Lightly stitched together and polished: a strong child, still moving along the road, spreading wings to fly toward freedom, letting the rain bury his confusion. Doesn't it almost sound like one of his songs?

Of course, real statistical learning algorithms are far more sophisticated than this little tally. Still, statistical learning is best at fairly simple correlations, for instance noticing that one word always appears together with another and concluding that the two are related; it cannot express complex correlations. And statistical methods often lead to very complicated formulas, so to keep the computation tractable all sorts of independence assumptions get made, even though in real life genuinely independent events are rather rare.
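A minimal sketch of that digits-of-π word game. The three word lists below are made-up placeholders (the singer's actual top-ten frequency lists are not reproduced in this article), so the output verse will differ from the one above; only the mechanism is the same.

```python
# Placeholder top-ten lists; the real ones would come from the lyric tally.
adjectives = ["lonely", "strong", "gentle", "free", "quiet",
              "distant", "warm", "broken", "silent", "lost"]
nouns      = ["road", "rain", "wind", "dream", "night",
              "heart", "city", "sea", "light", "shadow"]
verbs      = ["fly", "wait", "bury", "wander", "sing",
              "fall", "burn", "drift", "return", "forget"]

digits = "31415926"            # digits of pi with the decimal point dropped
lists = [adjectives, nouns, verbs]

words = []
for position, d in enumerate(digits):
    wordlist = lists[position % 3]        # rotate adjective -> noun -> verb
    words.append(wordlist[int(d) % 10])   # the digit picks the word

print(" ".join(words))
```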
5. Simulating how the brain works

People then turned from the machine's world to reflect on how the human brain works. The brain does not store a pile of rules, nor a mass of statistics; it works through the firing of neurons. Each neuron takes inputs from other neurons and, on receiving them, produces an output that stimulates still other neurons, and the interplay of enormous numbers of neurons finally yields all kinds of outputs. When someone sees a beautiful woman and their pupils dilate, it is not because the brain measured her proportions against a rule, nor because it ran statistics over every beauty it has ever seen, but because neurons fired from the retina to the brain and back out to the pupil. In this process it is hard to say what any single neuron contributed to the final result; it just works.

So people began to model the neuron with a mathematical unit: it has inputs and an output, related by a formula, with each input affecting the output according to its own weight. Connect n such neurons together and you have a neural network, where n can be very large; the neurons can be arranged into many layers, each layer containing many of them. Because each neuron weights its inputs differently, each neuron's formula is different too. When we feed something into the network, we hope it produces the answer a human would call correct. For example, the input is an image of the digit 2, and in the output list of numbers we want the second entry to be the largest. From the machine's point of view it knows neither that the image is a 2 nor what the list of numbers means, and that is fine, because humans know what they mean. Likewise, the neurons themselves do not know that the retina is looking at a beautiful woman, nor that the pupil dilates in order to see her more clearly; it is enough that, on seeing her, the pupil does dilate.

For an arbitrary neural network, no one can guarantee that feeding in a 2 makes the second output the largest. Guaranteeing that requires training and learning; after all, pupils dilating at the sight of beauty is itself the result of many years of human evolution. The learning process feeds in a great many images, and whenever the result is not the desired one, adjustments are made. Adjust what? Every weight of every neuron is nudged a little toward the target. Because there are so many neurons and so many weights, the network as a whole rarely jumps straight to a clean yes-or-no answer; instead it improves gradually toward the desired result until it finally gets there. The adjustment strategies are, of course, highly technical and the domain of algorithm specialists. It is like the human story: at first the pupil does not dilate enough, the beauty is not seen clearly, and she runs off with someone else; next time, it is the pupil that dilates a little more, and not the nostrils.

6. It sounds illogical, but it works

It does not sound very logical, yet it can be done, and that is simply how it is. The universal approximation theorem of neural networks says: suppose someone gives you some complicated, peculiar function f(x). Whatever the function is, there is always a neural network that, for every possible input x, outputs f(x) or a sufficiently close approximation of it. If the function stands for a law, this means that no matter how strange and inexpressible that law is, it can be represented by a large number of neurons and the adjustment of a large number of weights.
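To make "nudge every weight toward the target" concrete, here is a minimal sketch (not the author's code): a tiny network with one hidden layer learns to approximate f(x) = x² on [0, 1] purely by repeatedly adjusting its weights to shrink the error, a miniature of the universal approximation idea. The layer sizes, learning rate, and target function are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs x and the outputs we want the network to produce.
x = rng.uniform(0, 1, size=(200, 1))
y = x ** 2

# Randomly initialised weights for a 1 -> 16 -> 1 network.
w1 = rng.normal(0, 1, size=(1, 16)); b1 = np.zeros((1, 16))
w2 = rng.normal(0, 1, size=(16, 1)); b2 = np.zeros((1, 1))

lr = 0.1  # how far each weight is nudged per step

for step in range(5000):
    # Forward pass: each "neuron" mixes its inputs according to its weights.
    h = np.tanh(x @ w1 + b1)          # hidden layer activations
    pred = h @ w2 + b2                # network output

    err = pred - y                    # how wrong the network currently is

    # Backward pass: work out which direction to nudge every weight.
    grad_pred = 2 * err / len(x)
    grad_w2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0, keepdims=True)
    grad_h = grad_pred @ w2.T * (1 - h ** 2)   # derivative of tanh
    grad_w1 = x.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    # Nudge every weight a little toward the desired result.
    w1 -= lr * grad_w1; b1 -= lr * grad_b1
    w2 -= lr * grad_w2; b2 -= lr * grad_b2

# After training, the network's guess for 0.5 squared should be close to 0.25.
test = np.array([[0.5]])
print(float(np.tanh(test @ w1 + b1) @ w2 + b2))
```

No single weight "knows" what a square is; the rule only emerges from the whole collection of small adjustments, which is exactly the point of the section above.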
7. An economic reading of artificial intelligence

This reminds me of economics, which makes the idea easier to grasp. Treat each neuron as an individual engaging in economic activity in society; the neural network is then the whole economy. Each neuron adjusts its weights according to the inputs it receives from society and produces its own outputs: wages rise, prices rise, stocks fall, so what should I do with my money? Is there a law behind all of this? Certainly, but can anyone state exactly what it is? Hard to say.

Running the economy on an expert system is the planned economy: economic laws are represented not by the independent decisions of each economic individual but by the insight and foresight of experts. Yet an expert can never know which street in which city lacks a sweet tofu stall, so when experts decree how much steel and how many steamed buns to produce, the plan often strays far from the real needs of people's lives, and even a plan hundreds of pages long cannot express the small hidden rules of daily life.

Macroeconomic regulation based on statistics is more reliable. Every year the statistics bureau publishes the employment rate, the inflation rate, GDP, and other indicators for the whole society. These indicators embody many underlying laws that cannot be expressed precisely, yet are comparatively dependable. But laws summarized from statistics are rather coarse: an economist reading the figures can conclude that housing prices and stocks will rise or fall over the long run, for example that in a booming economy both ought to rise, but statistics cannot capture the fine-grained fluctuations of stocks and prices.

The economy seen as a neural network of micro-level individuals is the most faithful expression of the whole economic law: each individual adjusts to the inputs it receives from society and feeds its decisions back into society as new inputs. Think of the fine fluctuations of the stock market: they are the outcome of countless independent individuals trading continuously, with no single unified rule. Each person decides independently on the basis of what the whole of society feeds them, and after many rounds of this "training," macro-level statistical regularities emerge, which is what macroeconomics can observe. For example, whenever a great deal of money is issued, housing prices eventually rise, and after enough repetitions people learn the pattern.

8. Artificial intelligence needs big data

A neural network, however, contains a huge number of nodes, and each node carries many parameters, so the total number of parameters is genuinely enormous and the demand for computation genuinely heavy. That is no longer a problem: we have the big data platform, which can pool the computing power of many machines and produce the desired result within an acceptable time. Artificial intelligence can then do many things, such as identifying spam email and flagging pornographic or violent text and images, and this too has gone through three stages: the first stage relies on keyword blacklists and filtering technology, containing
