How much do you know about the origin of AI technology?

The basic framework of artificial intelligence dates back to the 1940s, and organizations of every kind have been innovating on it ever since.

In recent years, big data and advanced deep learning models have pushed the development of artificial intelligence to an unprecedented level. Will these new technical components eventually produce the intelligent machines envisioned in science fiction, or will they simply continue the current trend of artificial intelligence, putting “the same wine in a higher-end bottle”?

“This is actually new wine, but it comes in all kinds of bottles and in different vintages,” said James Kobielus, Wikibon's chief analyst for data science, deep learning and application development.

Kobielus added that most of the old wine is, in fact, still quite drinkable: the new generation of artificial intelligence takes earlier methods and builds on them, for example the technology underlying Apache's big data framework Hadoop.

However, today's enthusiasm for artificial intelligence stems from specific developments that earlier candidates for AI lacked. According to Kobielus, existing technology is bringing us closer to machines that behave like humans. “The most important of these is big data,” he said from theCUBE studio in Marlborough, Massachusetts.

Why is big data stimulating interest in artificial intelligence? Because it is an enormous help in training deep learning models, enabling them to make more human-like inferences. Kobielus discussed these technological breakthroughs in artificial intelligence and machine intelligence with Dave Vellante, principal analyst at Wikibon and co-host of SiliconANGLE's live studio.

The artificial intelligence revolution will be algorithmic

The rapid rise of artificial intelligence in the public conversation is matched by rapid revenue growth. According to research firm Tractica LLC, the artificial intelligence software market was worth $1.4 billion in 2016 and is forecast to reach $59.8 billion by 2025.

“Artificial intelligence has applications and use cases in almost every industry vertical and is considered the next major technological shift, similar to past shifts such as the industrial revolution, the computer age, and the smartphone revolution,” said Tractica research director Aditya Kaul. Those verticals include finance, advertising, healthcare, aerospace and consumer products.

The idea that the next industrial revolution will revolve around artificial intelligence software may sound like a far-fetched nerd fantasy, but the sentiment is spreading even outside Silicon Valley. Time magazine recently published a special feature entitled "Artificial Intelligence: The Future of Mankind". Yet this notion of artificial intelligence has existed for decades in the wilds of science fiction and technology circles. Has the technology really advanced that much in just the past few years? What can we realistically expect from today's artificial intelligence, and in the foreseeable future?

First, artificial intelligence is a broad label, more of a buzzword than a precise technical term. According to Kobielus, artificial intelligence refers to “any method that helps machines think like humans.” But strictly speaking, doesn't a machine's “thinking” differ from what the human brain does? Machines don't really think, do they? It depends. If “thinking” is taken as a synonym for “inferring,” then the machine can be considered on a par with the brain.

When people talk about artificial intelligence, they are usually talking about its most popular form, machine learning: a mathematical technique whose principle is to infer patterns from a data set. “For a long time, people have used software to infer patterns from data,” Kobielus said. Established inference methods include support vector machines, Bayesian logic and decision trees. These techniques have not disappeared; they are still used across the growing field of artificial intelligence. A machine learning model, an algorithm trained on data, can make its own inferences, often referred to as AI output or insight, and those inferences do not need to be pre-programmed into the machine; only the model itself has to be programmed.
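
To make “inferring a pattern from a data set” concrete, here is a minimal sketch using a decision tree, one of the established methods Kobielus mentions. It assumes scikit-learn is available, and the tiny data set and its feature meanings are invented purely for illustration.

```python
# Minimal sketch: inferring a pattern from data with a decision tree.
# Assumes scikit-learn is installed; the toy data set is invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Each row is [hours_of_daylight, cloud_cover_percent]; labels: 1 = "lights on", 0 = "lights off".
X = [[14, 10], [13, 20], [9, 80], [8, 90], [15, 5], [7, 95]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)                      # the pattern is inferred from the data, not hand-coded

print(model.predict([[10, 70]]))     # the trained model now makes its own inference
```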

A machine learning model's inferences are based on statistical probability, which is somewhat similar to the process of human understanding. Inferences from data can take the form of predictions, correlations, classifications, categorizations, anomaly detection or trend identification. For machines, the learning model is hierarchical: the data classifiers are called “perceptrons,” and layering perceptrons together forms an artificial neural network. The connections between perceptrons activate their functions, including nonlinear activation functions such as the hyperbolic tangent. Through this neural process, the answer or output of one layer becomes the input of the next layer, and the final layer produces the final result.
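
The layering Kobielus describes can be sketched in a few lines of NumPy: each layer multiplies its input by a weight matrix and applies a nonlinear activation (the hyperbolic tangent here), and its output becomes the next layer's input. The layer sizes and random weights below are arbitrary choices made only to show the flow.

```python
# Minimal sketch of stacked perceptron layers with a tanh nonlinearity.
# Layer sizes and random weights are arbitrary; this only illustrates the forward pass.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 8, 8, 3]           # input layer, two hidden layers, output layer
weights = [rng.standard_normal((m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    activation = x
    for w in weights[:-1]:
        activation = np.tanh(activation @ w)   # the output of one layer feeds the next
    return activation @ weights[-1]            # the final layer produces the result

print(forward(np.array([0.2, -1.0, 0.5, 0.1])))
```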

Deep learning: layers of neurons

Deep learning networks are artificial neural networks with many perceptron layers; the more layers a network has, the greater its depth. These extra layers ask more questions, handle more inputs and produce more outputs, abstracting the data at higher and higher levels.

Facebook's automatic face recognition technology is driven by deep learning networks. Combining more layers allows an image to be described more richly. “You might ask, isn't this a face? But if it's a scene-recognition deep learning network, it might recognize that the face corresponds to a person named Dave, who also happens to be the father in this family scene,” Kobielus said.

Neural networks with as many as 1,000 perceptron layers now exist, and software developers are still exploring what deeper networks can achieve. The latest Apple iPhone face detection software relies on a 20-layer convolutional neural network. In 2015, Microsoft researchers won the ImageNet computer vision contest with a 152-layer deep residual network. Thanks to a design that keeps data from diluting as it passes through the layers, said Peter Lee, director of research at Microsoft, the network was able to gather more information from pictures than a typical 20- or 30-layer network. “We can learn a lot of subtle things,” he said.
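
The “design that prevents data dilution” refers to residual, or skip, connections: each block adds its input back to its output so information can pass through very deep stacks without washing out. The PyTorch block below is a minimal sketch of that idea, not Microsoft's actual 152-layer network.

```python
# Minimal sketch of a residual (skip-connection) block in PyTorch.
# Not Microsoft's actual architecture, just the core idea that lets depth scale.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.conv2(out)
        return self.relu(out + x)    # the skip connection: the input is added back in

# Stacking many such blocks keeps the signal from diluting as depth grows.
net = nn.Sequential(*[ResidualBlock(16) for _ in range(10)])
print(net(torch.randn(1, 16, 32, 32)).shape)
```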

Beyond image processing, new artificial intelligence and deep learning use cases are emerging in fields from law enforcement to genomics. In a study last year, researchers used artificial intelligence to predict the outcomes of hundreds of cases before the European Court of Human Rights, matching the human judges' final decisions with 79% accuracy.

Given the ability to “think” and a wealth of resources, machines can sometimes draw conclusions even more accurately than people. Recently, a deep learning algorithm developed by Stanford University researchers proved better at diagnosing pneumonia than human radiologists. The algorithm, called “CheXNet,” uses a 121-layer convolutional neural network trained on a set of more than 100,000 chest X-ray images.
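
The published CheXNet work builds on a 121-layer DenseNet architecture; the sketch below shows, roughly, how such a network could be set up with PyTorch and torchvision to produce a single pneumonia probability. It is an illustrative approximation, not the researchers' actual code, and the single-output head is an assumption.

```python
# Illustrative sketch of a 121-layer convolutional network (DenseNet-121) set up
# for a single "pneumonia probability" output. Not the actual CheXNet training code.
import torch
import torch.nn as nn
from torchvision import models

model = models.densenet121(weights=None)   # 121-layer network; pretrained weights could be loaded
model.classifier = nn.Linear(model.classifier.in_features, 1)  # one output: pneumonia score

x = torch.randn(2, 3, 224, 224)            # a batch of chest X-ray sized images
prob = torch.sigmoid(model(x))             # probability of pneumonia for each image
print(prob.shape)

# Training would minimize binary cross-entropy against radiologist-provided labels:
loss_fn = nn.BCEWithLogitsLoss()
```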

Artificial intelligence models continue to improve as they learn

This highlights a key issue in deep learning: the algorithms are only as good as the data they are trained on. The accuracy of their predictions is roughly proportional to the size of the training data set, and the training process requires expert supervision. “You need a team of data scientists and other developers who specialize in statistical modeling, who are good at acquiring training data and labeling it (labels play a very important role), and who are good at developing and deploying models iteratively through developer operations,” Kobielus said.
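
The point that prediction accuracy grows roughly with the size of the labeled training set can be illustrated with a simple learning-curve experiment. The sketch below uses scikit-learn's small digits data set purely as a stand-in; it is not tied to any project mentioned here.

```python
# Minimal sketch: the same model trained on progressively more labeled data.
# Uses scikit-learn's small digits data set purely as a stand-in example.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for n in (50, 200, 800, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])                # more labeled examples...
    print(n, round(model.score(X_test, y_test), 3))    # ...generally mean higher accuracy
```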

Labeled data is crucial for machine learning models, and the human eye is still the best tool for the labeling job. IBM said last year that it is hiring large numbers of people simply to label data for artificial intelligence.

University of Toronto researchers Parham Aarabi and Wenzhi Guo have explored ways of combining human judgment with neural networks. They developed an algorithm that learns from explicit human instructions rather than from a series of examples.

In image recognition, for instance, the trainer might tell the algorithm that the sky is usually blue and sits at the top of the picture. Their approach worked better than conventional neural network training. “If you don't train the algorithm, you don't know whether it works,” Kobielus said. He also predicted that much of the training will be done in the cloud or other centralized environments, while distributed “Internet of Things” devices, such as autonomous vehicles, will make decisions on the spot.
