Chapter 254
The reason AI does well on cognitive tasks such as image recognition and speech recognition is that these tasks are static: for a given input, the prediction result does not change over time.

Decision-making problems, however, involve complex interactions with the environment. In many scenarios the optimal decision is dynamic and changes over time.

Some people are now trying to apply AI to financial markets: using AI to analyze stocks, predict their rises and falls, give trading advice, and even replace human traders entirely. Problems of this type are dynamic decision-making problems.

The second difficulty of decision-making problems lies in the mutual influence of the various factors: pull on one thread and the whole web moves.

The rise and fall of one stock affects other stocks, and one person's investment decision, especially that of a large institution, may affect the entire market. This is unlike static cognitive tasks.

In static cognitive tasks, our predictions have no influence on the problem itself (e.g., on other images or speech samples).

But in the stock market, every decision, especially the investment strategy of a large institution, has an impact on the entire market, on other investors, and on the future.

At present, deep learning has achieved great success on static tasks. How to extend that success to such complex dynamic decision-making problems is one of the current challenges of deep learning.

Zhang Shan believes that one possible idea is game machine learning.

In game machine learning, the AI observes the environment and the behavior of other individuals, constructs a personalized behavior model for each individual, and thus thinks before it acts: it chooses an optimal policy that adapts to changes in the environment and in the other individuals' behavior.
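
The core loop is easy to illustrate. Below is a minimal Python sketch of the idea, not Zhang Shan's actual method: the repeated rock-paper-scissors game, the per-opponent frequency model, and the best-response rule are all illustrative assumptions.

```python
import random
from collections import Counter, defaultdict

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class GameLearner:
    """Keeps a separate behavior model per opponent and best-responds to it."""

    def __init__(self):
        # One empirical move distribution for each opponent seen so far.
        self.models = defaultdict(Counter)

    def observe(self, opponent_id, move):
        # Update that opponent's personalized behavior model after each round.
        self.models[opponent_id][move] += 1

    def act(self, opponent_id):
        model = self.models[opponent_id]
        if not model:
            return random.choice(list(BEATS))   # no data yet, so explore
        predicted = model.most_common(1)[0][0]  # opponent's most frequent move
        return BEATS[predicted]                 # counter the predicted move

# Usage: the learner adapts to each opponent's observed habits.
learner = GameLearner()
for _ in range(20):
    learner.observe("opponent_A", "rock")
print(learner.act("opponent_A"))  # -> "paper"
```

Because each opponent gets its own model, the chosen policy shifts automatically as the observed behavior shifts, which is exactly the think-before-acting loop described above.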

……

In this paper, Zhang Shan proposed a kind of machine learning that runs almost completely counter to deep learning: shallow learning.

It emphasizes strengthening game machine learning, stresses the logical and speculative nature of the AI, and greatly reduces the sheer volume of rote "machine learning" tasks.

There is no doubt that this is a whole new approach to machine learning!

At the very least, the performance of this new model in processing dynamic information will be revolutionary.

The name "shallow learning" sounds a bit odd!

Yet the straightforward-sounding name is not actually new.

In fact, shallow learning has already appeared on the stage of history!
The invention of the backpropagation algorithm (the Back Propagation, or BP, algorithm) for artificial neural networks brought new hope to machine learning and set off a boom in "statistical model-based" machine learning, a boom that continues to this day. It was found that the BP algorithm lets an artificial neural network model learn statistical regularities from a large number of training samples and thereby predict unknown events. This statistics-based approach showed advantages in many respects over the earlier systems based on hand-crafted rules. The artificial neural network of that time, although known as a multi-layer perceptron (Multi-layer Perceptron), was in fact a shallow model containing only a single hidden layer of nodes.
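
For concreteness, such a one-hidden-layer network trained with BP fits in a few lines. This is a minimal NumPy sketch under stated assumptions: the XOR toy data, squared-error loss, the layer sizes, and the learning rate are all arbitrary choices for illustration.

```python
import numpy as np

# Toy data: XOR, the classic task a single-hidden-layer MLP can learn.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(scale=0.5, size=(2, 4)), np.zeros(4)  # input -> hidden
W2, b2 = rng.normal(scale=0.5, size=(4, 1)), np.zeros(1)  # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass through the single hidden layer.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass (BP): propagate the squared-error gradient layer by layer.
    dp = (p - y) * p * (1 - p)      # gradient at the output units
    dh = (dp @ W2.T) * h * (1 - h)  # error propagated back to the hidden layer

    W2 -= 0.5 * (h.T @ dp); b2 -= 0.5 * dp.sum(axis=0)
    W1 -= 0.5 * (X.T @ dh); b1 -= 0.5 * dh.sum(axis=0)

print(p.round(2).ravel())  # should approach [0, 1, 1, 0]
```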

In the 1990s, various shallow machine learning models were proposed, such as Support Vector Machines (SVM), Boosting, and maximum-entropy methods (such as Logistic Regression, LR). The structure of these models can basically be regarded as having one layer of hidden nodes (as in SVM and Boosting) or no hidden nodes at all (as in LR). These models achieved great success in both theoretical analysis and application. By contrast, because their theoretical analysis is difficult and their training demands a great deal of experience and skill, shallow artificial neural networks were relatively quiet during this period.
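
All three families of shallow models named here are available off the shelf today. A brief sketch, assuming scikit-learn is installed and substituting a synthetic toy dataset (both assumptions, not from the text):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A synthetic binary classification task standing in for real data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The three families of shallow models mentioned above.
models = {
    "LR (no hidden layer)": LogisticRegression(max_iter=1000),
    "SVM (one implicit layer of kernel units)": SVC(kernel="rbf"),
    "Boosting (one layer of weak learners)": AdaBoostClassifier(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(f"{name}: test accuracy {model.score(X_te, y_te):.2f}")
```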

Still, calling it shallow learning seems not quite right either. The earlier "shallow learning" usually refers to shallow supervised learning~
Shallow supervised one-hidden-layer neural networks have some desirable properties that make them easier to interpret, analyze, and optimize than deep networks; but their representational power is weaker.

One-hidden-layer learning problems of this kind are typically used to build deep networks sequentially, layer by layer, so that the deep network can inherit the properties of the shallow ones.

Zhang Shan also mentioned these in the paper~
On shallow supervised learning: deep convolutional neural networks trained on large-scale supervised data via the backpropagation algorithm have become the dominant method in most computer vision tasks.

This success has carried deep learning into other fields as well, such as speech recognition, natural language processing, and reinforcement learning. However, it is still difficult to understand how deep networks behave and why they perform so well. A big reason for this difficulty is the end-to-end training of the network's layers.

Supervised end-to-end learning is a standard approach to neural network optimization.

But it also has some potential problems worth considering.

First, the use of a single global objective means that the functional behavior of an individual intermediate layer of a deep network is determined only indirectly: how the layers work together to produce high-accuracy predictions is far from clear.

Some researchers have argued, and shown experimentally, that CNNs learn mechanisms that progressively build invariance to complex but irrelevant variability while increasing the linear separability of the data.

Sequential learning of CNN layers by solving shallow supervised learning problems is an alternative to end-to-end backpropagation.

This strategy lets the goal of each layer be specified directly, for example by encouraging the representation to refine particular properties, such as progressive linear separability. Theoretical tools for deep greedy methods can then be developed from the theoretical understanding of the shallow subproblems.
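
In modern notation, the layer-wise strategy might be sketched as follows. This is a minimal illustration, not the design from Zhang Shan's paper: the two-block architecture, the random placeholder images, and the auxiliary linear heads (a proxy objective for linear separability) are all assumptions.

```python
import torch
from torch import nn

# Placeholder data standing in for a labeled image dataset.
images = torch.randn(256, 3, 32, 32)
labels = torch.randint(0, 10, (256,))

blocks = nn.ModuleList([
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2)),
])

features = images
for i, block in enumerate(blocks):
    # Auxiliary head: each layer gets its own shallow objective (linear
    # separability of its output) instead of one global end-to-end loss.
    with torch.no_grad():
        out_dim = block(features[:1]).flatten(1).shape[1]
    head = nn.Linear(out_dim, 10)
    opt = torch.optim.Adam(
        list(block.parameters()) + list(head.parameters()), lr=1e-3
    )

    for epoch in range(5):  # solve the shallow subproblem for this layer
        logits = head(block(features).flatten(1))
        loss = nn.functional.cross_entropy(logits, labels)
        opt.zero_grad(); loss.backward(); opt.step()

    # Freeze the trained block; its output becomes the next layer's input.
    for p in block.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        features = block(features)
    print(f"block {i}: final auxiliary loss {loss.item():.3f}")
```

Each block is trained greedily against its own linear classifier and then frozen, so the deep network is assembled from a sequence of shallow supervised problems rather than one opaque global optimization.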

The prospects of artificial intelligence are broad, but Zhang Shan feels that blindly trying to remake machines in the image of human strengths gets things exactly backwards.

The truly reasonable approach is to use artificial intelligence to help humanity toward a better second evolution!

This is the real revolutionary direction!

In the next ten years, what new developments in artificial intelligence will be worth noting?

Zhang Shan remembers that Elsevier, a global information analytics company, once asked researchers in the field what they consider the most important progress in artificial intelligence.
Professor Wendy Hall, University of Southampton: "One of the interesting things about progress in artificial intelligence is that, in trying to solve the problem of 'artificial general intelligence', the larger question of whether we can create machines that think and act like humans, we have created some very intelligent tools. We've seen a lot of advances, like facial recognition, voice translation, and service automation. Machines are much better at processing data and learning from it than we are. Over the past 30 years, things like facial recognition have come a long way. The development of this deep learning application is an amazing development."

Professor Virginia Dignum of TU Delft said: "Probably the greatest progress is what we have yet to make. Currently, we rely too much on stochastic/probabilistic approaches to artificial intelligence..."

Gary Marcus, professor of psychology and neuroscience at New York University: "Many of the best advances were made in the early days, when people discovered something basic. For example, people found the basic logic for doing symbolic operations, which is the basis for doing search. The neural network thing was discovered in the '80s, but it has a much longer history and is obviously very useful for a whole bunch of problems like classification." "But we haven't made a ton of progress yet. To put it in perspective, it's like asking me in 1600 what the biggest advance in chemistry was. I don't know; in many aspects of artificial intelligence, we're still trying to do alchemy."

Stuart Russell, professor of electrical engineering and computer science at UC Berkeley, said: "The greatest contribution artificial intelligence has made is this notion of knowledge-based systems: systems with internally represented knowledge, and programs that reason about it." "We need a more organic combination of perception and reasoning. People have a feedback loop between the eyes and the brain; the brain doesn't just respond to what the eyes see, it controls what we see: what we perceive, what we recognize as having just happened or about to happen, what we can ignore, what we pay special attention to. In AI systems, we don't have that right now."

"AI is already being used in many systems in society. ... They just don't look the way people expect them to," notes Elizabeth Lin of Elsevier's web analytics firm.

Zhang Shan also feels that, however closely deep learning and AI are now identified with each other, this is not the way we will create true intelligence.

We will need causal abstraction and other mechanisms we don't yet have, in a form that is scalable and usable at scale. This is the next big thing.

We all know that deep learning is a hot topic.

What's exciting about the past 10 years is how pattern-recognition research has progressed, and how this has impacted computer vision.

This is probably the area where deep learning has had the greatest impact. You can see it in self-driving cars; in medical imaging, the same techniques can more accurately identify whether you have a certain type of cancer.

It's very interesting to connect this kind of image understanding with natural language processing and then apply it to health problems.

Beyond deep learning, computer vision and natural language processing will remain focal points of AI research over the next 10 years; general artificial intelligence, causal abstraction, and the combination of perception and reasoning mentioned by the experts above are likely to be the new hot spots worth watching in the coming decade.

But, as Professor Dignum puts it, "the biggest advances are probably the ones we haven't made yet". For example, quantum computing is a field of cutting-edge scientific research that has received strong government support in many countries. Could artificial intelligence frameworks, such as search and generative systems, be executed quickly on quantum computers? Could quantum phenomena such as superposition and entanglement be used to compute on data represented by quantum states, improve machine learning capabilities at scale, and contribute to the development of super artificial intelligence? The goals pursued by artificial intelligence and machine learning are ambitious; could quantum computing take those ambitions a step further? None of these questions has an accepted answer yet.

While people talk happily about how the artificial intelligence revolution will change our world, Zhang Shan worries about its possible negative effects; he worries, in particular, that artificial intelligence will be used to fool and harm humans.

Indeed, artificial intelligence will bring many benefits and improve our lives: entertainment, work in dangerous places, elderly care, remote shopping, travel, and so on. However, the technological revolution is often called a "double-edged sword"; that is, it has negative effects too. How to deal with the negative effects of artificial intelligence, and reduce or avoid the harm, is an issue worth attention.

Artificial intelligence poses challenges to humanity and society, the most obvious of which is unemployment. Some predict that machines will first take over telemarketing, transport services, sewer management, tax preparation, photo processing, data entry, and the work of librarians and library technicians. For example, the jobs of the vast majority of the millions of truck drivers will be automated by autonomous driving. Although this seems worrying, the automation of human labor has in fact been a major trend since the Industrial Revolution of the 18th century; of course, the breadth and depth of the automation brought by artificial intelligence will be unprecedented. Another issue is privacy. For example, AI can accurately infer our habits, preferences, and private details from our social media feeds. There is also deep unease about AI-powered autonomous weapons: weapons with the power to decide whether to take a human life (although others argue that autonomous weapons could be designed to be more reliable than humans).

A more serious problem is algorithmic bias. While AI decision-making software can in principle be designed to be unbiased, poor algorithmic design leads to poor decisions, and if a machine learning program is trained by biased people or on biased data, the program will be biased too. Finally, the worrying long-run question is what happens if we reach the "singularity": the hypothetical point at which AGI systems become smarter than humans. Perhaps artificial intelligence will then be beyond human control and may even threaten human existence; of course, the scientific community still disagrees about whether the singularity will ever occur.

Regardless, there is great uncertainty about the long-term future of AI. In response to these possible negative impacts, defensive measures should be studied: preventing machine deception, guarding against threats and attacks by AI-equipped hackers, and teaching artificial intelligence to distinguish right from wrong.

This is why Zhang Shan mentioned shallow thinking when talking about artificial intelligence.

Shallow thinking means giving the machine a measure of speculative reasoning, so that it can at least distinguish right from wrong.

Although it sounds a bit idealistic, Zhang Shan doesn't want to one day get caught up in trouble caused by an AI machine he himself researched.

(End of this chapter)
