Chapter 256: Unfounded Worries
Although the full arrival of the artificial intelligence era was only Eve Carly's intuition, she felt that intuition was not the same as illusion.
The essence of intuition is often logic so familiar to the mind that it no longer requires conscious thought.
It is like certain exams: the answer chosen on first impression is almost always correct, and second-guessing it again and again only overturns what was right.
In short, Eve Carly trusted her scientific instincts.
And subjectively, Eve Carly would rather believe that the one to open a new chapter in artificial intelligence was Lin Hui.
Like progress in mathematics, progress in computing is driven largely by genius; in other words, whoever opened the next chapter of artificial intelligence was destined to be a genius.
Eve Carly had met many computer geniuses, but very few were like Lin Hui: not irritating, but fascinating.
Rather than have the new chapter of artificial intelligence opened by some insufferable villain, better that it be opened by Lin Hui.
In any case, the content Lin Hui had added to the paper filled Eve Carly with expectations for the future, and she was looking forward to discussing it with him.
She knew that Lin Hui usually played the role of listener in their exchanges, so this time she again followed her usual practice: rather than wait for Lin Hui to speak first, she took the lead in laying out her views on the supplementary content of his paper, along with her doubts.
She shared with Lin Hui almost all of her earlier thoughts, including, but not limited to, her strong interest in the paper's supplementary content and her concern that its expectations for artificial intelligence might stir controversy at the social level.
She even told Lin Hui some of her guesses about the intended use of the patents she had previously acquired from him.
For some reason, since arriving in China, Eve Carly had felt more assertive than before. Her approach to things seemed to have shifted; she now held definite judgments of her own, and she hoped to confirm her earlier guesses with Lin Hui.
Listening to Eve Carly's explanation, Lin Hui had not expected that additions he considered fairly commonplace would be given so much weight by her.
The expectant look on her face somehow reminded him of a little fox eager for meat.
This time, however, Lin Hui might have to disappoint her.
Although some of the content added to the paper was indeed ahead of this time and space, Lin Hui was very restrained in what he actually carried over. As the saying goes, one step ahead makes a pioneer, two steps ahead makes a martyr, and he had no intention of being the latter.
Take the pre-training mechanism that Eve Carly rated so highly.
Introducing pre-training into the machine-learning side of natural language processing was indeed quite pioneering in this time and space, but Lin Hui knew perfectly well that the mechanism he had introduced could only be called a modest step forward.
Lin Hui's "pre-training" was pre-training of an ordinary neural network language model, far less efficient in application than a genuinely capable Transformer-based pre-trained model.
As for why Lin Hui did not simply transplant the more mature Transformer-based pre-training mechanism, the reason was simple: there was no Transformer yet. Building a model on Transformer now would be absurd.
As for "deep learning", Eve Carly also has great expectations.
Although Lin Hui can indeed tinker with deep learning in the true sense.
But it seems unnecessary for the time being. When it comes to deep learning, Lin Hui does not plan to launch it in the direction of natural language processing.
As for Lin Hui not intending to introduce real deep learning in the direction of natural language processing, why is deep learning still mentioned in the current paper?
That's because almost all researchers in neural network learning in this time and space are so confident that they call their neural network learning deep learning.
In this case, even if Lin Hui's neural network learning application is not actually that deep, wouldn't it look inferior to others if it is not called deep learning?
As for the idea of transfer that Eve Carly found so interesting: over a long enough timeline, transfer learning could indeed break out of the small circle of natural language processing and migrate to all fields of machine learning, just as she expected. In the short term, though, that would be quite difficult.
Despite these difficulties, Lin Hui did not dampen Eve Carly's enthusiasm; instead, he painted her an even grander picture.
The way he dangled that picture even reminded Lin Hui of his department leaders in his previous life.
He felt no guilt about it, though: the pie his old leaders had painted was pure illusion, while the blueprint Lin Hui sketched would certainly be realized, for it had already been verified in his previous life.
However long the road, one day Lin Hui would make everything he described a reality, and he was already moving toward that blueprint.
Although the content he had added to the paper was not as strong as Eve Carly imagined, it was at least progress, and some of it, measured against the current state of research in this time and space, was progress from zero to one.
As for Eve Carly's concerns about the social implications of artificial intelligence, Lin Hui did know a thing or two; many big names in his previous life had voiced such worries.
Stephen Hawking, Bill Gates, and Elon Musk had all expressed concern that artificial intelligence might develop self-awareness and consciousness.
Hawking in particular had gone so far as to claim that artificial intelligence could be humanity's greatest disaster, and that if not managed properly, thinking machines might end human civilization.
Whether these people had made the same remarks in this life, Lin Hui had not specifically checked.
In any case, from his point of view, the concern might be justified in theory, but on closer inspection it was rather far-fetched in practice.
What could truly threaten human civilization would have to be strong artificial intelligence, not weak artificial intelligence.
Weak AI is not entirely without threat, but unlike strong AI, where a single system can handle all kinds of intelligent behavior, weak AI requires a new, independent system for each intelligent behavior.
So even if some particular capability of a weak AI posed a threat, humans would only need to build safety measures into the independent system responsible for that behavior.
Rather than debate the threat weak artificial intelligence poses to humanity, it makes more sense to worry about people with ulterior motives abusing it.