AI Is a Superhuman Moron

Yuri Barzov
Mar 10, 2019


Computer Is a Moron

“The computer is a moron. And the stupider the tool, the brighter the master must be,” Peter Drucker wrote back in 1967. “Once we have achieved real understanding of what we are doing, we can define our needs and program the computer to fill them.”

Shortly before he died in 2005, Peter Drucker was celebrated by BusinessWeek magazine as “the man who invented management.” Naturally, when most people hear that description, they think of corporate management. And Drucker did, in fact, advise a host of giant companies (along with nonprofits and government agencies). But he came to his life’s work not because he was interested in business per se. What drove him was trying to create what he termed “a functioning society.”

“We must realize, however, that we cannot put on the computer what we cannot quantify. And we cannot quantify what we cannot define. Many of the important things, the subjective things, are in this category… People are perceptually slow, and there is no shortcut to understanding…” he pointed out.

Peter Drucker called information the new electricity long before Andrew Ng, the former AI head at Baidu and Google Brain, compared AI with electricity. Drucker emphasized that managers back in 1967 were so occupied with the quantifiable work that computers could perform that they had no time left for the work that required understanding and that computers couldn’t perform. Now, 52 years later, the job of understanding intelligence remains predominantly undone. It seems, however, that current AI gurus are only now approaching the level of understanding that Drucker demonstrated half a century ago. Only they hypocritically call Drucker’s moron superficial, yet still an intelligence.

Golden Standard AI Is a Superficial Intelligence

Yoshua Bengio, a professor at the University of Montreal and the founder of Montreal Institute of Learning Algorithms, is considered one of the three “godfathers” of deep learning, along with Yann LeCun and Geoff Hinton.

“Current models cheat by picking up on surface regularities,” said Bengio in his January 2019 public lecture Beyond the Hype: Limitations of Current AI.

“We put forward strong evidence that current deep learning methods are not yet sufficiently sample efficient when it comes to learning a language with compositional properties,” Bengio et al. concluded in a paper published in January 2019. “…current imitation learning and reinforcement learning methods scale and generalize poorly when it comes to learning tasks with a compositional structure. Hundreds of thousands of demonstrations are needed to learn tasks which seem trivial by human standards. Methods such as curriculum learning and interactive learning can provide measurable improvements in terms of data efficiency, but, in order for learning with an actual human in the loop to become realistic, an improvement of at least three orders of magnitude is required.”

Yann LeCun, another “godfather” of deep learning, is a professor at New York University and the chief AI scientist at Facebook AI Research.

“There are cases that are very obvious, and AI can be used to filter those out or at least flag for moderators to decide,” LeCun said in a recent interview with Business Insider. “But there are a large number of cases where something is hate speech but there’s no easy way to detect this unless you have broader context … For that, the current AI tech is just not there yet. If we actually train [algorithms] to do this, there is going to be significant progress in the ability of machines to capture context and make decisions that are more complex… This is not something that’s going to happen tomorrow.”

Deep learning and reinforcement learning are just new ways to program the moron. Golden standard deep learning models demonstrate superhuman results only when they are supervised, that is, trained on troves of data labelled by humans. Reinforcement learning models beat human players in human-designed games, which model human models of the real world rather than the world itself. Golden standard reinforcement learning algorithms practice those games hundreds of thousands of times before they can deliver. Programming a moron with new techniques cannot be expected to make it intelligent. In other words, it looks like the current golden standard AI has nothing to do with intelligence.
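The dependence on human labels is easy to make concrete. Below is a minimal sketch, a toy perceptron on an invented four-example dataset (nothing here corresponds to any system discussed above): the only learning signal is the difference between the model’s guess and a label a human supplied. Remove the labels and the “moron” has nothing to learn from.

```python
# A toy perceptron: "superhuman" only to the extent that humans have
# already quantified the task as (features, label) pairs.
def train(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred      # learning signal exists only because
            w[0] += lr * err * x1   # a human supplied the label
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Human-labelled data: label = 1 iff both inputs are 1 (an AND rule).
labelled = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train(labelled)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(*x) for x, _ in labelled])  # → [0, 0, 0, 1]
```

The model reproduces the rule perfectly, but the rule itself, and every bit of its “understanding” of it, came from the human who wrote the labels.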

Yet AI Is Claimed to Be a Superhuman Intelligence

Ilya Sutskever, a co-founder and the chief scientist of OpenAI, said in his keynote at the AI Frontiers Conference in November 2018: “We (OpenAI) have reviewed progress in the field over the past six years. Our conclusion is near term AGI should be taken as a serious possibility.”

We Don’t Know what Golden Standard AI Is

Dr. Eric Topol is a world-renowned cardiologist, geneticist, digital medicine researcher, and author. At NVIDIA’s GPU Technology Conference (GTC) on March 17–21, 2019 in Silicon Valley, he will present a talk on how AI and deep learning are beginning to affect medicine at three levels: clinicians, health systems and patients.

“How good is AI for prediction in medicine? We don’t know,” Topol wrote on Twitter in January 2019. “All 15 papers I’ve summarized are in silico, retrospective, many w/ statistical methodologic issues, and we’ve yet to see a prospective validation study in a real world clinical environment.”

“Yes, well I share the concerns because a lot of it (AI) is long on promise and short on implementation and validation. So what I’m espousing is that we have a remarkable opportunity here. If we really take this seriously and we are activists for our patients we will accelerate AI implementation, get the validation needed, and bring back the past to bring the future,” Topol told Forbes in February 2019.

Most AI Startups Aren’t about AI

It looks like the new hyped way of programming the moron is still far from delivering tangible results in the real world. How far? We don’t know. Yet startups that claim to use it get 15–50% more funding, as Forbes reports. A story that almost half of the startups claiming to use AI are not actually using it makes the headlines. What’s the fuss? They are certainly using programmable morons. We all do. Giving the moron a new name just brings them more cash.

Is it ethical to claim that the moron is intelligent? We should probably ask AI ethics specialists. AI ethics is a very hot topic now.

Is Ethical AI Bad?

“The AI we have today is narrow AI: superhuman in certain narrow domains, like playing chess and Go, but useless at anything else,” Calum Chace, a Forbes contributor, wrote in March 2019. “It makes no more sense to attribute moral agency to these systems than it does to a car or a rock. It will probably be many years before we create an AI which can reasonably be described as a moral agent.”

Is AI a Superhuman Moron?

Does all the above mean that AI actually stands for the same computer that Drucker called a moron? Maybe it’s just a much more powerful moron? A superhuman moron?

Could managers become morons after doing a moron’s job for decades? Could scientists become morons by spending their careers on the quantifiable jobs Drucker described?

An American mathematician and philosopher, as well as an esteemed professor at MIT, Norbert Wiener is widely recognized as one of the greatest scholars in United States history. Not only did Wiener make important contributions to fields such as electronic engineering and control systems, but he is also considered by most the founder of cybernetics.

“What sometimes enrages me and always disappoints and grieves me is the preference of great schools of learning for the derivative as opposed to the original, for the conventional and thin which can be duplicated in many copies rather than the new and powerful, and for arid correctness and limitation of scope and method rather than for universal newness and beauty, wherever it may be seen,” Norbert Wiener, the father of cybernetics, wrote in his book The Human Use of Human Beings in 1950.

Humans can behave like morons, but they cannot turn into morons permanently, because they have intelligence. Even the best-trained computer morons available today don’t have a trace of intelligence. People who claim the contrary are lying, unintentionally, of course.

Should we request ethical behavior from those scientists who continue to promote the current golden standard of AI, the Superhuman Moron, as intelligence? Most likely we should, because even a simple human moron put into a job that requires intelligence can be extremely dangerous. What can we expect from a superhuman moron in such a position? And, as Drucker put it, the stupider the tool, the brighter the master must be.

We humans need to begin by understanding what intelligence is. The ultimate achievement of the AI hype is that now we know: whatever understanding of intelligence emerges, it won’t come from the current golden standard methods of AI.

“When truly intelligent machines do finally arise—and they probably will—they will not be anything like deep neural networks or other current AI algorithms. The path ahead runs through systems that mimic biology. Like their biological counterparts, intelligent machines must learn by analogy to acquire an intuitive understanding of the physical phenomena around them,” Garrett Kenyon, a physicist and neuroscientist at Los Alamos National Laboratory, wrote in an opinion article in Scientific American in February 2019.

What Is Real Intelligence?

“The most beautiful thing we can experience is the mysterious. It is the source of all true art and science. He to whom the emotion is a stranger, who can no longer pause to wonder and stand wrapped in awe, is as good as dead — his eyes are closed,” Albert Einstein wrote in his essay for Living Philosophies, a collection of personal philosophies of famous people published in 1931.

“The essential mechanism of learning is goal free and independent of an external reinforcement,” wrote Misha Gromov, the mathematician who won the Abel Prize for geometry that revolutionized computer vision. I found the above quote by Einstein in Gromov’s writing.

“This is the problem of innovation. How can an observer ever break out of inadequate model classes and discover appropriate ones? How can incorrect assumptions be changed? How is anything new ever discovered, if it must always be expressed in the current language? If the problem of innovation can be solved, then, as all of the preceding development indicated, there is a framework which specifies how to be quantitative in detecting and measuring structure. One approach to this problem is hierarchical epsilon-machine reconstruction. In this, one starts with the simplest assumptions about the world and then builds a succession of more sophisticated languages as the assumptions prove inadequate. Epsilon-machine reconstruction plays a central role in this because we use it to discover regularities, not in the raw data, but in a series of increasingly accurate models. Thus, we replace the data stream with a “model stream” and the regularities discovered form the basis of a new language that describes how less-accurate models are transformed into more-accurate ones,” proposed James P. Crutchfield, a complexity scientist whose student adventures in Las Vegas were featured in Thomas A. Bass’s book The Eudaemonic Pie, a couple of decades ago.
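Crutchfield’s proposal is far richer than any toy, but the core loop he describes, start with the simplest assumptions and move to a more sophisticated model class only when those assumptions prove inadequate, can be sketched in a few lines. Everything below is an illustrative invention, not epsilon-machine reconstruction itself: it fits Markov predictors of growing order to a symbol sequence and stops at the first order whose predictions stop failing.

```python
# Toy version of "refine the model class when its assumptions fail":
# fit Markov predictors of increasing order and keep the simplest one
# that predicts the sequence without error.
from collections import Counter, defaultdict

def accuracy(seq, order):
    """Predict each symbol from its preceding `order` symbols."""
    counts = defaultdict(Counter)
    for i in range(order, len(seq)):
        counts[seq[i - order:i]][seq[i]] += 1
    hits = 0
    for i in range(order, len(seq)):
        # Predict the most frequent continuation of this context.
        hits += counts[seq[i - order:i]].most_common(1)[0][0] == seq[i]
    return hits / (len(seq) - order)

def simplest_adequate_order(seq, max_order=4):
    for order in range(max_order + 1):
        if accuracy(seq, order) == 1.0:  # assumptions adequate: stop
            return order
    return max_order

seq = "0110" * 50                 # a period-4 pattern
print(simplest_adequate_order(seq))  # → 2
```

Order 0 (a single symbol frequency) and order 1 (pairs) both leave prediction errors, so the loop escalates; at order 2 every context determines its successor and the search stops. The interesting open question Crutchfield poses is how to make the escalation itself discover genuinely new languages rather than just longer contexts.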

In support of Crutchfield’s ideas, Christopher Lynn, Ari Kahn and Danielle Bassett, researchers at the University of Pennsylvania, recently proposed that the human ability to detect patterns might stem, in part, from the brain’s desire to represent new things in the simplest way possible.

As soon as we understand intelligence, the superhuman moron will finally get a superhuman master… or at least an intelligent human master.

Stop searching for a program to execute. Think for yourself!
