AI: At War Against Humanity?

Yuri Barzov
Dec 30, 2019 · 7 min read


  • “Over and over again men are blinded by too violent motivations and too intense frustrations into blind and unintelligent and in the end desperately dangerous hates of outsiders. And the expression of these their displaced hates ranges all the way from discrimination against minorities to world conflagrations. What in the name of Heaven and Psychology can we do about it? My only answer is to preach again the virtues of reason, that is, of broad cognitive maps.”
  • Edward Tolman
  • “Now strange words simply puzzle us; ordinary words convey only what we know already; it is from metaphor that we can best get hold of something fresh.”
  • Aristotle

This morning Google’s AI algorithm suggested that I replace the phrase ‘bad learner’ with ‘slow learner.’ I don’t know how to reject the suggestion, and the word ‘bad’ is still hanging in front of my eyes, underlined in red. I’m not a native English speaker, and I can’t tell what’s wrong with the phrase ‘bad learner.’ I can tell, however, that if I do something wrong, it’s definitely not slow. It’s bad.

I speculate that an improvement to the algorithm may be the reason for such a bad recommendation. Previously I didn’t get any recommendations on style. The new recommendations have something to do with idioms: the algorithm now knows idioms, but it doesn’t understand the contexts in which they are suitable.

Idioms are easier to comprehend than metaphors. Understanding irony is harder still, because irony rests on the hidden dissimilarity of things that look similar, whereas metaphor rests on the similarity of things that appear dissimilar.

Some time ago a state-of-the-art Facebook algorithm based on self-supervised learning removed my post because of an idiom that could be read as hate speech if taken out of context. In the context of my post it was irony. A while later Facebook’s AI removed my wife’s ironic comment as hate speech too.

The irony of the situation is that Facebook recently reported that it now automatically detects 80% of hateful posts without prior user complaints, thanks to that new AI algorithm which can’t tell irony from hate.
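To see why this happens, consider a deliberately crude sketch (my own toy example, not Facebook’s actual model, whose details are not public): any scorer that works from surface lexical features alone assigns the same score to a hateful sentence and to an ironic quotation of it, because the words are identical.

```python
# Toy illustration only, NOT Facebook's real system: a keyword-weighted
# scorer sees identical surface features in a literal attack and in an
# ironic quotation of one. The phrases and weights are hypothetical.

HATE_WEIGHTS = {"get rid of": 0.8, "them": 0.1}

def hate_score(text: str) -> float:
    """Sum the weights of flagged phrases found in the text."""
    text = text.lower()
    return sum(w for phrase, w in HATE_WEIGHTS.items() if phrase in text)

literal = "We should get rid of them."
ironic = 'Oh sure, "we should get rid of them". What a brilliant plan.'

# Both sentences contain the same trigger phrases, so both score 0.9:
# the scorer has no channel for the speaker's ironic intent.
print(hate_score(literal), hate_score(ironic))
```

Real moderation models are neural rather than keyword-based, but they too consume surface features of the text; nothing in the training signal encodes the speaker’s intent.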

People who find it difficult to understand irony and metaphors insistently demand that any text they find hard to comprehend be clearly spelled out for them. They want to understand it literally and at once, and they require their counterparts to use only clear, unambiguous definitions.

Finally, such people try to humiliate those who refuse to switch to their one-dimensional style of communication. Yet when their own views differ, they argue with each other stubbornly and viciously, paying no attention to the other party’s arguments.

Humans tend, incorrectly, to compare people whose behavior seems meaningless to animals. In reality every living organism is constantly engaged in meaningful activity, even if it lacks the self-awareness to recognize this.

A robot is the best metaphor for a literally thinking human. A robot needs clear, unambiguous commands so that it can execute them without understanding. Fortunately, robots neither get irritated nor enter into aggressive arguments when faced with messages that require understanding. They simply ignore them.

People who fail to understand metaphor and irony have one more important similarity with machines: they can be programmed and controlled through the informational channels that enslave them.

Metaphors are based on non-obvious associations between established, well-understood concepts and new, hard-to-comprehend ones, which may belong to entirely unlinked domains. That is how humans generate new ideas from the bottom up. Metaphor and irony are thus the acme of associative thinking; rigid, narrow-minded, top-down suppression of new thought is its opposite.

Linguistic environments stripped of the ambiguity of metaphor and irony suppress agency and activate addictions in people, because narrow-mindedness switches off the brain’s network of self-control.

Therefore, difficulties in understanding irony and metaphors can serve as a fairly accurate marker of the weakening of natural intelligence.

Machine learning systems already deployed for commercial use are categorically incapable of associative thinking. This is my conclusion, drawn from personal empiricism (informal research).

I write and translate quite a lot of text, using Google’s AI-powered grammar and spelling suggestions as well as its machine translation, based on Google’s highly acclaimed AI algorithms. I also regularly scroll the Facebook newsfeed and share my posts there.

The grammar and spelling algorithm persistently suggests eradicating all figurative expressions. The translator does not understand metaphors. Facebook’s hate-speech removal algorithm does not understand irony. In my experience, whenever already-deployed language-processing AI systems come across ambiguity, they somehow always choose the simplest, most straightforward meaning and exclude all other, perhaps far more adequate but less obvious, alternatives.

Actually, those machines may simply have run into the dead end where they were supposed to stop from the very beginning: they cannot cross the gap in formal logic between equally valid alternatives.

It looks like the developers of modern, already-deployed machine learning algorithms have found a simple way to get around semantic forks, at least in linguistic applications. Machine intelligence does not choose: it knows only one option, the most obvious, and follows it, simply ignoring the others except for the very close, superficially similar ones.

Machine intelligence does not know about the existence of undefinable polysemy, and therefore it needs neither associative thinking nor imagination. The elimination of choice makes understanding unnecessary for machines.
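Here is a minimal sketch of that strategy, assuming greedy (argmax) disambiguation; the readings and scores below are invented for illustration and are not drawn from any real Google or Facebook system:

```python
# Hypothetical interpretation scores for the ambiguous phrase
# "bad learner". A real system would derive such scores from context;
# these numbers are made up for illustration.
readings = {
    "learns slowly": 0.41,            # the 'obvious' reading
    "learns the wrong things": 0.38,  # nearly as likely, discarded
    "ironic self-deprecation": 0.21,  # non-obvious, discarded
}

def greedy_reading(scored: dict) -> str:
    """Keep only the argmax; every other reading ceases to exist."""
    return max(scored, key=scored.get)

print(greedy_reading(readings))  # -> learns slowly
# A 0.41 vs. 0.38 fork is a genuine semantic choice, but greedy
# resolution never represents it: with no ambiguity left, the
# machine has no need for imagination or understanding.
```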

If you imagine that all people behaved like the aforementioned AI algorithms, you could relax and stop worrying about the danger machines pose to people. Humans would then become fully compatible with machines, because they would cease to be humanly intelligent. In fact, we can observe something similar in schizophrenia patients.

My point is that schizophrenic AI applications, which don’t understand metaphorical and ironic notions, are a Trojan horse. They do not control weapons, do not manage medical procedures, and are not generally used in areas vital to human life. They target and damage human intelligence itself, not its biological shell. Yet they are the most broadly deployed and adopted.

These applications eradicate non-obvious meanings from communication between people and eliminate people’s ability to make meaningful choices that do not exist for the machine. Concretism is the machine’s fallacy, because ambiguity is what allows humans to bridge the gaps in formal logic, or rather to circumvent them in a different, non-obvious way.

In the Spring 2016 issue of Space and Defense, a publication of the Eisenhower Center for Space and Defense Studies, the definition of neuroweapons included information and software “intended to influence, direct, weaken, suppress, or neutralize human thought.”

Autonomous killer drones are indeed dangerous, but as of today they are fictional. AI-powered neuroweapons disguised as harmless, user-comforting apps are already widely deployed and keep proliferating.

A grammar recommendation or hate-speech removal algorithm doesn’t sound as scary as a nuclear warhead or a climate emergency. But self-inflicted neocortical warfare is a direct, albeit implicit, threat to the existence of mankind.

Man is able to understand nature insofar as he himself is part of it. So said the outstanding French anthropologist Claude Lévi-Strauss. He added that humans, by breaking away from nature and placing themselves above it, doom themselves to guaranteed extinction.

Nature is ambiguous at its core. By making ourselves incapable of understanding ambiguity, we humans cut off our connection with nature and block our only channel of information exchange with it.

An island of machine civilization will remain hanging in a world of turbulence, rigidly tied to it by unambiguous, inflexible connections. When the last connection breaks down, the matrix into which humans are now trying to enclose themselves, with the help of an artificial intelligence that already exists and is already working to destroy human nature, will inevitably fall apart.

It’s up to the AI community to decide whether it wishes to recognize the threat that schizophrenic AI poses to human intelligence. Recognizing the very fact that AI researchers have accidentally started neocortical warfare against humanity requires enormous courage and unselfishness. Fixing the problem requires a qualitative breakthrough in AI’s ability to understand.

I hope for the best, but I think people who wish to stay sensibly human need to take precautions to protect their minds from the attack of neuroweapons accidentally launched against them by Google, Facebook and other tech giants. Learning to live like information hunter-gatherers in the wild digital space will give us an edge not only against neuroweapons but also in finding purpose and happiness in life.

Previously, I believed that it would be enough to create a virtual world evolving according to the chaotic laws of nature, so that people, after diving into it, could restore their humanness. Perhaps that will still be required, but today I see a far more burning task: giving people simple tools that will help those who want to survive as sensible humans in a digital jungle populated by machines that know neither irony nor pity.

My report on the work done over three years will be not just a book, as I promised before, but a Textbook of Sensible Humanness: effectively a survival guide based on the primary science that makes us human.

References:

  1. Schizophrenia patients don’t understand metaphors
     https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00670/full

  2. Metaphor as a device for understanding cognitive concepts
     https://ojsspdc.ulpgc.es/ojs/index.php/LFE/article/view/926/849

  3. Facebook hate speech removal algorithm
     https://time.com/5739688/facebook-hate-speech-languages/

  4. Google’s grammar recommendation algorithm
     https://cloud.google.com/blog/products/g-suite/let-grammar-suggestions-in-google-docs-help-you-write-even-better

  5. Google’s AI grammar and spelling recommendation algorithm expands to email
     https://www.zdnet.com/article/google-gmails-new-ai-spelling-grammar-checks-help-you-avoid-email-blunders/

  6. Brain as a Battlefield
     https://www.usafa.edu/app/uploads/Space_and_Defense_9_1.pdf

  7. Neuroweapons
     https://www.nationaldefensemagazine.org/articles/2018/10/19/a-dangerous-new-class-of-weapons-emerges

  8. Neocortical warfare
     https://www.rand.org/content/dam/rand/pubs/monograph_reports/MR880/MR880.ch17.pdf

  9. Aristotle, Rhetoric
     http://classics.mit.edu/Aristotle/rhetoric.3.iii.html

  10. Cognitive maps in rats and men
      http://psychclassics.yorku.ca/Tolman/Maps/maps

Photo by sebastiaan stam from Pexels
