Twisted Variables

Yuri Barzov
9 min read · Sep 4, 2021
Photo by Frank Cone from Pexels

“Conductors are twisted together to increase the coupling between the two wires of a pair (electromagnetic interference then affects both wires of the pair equally), which reduces electromagnetic pickup from external sources as well as crosstalk during the transmission of differential signals” (Russian Wikipedia)

The so-called “twisted pair” of wires was invented by Alexander Graham Bell in 1881, and by 1900 almost all telephone lines in the United States had been converted to it. Today virtually every wired computer network in the world uses it. Since Bell’s day, not a single simpler, more efficient, or cheaper solution has emerged that could supplant twisted-pair cables.

The essence of the hypothesis of Mikhail Rabinovich, a Russian physicist currently working in the US, is that all biological communication networks, including the neural networks of the brain, also use a twisted pair as their data transmission channel. Only in this case it is not wires but variables that are twisted, and the twisting happens not in physical space but in phase (mathematical) space. Despite its abstractness, this twisting of variables improves the quality, stability and selectivity of communication channels in practically the same way Bell’s twisted pair does for telephone and computer lines. Even more: a twisted pair of variables can be used to build wireless networks.

Rabinovich and his colleagues gave the twisted pair of variables a name: winnerless competition.

The name arose historically, because the simplest and most widespread example of winnerless competition between mathematical variables is the Lotka-Volterra system of two differential equations, devised to model the interplay between predator and prey populations in ecological systems.

It seems very simple: the faster the prey population grows, the faster the predator population grows. The larger the predator population becomes, the more prey they eat. As a result, the prey population begins to decline, and the predator population declines after it.

However, these processes are not so simple, because each of them is nonlinear. The simplicity arises from the relationship in which predators and prey dominate alternately, in other words, from a twisted pair of population variables.
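For readers who want to see the twist for themselves, here is a minimal sketch of the classic two-variable Lotka-Volterra model (the parameters, initial populations and step size are my own arbitrary choices for illustration):

```python
# Classic two-variable Lotka-Volterra predator-prey model.
#   prey:      dx/dt = a*x - b*x*y   (grows on its own, gets eaten)
#   predators: dy/dt = d*x*y - c*y   (grow by eating, die off otherwise)
a, b, c, d = 1.0, 0.1, 1.5, 0.075  # illustrative parameters

x, y = 10.0, 5.0  # initial prey and predator populations
dt = 0.001        # small forward-Euler step keeps the cycle visible
for i in range(100_000):
    dx = a * x - b * x * y
    dy = d * x * y - c * y
    x, y = x + dx * dt, y + dy * dt
    if i % 20_000 == 0:
        print(f"t={i * dt:6.1f}  prey={x:7.2f}  predators={y:7.2f}")
```

Plot x and y over time and you get two out-of-phase waves: prey dominate, then predators, then prey again. That alternation is the twist.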

What does this mean for biology? Let’s take a fresh example. Google’s artificial intelligence behemoth DeepMind, with the fanfare typical of billion-dollar budgets, announced yet another product, AlphaFold, which, according to media reports, predicted the folded structure of almost every protein in the human body from its amino acid sequence. The database of all the structures has been published and opened to all researchers.

Without discounting the significance of this achievement, biologists nevertheless point out that at least 40% of human proteins have no stable structure at all: their folding changes dynamically under the influence of various factors.

In practice this led to a problem: potential cancer drugs, which were supposed to block the channels of intracellular protein communication responsible for reproduction in cancer cells, turned out to be ineffective. The idea of linear protein-channel communication (known scientifically as signal transduction), in which each protein plays the role of a machine part with a clearly defined function, turned out to be false.

It seems that proteins show their true properties only in interaction with other substances, much as elementary particles do in the quantum realm. The twisted pair of variables is likewise based on interaction.

Many twisted pairs of variables can be created, each with its own twist rate (frequency). Multi-pair cables can be assembled from them, and information networks can be built on top of those.

Nonlinear dynamics is a feature, not a bug, for networks built upon the winnerless competition of variables. It is possible that linear signal transmission in artificial neural networks is their original bug, one that required the introduction of such a mass of redundant entities to suppress it that William of Ockham stabbed himself with his own razor. Yet all that was needed was to twist two wires (variables) together, only not very densely, at a low frequency.

These are the thoughts that come when, at six in the morning, you read an article with a laconic and easy-to-understand title: Discrete Sequential Information Coding: Heteroclinic Cognitive Dynamics.

References:

  1. Mikhail I. Rabinovich and Pablo Varona. Discrete Sequential Information Coding: Heteroclinic Cognitive Dynamics. Frontiers in Computational Neuroscience, 7 September 2018. https://doi.org/10.3389/fncom.2018.00073
  2. Tunyasuvunakool, K., Adler, J., Wu, Z. et al. Highly accurate protein structure prediction for the human proteome. Nature 596, 590–596 (2021). https://doi.org/10.1038/s41586-021-03828-1
  3. Philip Ball. The Chemical Absurdity of Molecular Recognition. Chemistry World. 15 January 2020

More on winnerless competition from The Science of Intelligence:

Section Seven. Equations of Life. Chapter One. Winnerless Competition

Winnerless competition is a mathematical framework that emulates the process of understanding as a whole instead of trying to recreate the complexity of a living brain. To achieve this, it engages hierarchical networks of asymmetrically synchronized chaotic oscillators.
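The minimal model behind this framework, in Rabinovich’s papers, is a generalized Lotka-Volterra system with asymmetric inhibition. Here is a toy sketch (the inhibition matrix, step size and initial activities are my own illustrative choices): activity passes sequentially from one variable to the next, and no variable wins for good.

```python
import numpy as np

# Generalized Lotka-Volterra: dx_i/dt = x_i * (1 - sum_j rho[i][j] * x_j).
# The asymmetry rho[i][j] > 1 > rho[j][i] makes each pairwise competition
# one-sided, which produces sequential switching instead of a single winner.
rho = np.array([[1.0, 0.5, 2.0],
                [2.0, 1.0, 0.5],
                [0.5, 2.0, 1.0]])

x = np.array([0.6, 0.3, 0.1])  # initial activity of the three variables
dt = 0.01
for t in range(20_000):
    x = x + dt * x * (1.0 - rho @ x)
    x = np.maximum(x, 1e-9)  # keep activities positive despite Euler error
    if t % 2_000 == 0:
        print(f"t={t * dt:6.1f}  x={np.round(x, 3)}  leader={int(np.argmax(x))}")
```

The printed “leader” index cycles 0 → 2 → 1 → 0…, with each leader holding on a little longer than the last: the trajectory is being drawn into a stable heteroclinic channel.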

I’m an apprentice of natural learning. Therefore, I try to figure out a very primitive sketch of a very complex mathematical concept that reaches far beyond my mathematical faculties.

Louis Pecora, the American scientist who created the first communication system based on the synchronization of chaos, proposed extending a thought experiment that is usually used to explain why very simple chaotic systems produce signals of very high complexity. In that experiment, the trajectories of two separate chaotic systems started at close initial conditions always diverge and can fly far away from each other; they never synchronize on their own. However, if the two systems exchange information in just the right way, they can synchronize. In the extended thought experiment proposed by Pecora, we see two systems started at initial conditions far apart converge to the same trajectory.
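Here is a minimal sketch of that experiment in the spirit of Pecora and Carroll’s drive-response scheme, using the standard Lorenz system (the initial conditions and step size are my own choices): the response system receives only the drive’s x signal, yet its other two variables converge to the drive’s.

```python
import numpy as np

sigma, r, beta = 10.0, 28.0, 8.0 / 3.0  # standard Lorenz parameters

def lorenz(s):
    x, y, z = s
    return np.array([sigma * (y - x), x * (r - z) - y, x * y - beta * z])

drive = np.array([1.0, 1.0, 1.0])
yr, zr = -20.0, 40.0  # response (y, z) starts far from the drive
dt = 0.001
for t in range(50_000):
    x = drive[0]                 # the only signal sent to the response system
    dyr = x * (r - zr) - yr      # response y-equation, driven by x
    dzr = x * yr - beta * zr     # response z-equation, driven by x
    drive = drive + dt * lorenz(drive)
    yr, zr = yr + dt * dyr, zr + dt * dzr
    if t % 10_000 == 0:
        err = abs(drive[1] - yr) + abs(drive[2] - zr)
        print(f"t={t * dt:5.1f}  sync error={err:10.4f}")
```

The sync error collapses toward zero even though the two systems started far apart; this is the “exchange information in just the right way” from the paragraph above.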

To my understanding, in winnerless competition the phase trajectories of synchronized chaotic systems do not converge into one but become intertwined, and only the intertwining provides channel stability. Convergence of two trajectories means that one trajectory overlaps the other, as if superimposed on it: given a time lag, one system continuously leads and the other continuously follows. Such a setup looks to me rather one-sided and unstable, since chaotic systems always strive to escape from being too close to each other in phase space.

When the trajectories intertwine, they overlap each other alternately, from all sides. We get a twisted thread: one strand does not let the other run away, and vice versa. This intertwining is what gives the channel its stability. Such is my primitive understanding of winnerless competition, proposed by Valentin Afraimovich and Mikhail Rabinovich.

James Reggia and his colleagues write in 2021: “We present a recurrent neural network that encodes structured representations as systems of contextually-gated dynamical attractors called attractor graphs.”

Friston and his colleagues wrote in 2009: “We show that a plausible candidate for an internal or generative model (of how sensory input is generated, when formulated in a Bayesian framework) is a hierarchy of ‘stable heteroclinic channels’ … Under this model, online recognition corresponds to the dynamic decoding of causal sequences.”

Mostafa Bendahmane and his colleagues wrote in 2016: “We have introduced the Lévy flights type of superdiffusion into a Lotka-Volterra competitive model. A stability analysis yields a conclusion that cross superdiffusion gives rise to Turing instability while self superdiffusion suppresses Turing instability. Moreover, after applying a weakly nonlinear analysis, we can also assert that the Turing patterns are stable hexagons. An immediate application of these observations from the viewpoint of biology, is that when the inter-population competition is larger than the intra-population competition, the reached inhomogeneous steady state is stable.”

How do natural neural networks learn? They copy the topology of each other’s oscillations in phase space.

First, they oscillate constantly. Second, they can connect by synchronizing each other’s oscillations asymmetrically in time. Time asymmetry means that when one network speaks, the other is silent. Then they switch places.

If the nature of the networks’ oscillations does not change, if their topology in phase space remains unchanged, then they turn into woodpeckers, tirelessly repeating the same thing to each other. The neurons in central pattern generators do just that. At most, they change the topic of the conversation or speed up or slow down its pace.

If only one network speaks and the other only listens, then only the listening network is learning. It is impossible, however, to check how well the new knowledge gets integrated with the existing knowledge, because no answer comes back from the listener, and it is precisely the answer that would show whether the listener’s network topology develops further in phase space using the knowledge received from the other network, or not.

Mutually learning neural networks behave like two musicians improvising in turn. They do not repeat the tune the other musician played, but use the other musician’s ideas in their own performance, creatively rethinking them. Each musician plays his own music, yet it is related to the music of the other artist rather than being pure noodling. The mastery of both musicians grows steadily through such a performance.

The neural network in the brain is like the keys of a piano. The music of thought sounds in phase space. Observing only the activation of particular neurons and neural networks, we see how the keys move, but we do not hear their sound. Of course, there is a correlation, but you can understand it only if you hear the music, at the very least in order to understand what the keys are being pressed for.

So far, most neuroscientists believe that neural networks act like a computer keyboard, so the pressing of the keys is thought to reflect a programming process. However, some researchers have already heard the music in phase space. And this is wonderful, because the music of phase spaces is the language spoken by the mind in the Universe.

Rock Paper Scissors. Everyone knows this game. Its Nash-equilibrium strategy is to choose randomly among the three options with equal probability. By following this optimal strategy, you cannot lose on average, but you cannot win on average either.

The winner is the one who, from the first movement of the opponent’s hand, can predict which combination the opponent is about to show. We won’t consider this option: it is biased, like a bent coin.

The mathematical representation of the rock-paper-scissors game in its unbiased form is a stable heteroclinic cycle: eternal winnerless competition, with no way to move to another level without changing the rules of the game.
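A minimal sketch of that representation via replicator dynamics (the tie payoff eps and starting mix are my own illustrative choices). One textbook caveat: in the strictly zero-sum game (eps = 0) the orbits circulate forever around the Nash point; a small negative tie payoff makes the boundary heteroclinic cycle attracting, so the population lingers ever longer near pure rock, then pure paper, then pure scissors.

```python
import numpy as np

eps = -0.1  # tie payoff: 0 gives closed orbits around the Nash point;
            # eps < 0 makes the boundary heteroclinic cycle attracting
A = np.array([[eps, -1.0, 1.0],    # rock     vs rock, paper, scissors
              [1.0, eps, -1.0],    # paper
              [-1.0, 1.0, eps]])   # scissors

x = np.array([0.4, 0.35, 0.25])  # population shares of the three strategies
dt = 0.01
for t in range(40_000):
    f = A @ x                      # fitness of each strategy
    x = x + dt * x * (f - x @ f)   # replicator update
    x = np.clip(x, 1e-12, None)
    x /= x.sum()                   # stay on the simplex despite Euler error
    if t % 5_000 == 0:
        print(f"t={t * dt:6.1f}  rock={x[0]:.3f}  paper={x[1]:.3f}  scissors={x[2]:.3f}")
```

The shares visit the corners of the simplex in turn, staying near each corner longer on every lap; that is the heteroclinic cycle of the paragraph above.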

However, transitions are possible. It is stable cycles (vertices, nodes) and the transitions between them (edges, links) that describe the models of Rabinovich’s stable heteroclinic channel and of Meyer-Ortmanns and Voit’s stable heteroclinic network.

Quite a long time ago, in the first half of the 2000s, Crutchfield, in collaboration with the Japanese researcher Yuzuru Sato, came close to the transitional dynamics of the rock-paper-scissors game. By giving artificial agents in game simulations the ability to learn, the researchers first obtained deep chaos (instead of the pure randomness of the Nash equilibrium), and then the dynamics of transitions as well. To achieve the transient dynamics, they also had to make the tie payoff a variable between zero and one rather than a fixed 0.5.
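A hedged sketch of that setup: in Sato and Crutchfield’s papers the learning rule reduces, in the continuous-time limit, to coupled replicator equations, one per player, each with its own tie payoff. The exact parametrization of the tie payoff differs across their papers; the values and starting strategies below are purely my own illustration.

```python
import numpy as np

def rps(eps):
    """RPS payoff matrix with tie payoff eps on the diagonal."""
    return np.array([[eps, -1.0, 1.0],
                     [1.0, eps, -1.0],
                     [-1.0, 1.0, eps]])

Ax, Ay = rps(0.5), rps(-0.3)     # the two players value ties differently
x = np.array([0.5, 0.3, 0.2])    # player X's mixed strategy
y = np.array([0.25, 0.25, 0.5])  # player Y's mixed strategy
dt = 0.01
for t in range(60_000):
    fx, fy = Ax @ y, Ay @ x          # each player's fitness vs the other
    x = x + dt * x * (fx - x @ fx)   # coupled replicator updates
    y = y + dt * y * (fy - y @ fy)
    x, y = np.clip(x, 1e-12, None), np.clip(y, 1e-12, None)
    x, y = x / x.sum(), y / y.sum()
    if t % 10_000 == 0:
        print(f"t={t * dt:6.1f}  x={np.round(x, 3)}  y={np.round(y, 3)}")
```

Depending on the two tie values, the joint trajectory can be periodic, chaotic, or spend long dwells near pure-strategy corners with quick transitions between them, which is the transient dynamics the paragraph above describes.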

Then everyone went their own way. But it is possible that their paths will converge again, to create, at last, the law of the preservation of life.

References:

  1. Sigmund K. (2007) Kolmogorov and population dynamics. In: Charpentier É., Lesne A., Nikolski N.K. (eds) Kolmogorov’s Heritage in Mathematics. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-36351-4_9
  2. Mostafa Bendahmane, Ricardo Ruiz-Baier, Canrong Tian. Turing pattern dynamics and adaptive discretization for a superdiffusive Lotka-Volterra system. Journal of Mathematical Biology, 2016, 6, pp. 1441–1465. https://hal.archives-ouvertes.fr/hal-01403081
  3. Valentin Afraimovich, Irma Tristan, Ramon Huerta, and Mikhail I. Rabinovich. “Winnerless competition principle and prediction of the transient dynamics in a Lotka–Volterra model”, Chaos: An Interdisciplinary Journal of Nonlinear Science 18, 043103 (2008). https://doi.org/10.1063/1.2991108
  4. Yuzuru Sato, Eizo Akiyama, J. Doyne Farmer. 2002. Chaos in learning a simple two-person game. Proceedings of the National Academy of Sciences Apr 2002, 99 (7) 4748–4751; DOI: https://doi.org/10.1073/pnas.032086299
  5. Sato, Y., & Crutchfield, J. P. (2003). Coupled replicator equations for the dynamics of learning in multiagent systems. Physical review. E, Statistical, nonlinear, and soft matter physics, 67(1 Pt 2), 015206. https://doi.org/10.1103/PhysRevE.67.015206
  6. Maximilian Voit and Hildegard Meyer-Ortmanns. Dynamics of nested, self-similar winnerless competition in time and space. Physical Review Research, published 6 September 2019.
  7. Reiner Schulz and James A. Reggia. 2004. Temporally asymmetric learning supports sequence processing in multi-winner self-organizing maps. Neural Comput. 16, 3 (March 2004), 535–561. DOI: https://doi.org/10.1162/089976604772744901
  8. Davis, Gregory & Katz, Garrett & Gentili, Rodolphe & Reggia, James. (2021). Compositional memory in attractor neural networks with one-step learning. Neural Networks. https://doi.org/10.1016/j.neunet.2021.01.031
  9. Kiebel, S. J., von Kriegstein, K., Daunizeau, J., & Friston, K. J. (2009). Recognizing sequences of sequences. PLoS computational biology, 5(8), e1000464. https://doi.org/10.1371/journal.pcbi.1000464
  10. Lagzi, F., Atay, F.M. & Rotter, S. Bifurcation analysis of the dynamics of interacting subnetworks of a spiking network. Sci Rep 9, 11397 (2019). https://doi.org/10.1038/s41598-019-47190-9
  11. Di-Wei Huang, Rodolphe Gentili, James Reggia (2016). A Self-Organizing Map Architecture for Arm Reaching Based on Limit Cycle Attractors. http://dx.doi.org/10.4108/eai.3-12-2015.2262421
  12. Ashby W.R. (1958) Requisite variety and its implications for the control of complex systems, Cybernetica 1:2, p. 83–99.
