The world around us is complex beyond comprehension.
Not just beyond the comprehension of a human being. It is beyond the comprehension of any entity in this world.
As humans, we are proud of our brains. After all, it is not our physical strength, endurance, superior senses, or sheer numbers that set us apart from competing species and gave us dominance on this planet.
It is our mammalian brain, developed to the limits set by the size of the birth canal and densely packed with neocortical columns.
We even classify ourselves as doubly intelligent Homo sapiens sapiens, where sapiens means intelligent.
It is comfortable to be at the top of the food chain, but that need not last forever.
Should we worry about losing that dominance to a possibly even more intelligent creation of ours, Artificial Intelligence?
Limits of understanding
In common language, the word understanding has multiple meanings. In what follows, I assume that to understand something is to be able to predict its future behavior.
For an individual submerged in an ever-changing environment, it seems vital to understand some aspects of that environment. Survival itself depends on how well and how fast one copes with dangerous circumstances.
1) First, one needs to recognize the state of the environment with the available sensors,
2) pick out only a few elements, those most relevant to survival at the given moment, and
3) use them to perform a quick prediction of what is going to happen.
4) If the prediction comes out threatening, one had better come up with an action that would increase the odds of survival. That task also employs the predictive facility of the brain.
5) The last part is to take the preventive action.
Our brain is just one big memory. It stores information acquired over time very much like the hard drives in contemporary computers. It also has the capability to process both memories and sensory input in a short-term memory. This is similar to the RAM, or operating memory, of computers.
The most limiting factor for our predictive powers is the size of that operating memory.
We can focus on, i.e., upload to the operating memory, only a few elements at a time. Then we can run a simulation to guess the future.
The more elements you upload, the longer it takes to process them. It is plausible that there is evolutionary pressure to keep the operating memory small and the predictions fast in quickly changing surroundings.
You can test how big your operational memory is, for example, by checking how many moves ahead you can foresee in chess.
Programmers often reach the limits of their short-term memory when attempting to fix a computer program without creating new bugs. To do that, one has to understand the code, i.e., be able to predict what a given change will do to its overall behavior.
Remarkably, when a programmer is interrupted while uploading to short-term memory, or when some of that operational memory is taken up by an unrelated task, the uploading must start from scratch. This is why undisturbed slots of work time, so-called “sprints”, are so important for performance in that field.
People have different operational memory capacities. Those who can handle one more element than you are considered smart. Those able to upload two or three elements more are geniuses.
In either case, even the best of us can handle only a handful of elements.
Moreover, in my experience, education affects only the content of the long-term memory of our brains and does not seem to improve short-term memory capacity.
So whether you are a cab driver, a clerk, or a professor, you will fail equally often in your predictions.
Complexity is the fundamental and primary reason why our predictions fail. For predictions we use our operational memory, which can handle only several elements at a time, while the number of elements influencing the actual development of events is usually large.
And the word “large” is a preposterous understatement.
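To get a feel for how quickly this outruns a handful-sized memory, here is a toy calculation (my own illustration, not part of the original argument): even if we only count pairwise relations between elements and yes/no states of each element, both counts explode long before the element count itself looks large.

```python
from math import comb

# Toy illustration: a brain that holds n elements must also track their
# mutual relations. Pairwise relations grow quadratically with n, and the
# number of possible on/off configurations grows exponentially.
for n in (5, 10, 20, 50):
    pairs = comb(n, 2)   # mutual pairwise relations among n elements
    states = 2 ** n      # possible yes/no states of the n elements
    print(f"{n:>3} elements: {pairs:>5} pairwise relations, {states} states")
```

Five elements, the rough capacity of a single brain, give 10 relations; fifty elements already give 1225 relations and about 10^15 configurations.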
One example where complexity spelled disaster is the fiasco-ridden history of expert systems. Those were attempts to reproduce thinking with rule-driven programs aimed at modest tasks like voice recognition. It was soon realized that the number of rules required for even remotely useful applications is simply impossible to code by hand.
Later, hand-coded rules were replaced by data mining and machine learning. That shifted the strain from eons of expert work to the computational power of computers.
Before we start senseless partying in celebration of this clever development, let us put it into historical perspective.
In the early 80’s, the computer revolution was in full swing and physicists were using supercomputers to perform a variety of simulations. One of them was, if my memory serves me well,
a simulation of an electron gas in a box that would reveal, to a given accuracy, the energy split between the ground state and the first excited state of such a system.
The computational power available at the time allowed for handling only 8 electrons.
However, both the 80’s and the 90’s saw enormous gains in that area. Computer clock speeds rose from several MHz to over 1 GHz, the number of transistors per processor grew from 30 thousand to 30 million, and the architecture itself was upgraded from 16 to 32 bits.
Moreover, there was also a significant improvement in computational libraries used in simulations.
With all that effort, at the end of the 90’s the physicists could handle 9 electrons with the same accuracy.
After a thousand years of such (unlikely) sustained exponential growth of computer power, we could perhaps reach 60 electrons.
Adding just one more element to the problem required thousands of times more resources. This is the complexity jail at work. No matter how hard we try, some problems are notoriously difficult. And hey, those are just several electrons. The world out there isn’t that simple.
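The 60-electron figure can be checked on the back of an envelope. Assuming, as the story above suggests, that roughly two decades of exponential hardware growth bought a single extra electron:

```python
# Back-of-the-envelope version of the estimate above (my own arithmetic,
# assuming the 8 -> 9 electron step took roughly two decades of growth).
years_per_electron = 20   # assumed: one extra electron per ~20 years
start_electrons = 9       # where the simulations stood at the end of the 90's
horizon_years = 1000

electrons = start_electrons + horizon_years // years_per_electron
print(electrons)  # 59, i.e. on the order of 60 electrons
```

The exact numbers are guesses, but the conclusion is robust: linear gains in problem size demand exponential gains in resources.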
These days, the pace of growth of computational power is nowhere near that of the golden age of the last decades of the twentieth century. The increase in clock speed is marginal, and the actual size of transistors in microprocessors ceased to shrink a decade ago. The number of transistors per square millimeter still increases because we have started using the third dimension of silicon dies, but there are no more dimensions to expand into.
More cores, more brains!
Can we conquer complexity by dividing one complex task between many brains? If one brain can handle, say, 5 elements and we have a problem consisting of 20 elements, can we not just employ 4 brains to do the job?
Unfortunately, no. The thing is that the word “handle” means not only storing the elements but also considering their mutual relations.
Those relations are essential for any predictions. When we divide the set of 20 elements into 4 groups, the relations between elements from different groups will not be taken into account.
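The loss is easy to quantify for the 20-element example (a small calculation of my own, using only the numbers given above):

```python
from math import comb

# Split 20 elements among 4 brains, 5 elements each: how many of the
# mutual pairwise relations survive the split?
total = comb(20, 2)    # relations in the full problem: 190
kept = 4 * comb(5, 2)  # relations inside the 4 groups: 4 * 10 = 40
lost = total - kept    # cross-group relations ignored: 150
print(total, kept, lost)  # 190 40 150
```

Four brains together see only 40 of the 190 relations; nearly 80% of the structure of the problem falls between the groups.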
There are, of course, tasks that can be split and performed in parallel. They are not complex, though.
However imperfect parallel processing is in handling complex tasks, it is still better than relying on a single brain.
This is best demonstrated by comparing the results of a free-market economy versus central planning. The former lets individual brains pick elements of their choice and make their own predictions in their quest for survival. The latter relies on the wisdom of a small group of planners who can barely address their own needs, let alone those of others.
AI’s ubiquitous vulnerability
If you were scared of a single Skynet-style AI taking over the world, fear no more. It would have to deal with the same inescapable complexity of the world that humans do. The same laws would apply to it as well. In order to react, it would have to focus on only a small subset of elements at a time, and that would lead to prediction failures. And if that is so, it is open to threats from hostile individuals.
How about many competing AI’s, each outperforming the smartest of human beings?
Well, a similar scenario has already played out in the past.
We used to compete with another subspecies of Homo sapiens, the Neanderthals. The Neanderthals were superior in physical strength but inferior in cognitive skills, operational memory included.
For a long time, anthropologists believed we killed them all. These days, thanks to genetic studies, it seems that we merged with them.
Perhaps we will merge with our technology, too.
Zbigniew Karkuszewski, February 20th, 2022