
Wednesday, 2 May 2012

Building MULTIVAC

Required reading: How will we build an artificial human brain?

I think the rules-based approach to recreating the brain has the best shot, although I fail to see how it is all that different from a Turing machine. If the brain's intelligence is going to be developed through evolutionary algorithms, the code-system at the core of those algorithms will essentially be a dynamic Turing machine.
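To make that intuition concrete, here is a toy sketch of my own (the rule-table encoding, the target pattern and every name in it are mine, not anything proposed in the article): an evolutionary loop whose "genome" is literally a Turing-machine-style rule table, so whatever evolution produces is still, at bottom, that kind of machine.

```python
import random

STATES, SYMBOLS = 4, 2
TARGET = [1, 0, 1, 1, 0, 1, 1, 1]   # a toy tape pattern we want the evolved machine to write

def random_rules():
    # The "genome": rule[(state, symbol read)] = (symbol to write, head move, next state)
    return {(s, c): (random.randint(0, 1), random.choice((-1, 1)), random.randrange(STATES))
            for s in range(STATES) for c in range(SYMBOLS)}

def run(rules, steps=64):
    # Execute the rule table like a tiny Turing machine on a circular tape.
    tape, head, state = [0] * len(TARGET), 0, 0
    for _ in range(steps):
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head = (head + move) % len(tape)
    return tape

def fitness(rules):
    return sum(a == b for a, b in zip(run(rules), TARGET))

def mutate(rules):
    # Evolution only edits entries of the rule table, nothing else.
    child = dict(rules)
    key = random.choice(list(child))
    child[key] = (random.randint(0, 1), random.choice((-1, 1)), random.randrange(STATES))
    return child

population = [random_rules() for _ in range(50)]
for _ in range(200):
    population.sort(key=fitness, reverse=True)
    population = population[:10] + [mutate(random.choice(population[:10])) for _ in range(40)]

# Whatever this loop converges to is still just a rule table executed step by
# step - the evolved system remains a (dynamic) Turing-machine-like machine.
```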

On the other hand, I personally believe that reverse-engineering the brain, or even creating something as powerful, is almost impossible. We lack something more vital. I don't know what it is; the best I can put it is as the chasm that exists between reality and perception (for one, can you tell me what pain really is and why it is the sensation it is? There is a lot left to understand about the evolutionary purpose of sensations and of genetic morality). The engineered brain has a good chance of being the greatest computer of all time, yes, but whether it can be human is the question. I don't think it can.

Consider the example of the third law of thermodynamics.
The entropy of a perfect crystal at absolute zero is exactly equal to zero.

Axiomatically, to bring the crystal's temperature to absolute zero, its entropy must be brought down to zero. A series of processes can systematically reduce the temperature, but the closer we get to the target, the more work it takes to push the mercury down any further. In fact, to jump from any temperature above absolute zero to absolute zero itself, the work required is theoretically infinite; absolute zero cannot be reached in a finite number of steps. What the theory does offer is that it places no restriction on the time-frame: given an unlimited number of steps and unlimited time, the temperature can be driven arbitrarily close to absolute zero.
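To put a number on that asymptote, here is a toy model of my own (not from the article): suppose each cooling cycle can only remove a fixed fraction of whatever temperature remains, as in an idealised adiabatic demagnetization sequence. Then

$$ T_n \;=\; r\,T_{n-1} \;=\; r^{\,n}\,T_0, \qquad 0 < r < 1, $$

so the temperature is still strictly positive after any finite number of cycles, and reaches zero only in the limit n → ∞; and because the third law pins S(T) → 0 as T → 0, each successive cycle has less and less entropy left to extract.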

The problem of constructing a human brain that is something more than simply a powerful computation engine faces the same kind of trade-off, with no option of keeping everything finite. It can either be attained within a finite time-frame, at an unbounded cost in resources, or it can have all the resources necessary but take an infinite amount of time.

[Figure: A temperature-entropy chart, after accounting for Nernst's heat theorem and the third law of thermodynamics, for an adiabatic demagnetization refrigeration process.]

The reason I state these options is that the human brain is constantly evolving. Ben Goertzel's proposition is that if we have the right set of properties ready and coded in, the AI-brain can start "learning" and keep learning, thereby becoming a human brain, by definition, the moment its first lesson is learnt. However, the "probabilistic truth values" aspect seems to have been taken for granted: how efficient is the AI-brain going to be at re-assigning a "probability of truth" to each unit of knowledge as it is learnt? There could easily be a computational catastrophe because, with each step, the entire knowledge-dendrogram will have to be recomputed - even if we account for Ray Kurzweil's idea that the brain has an array of redundancies. For example, how are we to determine these redundancies, and how is the AI-brain to be programmed to look for them?
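To show why I worry about a computational catastrophe, here is a deliberately naive sketch (the class, the update rule and all names are hypothetical illustrations of mine, not Goertzel's actual probabilistic-logic machinery): a knowledge base that re-derives the truth probability of every stored proposition whenever a new fact arrives, so the cost of the n-th lesson grows with everything learnt before it.

```python
# A deliberately naive illustration: every new fact triggers a full
# re-assessment of every stored belief.

class NaiveKnowledgeBase:
    def __init__(self):
        self.beliefs = {}          # proposition -> probability of truth

    def _reassess(self, proposition, evidence):
        # Toy update rule: nudge the current probability towards 1.0 if the
        # new evidence supports the proposition, towards 0.0 otherwise.
        p = self.beliefs.get(proposition, 0.5)
        return 0.9 * p + 0.1 * (1.0 if evidence else 0.0)

    def learn(self, proposition, supports_existing):
        # Store the new unit of knowledge...
        self.beliefs[proposition] = self._reassess(proposition, True)
        # ...then recompute the "probability of truth" of everything else,
        # which is where the cost explodes: the n-th lesson touches n beliefs,
        # so learning N facts costs on the order of N^2 reassessments.
        for other in self.beliefs:
            if other != proposition:
                self.beliefs[other] = self._reassess(other, supports_existing)

kb = NaiveKnowledgeBase()
for i in range(1000):
    kb.learn(f"fact-{i}", supports_existing=True)
# Total reassessments grow quadratically with the number of lessons; a
# brain-scale knowledge-dendrogram recomputed this way would be catastrophic.
```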


If we are to engineer the human brain within a finite time-frame, the amount of computational power required is going to skyrocket. Alternatively, we can say we have all the resources - the knowledge, the experience-simulator, the computing and processing power - but claiming to have everything in that sense implies that an unbounded amount of time has already been spent creating the AI-brain.

This is where the role that neuroscience has to play comes in:
Neurosciences: Researchers need more impactful advances in the neurosciences so that they can better understand the modular aspects of cognition and start mapping the neural correlates of consciousness (currently a very grey area).

In searching for redundancies, the AI-brain's consciousness comes into play. Without understanding how consciousness is the summa of cognition, the artificial brain is going to have to churn through ridiculously large volumes of data to produce any meaningful output. To break this sea of information into logical categories, we need to understand the modularity of information, how we perceive and consume each module, and what the programmatic analogues of these modules are going to be (a toy sketch of one such analogue follows below). Since these neuroscientific questions need to be settled before Whole Brain Emulation (WBE) can work, there's not much more to say on them for now.
When computers claim to be conscious, how can we believe them?
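As promised, here is a toy sketch of what a "programmatic analogue" of a cognitive module might look like. It is entirely my own illustration (the class, the module names and the update rule are all hypothetical): beliefs are grouped into modules, and a new lesson only re-assesses beliefs inside its own module instead of the whole knowledge-dendrogram.

```python
# A hypothetical "programmatic analogue" of a cognitive module: learning a
# fact only revisits the beliefs grouped into the same module.
from collections import defaultdict

class ModularKnowledgeBase:
    def __init__(self):
        self.modules = defaultdict(dict)   # module name -> {proposition: probability}

    def learn(self, module, proposition, supports_existing=True):
        beliefs = self.modules[module]
        beliefs[proposition] = 0.9 * beliefs.get(proposition, 0.5) + 0.1
        # Only sibling beliefs in this module are revisited; everything in
        # other modules is treated as redundant context and left untouched.
        for other in beliefs:
            if other != proposition:
                p = beliefs[other]
                beliefs[other] = 0.9 * p + 0.1 * (1.0 if supports_existing else 0.0)

kb = ModularKnowledgeBase()
kb.learn("vision", "edges precede objects")
kb.learn("language", "nouns precede verbs")
# The cost of a lesson now scales with the size of its module, not with the
# whole knowledge base - but only if we know what the modules are, which is
# precisely the neuroscience we are still missing.
```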

In conclusion: the rules-based technique is an offshoot of the Church-Turing hypothesis, and it depends on the success of WBE. WBE requires great advances in science and technology to see the light of day, and the rules-based procedure requires a better understanding of what consciousness is (so the AI-brain doesn't end up with Alzheimer's disease) and immense computing power (so that thermodynamic fatigue from chasing asymptotic performance doesn't get in the way of its success). I don't mean to be a cynic (even though I may have come across as one): the idea of an artificially created brain greatly excites me. It would, after all, mean humans are gods in their own right.
