Is the mind more than the brain?

Marcus A. Maloof

As a computer scientist, I have always been fascinated by the notion that one could simulate or duplicate a large portion of the mind's abilities with computer programs. It seems a natural thing to do, especially since we have already built machines to do our digging and stored our knowledge in books and libraries.

Computer scientists entered the fray in 1950, when Alan Turing, who was actually a mathematician, posed the question, "Can machines think?" In his article "Computing Machinery and Intelligence," Turing proposed the Imitation Game, in which a human interrogator uses a computer terminal to interact with either another human or a properly programmed computer. We might conduct repeated trials with a variety of subjects; if the interrogator cannot tell the difference between interacting with the human and interacting with the computer, then, Turing argues, this is sufficient for answering the question, "Can machines think?"
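To make the idea of repeated trials concrete, the following Python sketch tallies how often an interrogator correctly identifies the machine over many rounds. The names judge, human, and machine are my own illustrative placeholders, not part of Turing's paper, and taking "accuracy near chance" as the criterion is one reasonable way to cash out "cannot determine the difference."

    import random

    def imitation_game(judge, human, machine, num_trials=100):
        """Estimate how often the judge can tell the machine from the human.

        judge(partner) converses with a hidden partner (a callable mapping
        questions to answers) and returns a guess, "human" or "machine".
        """
        correct = 0
        for _ in range(num_trials):
            # In each trial the hidden partner is the human or the machine,
            # chosen at random.
            label, partner = random.choice([("human", human), ("machine", machine)])
            if judge(partner) == label:
                correct += 1
        return correct / num_trials

    # If the judge's long-run accuracy stays near 0.5 (chance), the judge
    # cannot tell the difference, which Turing takes as a sufficient answer.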

Some have pointed out that, while the substance of the question, "Can machines think?" is important, the question itself is not clearly defined. For example, while we would all agree that airplanes fly, few would agree that submarines swim; whether a machine "thinks" may likewise turn on how we choose to use words rather than on any matter of fact. So it is not clear whether we can ask meaningful questions, such as "Can machines think?" or "Can machines have minds?" Turing actually devised his behavioral test, the Imitation Game, in an effort to sidestep difficult questions like "What is thinking?" and "What is intelligence?"

Hans Moravec, a robotics researcher, described in his book Mind Children (Harvard University Press, 1988) a scenario in which one might move or "transmigrate" the mind from the brain into a computer. The essence of Moravec's thought experiment is that a robot surgeon probes a single neuron in the brain of a conscious person and then uses a computer to mimic that neuron's function. Once the robot is sure that the surrogate functions identically, it switches processing from the neuron to the computer. The procedure continues, neuron by neuron, until the surgeon has replaced all of the person's neural circuitry with silicon circuitry.

One interesting question to ask is whether the person undergoing the transmigration would perceive any "changes," broadly defined. Perhaps we could ask the patient questions during the procedure, like "How many fingers am I holding up?" Moravec believes that the mind and consciousness would remain unchanged throughout the procedure, but others, like John Searle, are confident that consciousness and the mind would collapse and vanish.

It is difficult to discuss whether we can develop a computational theory of the mind without dealing with complex issues like consciousness, emotions, and creativity.

It is important to point out that several researchers are developing programs that address each of these.

In 1997, the Educational Testing Service began experimenting with the Electronic Essay Rater, or e-Rater for short, for scoring GMAT essays. Two human raters, scoring essays on a scale of 1 to 6 and counting scores that differ by at most 1 point as agreement, generally agree 87% to 93% of the time.

In February 1999, e-Rater became fully operational, and it had scored about 50,000 essays as of May/June 1999. It agrees with human raters between 88% and 95% of the time, with an average of 92%, and the system has been trained on 170 to 180 topics.
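To make the "agreement within 1 point" measure concrete, here is a small Python sketch; the scores below are invented for illustration and are not ETS data.

    def adjacent_agreement(scores_a, scores_b, tolerance=1):
        """Fraction of essays on which two raters' scores (1-6 scale)
        differ by at most `tolerance` points."""
        pairs = list(zip(scores_a, scores_b))
        agree = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
        return agree / len(pairs)

    # Hypothetical scores for ten essays, one list per rater.
    human_scores = [4, 5, 3, 6, 2, 4, 5, 3, 4, 6]
    e_rater_scores = [4, 4, 3, 5, 4, 4, 5, 2, 4, 6]
    print(adjacent_agreement(human_scores, e_rater_scores))  # 0.9, i.e., 90% agreement within 1 point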

Garry Kasparov, after his defeat by Deep Blue, commented in Time magazine that he "sensed a new kind of intelligence fighting against him."

One of the topics for this panel is whether the mind is more than the brain. I am not entirely sure that it matters for computer scientists. Let us say that the mind is epiphenomenal, meaning that the mind is greater than the sum of its parts. Presumably, the mechanisms that cause the mind to arise from the brain are not limited to chemical and neural processes.

Indeed, these same mechanisms may work for computational processes. With the advent of DNA computing, the distinction between what is a chemical process and what is a computational process is becoming less and less clear. Some would argue that there is no difference.

Even so, since the mind arises from neural and chemical processes, it seems that one could simulate those processes using computers.

Furthermore, before one can claim that computers cannot be minds, one must first explain how minds arise from individual neurons, which are themselves not minds, and why such a process will not work for computers.

As a final thought, I have often wondered how the scientific and intellectual community would have reacted had Jane Goodall reported that, based on her observations, the chimpanzees of Gombe had established a research program to construct an artificial chimp. Regardless of whether these chimps wanted to make the artifact smarter or stronger, most scholars would have laughed. So why do we think that we are capable of understanding ourselves well enough to build an artificial mind?