Stardate 20031227.1953 (Captain's log): I recently purchased and watched the anime series Serial Experiments Lain, and certain events and concepts from that series set off a lot of thinking about the whole question of intelligence. What follows is inspired by that series, but it doesn't discuss the series as such and contains no spoilers. (I've also been thinking a lot about the series itself, but am not yet ready to write about it. I think it is excellent, but I may need to watch it one more time to understand it better.)
This is preposterously long, and it contains several digressions which do contribute to my result, but which the impatient might want to skip.
Most of our body of knowledge in computer science concentrates on serial computing, but as I've written here, we are reaching physical limits which will eventually place a cap on further increases in serial computing speed. We can't change the speed of light, or Planck's constant, or the size of atoms, or the universal electrical constant, and we can't ignore or bypass their effect on semiconductor performance.
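Just to make the speed-of-light point concrete, here's a back-of-envelope sketch (plain physics, with clock rates I picked purely for illustration) of how far a signal could possibly travel during one clock cycle. Real signals in silicon propagate slower than light in vacuum, so these are optimistic upper bounds:

```python
# Back-of-envelope: how far light travels in one clock cycle.
# The clock rates are arbitrary illustrative values.
C = 3.0e8  # speed of light, meters per second

for ghz in (1, 3, 10, 100):
    period = 1.0 / (ghz * 1e9)      # seconds per cycle
    distance_cm = C * period * 100  # centimeters per cycle
    print(f"{ghz:>4} GHz: light travels at most {distance_cm:.2f} cm per cycle")
```

At 3 GHz light covers about ten centimeters per cycle; at 100 GHz it covers about three millimeters, which is getting down to the size of the chip itself. That's the kind of wall serial computing is approaching.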
On the other hand, there is no equivalent inherent limit on the power of a parallel computing system. The big challenge for computer science in the next few decades will be to learn to use parallel computing as well as we now use serial computing.
And we know from our study of nature that parallel computing can do many kinds of things which are far beyond our current technology based on serial computing. In particular, organic intelligence is the triumph of parallel computing. Neurons in our brains have a switching speed of only 2 kHz, and yet we can rapidly do things which are beyond the abilities of the best silicon-based computers that exist.
The neurons in our brains are very slow, but there are truly huge numbers of them: on the order of 10¹¹.
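A crude multiplication using just those two figures shows why sheer parallelism makes up for slow switching (this is an order-of-magnitude estimate, nothing more):

```python
# Rough aggregate event rate for the brain, using the figures above.
# Order-of-magnitude estimates, not measurements.
neurons = 1e11   # ~10^11 neurons
rate_hz = 2e3    # ~2 kHz switching speed per neuron
print(f"~{neurons * rate_hz:.0e} neuron firings per second")  # ~2e+14
```

Even treating each firing as a trivial "operation", that's an enormous aggregate rate.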
But that doesn't mean that if we build a system with 10¹¹ transistors it will become intelligent, too. There's more to it than that. It's not just a matter of the number of elements; it's also a matter of how they're organized and how sophisticated each of them is. A neuron is much slower than a transistor, but it's also capable of far more sophisticated processing. Neurons are "digital" in the sense that on an instantaneous basis they're either "on" or "off", but as a practical matter they're actually analog devices. When they fire, they don't turn on, they pulse. They accept input from as many as ten thousand other neurons, some of which are treated as "positive" and some as "negative". One way to think of it is that the neuron maintains a cumulative number representing how stimulated it is, which constantly decays absent any other effects, and which steps upward on each received pulse which is "positive" and steps downward on each received pulse which is "negative". If the accumulator rises above a certain threshold, the neuron fires and generates a pulse downstream. But the reality is more complicated than that and not totally understood. And neurons use what engineers would call "pulse frequency modulation" to transmit analog values: the faster the pulses arrive, the more emphatic the message.
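The accumulate-decay-threshold behavior I just described is essentially what computational modelers call a "leaky integrate-and-fire" neuron. Here's a minimal sketch of that idea; every constant in it is an arbitrary illustration, not a measured biological value:

```python
import random

class LeakyNeuron:
    """Toy leaky integrate-and-fire neuron: accumulated stimulation
    decays over time, steps up or down on incoming pulses, and the
    neuron emits a pulse of its own when a threshold is crossed."""

    def __init__(self, threshold=1.0, decay=0.95):
        self.level = 0.0            # current accumulated stimulation
        self.threshold = threshold
        self.decay = decay          # fraction retained each time step

    def step(self, pulses):
        """pulses: weights of incoming pulses this time step, positive
        ("excitatory") or negative ("inhibitory"). True if it fires."""
        self.level *= self.decay    # constant decay
        self.level += sum(pulses)   # stairsteps up and down
        if self.level >= self.threshold:
            self.level = 0.0        # reset after firing
            return True             # pulse sent downstream
        return False

# As the net input becomes more positive, the firing rate rises.
for bias in (0.0, 0.1, 0.2):
    neuron = LeakyNeuron()
    fires = sum(neuron.step([random.uniform(-0.2, 0.2) + bias])
                for _ in range(1000))
    print(f"bias {bias:.1f}: fired {fires} times in 1000 steps")
```

The last part is the point: stronger net stimulation produces faster pulsing, which is exactly the pulse-frequency encoding described above.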
Though some inputs are positive and some are negative, they don't all cause the same size of stairstep. Some are very persuasive, some less so, and some have almost no effect. Over time, any given input's value can change, and that may well be the physical mechanism behind memory. But no one really knows. The human brain is the most complicated structure per unit mass we know of, and we have only a rudimentary understanding of how it works, and most of that comes from study of peripheral functions such as the vision center. The frontal lobes, which appear to be where higher thought and personality happen, are not understood at all. And no one has the slightest clue as to what kind of physical mechanism implements long-term memory of high-level information.
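In modeling terms, a change in an input's persuasiveness is a change in a weight. The classic textbook illustration of weights changing with experience is the Hebbian rule ("neurons that fire together wire together"); to be clear, this is a sketch of that textbook rule, not a claim about how real memory works:

```python
# Toy Hebbian weight update: strengthen inputs that were active when
# the neuron fired, and let all weights slowly decay otherwise.
# Learning rate and decay are arbitrary illustrative constants.
def hebbian_update(weights, inputs, fired, rate=0.05, decay=0.001):
    for i, x in enumerate(inputs):
        if fired:
            weights[i] += rate * x        # active inputs gain influence
        weights[i] -= decay * weights[i]  # slow forgetting
    return weights

weights = [0.5, 0.5, 0.5]
weights = hebbian_update(weights, inputs=[1.0, 0.0, 1.0], fired=True)
print(weights)  # inputs 0 and 2 became more persuasive; input 1 did not
```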
The vision center has been studied pretty intensively, however, and is moderately well understood now. In humans it occupies a section at the rear of the brain perhaps six centimeters wide and twelve centimeters tall, and its job is to take a huge stream of data from the eyes and process it: to remove redundant information, to compensate for distortion (such as that caused by lighting), to try to identify which sections of the field of view represent "objects", to evaluate differences between the images captured by the two eyes and calculate depth information from them, and in the end to create an abstracted symbolic representation of what is being looked at, which is sent elsewhere in the brain for further processing at a higher level. The division of labor between the vision center and more general computing elsewhere is not known, nor is it known how that abstract description is encoded, how we remember objects we've seen and recognize them when we see them again, or how we identify the function of objects we've never seen before (like knowing that something is a chair the first time we see it).
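One small piece of that processing, depth from binocular disparity, does have a simple geometric core that can be written down. In an idealized pinhole-camera model (the real system is far messier), depth is inversely proportional to the disparity between what the two eyes see. The numbers here are illustrative, loosely in the range of human eyes:

```python
# Idealized stereo depth: depth = (focal_length * baseline) / disparity.
# Constants are rough illustrative values for human vision.
focal_length_mm = 17.0   # approximate focal length of a human eye
baseline_mm = 65.0       # approximate distance between the pupils

for disparity_mm in (1.0, 0.1, 0.01):
    depth_mm = focal_length_mm * baseline_mm / disparity_mm
    print(f"disparity {disparity_mm:5.2f} mm -> depth ~{depth_mm / 1000:.1f} m")
```

Small disparities correspond to distant objects, which is also why depth perception gets worse with distance.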
At least in the early stages, visual processing is highly pipelined, probably with several parallel pipelines, and with groups of neurons at each stage of the process using the output of previous stages to create greater and greater degrees of abstraction about the image.
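As a software analogy (and only an analogy: the stage names and logic below are invented for illustration, not actual visual-cortex algorithms), that kind of pipeline is a chain of stages, each one consuming the previous stage's output and producing something more abstract:

```python
# Toy pipeline: each stage abstracts the previous stage's output.
def detect_edges(pixels):
    # pretend: raw intensities -> edge fragments
    return [(i, "edge") for i, p in enumerate(pixels) if p > 0.5]

def group_contours(edges):
    # pretend: edge fragments -> candidate object outlines
    return [{"outline": [e[0] for e in edges]}] if edges else []

def describe_objects(contours):
    # pretend: outlines -> abstract symbolic description
    return [f"object with {len(c['outline'])} edge points" for c in contours]

pipeline = (detect_edges, group_contours, describe_objects)
data = [0.1, 0.9, 0.7, 0.2, 0.8]  # stand-in for raw image data
for stage in pipeline:
    data = stage(data)             # each stage feeds the next
print(data)                        # ['object with 3 edge points']
```

In hardware terms, every stage can be working on a different "frame" at the same time, which is part of how such slow components sustain such a high throughput.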