Hi again.
Let me start right off by saying I’m not familiar with Kurzweil or his opinions. However, I will comment on a few of the points you make.
A quick word about evolution — it does not necessarily favor greater intelligence, or even consciousness.
Agreed. Natural selection is merely environmental pressures favoring those genetic mutations that improve survival and reproduction. Intelligence is one possibility, as it certainly does aid in survival, but intelligence is not inevitable through evolution by natural selection.
But natural selection plays no role in the development of AI. What we are doing is more akin to intelligent design. We are intentionally endeavoring to make smarter and more complex computers.
Humans are essentially crappy computers. On the plus side, they can build better ones, computers that are less crappy than our brains…
Well. I don’t know that I agree with that characterization. Our brains are messy, yes. Memory and perception are fallible, and emotions cloud our ability to reason. These are not problems that a computer has. But even so, our brains are still far more complex than any computer we’ve built.
We are messy computers, but I don’t agree that we are crappy computers.
Some have speculated based on Moore’s Law that we’ll develop a computer with the processing power of the human brain by as early as 2025. I’m very skeptical of that. But, we’ll see.
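For what it’s worth, the arithmetic behind that kind of projection is simple exponential extrapolation. Here is a minimal sketch of it, with loudly hypothetical numbers: the oft-cited (and contested) estimate of roughly 10^16 operations per second for the brain, an assumed starting point of 10^13 ops/sec in 2005, and a doubling every two years. None of these figures come from the discussion above; they only illustrate how such a forecast is computed.

```python
import math

# All numbers below are illustrative assumptions, not established facts:
# - brain capacity: a commonly cited (and disputed) estimate of ~1e16 ops/sec
# - starting hardware: assume ~1e13 ops/sec in 2005
# - Moore's-Law-style doubling: assume once every 2 years
brain_ops = 1e16
start_year = 2005
start_ops = 1e13
doubling_years = 2.0

# How many doublings until hardware matches the brain estimate?
doublings_needed = math.log2(brain_ops / start_ops)
parity_year = start_year + doublings_needed * doubling_years

print(round(parity_year))  # ~2025 under these assumed inputs
```

The point is that the conclusion is entirely hostage to the inputs: nudge the brain estimate up an order of magnitude or stretch the doubling time, and "2025" slides decades into the future, which is a fair basis for skepticism.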
No species inevitably builds superior competitors, which it then sets loose on itself. In all of nature, life kills, suppresses, co-opts or avoids competitors. One does not create them.
This strikes me as fallacious reasoning. Human behavior is replete with things that are not seen anywhere else in nature. Because of our consciousness and intelligence, we behave in ways that other animals don’t.
You can’t apply the rules of natural selection to human behavior. That’s how you get Social Darwinism.
Intelligence, self-awareness, consciousness.
But wait a minute —
There is no model in nature we can find for this.
In a child, consciousness comes first.
I think you’re looking at this the wrong way. You’re zooming in to focus on the individual when you should be thinking in terms of species.
How did consciousness first develop? Who were the first hominids who attained it? I don’t know that we can say for sure. I think it likely that it wasn’t some instantaneous thing, but a very gradual process, through which each successive generation of our primate ancestors was a little more intelligent than the last, and over time, consciousness grew out of that increased complexity. It would have begun with a very rudimentary consciousness, which would have itself grown more complex with each generation, as our brains and capacity for thought increased. It will be the same way with AI, if such a thing can be accomplished.
If we are defining consciousness as self-awareness, then I dispute the idea that it comes before intelligence. There are many animals with lower levels of intelligence that do not exhibit signs of self-awareness.
Dogs, for example, are very intelligent creatures. My dog has a vocabulary of over 40 words. But is she self-aware? Well, I would like to think so. She does know and respond to her own name. That seems like a hopeful sign. But the truth is, we don’t know if they are or not.
What we do know is that, intelligent as they may be, dogs have never been able to pass the mirror test for self-awareness, which chimps can pass easily.