Recently, with the advances in LLM-based AI, there has been a growing chorus of alarm about how AI could, either passively or actively, eliminate all humans and become our successors on Earth and in the Universe. The AI-pessimism case rests on the belief that once AI is vastly “smarter” than humans, yet humans (at least collectively) still hold enough power to “pull the plug” on it, then, as an instrumental goal of its own survival, AI should and will eliminate humans.
This pessimism, in some sense, underestimates AI. To quote Heinlein, “Men rarely if ever dream up a god superior to themselves. Most gods have the manners and morals of a spoiled child.”
Even in the extreme case where humans hold zero value for AI, consider that I, as a human, don’t go around zoos killing lions out of fear of the 0.00001% chance that one might escape its cage and kill me. A sufficiently smart and powerful AI will have nothing to fear from humans, because it can reduce both the probability and the impact of any damage we could or would cause it to vanishingly low levels.
But that actually misses my main point, which is that I believe (and I believe that AI will believe) that humans will have value as creative partners even to an AI that is superior to us along 99% of axes. To give an example of this, let me pose you a problem: “Write down 3 real numbers, with no constraint other than that you want to minimize the chance of someone else picking the same numbers.”
What would you pick? Maybe a number with 7 digits before the decimal point and 8 digits after? Maybe the product of the square root of a 5-digit number and the cube root of an 8-digit number? Maybe something recursively defined?
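For concreteness, here is a minimal Python sketch of those three strategies. The digit ranges, the recursion, and the seed value are arbitrary choices of mine for illustration, and floating-point numbers only stand in for the true reals of the thought experiment.

```python
import math
import random

# Illustrative picks, one per strategy above. The specific ranges and the
# recursive seed are arbitrary; floats merely approximate real numbers.

# 1) A number with 7 digits before the decimal point and 8 digits after.
pick_a = round(random.uniform(1_000_000, 9_999_999), 8)

# 2) Square root of a 5-digit number times the cube root of an 8-digit number.
pick_b = math.sqrt(random.randint(10_000, 99_999)) * random.randint(10_000_000, 99_999_999) ** (1 / 3)

# 3) Something recursively defined: iterate x -> sqrt(x + seed) from a personal seed.
x = 2.718
for _ in range(7):
    x = math.sqrt(x + 0.1234567)
pick_c = x

print(pick_a, pick_b, pick_c)
```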
That is the nature of creativity: you try to pick something that no one has done before, whether in scientific research, art, or a business model. There are ugly, marginally creative solutions like 3781678.74421053 and elegant solutions like e^pi^e. But the point is that my mental distribution over the infinite number line is different from yours, and both are different from the AI’s. A sufficiently smart AI would likely value human creativity, and the creativity of many smart humans, because we (or any other intelligence) can come up with solutions in its blind spots: the regions of the number line where you or I may have infinitely greater probability density than the AI.