One Easter in my youth I was in Paris, at what I remember as a technology museum. Apparently no such museum exists, so it might have been some isolated, smaller exhibition. But I’m pretty sure it took place at the Centre Pompidou.
I mention this because I remember a row of terminals whose purpose was to stage something resembling a “Turing Test.” You could type something there and wait for an answer, not knowing whether it was a (rudimentary) chatbot answering you or a real person facing another terminal across the room. Nowadays we know that something like ChatGPT can pass rigorous Turing Tests quite easily, yet no one recognizes this as “real” intelligence. More than a graduation of AI, we’ve observed a devaluation of the test.
Then the other day I saw a short quote from Ted Chiang that made me chuckle:

“Less than what we are.” Really??!
This is funny because one subject of some stories Ted Chiang wrote (and whose substance I’ve criticized, over on the other side of this website) is precisely that “intention.”
So I ended up thinking that, precisely as in one of those stories, we actually have a real Ultimate Turing Test which, if executed, would undo all of humanity. It would unambiguously end all debate, all possibility of debate.
Just as in Westworld, what you would recognize as a living human being could be immediately unmade by a command: “cease all your motor functions.” Like some kind of kernel-level, deeper command, its power lies in disregarding whatever this robot was thinking or doing. It preempts. It comes before, in Bakker’s terms. Its power lies specifically in neutralizing all traces of intention. It destroys the world of internal, introspective fiction. You cannot resist any of this; there is no fighting back. Your brain, your consciousness, is in someone else’s hands. So you don’t control anything at all: someone else decides what you do and what you think. You are, radically, unmade. You are not what you think you are. And there are no arguments, no opinions, because you are radically rewritten at the root.
What “more” are humans, then? A story? A load of temporary bullshit waiting to fall apart and be rewritten?
Yes, it might be “fundamentally dehumanizing,” but not because it treats us as less than we are: because we are far less than what we think we are. As in the example I just gave, it destroys the assumptions we hold.
This is precisely what the Ultimate Turing Test would deliver: a magic spell that, when uttered, destroys humanity at once. It would unmake all of us. No appeals, no defenses. It would be over.
And compared to the Turing Test itself, it’s also quite simple, actually MUCH simpler.
If the Turing Test is essentially about guessing a plausible answer, the Ultimate Turing Test would be about guessing… the question.
Rather than aping human behavior, harmlessly coming after it, it could come before: it could predict human behavior. Rather than giving you a good answer to a question, it could predict what you are just about to say. And if it could do just that, then game over.
We know that one of the threats of current generative AIs is that some writer might think: why should I make a great deal of effort to write my next book? I’ll use some AI and publish that instead. Which brings all kinds of further considerations. But what if instead we look ahead to the possibility of this Ultimate Turing Test, the one that destroys the FUNDAMENTAL TRUTH of our conceptions? What if a writer used this next-gen AI, and this AI, rather than producing some plausible text, predicted what that author would write? It could step five years into the future, take the book that this author will write, and bring it back. This is no longer generative slop, which would already mess deeply with copyright and the sense of ownership. This is your book, your very own exclusive art and talent, the book that you would have spent five years working hard to write. Now delivered back to you, at no effort. By predicting human behavior, AI could utterly annihilate the base assumption of humanity. Without appeal.
Famously (?), Scott Aaronson touched on this subject, in much greater depth, in a free .pdf that I often refer to. You can follow that link to the blog, and then the first link on “The Ghost in the Quantum Turing Machine” for the pdf itself. I think the example is explained on page 16.
Ultimately, the real key to general intelligence is what would erase it. By proving it false.
P.S.
As much as everything pertaining to “free will,” including my own personal “work,” proves that this cannot fundamentally happen. It’s simply scientifically false. We know it, or at least I do. The problem, as I understood a while ago, is that it’s not a binary state, just as the original Turing Test isn’t a binary state. When it comes to predictions, as we all know (see “PolyMarket”), there’s a whole range of possibilities. Because the world is too complex, and so not deterministic, you cannot step five years into the future and “predict” a book that is going to be written: it doesn’t happen in a vacuum, and it depends on all of reality. But again, there’s a whole world between accurate, unfailing prediction and something “close enough” to be effective. And again, if you watched the later seasons of Westworld, you have seen this play out as well.
Curiously, this could hypothetically be related to the P ≠ NP problem. What happens if you merely compare the costs? On one side, the energy cost of a human being writing a book across five years; on the other, the cost of an AI computing the prediction of the same.