Problem of direct wiring

The problem of direct wiring reveals that AI's conception of the electronic digital computer as a device which performs internal computations is fundamentally wrong.

The claim of such fundamental error, of course, would be strongly rejected by both AI and computer science (AI gets its understanding of the computer from computer science).

How could the venerable and vast research discipline of computer science, taught through all the Universities of Christendome and grounded upon certain Texts of Turing, as Hobbes might say, be wrong? Computer science is science, scientific fact. How could there be basic error?

Nevertheless, I hope to explain why the problem of direct wiring shows that electronic digital computers are not, and never could be, symbol-processing devices.

I'd like to offer two thought experiments: one about the intrinsic relation of a keyboard to a computer, the other about what actually comes out of the back of a keyboard when a key is pressed and what the computer then receives and processes.

One would think that AI already knows all about this. Every electronic engineer knows. But AI (and computer "science") does not understand the computer in terms of the concepts of the basic sciences (physics and chemistry).

The problem of direct wiring is about the Turing test as performed. The performance differs in a crucial respect from the test as described by Turing (1950, "Computing Machinery and Intelligence"). On Turing's description, the computer contestant takes the place of a man.

The man is happily taking the test, which consists of answering a number of questions from an "interrogator". The questions appear as text on the paper roll at the back of a teleprinter. The man sees the text, understands the meanings of the shapes of the text, and understands the questions.

The test as performed

In the Turing test as performed, for example in the annual Loebner Prize Competition, there is a non-crucial difference compared to Turing's description of the test.

The teleprinter is replaced by a monitor and keyboard. The man, if he were to use the more modern technology, would see the text displayed on a monitor.

If the computer then takes the place of the man (all Turing says is that it takes the place of the man and then continues with the test as if it were the man), then the computer contestant will also see the text displayed on the monitor. It will be a robot with (at least) one eye to see the text and one finger to tap out a reply on the keyboard.

But here is the crucial difference. In the test as performed, the computer contestant is not a robot. It has no eyes or fingers. Rather, it is wired directly into the interrogator's keyboard.

All it gets is what flows down the wire coming out the back of the keyboard. There is no text inside the wire. The computer never gets the text. It never gets the interrogator's questions.

That's the first part of the problem of direct wiring. The second part is: why does no one realize this? Why does everyone think the computer does get the questions?

Why think the machine gets the questions?

The reason everyone thinks the computer contestant gets the text is that they accept a myth: the myth that computers are symbol-processing devices, that they are physical symbol systems.

If computers operate by manipulating inner symbols, then the wire does contain the interrogator's text. It contains symbols. The computer receives the symbols and processes them.

Just as in the common view of word processing. Words are typed on a keyboard, then go into the computer and are processed. (So goes the word-processing version of the myth.)

But the idea of text inside the wire, as every electronic engineer knows, is pure fantasy. There is no text inside the wire.

Thought experiments

Keyboards

My first thought experiment is intended to clarify the intrinsic nature of a keyboard. What is the intrinsic nature?

But first, what of the extrinsic nature? A human sees certain shapes (typically uppercase letters of the English alphabet, punctuation marks, and a few others) displayed on the top surfaces of the keyboard's keys.

These are put there by humans for humans to use. Humans see them and then decide which keys to press, depending on the shapes encrusted on the exposed surfaces of the keys.

The human is extrinsic to the keyboard-computer system. The shapes on the keyboard keys are put there for extrinsic use.

But AI needs to understand the computer (and attachments such as keyboards) from the intrinsic perspective. What is the machine's inherent nature? What of the machine in and of itself? To the machine itself, what is a keyboard?

Imagine you are the machine. You have a rectangular area of skin etched with some tattoos. Someone put them there, not you. You don't even know the tattoos are there. You can't see them. You have no eyes.

But you can feel pressure. You feel a press and a release. It happens to be at a location under one of the tattoos, but you have no idea about that. You just feel the press then release. That's all.

To the machine itself, a keyboard is an area of digital skin, a touch sensor. The symbols printed on the exposed surfaces of the keys are completely irrelevant. As far as the machine itself is concerned, they don't even exist.
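
To make this concrete, here is a minimal sketch, in C, of the sort of scan loop a keyboard controller might run. It is only an illustration: the matrix size, the drive_column and read_rows routines, and the simulated key press are my own assumptions, not any particular keyboard's firmware. Notice that no letter, character, or symbol appears anywhere in it; all the controller ever establishes is that the switch at some row and column has closed or opened.

    /* Illustrative sketch of a keyboard controller's scan loop.
       drive_column() and read_rows() stand in for real port I/O;
       report() stands in for sending an event up the cable. */

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define ROWS 6
    #define COLS 18

    static bool key_state[ROWS][COLS];   /* last known state of each switch */

    /* Placeholder: energise one column line of the switch matrix. */
    static void drive_column(int col) { (void)col; }

    /* Placeholder sample of the row lines: bit r is set if the switch at
       (r, driven column) is closed. Here we simply pretend the switch at
       row 2, column 5 is held down. */
    static uint8_t read_rows(int col)
    {
        return (col == 5) ? (uint8_t)(1u << 2) : 0;
    }

    /* Placeholder: emit a press/release event toward the cable. */
    static void report(int row, int col, bool pressed)
    {
        printf("switch at row %d, column %d %s\n",
               row, col, pressed ? "closed" : "opened");
    }

    /* One pass over the matrix: detect which switches changed state.
       No symbol, letter, or character figures in this loop at all. */
    static void scan_once(void)
    {
        for (int c = 0; c < COLS; c++) {
            drive_column(c);
            uint8_t rows = read_rows(c);
            for (int r = 0; r < ROWS; r++) {
                bool now = (rows >> r) & 1u;
                if (now != key_state[r][c]) {
                    key_state[r][c] = now;
                    report(r, c, now);
                }
            }
        }
    }

    int main(void)
    {
        scan_once();   /* a real controller would repeat this forever */
        return 0;
    }

Whatever the controller then sends up the cable is built out of events of exactly this kind: a switch at a location closed, a switch at a location opened.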

To suppose that the shape on a key, when the key is pressed, somehow duplicates itself, and that the doppelganger then races down the wire from the keyboard into the machine to be manipulated or stored, is fantasy. To think that the interrogator's text flows down the wire is fantasy.

And to say that these imaginary symbols now inside the machine are knowledge representations, that they represent knowledge, is simply to agree that Bigfoot is often seen riding a unicorn while wearing a pan-dimensional yellowish-green motorized bow-tie.

The wires

The second thought experiment is about what is inside the wire leaving the back of a keyboard when a key is pressed, and about what arrives at the computer proper, enters it, and is processed.

The system is electronic. The keyboards we are talking about are electronic. They contain electronic circuitry. What is emitted down the wire is electrical current. We may say, electrons. The circuitry, by use of a clock, treats the current as divided into discrete units, but really the current is continuous.
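
The same point can be made from the receiving end. The sketch below is, again, only an illustration: it assumes a PS/2-style serial frame (a start bit, eight data bits least significant first, an odd parity bit, a stop bit), and it hard-codes the sampled logic levels rather than reading a real wire. The byte those levels happen to encode, 0x1C, is the code that hosts using scan code set 2 conventionally associate with the key labelled 'A'. Assembling the byte is pure arithmetic on high and low levels; the letter appears only in a lookup the host's software performs afterwards, by convention.

    /* Illustrative sketch: what a host makes of the clocked units of current.
       The "wire" here is just an array of sampled logic levels. */

    #include <stdint.h>
    #include <stdio.h>

    /* Eleven samples, one per clock pulse. There is no letter in here,
       only high and low levels. */
    static const int samples[11] = {
        0,                          /* start bit */
        0, 0, 1, 1, 1, 0, 0, 0,     /* data bits of 0x1C, least significant first */
        0,                          /* odd parity (the data already has three 1s) */
        1                           /* stop bit */
    };

    /* Assemble the data bits into a byte: arithmetic on levels, nothing more. */
    static uint8_t assemble(const int s[11])
    {
        uint8_t value = 0;
        for (int bit = 0; bit < 8; bit++)
            value |= (uint8_t)(s[1 + bit] << bit);   /* skip the start bit */
        return value;
    }

    int main(void)
    {
        uint8_t code = assemble(samples);

        /* The association with a letter exists only here, in a table the
           host's software carries around by convention. */
        char legend = '?';
        if (code == 0x1C)            /* set-2 make code conventionally mapped to 'A' */
            legend = 'A';

        printf("received byte 0x%02X, which the host's table maps to '%c'\n",
               code, legend);
        return 0;
    }

Swap the host's table and the very same pattern of current gets mapped to something else entirely; nothing about the current itself carries the letter.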

Imagine you're an electron traveling down the copper wire (we're talking direct current here). You're one of a gang of electrons in a clocked unit of the continuous current. You look around.

Where is the text of the interrogator's questions? It can't be your gang. Your gang is just an amorphous cloud of electrons. Yet you were very seriously assured that the interrogator's text is inside the wire. You are perplexed. You can't see it.

But perhaps you are too close and are actually part of the text. You decide you need to stand back to get an overview. You grab the next copper atom you come to and hang on. Now you see the cloud of electrons continuously streaming by.

But where's the text? The machine proper will never understand the questions unless it receives the shapes of the letters of the text. But where are they?

They're not there. Computers, you now desperately realize, are not symbol processing devices at all. You've been led astray. Both AI and computer science are wrong in their fundamental belief, proposed by mathematicians, not real scientists, that computers operate by manipulating symbols.