This site


Overview

This site mainly concerns machine intelligence. Tenets include:

  • To be intelligent is to understand the world,
  • The machine at issue is electronic,
  • The process of understanding is a semantic process,
  • To appreciate how the machine could be intelligent entails knowing the semantics of electronics.

Had AI research been scientific, the topic of the semantics of electronics would have been front and center from the start. However, the research project set off down the wrong path. To this day, the field still doesn't understand (and shows little interest in understanding) the semantics of electronics.

The AI critic and American philosopher John Searle did understand semantics, but he didn't understand electronics. He analyzed the Turing machine, wrongly thinking he was dealing with the electronic computer. The two machines are semantically very different.

The AI founder Alan Turing understood electronics but didn't understand semantics. This forced him into difficult positions, such as claiming that no machine could have human-like intelligence (think); recommending telepathy as a mode of communication; and assigning a single teleprinter for communication between two rooms.

I take what Turing knew (electronics) and what Searle knew (semantics), and use the combined knowledge, updated for semiconductors, to understand the semantics of electronics.

This provides a scientific foundation for machine intelligence research based on the physics and chemistry of electronics. This foundation can be used to understand how electronic components could come to know the world, including the meanings of the shapes of text.

Contributions of Turing and Searle

Turing tried to understand intelligence with concepts of computation, an error Searle exposed so clearly in his 1980 Chinese room argument. But in 1950 Turing knew there was a fundamental problem: he was unable to explain the principal semantic process of human intelligence, that of thinking. Rather than admit the difficulty, he simply said "The ... question, 'Can machines think?' I believe to be too meaningless to deserve discussion".

But as his then friend Robin Gandy said, Turing's 1950 paper was more propaganda against prevalent anti-machine-intelligence views than the manifesto of AI it is now often taken to be. One doesn't show weakness in propaganda. Not long after, in 1954, Turing's life was tragically cut short. By that time research in both Britain and America was heading down the computational path: the path of using concepts of computation to understand the potentialities of the electronic machine.

But Turing was intimately familiar with the science of electronics as it existed at that time. In 1946 he designed, from the ground up, an early computer now known as the Automatic Computing Engine (ACE) (called the Proposed Electronic Calculator in his technical design report). He gave lectures on electronics. He was very familiar with the physics of the teleprinter, and included a description of the clocked current transmission mode starting at page 89 of his 1951 Programmers' Handbook for Manchester Electronic Computer Mark II.

Searle's Chinese room argument (CRA) is a semantic argument: it concerns the meaning and understanding of words. If a computer could be intelligent, then it would be capable of understanding the meanings of words. Searle considers text.

He used a set of semantic tools to analyze what he took to be the computer. But in fact he analyzed the Turing machine. Unlike computers, Turing machines do internally manipulate text.
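
To make this concrete, here is a minimal sketch, in Python, of a Turing machine's step loop (the machine and its rule table are hypothetical illustrations, not drawn from Turing's or Searle's writings): every step consists of reading a symbol from the tape, looking it up in a table of rules, and writing a symbol back. The machine manipulates text and nothing else.

    # A minimal Turing machine sketch (hypothetical rules): each step reads
    # and writes a tape symbol, i.e. the machine internally manipulates text.
    def run_turing_machine(tape, rules, state="start", head=0, blank="_"):
        tape = list(tape)
        while state != "halt":
            if head == len(tape):
                tape.append(blank)            # extend the tape with blanks
            symbol = tape[head]               # read a symbol (a piece of text)
            state, write, move = rules[(state, symbol)]
            tape[head] = write                # write a symbol (text again)
            head += 1 if move == "R" else -1
        return "".join(tape)

    # Hypothetical rule table: flip every bit, then halt at the first blank.
    rules = {
        ("start", "0"): ("start", "1", "R"),
        ("start", "1"): ("start", "0", "R"),
        ("start", "_"): ("halt",  "_", "R"),
    }

    print(run_turing_machine("0110", rules))  # prints "1001_"

Everything this machine touches is a written symbol; nothing in its description refers to voltages, currents or any other physical property of an electronic component.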

I use the same tools but analyze computer electronics. Electronic components don't internally manipulate text, so computers have a significantly different semantics from Turing machines. But Searle's analytical tools can be applied to devices which react to inner electrons just as well as to ones which manipulate inner text, though one gets different results.

Searle used semantic concepts mainly related to the distinction between intrinsic and extrinsic semantics. In a system with an intrinsic semantics, meanings, understanding and knowledge are contained within the system. In a system with an extrinsic semantics, those features are contained only within the intelligent observer. Computation has an extrinsic semantics. Humans have an intrinsic semantics. No computation (the process) could have human-like intelligence.

Searle made the mistake of thinking that computers (the electronic devices) operate by executing inner computations, and hence wrongly concluded that no computer could have human-like intelligence. But his tools can be applied to electronics just as well as to text. I do this and conclude that the electronic components of computers could have an intrinsic semantics. My examination includes a semantic analysis of ChatGPT.

Ideas

On the Ideas page, I link to articles in which I try to explain what's wrong with, and what's right with, the Turing test, the CRA and ChatGPT. I also outline what I believe are principles for understanding how electronic circuitry could acquire semantic content, or knowledge. This explanation involves a theory about how an electronic system could come to understand its environment by virtue of the operation of sensory apparatus, in particular through the key semantic process of transduction.


Some classic papers and books

A sampling of classic papers and books. Turing's and Searle's seem the most important.