Overview
This site mainly concerns machine intelligence. Tenets include:
- To be intelligent is to understand the world;
- The machine at issue is electronic;
- The process of understanding is a semantic process;
- Appreciating how the machine could be intelligent entails knowing the semantics of electronics.
Had AI research been scientific, the topic of the semantics of electronics would have been front and center from the start. However, the research project set off down the wrong path. To this day, the field still doesn't understand (and shows little interest in understanding) the semantics of electronics.
The AI critic, American philosopher John Searle, understood semantics but not electronics. He analyzed the Turing machine, wrongly thinking he was dealing with the electronic computer. The two machines are semantically very different.
The AI founder, Alan Turing, understood electronics but not semantics. This forced him into difficult positions, such as claiming that no machine could have human-like intelligence (think); recommending telepathy as a mode of communication; and assigning a single teleprinter for communication between two rooms.
I take what Turing knew (electronics) and what Searle knew (semantics), and use the combined knowledge updated for semiconductors to understand the semantics of electronics.
This provides a scientific foundation for machine intelligence research based on the physics and chemistry of electronics. This foundation can be used to understand how electronic components could come to know the world including the meanings of the shapes of text.
Contributions of Turing and Searle
Turing tried to understand intelligence with concepts of computation, an error Searle exposed so clearly in his 1980 Chinese room argument. But in 1950 Turing knew there was a fundamental problem. He was unable to explain the principal semantic process of human intelligence, that of thinking. But rather than admit the difficulty, he simply said "The ... question, 'Can machines think?' I believe to be too meaningless to deserve discussion".
But as his friend Robin Gandy said, Turing's 1950 paper was more propaganda against prevalent anti-machine-intelligence views than a manifesto for AI, as it is now often taken to be. One doesn't show weakness in propaganda. Not long after, in 1954, Turing's life was tragically cut short. By that time, research in both Britain and America was heading down the computational path: the path of using concepts of computation to understand the potentialities of the electronic machine.
But Turing was intimately familiar with the science of electronics as it existed at the time. In 1946 he designed from the ground up an early computer, now called the Automatic Computing Engine (ACE) (in his technical design report, called the Proposed Electronic Calculator). He gave lectures on electronics. He was very familiar with the physics of the teleprinter, and included a description of the clocked current transmission mode starting at page 89 of his 1951 Programmers' Handbook for Manchester Electronic Computer Mark II.
Searle's Chinese room argument (CRA) is a semantic argument: it concerns the meaning and understanding of words. If a computer could be intelligent, then it would be capable of understanding the meanings of words. Searle considers text.
He used a set of semantic tools to analyze what he took to be the computer. But in fact he analyzed the Turing machine. Unlike computers, Turing machines do internally manipulate text.
I use the same tools but analyze computer electronics. Electronic components don't internally manipulate text. Computers have a significantly different semantics from Turing machines. But Searle's analytical tools can be used on devices which react to inner electrons just as well as on ones which manipulate inner text, though one gets different results.
Searle used semantic concepts mainly related to the distinction between intrinsic and extrinsic semantics. In a system with an intrinsic semantics, meaning, understanding, and knowledge are contained within the system. In a system with an extrinsic semantics, those features are contained only within the intelligent observer. Computation has an extrinsic semantics. Humans have an intrinsic semantics. No computation (the process) could have human-like intelligence.
Searle made the mistake of thinking computers (the electronic device) operate by executing inner computations, and hence wrongly concluded that no computer could have human-like intelligence. But his tools can equally be applied to electronics as to text. I do this and conclude that the electronic components of computers could have an intrinsic semantics. My examination includes a semantic analysis of ChatGPT.
Ideas
On the Ideas page, I link articles in which I try to explain what's wrong with, and what's right with, the Turing test, the CRA and ChatGPT. I also outline what I believe are principles for understanding how electronic circuitry could acquire semantic content, or knowledge. This explanation involves a theory about how an electronic system could come to understand its environment by virtue of the operation of sensory apparatus, in particular the key semantic process of transduction.
Some classic papers and books
A sampling of classic papers and books. Turing's and Searle's seem the most important.
- Bar-Hillel, Yehoshua, 1960, The present status of automatic translation of languages
- Bar-Hillel, Yehoshua, 1960, A demonstration of the nonfeasibility of fully automatic high quality translation
- Block, Ned, 1981, Psychologism and behaviorism
- Block, Ned, c.1995, The mind as the software of the brain
- Bickhard, Mark and Loren Terveen, 1995, Foundational issues in artificial intelligence and cognitive science
- Brooks, Rodney, 1986, Achieving artificial intelligence through building robots
- Brooks, Rodney, 1990, Elephants don't play chess
- Brooks, Rodney, 1991, Intelligence without representation
- Brooks, Rodney, 1991, Intelligence without reason
- Dennett, Daniel, 1984, Cognitive wheels: The frame problem of AI
- Dreyfus, Hubert, 1965, Alchemy and artificial intelligence
- Dreyfus, Hubert, 1979, From micro-worlds to knowledge representation: AI at an impasse
- Fodor, Jerry, 1983, The modularity of mind
- French, Robert, 2000, The Turing test: The first 50 years
- Gibson, James, 1986, The theory of affordances
- Gibson, James, 1986, The ecological approach to visual perception, chs 5 and 11
- Harnad, Stevan, 1990, The symbol grounding problem
- Haugeland, John (Ed.), 1997, Mind design II
- Kuhn, Thomas, 1962, The structure of scientific revolutions
- LeBouthillier, Arthur, 1999, W. Grey Walter and his turtle robots
- Lighthill, James, 1973, Artificial intelligence: A general survey
- Lovelace, Ada, 1843, Note G
- Marr, David, 1982, Vision, ch 1
- McCarthy, John, 1955, Proposal for the 1956 Dartmouth College summer workshop
- McCarthy, John and Patrick Hayes, 1969, Some philosophical problems from the standpoint of artificial intelligence
- McCorduck, Pamela, 2004, Machines who think
- McCulloch, Warren and Walter Pitts, 1943, A logical calculus of the ideas immanent in nervous activity
- Minsky, Marvin, 1961, Steps toward artificial intelligence
- Minsky, Marvin and Seymour Papert, 1969, Perceptrons
- Minsky, Marvin and Seymour Papert, 1972, Artificial intelligence progress report
- Minsky, Marvin, 1974, A framework for representing knowledge
- Minsky, Marvin, 1982, Why people think computers can’t
- Neumann, John von, 1945, First draft of a report on the EDVAC
- Newell, Allen and Herbert Simon, 1961, Computer simulation of human thinking
- Newell, Allen and Herbert Simon, 1976, Computer science as empirical inquiry: Symbols and search
- Nilsson, Nils, 2007, The Physical Symbol System Hypothesis: Status and prospects
- Pinker, Steven, 2005, So how does the mind work?
- Fodor, Jerry and Zenon Pylyshyn, 1988, Connectionism and cognitive architecture: A critical analysis
- Russell, Stuart and Peter Norvig, 1995, Artificial intelligence: A modern approach (1st edition)
- Saygin, Ayse et al., 2000, Turing test: 50 years later
- Searle, John, 1980, Minds, brains, and programs
- Searle, John, 1983, Can computers think?
- Searle, John, 1984, Minds, brains and science
- Searle, John, 1987, Interview with Bruce Krajewski
- Searle, John, 1990, Is the brain a digital computer?
- Searle, John, 1990, Is the brain's mind a computer program?
- Searle, John, 1997, The mystery of consciousness
- Searle, John, 1999, The problem of consciousness
- Searle, John, 2014, What your computer can't know
- Shannon, Claude, 1948, A mathematical theory of communication
- Simon, Herbert, 1962, The architecture of complexity
- Sloman, Aaron, 1978, The computer revolution in philosophy
- Solomonoff, Grace, n.d., c.2011, History of the Dartmouth summer research project
- Stallings, William, 2010, Computer organization and architecture
- Turing, Alan, 1936, On computable numbers, with an application to the entscheidungsproblem
- Turing, Alan, 1946, Proposed Electronic Calculator (report on the Automatic Computing Engine, ACE)
- Turing, Alan, 1950, Computing machinery and intelligence
- Turing, Alan, 1951, Programmers' Handbook for Manchester Electronic Computer Mark II
- Wiener, Norbert, 1948, Cybernetics