Introduction

The research project of artificial intelligence (AI) has made minimal progress towards its original 1950s goal: human-like artificial general intelligence (AGI) in a computer.

The project understands the mind with the concepts of computation, but the computational paradigm is unable to explain human-like intelligence - for reasons Searle gives in his 1980 Chinese room argument.

Searle solves half the puzzle. He rejects computation (the process). But he accepts that computers operate by performing inner computations.

However, the concepts of computation are also unable to explain the computer. The machine is capable of thinking, but the concepts of computation can't explain how.

That's the radical case I want to put (which completely rejects the established wisdom).

Then I want to suggest a starting point for developing a replacement paradigm that can explain how the so-called "computer" could think.

Crisis of Science

Failure to achieve AGI

The AI research project began in the 1950s with the goal of creating in a computer the computation (program) of human-like general intelligence (AGI), including "every aspect of learning or any other feature of intelligence".

Yet despite seven decades of largely well-funded and extensive effort by many thousands of researchers, very little demonstrable progress has been made so far.

Idiomatic conversation, common-sense knowledge, and avoidance of combinatorial explosion, among other traits of human-like intelligence, are still elusive.

New paradigm needed?

It's now grudgingly admitted by some that the concepts of computation may be unable to explain general intelligence.

But if true, this seems catastrophic. Almost certainly the computer is the only available machine with sufficient speed of state change and quantity of uniquely addressable storage.

I argue that AI is in a classic crisis of science in Thomas Kuhn's sense. And Searle's Chinese room thought experiment might be like...

...the analytical thought experimentation that bulks so large in the writings of Galileo, Einstein, Bohr, and others [which] is perfectly calculated to expose the old paradigm to existing knowledge in ways that isolate the root of crisis with a clarity unattainable in the laboratory.

The old paradigm is the computational theory of mind, and the existing knowledge is the semantically vacant nature of the symbol.

On this view of AI being in crisis, the concepts of the existing paradigm are preventing progress.

It's not a matter of adding more concepts or modifying existing ones. In a crisis of science the problematic concepts are fundamental. The entire framework needs to be replaced.

I want to argue for replacement, but also to put the case for retention: computation, the process, must be abandoned, yet AI can keep the hardware - its machine.

This seems contradictory. But AI has failed to examine the most radical possibility: its theory of mind is wrong, but so is its theory of its machine. Computers can operate without performing inner computations.

I argue that both theories are wrong, then conclude that a new conceptual framework is possible. This must be true to (reducible to) the basic science of the physics and chemistry of the computer's electronics.

I propose a paradigm which lacks concepts of the symbol, computation, representation, information, and various related ideas including knowledge representation and what computational AI calls perception.

If the above assessment of AI's situation is accurate, AI can accept the Chinese room argument's conclusion that no computation could think. But happily, it need not abandon its electronic equipment.

Climate change

AGI would be really helpful for mitigating, even reversing, climate change.

Most of the necessary clean-up technology already exists. The main problem is the cost of the human intelligence required to make and operate the many units needed.

We are prone to many counterproductive evolved tendencies, and these have greatly hindered the consensus needed to fight climate change. But we are very good at making machines, if we know the principles.

We need a machine with the sort of technical competence we have. This sort of technical intelligence could save us, and also many other species, from disaster. We need a New Manhattan Project, but this time to save life - with AGI.

Computation

The AI project understands the mind and the machine called the computer with concepts of computation. Yet there are different views about what computation is.

I think the most relevant one is that used by Searle in his Chinese room argument, referenced by John McCarthy in the 1955 flier for the 1956 Dartmouth summer workshop, and indicated by Turing in his 1936 paper about computable numbers.

On this view, computation is the manipulation of symbols according to rules about the meanings of their shapes.

For example, the computation 1+2=3 is true by virtue of the meanings (and positions) of the shapes of the constituent atomic symbols, and 1+3=2 is false for the same reason.
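
To make this view concrete, here is a minimal sketch (my own illustration, not from any cited source) in which "addition" is carried out purely by matching shapes against a rule table; nothing in the table or the procedure involves the concept of number.

```python
# Hypothetical sketch: "1+2=3" as pure shape manipulation.
# The rules pair sequences of shapes with a result shape; the machine applying
# them has no notion of quantity, only of which shapes appear in which positions.
RULES = {
    ("1", "+", "2"): "3",
    ("2", "+", "1"): "3",
    ("1", "+", "3"): "4",
    ("3", "+", "1"): "4",
}

def compute(symbols):
    """Return the result symbol dictated by the shape-matching rules."""
    return RULES[tuple(symbols)]

print("1+2=" + compute(["1", "+", "2"]))  # prints "1+2=3", by shape lookup alone
```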

Turing

AI project founder Turing, in his 1936 article, which is widely considered to define the concept of machine computation, says (p. 249):

Computing is normally done by writing certain symbols on paper.

That is, computation is the manipulation of symbols according to rules contingent on the meanings of their shapes.

His article describes the theoretical machine now known as the Turing machine, considered the abstract invention of today's electronic computer.

Turing machines do internally manipulate symbols, ones such as "1", "e", "x", "::", and "0". And these symbols, when the machine operates on them, are in the machine (p. 231):

At any moment there is just one square [storage location], say the r-th, bearing the symbol G(r) which is "in the machine".

In its "universal" mode, the machine manipulates a description of some other machine, possibly another Turing machine but one not in universal mode. Turing (pp. 240-1):

In this way we obtain a complete description of the machine [M] ... This new description of the machine may be called the standard description (S.D). It is made up entirely from the letters "A", "C", "D", "L", "R", "N" and from ";" ... [241] The Universal Computing Machine [U]. If this machine U is supplied with a tape on the beginning of which is written the S.D of some computing machine M then U will compute the same sequence as M.
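
As an illustration of the same idea (a sketch of my own, not Turing's notation): a machine's behaviour can be handed to an interpreter as a bare description, a table mapping (state, scanned symbol) to (symbol written, head move, next state), and the interpreter then produces the sequence that the described machine would produce.

```python
# Hypothetical sketch of a machine description being "run" by an interpreter,
# loosely in the spirit of Turing's first example machine, which prints the
# alternating sequence 0 1 0 1 ... on a blank tape.
# Description of machine M: (state, scanned symbol) -> (write, move, next state).
M_DESCRIPTION = {
    ("b", " "): ("0", +1, "c"),
    ("c", " "): (" ", +1, "e"),
    ("e", " "): ("1", +1, "f"),
    ("f", " "): (" ", +1, "b"),
}

def run(description, steps):
    """Interpret a machine description for a fixed number of steps, starting in state 'b'."""
    tape, pos, state = {}, 0, "b"   # sparse tape: position -> symbol
    for _ in range(steps):
        write, move, state = description[(state, tape.get(pos, " "))]
        tape[pos] = write
        pos += move
    return "".join(tape.get(i, " ") for i in range(max(tape) + 1))

print(run(M_DESCRIPTION, 12))  # prints alternating 0s and 1s separated by blanks
```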

Dartmouth summer workshop

Organizer John McCarthy's 1955 flier for the 1956 Dartmouth College summer workshop, the event that formalized the start of the AI research project in America, begins:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.

Thus we see two things: the project goal is human-like general intelligence, and the method is simulation, a form of computation that operates on symbols that describe another machine or natural system.

The flier's introduction doesn't explicitly mention the type of machine, but there was only one on offer: the stored-program electronic digital computer.

And work had already begun using electronic digital computers to simulate some types of human intelligence including game playing and theorem proving.

Hence the organizers of the workshop, mostly trained mathematicians, envisioned achieving human-like general machine intelligence by the computational process of simulation in a computer.

The descriptions mentioned, the symbols, are the program, the rules. The machine also has input and output symbols. The computer manipulates these according to the rules. This is computation. This was the concept.

Searle

Searle, in his 1980 Chinese room argument, adopts the Turing machine as the exemplar of computation and the essence of the digital computer. As he repeats in 2014:

...a Turing machine ... is a purely abstract theoretical notion. All the same, for practical purposes, the computer you buy in a store is a Turing machine. It manipulates symbols according to computational rules and thus implements algorithms.

Conceptual problems

Over the years, despite enthusiastic research at many universities, the AI project has missed its expected milestones and made little progress towards artificial general intelligence.

Rather, various seemingly severe difficulties with the computational paradigm, mainly philosophical but also practical, have been identified.

Daniel Dennett, John McCarthy, Hubert Dreyfus, Hilary Putnam, Mortimer Taube, Jerry Fodor, John Searle, Zenon Pylyshyn, Kenneth Sayre and others conclude that AI has fundamental conceptual difficulties understanding the mind as an executing computation.

It seems reasonable to suppose that these difficulties have been a factor in AI's lack of progress towards the general case. Computer scientist Hector Levesque (2017) notes (p.131):

We are still nowhere near the sort of [human-like] intelligent computer seen in the movie 2001.

British quantum physicist David Deutsch earlier concluded:

I cannot think of any other significant field of knowledge where the prevailing wisdom, not only in society at large but among experts, is so beset with entrenched, overlapping, fundamental errors. ... The lack of progress in AGI is due to a severe log jam of misconceptions.

Problems of AI

The recognized problems with AI theory and practice include those now known as:

The only machine

The computer seems the only machine with sufficient internal speed of state change, uniquely addressable storage, and storage access method for human-like intelligence.

Abandoning the machine would, so it seems, be to abandon the very project itself, at least as dedicated to achieving concrete real-world results.

Philosophical objections

Perhaps because of this, the fundamental problems of computation don't appear to have greatly influenced the pragmatic project of actually programming actual computers.

Perhaps this perceived dependence on existing hardware is what underlies the relatively high contempt many AI researchers have for the philosophical objections.

Stuart J. Russell and Peter Norvig, in their preeminent AI text (1st edition, p. viii), introduce the last part of their book:

Finally, Part VIII, "Conclusions," analyses the past and future of AI, and provides some light amusement by discussing ... the views of those philosophers who believe that AI can never succeed at all.

Then in Part VIII, Chapter 26, "Philosophical Foundations" (pp. 837-838), the two authors explain:

We have presented some of the main philosophical issues in AI. These were divided into questions concerning its technical feasibility (weak AI), and questions concerning its relevance and explanatory power with respect to the mind (strong AI). Fortunately, few mainstream AI researchers, if any, believe that anything significant hinges on the outcome of the debate...

R&N themselves paid little attention to the philosophical issues (they get Weak AI and Strong AI the wrong way round), and this is exemplary of the disdain common among the practical, scientific AI research community.

Computation is science

But there is often a disconnect between philosophy and science.

I was trained as a scientist, then intellectually captured by philosophy and economics, and found philosophy by far the hardest discipline I'd encountered.

So I can see why the AI community is skeptical of philosophers, "armchair experts". Also, perhaps for funding reasons, early AI leaders including Minsky, Newell and Simon strongly promoted AI as science.

The discipline was characterized as science right from its inauguration in America at the 1956 Dartmouth College summer workshop:

We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.

Hence, the foundations of AI are viewed as established science. What gives philosophers or any armchair theoreticians the right to question scientific fact?

Computer Science, the well-established academic discipline, tells AI what the computer is and how it works. This knowledge is simply not up for debate.

Thus, the mainly philosophical objections to AI's computational theory of mind and its computational understanding of its machine have rarely been seriously examined by the community whose goal is to create a mind in that machine.

But the issue remains: there seems to be only one machine with sufficient inner state-change speed and addressable inner storage to support human-like intelligence.

Science of the machine

That computers compute

Early AI leaders, including mathematicians Alan Turing, Marvin Minsky, John McCarthy and Allen Newell, established the computational understanding of the electronic machine called the digital computer.

This understanding said computers necessarily compute, that is, they operate by performing inner computations on inner symbols. Searle expresses this view by saying computers manipulate symbols and do nothing else.

This view is certainly partly based on the theoretical "computing machine" now known as the Turing machine. Turing machines do internally manipulate symbols such as "1", "e", "x", "::" and "0".

When electronic computers were first introduced, they replaced humans called "computers", clerks who, for instance, calculated artillery tables.

Some clerks used mechanical calculating machines with keys labeled with symbols for the clerks to see and press, and also with a mechanism for displaying or printing symbols, the results, for the clerks to see.

No one supposed that the symbols on the keys or in the display were also being manipulated inside the mechanical clockwork.

Most electronic computers have keys labeled with symbols for humans to see and press, and a means to display or print symbols so users can see them.

However, virtually everyone believes that symbols are moving around and being manipulated inside the computer's electronics, a belief expressed for example by Searle in 2014:

It is important to say immediately that a Turing machine is not an actual type of machine you can buy in a store, it is a purely abstract theoretical notion. All the same, for practical purposes, the computer you buy in a store is a Turing machine. It manipulates symbols according to computational rules and thus implements algorithms.

Abstraction

But the idea of the computer as manipulating inner symbols according to rules about the meanings of their shapes is an abstraction on the science of the electronic substrate.

The petrol engine is explained by reference to the science of the chemistry of combustion and the electrics or electronics of timed ignition.

The computing engine (the computer) is explained not by reference to the science of the chemistry and physics of the semiconductor and the electrics of the wires.

It's explained with ideas of the symbol, of rules, of simulation (ANNs aside). So we say that AI's computational explanation is an abstraction on the basic science of the electronic substrate.

This seems quite acceptable - if the abstraction is true of (is reducible to) the basic science. But are the concepts of the computational paradigm reducible to the basic science of the computer?

Given that AI's understanding is an abstraction on the basic science, there seem to be two issues: (1) is the abstraction accurate, (2) are there any other abstractions?

The electronics

The science of the substrate is actually very clear. There are no such things as symbols inside it. The symbols are on the keys and on the paper or screen (put there so humans can see them), but not inside the wires or semiconductors.

Why would a research community embrace such a blatantly false belief that symbols are manipulated inside the machine?

False belief

This seems partly due to the doctrine of simulation (that of a computer internally reacting to an internal description, that is, to internal symbols).

Also, it is undoubtedly due to the false belief that computers are Turing machines. Turing machines do internally manipulate internal symbols. Turing (p. 231):

At any moment there is just one square, say the r-th, bearing the symbol G(r) which is "in the machine".

In more detail, symbols exist on exposed surfaces of attachments to the computer proper: keyboard, screen, paper in a printer. They are put there for just one purpose: so humans can see them and interpret the meanings of their shapes.

The electronic substrate, however, is made mainly of wires and semiconductors. There are no symbols inside the wires or semiconductors or inside any other components.

But the computational understanding of the machine says there are. Computation is the manipulation of symbols (according to rules about the meanings of their shapes). Turing (p. 249):

Computing is normally done by writing certain symbols on paper.

So if computers operate by performing inner computations, there must be manipulation of inner symbols. There must be symbols inside the wires and semiconductors.

But this is a myth. At least the alchemical notion of substance, though wrong, was understandable. What adepts called types of matter, earth, water, air (essentially solid, liquid, gas), we now call states of matter.

But AI's idea that the computer's electronics contains symbols is just plain false, as every electronics engineer knows.

Chinese room argument

This falsity actually seems good news. It means that Searle's Chinese room argument assumes the wrong machine (one which does internally manipulate symbols). Hence a premise is false. The argument is unsound.

However, this is not as significant as it might seem. Being unsound means only that the argument fails to establish its conclusion. The argument's conclusion, that computers will never think, might still be true.

New paradigm

Knowledge is the basis of intelligence. So when considering how to make an intelligent machine, the initial question is: how is this knowledge intrinsically acquired?

No one seems to know. To make progress we need a method. We should ignore the big picture and instead focus on a small part, the front end of the knowledge acquisition process.

If we can adequately understand this, then we will probably have the concepts and understanding needed to progress further.

Acquisition occurs mostly or fully by way of sensory apparatus. No one seems to know how the environmental impacts of photons, molecules, etc., detected by sensors, translate into knowledge.

But we can first consider other aspects of what happens in a sensor, leaving the (very vexed) question of intrinsic semantics till later.

The initial step, then, is to understand the principles which underpin the semantics of the outer-to-inner causal interface of the sensor.

The Sensor

Sensors react to elements of the environment, and as a result send streams of electrical units to the central system, the brain, which then reacts to the units.

We agree that sensory streams contain knowledge in some form. But even though we know the physics and chemistry of both the organic and electronic sensor, we don't know the form.

AI currently understands electronic sensors with concepts of computation. In response to detecting particulate impacts of the proximate environment, sensors emit one or more streams of symbols.

Computers receive the symbols and internally manipulate them. But computation can't explain how knowledge of the sensed environment is embodied in the symbols.

As Searle's Chinese room points out, nothing in the symbols themselves indicates the nature of what is sensed. And the sensors emit (and computers receive) only the symbols themselves.

Hence there is what we can call an informational hiatus, or gap, between the outer and the inner, on the computational account of the operation of sensors.
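
A small sketch of the point (hypothetical values, my own illustration): two quite different sensors can hand the machine byte-for-byte identical streams, and nothing in the streams themselves says which physical quantity was sensed.

```python
# Hypothetical sketch: the "symbols" a computer receives from a sensor carry
# no intrinsic mark of what was sensed. Identical readings could have come
# from a thermistor or a photodiode; the interpretation is not in the stream.
temperature_stream = [0x3A, 0x3B, 0x3B, 0x3C]  # supposed ADC readings (temperature)
light_stream       = [0x3A, 0x3B, 0x3B, 0x3C]  # supposed ADC readings (light level)

print(temperature_stream == light_stream)  # True: the streams are indistinguishable
```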

Transduction

This gap is produced by the sensory process of transduction. Nothing on the outside survives the process. Nothing from the outside enters the inside. Hence nothing on the inside can know the outside.

So says computation. We need to know why this assessment is false. But we might need new concepts.

Searle is right in that the units which sensors transmit to the central system are, in themselves, inherently meaningless.

Yet something from outside must enter the inner world. We know our environment. Something other than what computation wrongly calls symbols must be in the streams.

This other thing (or things), in its own right or in combination with the "symbols", must amount to knowledge.

But what is the principle which underlies the formation of what we might call knowledge in motion?

The Chinese room

For an answer we turn to the Chinese room. This presents the computational account of the computer.

If computers will one day think, then something must be missing from this account of what enters the room from the outside, since on this account only symbols enter, and symbols in themselves are meaningless.

The room contains a door, a slot to the outside, symbols, a CPU, a program, spare symbols, and baskets - places where spare symbols are stored.

The rule book (program) requires the man (CPU) to manipulate symbols depending on their shapes (different shapes are different values of the property of shape).

On this account, the only things that could be knowledge are the symbols. But we know this account must be false, if computers will one day think.

Let's add lengths of string to the room. These are unusual in that they are all exactly alike (to the room): same length, diameter, color, and so on.

The (modified) room, in fact, can't react to any property of the lengths of string. Hence, they might be symbols to some system (which can discern variations of values of properties), but they can't possibly be symbols to the room.

Now suppose that each symbol which drops from the slot is connected to the next by a length of string. Now the symbols can be dumped in a basket, but the string preserves the order in which they dropped from the slot.

So we see that the stream does in fact contain something which is not a symbol, namely the thing whose existence is recorded by the lengths of string.

Temporal relation

This thing in the stream is the relationship of adjacency in time. And the string records instances of the relation as the stream enters the room.

The thing to determine at this point is whether the Chinese room should properly contain lengths of string, and whether the instruction book could mandate their use.

That is, whether there is such a thing in a computer which can record the relation of togetherness in time as streams from sensors enter the machine.

The answer is yes: computers do contain such things, and they are very often used.
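
For example (a sketch of my own, under the assumption that ordinary linked storage counts as the computational analogue of the string): the machine can record, for each arriving unit, only which unit arrived immediately after it, quite apart from what the units are.

```python
# Hypothetical sketch: recording the relation "arrived next after" with links,
# the computational analogue of the lengths of string. The links carry no
# information about the symbols' shapes, only about temporal adjacency.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Arrival:
    symbol: object                      # whatever dropped through the slot
    next: Optional["Arrival"] = None    # the "string" to the next arrival

def record_stream(units):
    """Link each arriving unit to the one that arrives after it."""
    head = prev = None
    for u in units:
        node = Arrival(u)
        if prev is not None:
            prev.next = node            # tie the string between adjacent arrivals
        else:
            head = node
        prev = node
    return head

# Even if the units are later dumped (stored) anywhere, the links preserve
# the order in which they entered.
node = record_stream(["汉", "字", "房", "间"])
while node is not None:
    print(node.symbol, end=" ")
    node = node.next
print()
```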

If this is true, then it rebuts the Chinese room argument for the second time. The first time was when the premise that computers process symbols was found to be false (because what computers process are not symbols).

The second time is now because what Searle calls symbols are not the only things computers can internally manipulate. This is a key moment (if the discussion about string is true) because now what AI calls symbols are merely components.

One type of component may be meaningless. But combined with one or more other types, it might yield a meaning, semantic content, knowledge.

I try to explain the nature of these multiple types of "thing" in sensory streams, and how they amount to knowledge.

The underlying principle is: all knowledge gained through sense experience is reducible to instances of the relation of temporal contiguity.