Problems with AI theory and practice have been identified (or alleged) since the field's early days, partly because of the ludicrously optimistic claims made by early AI leaders such as Marvin Minsky, Allen Newell and Herbert Simon.
Those claims, laughable now, were highly effective at inspiring students and at winning funding and reputations; but, because they were so expertly promoted, they also came to the attention of philosophers.
Philosophy of mind has been an established subject since at least Socrates, and arguably Parmenides, and here were the AI luminaries proclaiming far and wide that they had created (or were about to create) a mind in a machine.
Below is a selection of problems previously identified or alleged with AI theory and practice, together with one problem I want to introduce myself (which, as far as I know, is genuine and new). At present the links lead to relevant papers, but I'm working on a separate page for each problem:
- the problem of design
- the frame problem
- the problem of machine translation
- the problem of common-sense knowledge
- the problem of combinatorial explosion
- the problem of the infinity of facts
- the symbol-grounding problem
- the Chinese room argument
- the problem of encodingism
- catastrophic forgetting (deep learning)
- edge cases (deep learning)
- adversarial attack (deep learning)
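Of the problems above, combinatorial explosion is perhaps the easiest to make vivid in a few lines of code: the number of states an exhaustive search must consider grows exponentially with depth, so even a modest branching factor quickly becomes intractable. A minimal sketch (the branching factor and depths here are arbitrary illustrations, not figures from any particular paper):

```python
def states_to_explore(branching_factor: int, depth: int) -> int:
    """Total nodes in a complete search tree of the given depth.

    With branching factor b, there are b**d nodes at level d,
    so the total is 1 + b + b**2 + ... + b**depth.
    """
    return sum(branching_factor ** d for d in range(depth + 1))

# Even with only 10 moves available at each step, searching
# 20 moves ahead already exceeds 10**20 states.
for depth in (5, 10, 20):
    print(f"depth {depth:2d}: {states_to_explore(10, depth):,} states")
```

This is why early claims that brute-force search would scale to real-world problems ran into trouble: the growth rate, not the raw speed of the machine, is the bottleneck.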