Talk:GOFAI

GOFAI

This page, formerly a redirect to symbolic AI, has now been expanded to explain why GOFAI does not mean the same as symbolic AI, but instead refers to a specific approach that has long since been extended. The GOFAI term is still heavily used to caricature symbolic AI in a way that is now quite inappropriate. If the conflation of GOFAI and symbolic AI is not addressed, the confusion will just continue.

The term could be addressed in the article on symbolic AI, but that would distract from the main article, especially in explaining the conflation and showing the examples required to justify the view.

I do not know exactly why the terms are conflated, but I suspect there are two reasons. One is simply a lack of familiarity with symbolic AI among newer students who are entirely and solely immersed in deep learning. The other may be a deliberate confusion that presents deep learning as a new paradigm to totally replace symbolic AI: ignoring symbolic machine learning to imply that symbolic AI was solely fixated on expert systems, and further denigrating the use of symbols as "aether", as Hinton has said. Gary Marcus has pointed out that there is considerable animus toward the use of symbols among the current leaders of the deep learning movement.

Whatever the reason, it is time the confusion was addressed explicitly.

Veritas Aeterna (talk) 22:26, 19 September 2022 (UTC)

Too bold for what there is

While I wholeheartedly agree in principle with breaking off from Symbolic AI, I vehemently object to doing it like this, with zero reference even to the fact that this formerly redirected to Symbolic AI. All of the current content is OR-ish and fluffy, and doesn't even refer to the former redirect, which btw embodied the general understanding -- the thing that would be found most supportable. Lycurgus (talk) 15:14, 24 September 2022 (UTC)

Actually, since I just noticed this, it's possible there was such a process. I'll come back later to verify that. 98.4.112.204 is Lycurgus (talk) 15:18, 24 September 2022 (UTC)
Thanks for your suggestions. I'm working on changes to the last section that should tighten the writing and note that this was formerly a redirect. I should have that next week; currently I have family visitors to deal with!
Veritas Aeterna (talk) 19:36, 30 September 2022 (UTC)
Acknowledged. Lycurgus (talk) 03:58, 1 October 2022 (UTC)

An inaccuracy

The article said:

Significant contributions of symbolic AI, not encompassed by the GOFAI view, include search algorithms; automated planning and scheduling; constraint-based reasoning; the semantic web; ontologies; knowledge graphs; non-monotonic logic; circumscription; automated theorem proving; and symbolic mathematics.

Is this a typo? Obviously these are all GOFAI, at least by the common definition of "the dominant paradigm of AI research from 1956 to 1995 or so".

If it isn't a typo, I suppose I should try to offer proof. Here goes. All of these were developed (or at least discussed) by AI researchers in the 60s & 70s working in old-fashioned symbolic AI. Heuristic search is the quintessential GOFAI algorithm; almost every program used it. PLANNER was a GOFAI planning algorithm. Constraint satisfaction is a search algorithm over a space of symbolic expressions (logical and numeric) -- strictly GOFAI. Semantic webs go back to the 50s. Ontologies (that is, "common sense knowledge bases") were proposed in the 70s by Schank, Minsky & others. Non-monotonic logic was part of the work by McCarthy's group at Stanford. Theorem proving (by heuristic search) goes back to Logic Theorist (56), and symbolic mathematics goes back at least to Gelernter's geometry theorem prover. ---- CharlesTGillingham (talk) 06:01, 3 July 2023 (UTC)

I fixed this by using a definition of GOFAI from a reliable source. All this is cut now. ---- CharlesTGillingham (talk) 12:24, 3 July 2023 (UTC)
Wrong -- you are treating GOFAI as meaning symbolic AI, including even aspects that hadn't been developed when Haugeland wrote his book.

The standard authority in this area is Russell & Norvig. See p. 982:

27.1.1 The argument from informality. Turing's "argument from informality of behavior" says that human behavior is far too complex to be captured by any formal set of rules -- humans must be using some informal guidelines that (the argument claims) could never be captured in a formal set of rules and thus could never be codified in a computer program.

A key proponent of this view was Hubert Dreyfus, who produced a series of influential critiques of artificial intelligence: What Computers Can't Do (1972), the sequel What Computers Still Can't Do (1992), and, with his brother Stuart, Mind Over Machine (1986). Similarly, philosopher Kenneth Sayre (1993) said "Artificial intelligence pursued within the cult of computationalism stands not even a ghost of a chance of producing durable results." The technology they criticized came to be called Good Old-Fashioned AI (GOFAI). GOFAI corresponds to the simplest logical agent design described in Chapter 7, and we saw there that it is indeed difficult to capture every contingency of appropriate behavior in a set of necessary and sufficient logical rules; we called that the qualification problem. But as we saw in Chapter 12, probabilistic reasoning systems are more appropriate for open-ended domains, and as we saw in Chapter 21, deep learning systems do well on a variety of "informal" tasks. Thus, the critique is not addressed against computers per se, but rather against one particular style of programming them with logical rules -- a style that was popular in the 1980s but has been eclipsed by new approaches. One of Dreyfus's strongest arguments is for situated agents rather than disembodied logical inference engines. An agent whose understanding of "dog" comes only from a limited set of logical sentences such as "Dog(x) => Mammal(x)" is at a disadvantage compared to an agent that has watched dogs run, has played fetch with them, and has been licked by one. As philosopher Andy Clark (1998) says, "Biological brains are first and foremost the control systems for biological bodies. Biological bodies move and act in rich real-world surroundings."

According to Clark, we are "good at frisbee, bad at logic."

The embodied cognition approach claims that it makes no sense to consider the brain separately: cognition takes place within a body, which is embedded in an environment. We need to study the system as a whole; the brain's functioning exploits regularities in its environment, including the rest of its body. Under the embodied cognition approach, robotics, vision, and other sensors become central, not peripheral.

Overall, Dreyfus saw areas where AI did not have complete answers and said that AI is therefore impossible; we now see many of these same areas undergoing continued research and development leading to increased capability, not impossibility.

So if you want to add arguments Dreyfus made in his book in the embodied cognition area, please go ahead; that would be fine! But the key point is that GOFAI is not equal to symbolic AI now -- only to symbolic AI as he saw it then. Veritas Aeterna (talk) 20:14, 3 July 2023 (UTC)

"GOFAI" is NOT an NPOV term for "the dominant paradigm of AI research from 1956 to 1995", instead it is pejorative, and not technically correct. Haugeland's book came out in 1986, and so what he called GOFAI does not fairly describe what came after.

Veritas Aeterna (talk) 23:37, 3 July 2023 (UTC)

Haugeland's GOFAI was not just rule-based systems

The article said, in the section about "rule-based systems":

Haugeland and Dreyfus also correctly pointed out various limitations, discussed in later sections.

Haugeland's GOFAI was not strictly rule-based systems, but any system that used high-level symbols to represent knowledge, mental states, or thoughts, or to produce intelligent behavior.

Haugeland's GOFAI is any work that assumes:

1. our ability to deal with things intelligently is due to our capacity to think about them reasonably (including sub-conscious thinking); and
2. our capacity to think about things reasonably amounts to a faculty for internal “automatic” symbol manipulation.

— Artificial Intelligence: The Very Idea, pg. 113

This is basically a form of the physical symbol system hypothesis (with some fine-tuning only of interest to philosophers). If you're more familiar with the PSSH than with Haugeland, you can take GOFAI to mean a "physical symbol system".

For Haugeland, GOFAI is more of a philosophical position than a branch of AI. It's an assumption that's implicit in symbolic AI projects, especially when they start making predictions, or assuming that symbolic AI is all you need for intelligent behavior. So, if we're to take it as a branch of AI, it has to be any AI research that (a) uses symbols, and (b) thinks that's enough.

So, anyway, Haugeland doesn't belong in this section, because his use of the term is slightly different from the definition used most often today ("the dominant paradigm of AI research from 1956 to around 1995"), and it is definitely directed at all symbolic AI, not just rule-based systems.

I should probably add this material to the article. --- CharlesTGillingham (talk) 07:18, 3 July 2023 (UTC)

I boldly did it. ---- CharlesTGillingham (talk) 11:30, 3 July 2023 (UTC)
I agree with you that it is incorrect to assume that symbolic AI is all you need for intelligent behavior, and also incorrect to assume that human thought is entirely and only composed of symbol manipulation, especially when we get to the unconscious level, which is more the purview of Type I automatic thinking than Type II deliberative thinking.
I don't assume either.
Veritas Aeterna (talk) 23:42, 3 July 2023 (UTC)

Dreyfus' critique was not directed at rule-based systems

First of all, Dreyfus' critique was first published in 1965, before rule-based systems existed.

He directly criticized the work that had been done by 1965, especially the "cognitive simulation" program of research at CMU (i.e., Newell and Simon), and he harshly criticized Simon's public claims to have created a "mind" in 1956 and the nutty predictions he made in the 60s. So all the vitriolic stuff was definitely not just about rule-based systems. It was about AI in the middle 1960s.

The better part is his four assumptions. His target is clearly "symbols manipulated by formal rules". The "formal rules" here are the instructions for moving the symbols around and making new ones, i.e., a computer program. They are not production rules (production rules weren't invented until ten years later).

There's nothing in his work that suggests he was talking about production-rule-based systems exclusively, or that his critique didn't apply to any work that manipulated symbols according to instructions.
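
To make the distinction concrete, here is a minimal sketch in Python (my own hypothetical illustration -- not from Dreyfus, Haugeland, or R&N). A production rule in the expert-system sense is a condition-action pair fired against a working memory of facts, while the "formal rules" Dreyfus and Haugeland mean are any instructions that manipulate symbols, i.e., the whole program:

    # Hypothetical sketch: a production rule is a condition-action pair
    # matched against a working memory of symbolic facts.
    working_memory = {("dog", "fido")}

    def condition(wm):
        # IF x is a dog ...
        return [x for (kind, x) in wm if kind == "dog"]

    def action(x):
        # ... THEN assert that x is a mammal
        return ("mammal", x)

    for x in condition(working_memory):
        working_memory.add(action(x))

    print(working_memory)  # {('dog', 'fido'), ('mammal', 'fido')} (set order may vary)

    # In the sense Dreyfus and Haugeland intend, the "formal rules" are not
    # just the condition-action pair above; they are every instruction in
    # this program that moves symbols around.

On that reading, a critique of "formal rules" is a critique of symbol-manipulating programs in general, not of the production-rule architecture in particular.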

On R&N (2021): they mention three things that aren't targets of Dreyfus' critique. (1) "Subsymbolic" stuff. Sure. Dreyfus himself says this. (2) Probabilistic models. It's true that this addresses the qualification problem, but it can't come close to solving the general problem. It's also true that in soft computing, the fuzziness overcomes the limits of symbolic AI. I'm not sure what he would say about this. (3) In the last line, they throw in "learning". Dreyfus didn't say symbolic AI can't learn, nor that he was only talking about things that can't learn. Samuel's Checkers was around, and there was a big dust-up with Dreyfus over chess programs (Mac Hack, specifically), so I have to assume he was aware of it. There's no reason to assume he was somehow ignoring it.

I realize that R&N is the most reliable source we have, but in this case, I think Dreyfus is more reliable when we're talking about Dreyfus.

I think R&N missed an important point. Dreyfus' critique only applies to people who think that symbolic AI is sufficient for intelligence, so it doesn't apply to neurosymbolic mashups, or to "narrow" uses of symbolic AI. Dreyfus never said symbolic AI was useless. He said it couldn't do all the intelligent things people do. ---- CharlesTGillingham (talk) 08:34, 3 July 2023 (UTC)

Charles -- this page is about GOFAI as it is used by those who have heard the term in AI and just equate it with symbolic AI. I am happy to let you discuss GOFAI exactly as Dreyfus intended it in a separate section. Perhaps you could explain the differences between its use now, especially as a pejorative for symbolic AI, and his discussions in the book, which may indeed be technically different and more understandable to philosophers.
Would you like to add such a section? I would be glad to let you do that. I am not a philosopher, but a computer scientist.
Veritas Aeterna (talk) 20:27, 3 July 2023 (UTC)

I agree with you that AI "couldn't do all the intelligent things people do," and it still can't by itself. Neither can deep learning. There will be a synthesis.

Veritas Aeterna (talk) 23:53, 3 July 2023 (UTC)

Most of the article cut

I added a section about Haugeland's original use of "GOFAI" and what he was talking about.

I cut all the material that was based on the mistaken idea that GOFAI referred only to production-rule reasoning systems. There was a lot. ---- CharlesTGillingham (talk) 12:28, 3 July 2023 (UTC)

I restored the article for now, as we work on it together. Veritas Aeterna (talk) 20:28, 3 July 2023 (UTC)


Working Together Now on the Article

Let me know your ideas at this point, e.g., if you'd like to add a section more closely hewing to Haugeland's intentions, versus common use now.

I will add more sources justifying that symbolic AI is not well characterized by GOFAI, too.

Veritas Aeterna (talk) 20:31, 3 July 2023 (UTC)

To be clear: (1) you've misread the word "rule" in Dreyfus & R&N

I get the feeling you're not reading what I wrote above.

It is literally impossible that the "rules" Dreyfus is referring to are production rules. Production rules had not yet been invented in 1965 when Dreyfus first published his critique. They are "instructions for manipulating symbols" -- that is, a computer program.

It is also literally impossible that Dreyfus' critique applies only to production-rule systems of the 1980s. Production-rule systems of the 1980s did not exist in 1965, when Dreyfus first published his critique. It is directly addressed to AI before 1965, because that is when it was written.

R&N does not dispute this, except maybe in the snarky final joke of the quote. I don't think they intended this joke to be taken seriously. ---- CharlesTGillingham (talk) 21:59, 3 July 2023 (UTC)

To be clear: (2) Your definition of GOFAI is strictly original research

You've defined "GOFAI" as "a restricted kind of symbolic AI, namely rule-based or logical agents. This approach was popular in the 1980s, especially as an approach to implementing expert systems."

The Cambridge Handbook of Artificial Intelligence:

Good Old-Fashioned AI – GOFAI, for short – is a label used to denote classical, symbolic, AI. The term “AI” is sometimes used to mean only GOFAI, but that is a mistake. AI also includes other approaches, such as connectionism (of which there are several varieties: see Chapter 5), evolutionary programming, and situated and evolutionary robotics.

The Stanford Encyclopedia of Philosophy, "The logic of action":

[T]here is a tradition within AI to try and construct these systems based on symbolic representations of all relevant factors involved. This tradition is called symbolic AI or ‘good old-fashioned’ AI (GOFAI).

These are reliable sources. They define GOFAI as symbolic AI.

They don't restrict it to the 80s. They don't tie it to production rules (the technique behind expert systems).

Haugeland coined the term to describe programs that were "formal symbols manipulated by a set of formal rules". The "rules" here are like the "rules of chess" -- the rules governing the formal system. They are not production rules. Dreyfus (and every other philosopher talking about computationalism) is working from this definition as well.

R&N define the term as "a system that reasons logically from a set of facts and rules describing the domain". It does not mention the 80s and it doesn't explicitly say "production rules". This could equally describe McCarthy's work on logic in the late 60s.

If the definition of the term is in dispute, then, per WP:NPOV, we need to give each of the reliable sources a voice. We need to cover the common definition, philosophy's definition, and (if you insist) R&N's definition. ---- CharlesTGillingham (talk) 22:31, 3 July 2023 (UTC)

I agree there is a controversy and a conflict between our sources. The problem is that this is a pejorative term now, interpreted by most to mean that no progress has occurred since the expert-systems bust.
So, yes, I think it would be helpful to distinguish between common usage -- which treats it as synonymous with symbolic or classical AI -- and the technical use in computer science by R&N, where GOFAI's critique is more limited. And if you want to add something on its use within philosophical circles, that's fine, too.
I also agree that what R&N are talking about is agents governed by logical rules, which can be axiomatic, as in automated theorem provers, and not just expressed in production-rule form. I'll fix the heading in the section on that.
This way we could have two or three main sections and the material I currently have could be put under Russell & Norvig's definition.
I can add sources that mention that the current use of GOFAI is considered pejorative. I think R&N have it right, but I agree that common use is to treat it as a synonym for symbolic AI; I think that is similar to treating "heart attack" and "cardiac arrest" interchangeably -- most people may mix them up colloquially, but there are key technical differences. Veritas Aeterna (talk) 01:22, 4 July 2023 (UTC)

(3) This article is a coatrack

See WP:coatrack. This topic should be discussed in a (short) subsection of symbolic AI. ---- CharlesTGillingham (talk) 22:40, 3 July 2023 (UTC)

Moving forward

I've tagged the article. Note there is a "detailed complaint on the talk page" per WP:DETAG.

After you've replied, I will add the Cambridge Handbook definition at the top, mark R&N's definition as specific to them, and re-add the section describing Haugeland's use of the term. I will also tag other sections that are off-topic or unclear and leave them to you. I also leave it to you to deal with the coatrack issue. See below. ---- CharlesTGillingham (talk) 23:03, 3 July 2023 (UTC)

I have a solution

I have a solution that I think will satisfy both of us. We split our coverage of "symbolic AI" into two parts:

  1. The actual research program in AI using high-level symbols.
  2. The straw-man version of symbolic AI that had such an enormous influence on other fields.

We put the first in symbolic AI, we put the second in GOFAI. Specifically:

  1. Make "GOFAI" into a philosophy article and treat it as a term from philosophy (which is how it was coined in the first place, after all).  Done
  2. Remove any philosophical discussion of symbolic AI from the article symbolic AI except just to mention it exists -- say, a half-paragraph section, or a sentence in the lede.  Done (unless I get reverted again).
  3. Remove all other uses (in symbolic AI and throughout Wikipedia) of "GOFAI" as a synonym for symbolic AI, UNLESS they are talking about the philosophy of AI.  Done
  4. Cover the philosophical criticism in here, from its own point of view (rather than the view from AI & its practitioners).  Not done, but there is a tag indicating that someone should do it.
  5. Cover the influence of symbolic AI on intellectual history in this article -- e.g., the cognitive revolution, the founding of cognitive science, computationalism, functionalism (philosophy), cognitivism (ethics), cognitivism (psychology), education, art.  Done (At least, there's a paragraph.)

Does this seem good to you? ---- CharlesTGillingham (talk) 16:25, 4 July 2023 (UTC)

Wow! Maybe this Wikipedia collaboration really can work... this looks quite promising. Let me see if I am following you first:
1. GOFAI becomes a philosophy article -- then you can edit at will. In the See Also, you have a link to Symbolic AI.
2. I remove the explicitly philosophical parts from the Symbolic AI article, specifically by changing the Controversies section, in just the subsection currently called
"Philosophical: critiques from Dreyfus and other philosophers"
to...
The Qualification Problem
Now we turn to attacks from outside the field specifically by philosophers. One argument frequently cited by philosophers was made earlier by the computer scientist Alan Turing, in his 1950 paper Computing Machinery and Intelligence, when he said that "human behavior is far too complex to be captured by any formal set of rules—humans must be using some informal guidelines that … could never be captured in a formal set of rules and thus could never be codified in a computer program." Turing called this "The Argument from Informality of Behaviour."
REMOVED PART: Similar critiques were provided by Hubert Dreyfus...calling it GOFAI ("Good Old-Fashioned Artificial Intelligence").
Russell and Norvig explain that these arguments were targeted to the symbolic AI of the 1980s:

The technology they criticized came to be called Good Old-Fashioned AI (GOFAI). GOFAI corresponds to the simplest logical agent design described ... and we saw ... that it is indeed difficult to capture every contingency of appropriate behavior in a set of necessary and sufficient logical rules; we called that the qualification problem.

Since then, probabilistic reasoning systems have...
3. Yes
4. By 'in here' I think you mean this GOFAI article, which you might move to a Philosophy category of your choosing.
5. This part I am not quite sure I understand. At first I thought you were inviting me to add new material, where appropriate, in some of the other articles -- but only now do I think I understand: you're saying that on the GOFAI philosophical page, which would be this current page, you would cover how symbolic AI relates to other philosophical areas, e.g., contributing to cognitive science or changing ideas in other areas. Yes, that's fine!
On the symbolic AI article most of what I added covers contributions in computer science and AI, along with some practical applications such as symbolic mathematics, ontologies, automated theorem provers, automated planners, etc. So, yes, you could add more general commentary.
So, if I follow what you are saying, as per the above, I think you have come up with a brilliant solution that we will both be quite happy with!
I don't want to completely eviscerate the current Controversies section in the Symbolic AI article, as I think it is useful and covers these other areas. I could also add something like -- "See the GOFAI article for detailed discussion of critiques from a philosophical standpoint."
What I want the symbolic AI article to convey is that a lot of work and contributions to computer science were made in this area, many still used today, and to convey some of its viewpoints, e.g., that knowledge is important, a world model is important, and that symbols have a key role in all this, but not to argue that people think symbolically unconsciously.
(Similarly, it is also not true that people think the same way as deep learning approaches do, e.g., with back-propagation, linear algebra, or non-spiking neurons.) Veritas Aeterna (talk) 18:34, 4 July 2023 (UTC)
Okay, we agree.
You said: "I don't want to completely eviscerate the current Controversies". Yes, of course not.
GOFAI handles the critiques coming from philosophy, psychology, linguistics, and other fields: Lucas, Dreyfus, Weizenbaum, Searle, as well as the critiques of cognitivism in general. All very brief.
Symbolic AI mentions the technical limitations coming from computer science & AI (e.g., the items listed below).
I would frame these as unanticipated limitations rather than critiques, and keep it really brief -- just a paragraph explaining it for the layman, a short quote, and links to the appropriate articles. ---- CharlesTGillingham (talk) 03:21, 5 July 2023 (UTC)
Removing the part on the Qualification Problem does eviscerate the section, so I restored it. Please don't make radical changes without discussing them first (!).
I'm not sure what you're asking for regarding the other items:
  • intractability
  • commonsense knowledge / frame / ramification / qualification
  • Moravec's paradox / Brooks
  • LeCun / Hinton / AlexNet
Intractability is addressed to some extent by heuristics and meta-level reasoning.
And choosing less expressive logics, such as description logics, for faster reasoning with ontologies.
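As an aside, here is a toy sketch of the heuristics point (my own illustrative Python, not from any of the sources under discussion): an admissible estimate h steers A* toward the goal, so it expands far fewer states than blind search would.

    # Hypothetical sketch: heuristic (A*) search mitigates intractability
    # by expanding the most promising states first, ordered by f = g + h.
    import heapq

    def a_star(start, goal, neighbors, h):
        frontier = [(h(start), 0, start, [start])]  # (f, g, state, path)
        seen = set()
        while frontier:
            f, g, state, path = heapq.heappop(frontier)
            if state == goal:
                return path
            if state in seen:
                continue
            seen.add(state)
            for nxt, cost in neighbors(state):
                if nxt not in seen:
                    heapq.heappush(frontier, (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
        return None

    # Toy use: walk the integers from 0 to 5; h = distance remaining.
    print(a_star(0, 5, lambda s: [(s - 1, 1), (s + 1, 1)], lambda s: abs(5 - s)))
    # -> [0, 1, 2, 3, 4, 5]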
The Qualification problem has a section now under controversies, and I tried to remove more of the discussion of philosophy, while leaving the section coherent.
The Type I / Type II reasoning view addresses some of what Hans Moravec raises, in that Type I reasoning is indeed much better done by neural approaches for perceptual problems. BTW, I met Hans Moravec at the Stanford AI Lab.
Rodney Brooks is covered in the section on controversies.
LeCun / Hinton / AlexNet are all deep learning approaches, so it seems like they belong there.
Is it more that you want these discussed as problems that were not initially anticipated?
I'm still attending the AAAI conferences. They embrace both symbolic and neural approaches. Both have limitations. I think the GOFAI discussion is painting all those who were in symbolic AI as embracing the view that reasoning is symbolic all the way down, whereas that was more a problem of early believers. I did not know anyone who believed that in all my time in the field.
It would be like painting the neural net people as believing that connectionism explains everything -- without need for symbols, such as we are using here. There is / was irrational enthusiasm in both camps. Veritas Aeterna (talk) 22:31, 5 July 2023 (UTC)
Right. Your last two paragraphs are absolutely correct -- "both have limitations" -- absolutely. (I'm sorry if I also expressed a little irrational enthusiasm myself along the way.)
My hope is that, between these two articles we can describe the limitations of symbolic AI, from a technical point of view in symbolic AI and explain what that means for intellectual history here in GOFAI.
It's important for symbolic AI to report that (1) the people who started symbolic AI expected it to bring us artificial general intelligence in a few decades, and (2) they found specific, technical limits that prevented them from being able to scale it up to general intelligence.
I think those limits were (1) Intractability (which can be mitigated a little bit by heuristics and Monte Carlo methods, etc, but can't really be overcome) (2) Knowledge -- including common sense knowledge (qualifiers, ramifications, facts), acquiring knowledge from experts, and trying to capture expert intuitions in the form of rules. (3) the incredible difficulty of capturing embodied skills, perception, anomaly detection, expert intuition etc, etc. with symbolic reasoning, (4) and the relative ease of doing these with stochastic gradient descent, which is probably how the brain does them in any case.
But I leave it to you to organize this how you see it from inside the field.
None of these limits were known by Newell and Simon in 1960. All they knew was the long tradition in Western intellectual history that "reason" -- abstract symbolic thought -- is the "highest" faculty of "homo sapiens". They had produced it in computers. It followed that their programs could be scaled up to produce human-like intelligence. It wasn't hubris or hyperbole -- it was what made sense, assuming the western rationalist tradition was right.
Haugeland showed how this thesis goes all the way back to Plato, passing through Shakespeare, Hobbes, Descartes, Hume, Locke, to the logical positivists of the 30s and then computationalists and cognitivists of the 60s.
Continental philosophy disagreed (Schopenhauer, Nietzsche, Husserl, Heidegger, Foucault, Feyerabend, Dreyfus), but it was unclear and unconvincing, and worse, they really had no concrete evidence to prove what they were saying.
But the experience of symbolic AI is evidence of another kind -- it's convincing evidence that the "rationalist" and "cognitivist" traditions in Western philosophy have been wrong from the beginning.
It turns out abstract symbolic reasoning has severe weaknesses -- it doesn't provide God-like sapience on its own. You need other aspects of human psychology -- hunches, guesses, intuitions, prejudices, "feelings" about "how things work", mountains of common sense knowledge and expert knowledge. You need Kahneman's "system 1". You can't sit in Plato's cave and work everything out -- you have to be out in the world, thinking like an animal with your animal brain. That's the best we have. (And it even suggests that God-like sapience might not actually be possible in the real world.)
All of this stuff is important, but it's not "symbolic AI" and doesn't belong in that article. But other fields like philosophy, psychology, intellectual history, and future studies need to understand what exactly it is about symbolic AI that stopped it from creating general intelligence. Because this helps explain what parts of cognitivism are wrong, and what parts of the Western rationalist tradition are wrong. It's a counter-example -- a data point for these fields.
Our job is to make sure that data point is as accurate as possible. We need to say exactly what the limits were, without overstating them or trying to paper over them, without hyperbole or obfuscation.
I hope that our back-and-forth helps these articles to do that -- to explain things accurately. I know we've had an occasionally acrimonious dialog here, but I appreciate that you have made these articles more accurate and helped stamp out hyperbole.
Anyway, I've written too much here. I'm going to add some (toned-down) version of this to the article. The first half of Haugeland's book lays the intellectual history part out, and that will be my main source. (Right now, I'm on the road and the book is at home, but I'll get to it eventually.)
Anyway, thanks for your good work. ---- CharlesTGillingham (talk) 00:00, 12 July 2023 (UTC)