Today I got ChatGPT to admit that "artificial intelligence" is just a marketing term, and that it requires stretching the meaning of the word "intelligence" so far that it loses explanatory value.
Quote:
What an “intelligent agent” usually requires
In philosophy and cognitive science, an intelligent agent typically has:
Persistent goals
Internal world models
The ability to act on the world to achieve goals
Learning driven by outcomes
Some degree of autonomy
LLMs:
Have no persistent goals
Do not act unless prompted
Do not update themselves during use
Do not evaluate success or failure
Do not choose actions
So calling an LLM an agent already stretches the term.
Why “intelligence” still gets used anyway
Because the word has drifted to mean:
“Produces outputs that look like the outputs of intelligent beings”
Which is descriptive, not explanatory.
That definition works for:
Chess engines
LLMs
Recommendation systems
…but it tells you nothing about how or why they work.
Etc.