Good point. While it seems obvious to me that LLMs can never be anything more than fancy Markov chains, in my experience the majority of human "logic" doesn't operate much differently. It's very rare to encounter someone who can think or speak critically; most people regurgitate canned responses based on keywords.
I'm gonna respond to you, because I think you like GPT-4 and I do too (even if the only use I trust for now is "Summarize this **lot of text/research article** in less than 200 words", which is already great for a knowledge hoarder like me).
You can think against yourself; an LLM has trouble doing so. Also, they fail spectacularly when asked to do real-life arithmetic: "I have to buy two baguettes at one euro each, then five chocolatines, five croissants and five raisin breads at 1.40, 1.20 and 1.60 respectively; how much should I take with me?" In my head I just know within seconds that it'll be between 20 and 25 (and in fact it's 23; I picked random numbers, but they're quite easy to add).
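For what it's worth, the sum is easy to check mechanically; here's a throwaway Python sketch using the quantities and prices from my question (in cents, to avoid float noise):

```python
# Quick sanity check of the shopping total from the question above
items = [
    (2, 100),  # baguettes at 1.00 euro each
    (5, 140),  # chocolatines at 1.40
    (5, 120),  # croissants at 1.20
    (5, 160),  # raisin breads at 1.60
]
total_cents = sum(qty * price for qty, price in items)
print(total_cents / 100)  # 23.0
```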
> You should take 23 euros with you to purchase all the items.
Are you sure you're using GPT-4 and not 3.5? GPT-4 is incomparably more competent than GPT-3.5 at logical tasks like this (trust me, I've had it solve much more complicated questions than this one), and you aren't using GPT-4 on chat.openai.com unless you're paying for it and deliberately selecting it when creating a new chat.
Edit: Here's an example of a more complicated question that GPT-4 answered correctly on the first try: https://i.imgur.com/JMC7jsw.png
Funnily enough, this was also a problem that a friend posed to me while trying to challenge the reasoning ability of GPT-4. As you can see (cross-reference it if you like), it nailed the answer.
The rare humans who don't speak any language (or animals, for that matter) can still think, which shows that thought is more than manipulating language constructs.
Well, for one, humans are obviously more than a fancy Markov chain because we have genetically hard-wired instincts, so we are, in some sense, "hard-coded", if you'll forgive the programming metaphor: hard-coded to breed, multiply, care for our young, and seek shelter, among many other things.
Markov chains, like any algorithm, are hard-coded. And just as evolution hard-codes our genes, supervised learning (and, increasingly, reinforcement learning) hard-codes the weights of LLMs and other AI models.
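To make the metaphor concrete, here's a toy sketch of the kind of Markov chain being discussed: the transition table is entirely determined ("hard-coded") by its training text, just as training fixes an LLM's weights. Illustrative only; a real LLM conditions on far more context than the single previous word used here.

```python
import random
from collections import defaultdict

def train(text):
    # Build a bigram transition table: word -> list of observed next words
    table = defaultdict(list)
    words = text.split()
    for prev, nxt in zip(words, words[1:]):
        table[prev].append(nxt)
    return table

def generate(table, start, length=10):
    # Walk the chain, sampling each next word from the learned transitions
    word, out = start, [start]
    for _ in range(length):
        if word not in table:
            break
        word = random.choice(table[word])
        out.append(word)
    return " ".join(out)

table = train("the cat sat on the mat and the dog sat on the rug")
print(generate(table, "the"))
```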
Could you provide an argument for why an LLM is a fancy Markov chain that does not apply equally well to a human?