Huh? A human brain is a complex-as-fuck persistent feedback system.
Every time-limited feedback system is entirely equivalent to a feed-forward system, similar to how you can unroll a for loop.
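(To illustrate the "unrolling" point: here's a toy sketch, with a made-up one-line update rule standing in for whatever the feedback system actually computes. Running the loop for a fixed number of steps gives exactly the same answer as a loop-free, feed-forward chain of the same update applied three times.)

```python
def step(state, x):
    # one update of the feedback loop (the 0.5 weight is an arbitrary illustration)
    return 0.5 * state + x

def feedback(x, T):
    # time-limited feedback: the state is fed back for exactly T steps
    state = 0.0
    for _ in range(T):
        state = step(state, x)
    return state

def unrolled_3(x):
    # the same system "unrolled" into a fixed feed-forward pipeline:
    # three copies of `step` composed, with no loop and no persistent state
    s1 = step(0.0, x)
    s2 = step(s1, x)
    s3 = step(s2, x)
    return s3

assert feedback(2.0, 3) == unrolled_3(2.0)  # identical outputs
```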
No, see, this is where we're disagreeing... It is doing string manipulation, which sometimes looks like maths.
String manipulation and computation are equivalent. Do you think that not just LLMs but computers themselves cannot in principle do what a brain does?
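(A toy example of that equivalence, nothing to do with how LLMs actually work: addition implemented purely as string manipulation, using unary notation, where "adding" two numbers is literally just concatenating their strings.)

```python
def to_unary(n):
    # represent a number as a string of 1s, e.g. 3 -> "111"
    return "1" * n

def from_unary(s):
    # read a unary string back as a number
    return len(s)

def unary_add(expr):
    # "111+1111" -> "1111111": deleting the '+' (i.e. concatenation)
    # IS addition in unary, so pure string rewriting computes arithmetic
    a, b = expr.split("+")
    return a + b

assert from_unary(unary_add(to_unary(3) + "+" + to_unary(4))) == 7
```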
..you may as well say human reasoning is a side effect of quark bonding...
No, because that has nothing to do with the issue at hand: being made of quarks is something humans, LLMs, and rocks all share. What humans and LLMs actually have in common is that they are each the result of an optimization process, and they do things that weren't specifically optimized for as side effects. LLMs probably don't understand anything, but it would certainly help them predict the next token if they did, so describing them as "only token predictors" doesn't settle the question of whether they have understanding.
...but that is not evidence that it's doing the same task...
Again, I am not trying to argue that LLMs are like people, or that they are intelligent, or that they understand; I am not trying to give evidence of any of that. I'm trying to show that this reasoning ("LLMs merely predict a distribution of next tokens, therefore LLMs don't understand anything and therefore can't do certain things") is completely invalid.