Combining an LLM with a blockchain to record the mechanism behind each output might make it possible to follow that mechanism.
Great summary
Is it possible that the perceived limits in reasoning and language learning are just because the ACL 2024 best paper was applied to GPT-2? Or are we actually dealing with something more fundamental - a limit imposed by the transformer architecture itself?
Transformers are a crude, outdated snapshot of the scaffolding, taken from the outside and from behind a tree line.
OK. Thanks. But do you have a better alternative?
I’ve been exploring recursive learning systems, frameworks that preserve signal integrity over time instead of flattening it into token weighting.
Less about predicting the next word, more about remembering why the last one mattered.
Can it handle the scale of transformers?
That depends on what you mean by scale.
Transformers handle size by parallelizing pattern matching. Recursion handles scale by preserving layered coherence.
Different limits, different failure modes.
Recursive systems don’t just ‘scale up’; they scale through.
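Purely as an illustration of that contrast (the names and the toy `recursive_step` update below are a hypothetical sketch, not any particular framework), the difference might look something like this:

```python
import numpy as np

def attention_scores(tokens: np.ndarray) -> np.ndarray:
    """Transformer-style step: every token attends to every other token
    in parallel; context is flattened into pairwise weightings."""
    scores = tokens @ tokens.T                                  # all pairs at once
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    return weights / weights.sum(axis=-1, keepdims=True)

def recursive_step(state: np.ndarray, token: np.ndarray, decay: float = 0.9) -> np.ndarray:
    """Recursive-style step: one persistent state is updated token by token,
    so earlier structure is carried forward rather than re-derived."""
    return decay * state + (1.0 - decay) * token

# Toy usage: 4 tokens of dimension 3.
tokens = np.random.default_rng(0).normal(size=(4, 3))
weights = attention_scores(tokens)      # parallel: an n-by-n pairwise view
state = np.zeros(3)
for t in tokens:                        # sequential: one evolving state
    state = recursive_step(state, t)
```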
I see. Are recursive systems better at answering "I don't know"? Because in precision medicine, where I am, that's the main reason transformers are getting rejected.
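For what it's worth, the simplest architecture-agnostic way to get a literal "I don't know" is selective prediction with a confidence threshold; this toy sketch is only illustrative and says nothing about whether transformers or recursive systems produce better-calibrated probabilities:

```python
import numpy as np

def predict_or_abstain(probs: np.ndarray, threshold: float = 0.9) -> str:
    """Answer only when the top-class probability clears a confidence
    threshold; otherwise abstain with an explicit 'I don't know'."""
    top = int(np.argmax(probs))
    if probs[top] < threshold:
        return "I don't know"
    return f"class {top} (p={probs[top]:.2f})"

# Toy usage: one confident and one uncertain prediction.
print(predict_or_abstain(np.array([0.95, 0.03, 0.02])))   # -> class 0 (p=0.95)
print(predict_or_abstain(np.array([0.40, 0.35, 0.25])))   # -> I don't know
```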
In this vein: https://open.substack.com/pub/billatsystematica/p/causes-effects-and-artificial-intelligence?r=2e31mn&utm_medium=ios
Nice article. I’m not discounting the value of more research, but… if it’s AI, then it is pattern-matching. And it’s not going to be explainable in human terms, as humans don’t match patterns the way AI does.
Pattern matching is profound. So I’m not denigrating AI or saying it’s something less-than or more-than, etc. Just that some questions of cause-and-effect are unknowable.
The "LLM Popcorn" resonates a lot with me, I started to encourage people to watch out for "Popcorn AI" - Demos, that are fun to watch, but nothing more than that.
If, as Baudrillard suggests, human minds are already engaged in simulation, and if LLMs merely reflect the structure of these illusions, then the plea for a mechanistic understanding becomes paradoxical. Can a simulation explain its own impulse to simulate?
This is why I find Maria's critique so powerful: it brings simulation-based science to the threshold of self-reflection. However, at that point, the demand for a mechanism becomes just another detail, albeit more specific, that is still pinned to the same wall.
The question "Why?" remains meaningful only within the coordinates of those who:
* still believe in epistemic accumulation;
* use language to interrogate language;
* mistake symbolic fidelity for structural insight.
For these people, Maria's frustration marks a boundary where answers unravel and "answering" is the wrong verb. Like a Zen koan, the question begins to reveal the nature of asking itself.