Arguments are all built in neural-syntax specifics; how they externalize as arbitrary points is largely irrelevant. Confusing the two demonstrates how humans go extinct.
The basic 'fact' is that words and the conduit-metaphor paradox are never resolvable; they are inherently contradictory, which makes words almost entirely irrelevant: gibberish.
AI can never solve this because CS never began at first principles.
In essence, AI is the most advanced demonstration of language's total irrelevance.
Now I see your point. You are an absolutist: if something is not 100% perfect and does not work for all possible cases, then it is immediately worthless, and language falls into that category for you.
I also disagree with your point and your arguments. Many sentences in your response are blatantly false. You could win the Olympics of jumping to conclusions.
Let's start with CS. CS is the set of first principles that is then applied to software, because CS is itself a branch of mathematics, starting with Boolean logic and discrete mathematics.
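To make that concrete, here is a minimal sketch of what "first principles" means here: all of digital computation can be derived from a single Boolean primitive. The function names below (NAND, half_adder, etc.) are illustrative choices, not from any particular textbook.

```python
# Sketch: deriving arithmetic from Boolean logic alone.
# NAND is functionally complete, so every other gate can be built from it.

def NAND(a: bool, b: bool) -> bool:
    return not (a and b)

def NOT(a: bool) -> bool:
    return NAND(a, a)

def AND(a: bool, b: bool) -> bool:
    return NOT(NAND(a, b))

def OR(a: bool, b: bool) -> bool:
    return NAND(NOT(a), NOT(b))

def XOR(a: bool, b: bool) -> bool:
    return AND(OR(a, b), NAND(a, b))

def half_adder(a: bool, b: bool):
    """One-bit addition, built purely from gates: returns (sum, carry)."""
    return XOR(a, b), AND(a, b)

print(half_adder(True, True))  # 1 + 1 -> sum 0, carry 1, i.e. (False, True)
```

From half adders you can compose full adders, then multi-bit arithmetic, and eventually a CPU; that layering is the sense in which software rests on discrete-mathematical first principles.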
Language's relevance is demonstrated right here: we are using it now. It is not a complete system, because some ideas can't be expressed in language and some sentences in a logical system can't be proved or disproved, but the overwhelming majority of sentences are useful.
And everything I have written is based on first principles; you can read about Gödel's incompleteness theorems for a start. They apply to LLMs because they apply to all uses of language; nothing here is specific to neural networks.
In fact, go and read about Gödel, because his work shows that no consistent formal system capable of expressing arithmetic is complete, and your worldview seems to depend on the outdated assumption that such a complete system should exist. This covers all reasoning systems and all of mathematics.
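For reference, a standard paraphrase of the first incompleteness theorem (the subscript notation $G_T$ is just a conventional label for the Gödel sentence of the theory):

```latex
% Gödel's first incompleteness theorem, paraphrased:
% For any consistent, effectively axiomatized theory $T$ that
% interprets enough arithmetic, there is a sentence $G_T$ with
T \nvdash G_T \quad\text{and}\quad T \nvdash \neg G_T
```

Note the hypotheses: consistency, effective axiomatization, and enough arithmetic. Weaker systems, such as propositional logic, are in fact complete, which is why the theorem does not say "no logical system is complete" without qualification.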
No, my approach is that there is no model. Nothing is reducible. It's a neurodynamic approach. The idea of a world model is oxymoronic: the brain doesn't reduce anything to models, which makes math and logic irrelevant. Nothing you are talking about is really a first principle; how can it be, when it's retrofitted using symbols?
Yours is a psychodynamic approach: the post-hoc representations brains create are enough for you. You expect reason to be the threshold. I see no reasons for anything, simply actions. The computer I envision uses no math.
btw, this is the Achilles' heel of CS: "Nothing is specific to neural networks."
Pretty fascinating that the illusion of counting, math, and algebra will all be superseded by analog measurement, a measurement that requires no math, simply differences in syntax. How we code that is up for grabs. How did all these math-heads take control of reality through counting? Really, a bonkers group of capitalists had nothing to do except dominate by counted value. Rather insane.