Why question-space can't be baked into LLM weights (preprint) (zenodo.org)
3 points by h_hasegawa 17 days ago | 1 comment


I've been building an external cognitive OS for LLMs called KIS (Knowledge Innovation System) for 18 months. The core argument:

As LLMs get smarter, they converge faster, and that is precisely the problem: genuine inquiry requires non-convergent, open-ended exploration, which is structurally incompatible with how trained models work.

The math: question-space behaves like a colimit (an open, non-convergent expansion). Model weights implement closure operators (arising from Galois connections), which are idempotent: φ(φ(q)) = φ(q). An idempotent operator reaches a fixed point after one application, while a colimit keeps growing, so the two structures are fundamentally incompatible. Scaling won't fix this.
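To make the contrast concrete, here is a toy illustration (my sketch, not from the preprint): a closure operator on sets of "questions" is idempotent, so it stabilizes after one application, whereas an open-ended expansion never reaches a fixed point.

```python
def phi(questions: frozenset) -> frozenset:
    """Toy closure operator: add every non-empty prefix of each question.
    Extensive (q ⊆ phi(q)), monotone, and idempotent: phi(phi(q)) == phi(q)."""
    closed = set(questions)
    for q in questions:
        closed.update(q[:i] for i in range(1, len(q)))
    return frozenset(closed)

def expand(questions: frozenset) -> frozenset:
    """Toy open-ended expansion: always produces something new."""
    return questions | frozenset(q + "?" for q in questions)

q = frozenset({"why", "how"})
assert phi(phi(q)) == phi(q)           # fixed point: converges immediately
assert expand(expand(q)) != expand(q)  # no fixed point: keeps growing
```

The closure operator converges no matter how often it is applied; the expansion generates a strictly larger set at every step, which is the (simplified) sense in which a colimit-style process can't be realized by an idempotent map.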

KIS operates upstream of the LLM, designing initial conditions before generation begins. It is currently operational as WebKIS, with an effect size of d ≈ 0.8 in invention-support experiments.

Preprint: https://zenodo.org/records/19305025

Happy to discuss the category theory or architecture.




