
> it's just a statistical model, text-in, text-out (and it's humans that feed the input and act on the output).

You're not thinking long-term. What happens when AI is put in charge of systems that interact with the physical world?




That is a choice a human made. Imagine someone proposed sending the outputs of a random number generator to a space laser and letting it fire at will; would we blame the number generator for the destruction it caused? You may say that LLMs are not random number generators, and I would somewhat agree, but given their current state, and how little we understand about how they derive their output, they might as well be.

So, imagine that some humans make this choice and then the AI autonomously takes over and humans can no longer stop it. Is that enough to treat the AI in such a situation as a magical alien something that can threaten your survival or mine?

One thing the whole AI debate has shown me is how many people completely lack any sort of imagination.


My point is that wild imagination about the current state of LLMs is the problem. We wouldn't even consider connecting a random number generator or a statistical model to a weapons system, but if we start thinking of LLMs as intelligent, some people would actually be tempted to do so.

I'm sorry, but do you realize it's 2026, not the 1980s anymore? Whatever you call intelligence, if LLMs don't pass your "intelligence test", there are a lot of people who won't pass it either.

And I'm pretty sure there are plenty of countries that would make soldiers out of those people and give them weapons.


The definition of intelligence hasn't changed since the 1980s. Most would say that true intelligence requires intentionality, which is not something LLMs are capable of. Defining intelligence can turn into a fairly deep philosophical debate (which I have no interest in having).


