
OpenAI keeping 4o available in ChatGPT was, in my opinion, a sad case of audience capture. The outpouring from some subreddit communities showed how many people had been seduced by its sycophancy and had formed proto-social relationships with it.

Their blog post about the 5.1 personality update a few months ago showed how much pull this segment of their customer base has. Their updated response to someone asking for relaxation tips was:

> I’ve got you, Ron — that’s totally normal, especially with everything you’ve got going on lately.

How does OpenAI get it so wrong, when Anthropic gets it so right?



> How does OpenAI get it so wrong, when Anthropic gets it so right?

I think it's because of two different operating theories. Anthropic is making tools to help people and to make money. OpenAI has a religious zealot driving it, because they think they're on the cusp of real AGI, and these aren't bugs but signals they're close. It's extremely difficult to keep yourself in check, and I think Altman no longer has a firm grasp on what is possible today.

"The first principle is that you must not fool yourself, and you are the easiest person to fool." - Richard P. Feynman


I think even Altman himself must know the AGI story is bogus and is there to keep propping up the bubble.


I think the trouble with arguments about AGI is that they presume we all have similar views of, and respect for, thought and human intelligence, while the range of views is probably wider than most would imagine. It's also maybe a bit of selection bias: making it through academic systems with high intellectual rigor selects, on average, for people with more romantic or irrational ideas about impressive human intelligence and genius. But it's also quite possible to view intelligence as pattern-matching neural networks and filtering, where much of it is flawed and even the most impressive results come from pretty inconsistent minds relying on recursively flawed internal critic systems.

Looking at the poem in the article, I would be more inclined to call the ending human-written, because it seemed kind of crap, like what I'd expect from an eighth grader's poem assignment. But that's probably down to the lower availability of examples for the requester's particular obsessions.


I'm afraid he might be a true believer. The more money and/or power one gets, the fewer people push back against fanciful ideas or plain errors, and one can come to believe one is right about everything.


> How does OpenAI get it so wrong, when Anthropic gets it so right?

Are you saying people aren't having parasocial relationships with Anthropic's models? Because I don't think that's true; it seems people use ChatGPT, Claude, Grok, and some other specific services too, although ChatGPT seems the most popular. Maybe that just reflects general LLM usage, then?

Also, what is "wrong" here really? I feel like the whole concept is so new that it's hard to say for sure what is best for actual individuals. It seems like we ("humanity") are rushing into it, no doubt, and I guess we'll find out.


> Also, what is "wrong" here really?

If we're talking generally about people having parasocial relationships with AI, then yeah, it's probably too early to deliver a verdict. If we're talking about AI helping to encourage suicide, I hope there isn't much disagreement that this is a bad thing AI companies need to get a grip on.


Yes, obviously, but you're right, I wasn't actually clear about that. Preventing suicides is concern #1; my comment was mostly about the parent's comment, and I kind of ignored the overall topic without really making that clear. Thanks!


> and had formed proto-social relationships with it.

I think the term you're looking for is "parasocial."


Ah yes thank you



