Hacker News

Genuinely surprised at the extreme comments against sama here. I don’t think he’s a good steward of the technology, but I don’t think violence is funny or justified. I also don’t think it’s justified for him to claim that a negative article about him is connected to this event. That seems to imply that an “incendiary article” led to this, and that criticism is tantamount to a call to violence. He drives the conversation with apocalyptic terms, and both investors and crazy people buy into it.



> but I don’t think violence is funny or justified

Well, that's okay, because even Sam Altman disagrees with you. He absolutely believes that violence, including deadly violence, is justified - hence his contract with the US Department of War to use their systems in kill chains.

Perhaps the problem is that whoever threw the cocktail didn't use AI to select him as a target, or maybe he didn't receive payment for throwing it? Because what other difference is there?


I mostly agree with you - he seemed happy for the chance to play the victim. When the system is working, war is different because its approval has a democratic process behind it (Iran is obviously showing that the system is breaking down).

But just because horrible people exist in positions of power doesn’t mean I have to become horrible myself. I accept that there is a threshold where that changes, but I think we would disagree about whether we’ve hit that threshold. If anything, violence now just gives more excuse to justify further consolidation of power ("Look, I got attacked! The anti-AI people are crazy, and any criticism of me is just encouraging them!"). Imagine if it had been a serious attack on sama - they could spin it into some serious gains for themselves.


[flagged]


Could you explain how the Vietnamese were involved in the US democratic process that resulted in around 3 million of their people dying? Similarly, how are the Iranians currently involved in the US democratic process to veto the use of AI targeting against them? As a German citizen, how can I object to being surveilled by OpenAI products used by US agencies?

It turns out that those affected by this are actually excluded from the process by design.


One of the more curious perks of being a democracy seems to be that you can democratically (within your own country) decide the fate of people in other, nondemocratic countries, and then get to enforce those decisions by military force...

I don't think that OpenAI necessarily enforces or fundamentally respects the democratic process. After the recent Pentagon spat with Anthropic, OpenAI did not change its stance to conditionally demand lawful usage of its product.

OpenAI can market democratic values very easily, and I'm sure the White House loves that kind of dog-and-pony show. But it's pretty clear that OpenAI does not genuinely care about the rule of law, let alone about preventing humanitarian disasters in which ChatGPT is cited as an abettor.


There isn't anybody left to vote for who actually wants to solve problems for people.

The problem is that Sam is a prolific liar, as has been proven many times.

It's difficult to sympathize with the boy who cried fire.


I don’t think someone should be burned alive because they’ve lied, unless they’ve spread intentional lies that caused death or harm to others, which I don’t believe Sam has done. Personally, I find it very easy to sympathize with someone who was attacked, unprovoked, in his own home with his family, even if he has lied in the past. It’s crazy how bloodthirsty people have become lately.

I am not talking specifically about him, but when you reach a certain level in society and a large enough number of people start reading or listening to what you say, your every sentence must be extremely thoughtful, because it might have unintended consequences that are impossible to measure. That’s why so many leaders are publicly so boring and bland.

I think people just shouldn’t be burned alive.

I think Sam and people like him are *spoilers* like Jules-Pierre Mao and Dresden on The Expanse.

I think that he may genuinely believe that AI will produce a net benefit for humanity in the long term, but I am increasingly worried that these people are absolutely fine testing their creation on the world without any consideration of the harm it can do to millions of individuals.

The assertion that he is benign would be more believable if he spent a shred of time lobbying for universal economic rights for citizens, or for some model of wealth redistribution in a world where most people's work is no longer needed to provide the necessities of society.

Oh, and he's willing to let the government use his technology to mass-spy on Americans and to create autonomous lethal AI.

Pearl-clutching about ambivalence toward his fate, and comparing that ambivalence to the barbarism of a mob, gets shrugs from me.

