
Unserious answer about a very serious event.

I don't believe a word of Sam's "I believe" section.




Ha, I was giving an AI bootcamp to a room full of people and someone asked my opinion of Altman. I hesitated for a second and replied that I wouldn't trust Altman any further than I could throw him, about anything.

If Graham says this guy will stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of his mouth?


[flagged]


You don’t even know what is covered. It could be anything from how to prompt to how to create your own models from numpy primitives.
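
For what it's worth, "models from numpy primitives" can be as small as a single sigmoid neuron trained with gradient descent. A minimal illustrative sketch (my own toy example, not from any actual bootcamp syllabus):

    import numpy as np

    # Train a single sigmoid neuron to learn OR, using only numpy primitives.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0.0, 1.0, 1.0, 1.0])
    w, b = np.zeros(2), 0.0

    for _ in range(1000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward pass (sigmoid)
        grad = p - y                            # d(cross-entropy)/d(logit)
        w -= 0.5 * (X.T @ grad) / len(y)        # gradient step on weights
        b -= 0.5 * grad.mean()                  # ... and on the bias

    print(np.round(p))  # -> [0. 1. 1. 1.]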

Who tf is dumb enough to not do it, though?

If I were non-technical and owned a business, and someone reputable offered to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps the century?) for, like, 500 dollars? Why not?


$500/hr maybe. Most of these are like $5000-10k per week.

It's neural network autocomplete that helps you write text a little faster, so chill with the "most revolutionary technology of the decade/century" talk. You're offending a lot of experts in far more important areas of research.

>write text a little faster

You might actually need to attend an AI bootcamp. This is not 2022's GPT, AI can deliver plenty of value for a business owner these days.


That’s so shockingly ignorant/reductive that you shouldn’t be surprised when people start ignoring you in technical conversations.

[flagged]


Yes, actually. Or at least I've thought of outsourcing my emotional needs to it, since it's quite good at conversation.

There's a whole subreddit devoted to this: http://reddit.com/r/MyBoyfriendIsAI

and the reactionary subreddit: http://reddit.com/r/cogsuckers


Yeah, people learning new technology is terrible. /s

10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".

I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.

[0] https://news.ycombinator.com/item?id=47717587


I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.

The reason he's saying that is because he doesn't want you to create that structure. He wants you to not create the laws or checks & balances on him because you "trust that he doesn't really want the power".

It has worked for him, repeatedly.


No, I don't think that's accurate. Altman has repeatedly and loudly demanded that these be created, including a new detailed policy proposal just this month (https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440...).

OpenAI has also repeatedly and quietly lobbied against them.

You linked a vague PDF whose promised actions are:

> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through newindustrialpolicy@openai.com; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.

Welcoming and organizing feedback!

A pilot!

Convening discussions!

This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.

Please don't fall for this stuff.


Yeah, a company causing mass death or other disasters is maybe the single clearest signal that it should go bankrupt and someone else should take over (if the tech really is that important).

> I know I should be used to people openly lying with no consequence, but it still amazes me a bit.

Well that makes two of us. Character seems to mean nothing today.


[flagged]


> Incendiary and false headline aside

The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.

> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?

No, but they are liable for selling a car with defective brakes, even if they don't know the brakes are defective. And if Monsanto (now part of Bayer) has to pay millions in compensation for causing cancer with a product it tested to hell and back, I don't see how it's any different when the one causing the cancer is an AI, just because the developers pinky swear that it's safe.


The headline is completely false and misleading. The bill does not indemnify AI companies for all mass murder, as the headline implies; it indemnifies them only if they UNKNOWINGLY provide a product that others use for mass murder.

If someone asks ChatGPT for places in a city where a lot of people will be around, intending mass murder but not revealing that intent, you want OpenAI to be liable? Seems absolutely crazy.


All of those are false equivalences. Let me give you a few better analogies.

Selling an axe that's known to be so defective that it breaks upon use and impales anybody nearby. Even worse, it is sold as great for axe murders.

Or a big tech company like Microsoft selling software for planning a mass murder, complete with indoctrination material and checklists of things to do.

Or an auto company like Toyota selling a car that is known to accelerate uncontrollably at inopportune moments and advertising it as great for hit-and-run campaigns.

Now let's consider a few relevant examples.

An AI model sold for planning military attacks, knowing that it sometimes selects completely innocent targets.

Or an AI model sold to families, claiming that it's safe. Meanwhile, it discreetly encourages the teenage son to commit suicide.

Or selling a financial trading AI that's known to make disastrous decisions at times.

Or selling a 'self driving' car, knowing that its autopilot frequently makes fatal mistakes.

I know that I'm supposed to assume good intentions and not make accusations on HN. So let me just make this rather obvious observation: some people here are dismal failures at making arguments that are consistent and free of logical fallacies, especially when it comes to questionable practices by big tech.


>Selling an axe that's known to be so defective that it breaks upon use and impales anybody nearby. Even worse, it is sold as great for axe murders.

Please provide ChatGPT/Gemini marketing materials advertising it as good for mass killings.


I didn't name any single AI. But who is providing the AI used by the Pentagon and Israel to plan the mass killings in Iran and Palestine respectively? I'm surprised that people can't see the obvious danger.

People championing the absolution of billionaires who create a chatbot that can't spell "strawberry" and then say it should be allowed to choose who lives and dies wasn't what I expected at the turn of the decade.

Beautiful.


This can only be an intentional misreading of the bill, or you haven't read the underlying bill at all, because the headline is patently false. It indemnifies them ONLY if they unknowingly assist in a mass murder.

If someone asks ChatGPT "hey chatgpt, where are spots in my city where a lot of people hang out on the street", then uses his car to mass murder 18 people, you want OpenAI to be on the stand? Sounds like an objectively insane position.

In a world with broad liability as you desire, the person who rented a hostel room to Luigi Mangione while he plotted murder should be held liable for aiding him, despite knowing nothing of his intentions.


Half of these people have financial interests in the companies in question, either directly (they work for them) or indirectly, or they're already part of that class. Once you realize who's behind the keyboard, there's nothing surprising about it.

He's clearly a standard pathological lying C-suite exec.

Unpopular opinion, but I think it's written quite well.

I don't think that's unpopular, it is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.

> Working towards prosperity for everyone, empowering all people

> We have to get safety right

> AI has to be democratized; power cannot be too concentrated

None of these statements, IMO, reflect his actions over the past 5 years.

> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future

I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.

Just my opinion, but it comes off as very insincere.

To be clear, what happened is still awful and there's absolutely no justification for it.


Yes, clearly not written with his own product.

If that's the case, why doesn't he trust his own product enough to write this?

He doesn't trust it for anything else either as far as I can tell. In an interview he's boasted about how he uses a paper notebook for everything all day.

it's "written well" but not at all a smart piece of writing. leading with a photo of a cute baby before engaging in an extended defense of one's own integrity is so obvious as to be insulting

Perhaps by ChatGPT

It seems a bit stilted to be LLM'd.


