
You don't think a human using an LLM to generate content that convinces another human to press the launch button is a concern? Sure seems like there's more than one thing we need to do.

The exact same concern already existed without LLMs. It is called social engineering, and has been a known risk for a while.

Honestly? I really don't! What kind of content do you think would trigger that? If humans were launching nukes based on Facebook posts we'd all be long dead! A good deep fake might trick your grandma, but it's not very likely to fool military intelligence.

> What kind of content do you think would trigger that?

The kind of political propaganda that leads to the US reelecting a convicted rapist who selects another rapist to lead the Department of Defense, who then renames it the Department of War and, true to the name, starts unilaterally attacking other countries.


If trump's getting elected was due to AI, why isn't every nation electing similarly awful politicians? Hungary just elected a new president who seems a lot better than his predecessor, and a lot better than trump. The Canadian prime minister is genuinely one of the best politicians I've seen in my lifetime! The list goes on and on.

No, blaming trump on anything other than the people who voted for him is like blaming school shootings on anything other than guns: a popular American pastime, and complete and utter nonsense.


Bear with me through this digression into freedom of speech before I address your point.

The utilitarian argument for freedom of speech and expression in America finds its roots in the Marketplace of ideas.

Verification is, frankly, the task of all our markets: to set up incentives for being right.

With no government interference in the exchange of ideas, citizens would be better able to discuss ideas, including those not popular with the establishment.

Since no one has a monopoly on truth, it would be through this competition and fair traffic that society would better understand truth and thrive.

That worked, when we had newspapers that were funded, where the media landscape was not consolidated, and where we didn’t have an abundance of technology that overwhelmed our ability to verify and be informed.

Today, through entirely private forces, we can monopolize, fracture and shape the traffic in our marketplace of ideas.

Trump is very much the ideal candidate to ride the media environment. The right side of the political spectrum is simply far more efficient at providing a wrestling-style experience for its audience. Its consolidated media environment largely pays lip service to journalistic standards, and sells a coordinated set of ideas to its audience.

The Fox News effect is a case in point, and this was from the 90s.

This media model has been co-opted globally, with every party and government now providing patronage to media houses to keep them afloat, and to build their own narratives.

The citizen who engages in these media markets simply does not enter a vibrant competitive market anymore.


> Tend to like a lot of formal rules about everything.

I would amend to: what Americans don't like to accept are what they see as preventable mistakes. The least American sentiment of all is "shit happens". Americans sometimes say that, but they don't mean it. What they really mean: "this shit shouldn't be allowed to happen". Hence the rules, and (in the extreme) the litigiousness.


> what Americans don't like to accept are what they see as preventable mistakes

Most high-achieving societies are this way.


> there's not really a destination. There is only the process of improvement

Surely you can appreciate that if the next stop on the journey of technology can take over the process of improvement itself that would make it an awfully notable stop? Maybe not "destination", but maybe worth the "endless conversation"?


I think it's not only the potential for self-improvement of AGI that is revolutionary. Even having an AGI that one could clone for a reasonable cost and have it work nonstop with its clones on any number of economically-valuable problems would be very revolutionary.


"Until then" means others will pay for your decision to ignore policy when it happens. It's never on the person who -- with every good intention, full of an instinct to "build a better world" -- willfully ignores the stuffy rules in handbooks and HR guidelines. Instead, when it backfires and someone does threaten to sue, it's precisely execs, HRs, legal who have to deal with it. The rules are there for good reason.


I run our hiring process and my employer is small enough that I would be personally responsible for this



This is an extremely important question, and you’ve phrased it nicely.

We’re either handicapping our brightest, or boosting our dumbest. One part is concerning, the other encouraging.


Which part is encouraging? We rely on the extraordinary (talent and/or sheer drive) to make leaps of progress - what happens if they are handicapped? If the dumbest fake it and make it to positions they shouldn't be entrusted with, what prevents the catastrophes?


>We’re either handicapping our brightest, or boosting our dumbest.

Honestly it seems like we're doing both most of the time. It's hard to only optimize resources for boosting the dumbest without taking them away from the brightest.


The brightest will evaluate the tradeoffs properly or will have education that will give them proper evaluations of AI. Maybe some bright people will be handicapped, but it won't be the bright'est'. That handicap on the bright could also lead to new forms of talent and multi-faceted growth.

What percentage of the dumbest will be boosted? What makes a person dumb? If they are productive and friendly, isn't that more important?

What percentage of the dumbest will fall farther or abandon heavy learning even earlier?



Try to keep in mind that your attention to detail is almost certainly perceived by them as fastidious, and that quietly your seniormost colleagues and leaders may well muse: "Man, if only sublinear would loosen their standards, just imagine how much faster we'd proceed". Put another way: the fact that you can't relate to OP's problem is because you're hardwired to solve it (putting things in their place) continuously, likely without exception, which means you're paying a different cost. Try to think of _that_ cost when you bristle at their solutions.


I enjoy thinking about this perspective, but at the end of the day it's not slower in the long run to be meticulous. Quite the opposite.


I think it's less that being meticulous is time consuming than that, in the same way that things have different values to different people, things can have different costs. I feel like if I didn't put things in convenient places, they'd be difficult to find later, and I'd end up doing a lot of backtracking in the present.

Eg, I misplace my wireless headphones a lot. Something comes up that demands my full attention, so I take off my headphones. My headphones live at my desk.

If I walk to my desk, I'm likely to forget what I needed to do - there's lots of stuff demanding my attention on my desk, after all. Someone could also engage me in conversation on my way. Much of the time I'll return to my original task without issue, sometimes I'll get distracted for 15 minutes, sometimes I'll get distracted for an hour.

It's a lot cheaper to just put down my headphones. Or maybe it's more accurate to think of it as less risky.


"Meticulous" is basically defined as "the upper end of the right level of care about detail." What you call meticulous others might call unnecessarily pedantic, or obsessive. What they call meticulous, you might find sloppy.


Exactly that, it's the No True Scotsman fallacy.


"Short-lived" depends on your perspective. Cloudflare owns the rights to that trademark now; because they believe their mission furthers that vision: https://en.wikipedia.org/wiki/The_Network_is_the_Computer (and John Cage, the Sun employee who coined the phrase, said he was fine with Cloudflare picking it up: https://spectrum.ieee.org/does-repurposing-of-sun-microsyste...)


Wahlberg lampooned himself in a follow-up appearance with Samberg: https://www.youtube.com/watch?v=xYcHxF_cO8o. So: "and" is also correct.

