I got kind of emotional when I left Reddit a few years ago during the API drama. Moderating for years, participating for like 15… it’s hard not to feel emotionally invested in that. Sure, one could simply say “it’s just a website,” but obviously it’s more than that.
Which illustrates another problem: unscrupulous actors with big names can spread whatever information they want to millions of people with minimal effort.
No, I really did abuse my reach for this one! I figured it would be a relatively harmless demo of how easy it is to affect LLM answers if you have a decently trafficked website.
Ever since the invention of the printing press, every new communication technology has reduced the effort needed to widely disseminate information (and misinformation!). So you could say this is nothing new. On the other hand, this one takes remarkably little effort.
Yes, they can. We can be glad that respectable newspapers and TV news channels have never done it and never will. You can even trust that the headlines are accurate summaries of the content of the articles. /s
So are we wrong to express any opposition, or a desire to maybe raise the bar here? Aren’t we supposed to be “the good guys”? Or should we just accept a role as a menace to the world, wildly throwing our weight around whenever we have an unscrupulous president?
Those questions are moot. There are situations where it's simply impossible to have a human in the loop because reaction time is too slow or the environment is too dangerous or communication links are unreliable. Russia is deploying fully autonomous weapons to attack Ukraine today and they will be selling those weapons (or licensing the technology) to their allies. There is no option to stop. And let's please not have any nonsense suggestions that we can somehow convince Russia / China / Iran / North Korea to sign a binding, enforceable treaty banning such weapons: that's never going to happen.
There's always an option to stop. We can choose civility over barbarity, stop trying to kill people over 1000+ year old dick waving contests, and stop threatening each other with doomsday weapons because your grandpa shot my grandpa. Just because our leaders are too stupid and cowardly doesn't mean there's no option.
I wasn't aware that the US was throwing away its moral compass for the just cause of frustrating Putin's expansionism. The new story seems to be Putin gets to do what he wants, and so do we.
If you think there's something wrong with giving our warfighters the most effective weapons to carry out their assigned missions with minimum casualties then your moral compass is completely broken. Personally I favor a less interventionist foreign policy but that has to be addressed through the political process. Not by unaccountable individual defense contractor employees making arbitrary policy decisions.
You should know that every single veteran I know ruthlessly mocks Hegseth for trying to use this term non-comedically. It’s a synonym for someone who takes their service way too seriously/makes it their whole identity. It’s almost exclusively used to mock people.
Not sure you're aware, but the joke may be on you. It's apparently Putin who's convinced Trump and the Mullahs (not the band) to choose civility over barbarity by allowing a superyacht of one of his cronies to pass through the Strait of Hormuz.[0]
Russian trolling at its finest, truly. This timeline keeps raising the bar on the absurdity quotient.
We aren’t Russian and Putin is not our leader. We can choose how we behave and operate. This is like saying we should use chemical weapons if someone else deploys one. You’re speaking as if it’s all so binary. “Do what they do or you lose.”
It's cheap and easy for someone sitting safely behind a computer to pretend to be morally superior when you're not the one who has to make hard decisions, or deal with the consequences. Chemical weapons have seen minimal use after WWI largely because they're not very militarily effective. Autonomous kinetic weapons actually work. Right now Ukrainians are building autonomous weapons to defend themselves against Russian autonomous weapons. For Ukrainians it is binary: do what they do or you lose. Would you prefer that they lose? And don't presume to tell us that the Russians can be persuaded to stop by non-violent means, that would be completely delusional.
>It's cheap and easy for someone sitting safely behind a computer to pretend to be morally superior when you're not the one who has to make hard decisions, or deal with the consequences.
This is a deeply flawed argument that has an obvious application back at you, but either way if you’re going to stoop to personal attacks I think we’re done here.
Who said otherwise? Clearly it’s about facilitating specific acts by the government. Why are y’all acting like it was so wildly broad? No one said “working with the government is inherently immoral.”
No. Their comment was:
“Any AI researcher who continues to work here is morally compromised.”
But “…doing this kind of work with the federal government” is added context that was not there, based on your own interpretation.
The language of the parent comment charges that simply working at a company that is engaging in this makes one complicit in an immoral act, and the complicity itself is immoral. I disagree with all of that.
Yes. Working at a company explicitly profiting off of doing clearly immoral acts is wrong. It doesn’t mean working for a company contracted with the federal government is always wrong.
It’s not as slick as AirDrop, and you have to sort of “prep” both devices whenever you want to send/receive anything (it’s never just ready to go), but it’s incredibly reliable and will move anything from one machine to another. Just having that consistency across literally any device is so nice.
I see this whenever an LLM’s impact is assessed. We know. The issue is scale and the ability for smaller and smaller groups (down to individuals) to execute at scale.
Fake news always existed. Now one dude in India can flood multiple sock puppet media accounts with right wing content/images (actual example) at a scale previously unimaginable.
Yes. This is pretty well established. Neural networks in general are considerably less sample-efficient than traditional ML methods. The reason they became so successful is that they scale better as you increase training data and model size, but it took modern compute power for them to become useful outside of academic toy model applications.
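The sample-efficiency point is easy to see empirically. Here is a minimal NumPy sketch, entirely my own illustration (synthetic linearly separable data, a closed-form least-squares baseline standing in for "traditional ML," and a tiny hand-rolled one-hidden-layer net); exact numbers vary with the seed, but with few samples the classical baseline typically holds up better:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10
w_true = rng.normal(size=d)  # hidden ground-truth linear rule

def make_data(n):
    X = rng.normal(size=(n, d))
    y = (X @ w_true > 0).astype(float)  # labels in {0, 1}
    return X, y

def accuracy(scores, y):
    # scores > 0 corresponds to predicting class 1
    return float(((scores > 0).astype(float) == y).mean())

X_test, y_test = make_data(2000)
results = {}

for n in (20, 1000):  # tiny vs. larger training budget
    X, y = make_data(n)

    # Classical baseline: linear least-squares on +/-1 targets, closed form.
    w_ls, *_ = np.linalg.lstsq(X, 2 * y - 1, rcond=None)

    # Tiny neural net: one tanh hidden layer, full-batch gradient descent
    # on the cross-entropy loss.
    h = 32
    W1 = rng.normal(size=(d, h)) * 0.1
    W2 = rng.normal(size=(h, 1)) * 0.1
    for _ in range(500):
        z = np.tanh(X @ W1)
        p = 1 / (1 + np.exp(-(z @ W2)))   # sigmoid output
        g = (p - y[:, None]) / n          # dLoss/dlogit for cross-entropy
        gW2 = z.T @ g
        gW1 = X.T @ ((g @ W2.T) * (1 - z ** 2))
        W1 -= 0.5 * gW1
        W2 -= 0.5 * gW2

    acc_ls = accuracy(X_test @ w_ls, y_test)
    acc_nn = accuracy((np.tanh(X_test @ W1) @ W2).ravel(), y_test)
    results[n] = (acc_ls, acc_nn)
    print(f"n={n}: least-squares acc={acc_ls:.3f}, small net acc={acc_nn:.3f}")
```

This is only a toy: on real, high-dimensional data the crossover point shifts, which is exactly the "scales better with data and compute" argument above.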
That’s not the issue I’m hitting here primarily but yes.
My concern is that I can open up ChatGPT, even with a free, “anonymous” account, and run an assembly line generating tens of thousands of words a day to pump to Twitter, good enough to prop up multiple fake accounts and cause mayhem.
Now make it thousands of people like me doing it. Now add funding and political orgs. Add company leadership that turns a blind eye so long as it drives engagement. This scale and pipeline wasn’t possible 5 years ago, even if we clearly see the throughline.
I’m not even getting into fake images, which used to require some know-how. Now there are basically no hurdles, and even if most people learn it’s fake, millions likely won’t. If you’re a little lucky, less scrupulous “news” outlets will amplify it for you, for free.
Unfortunately the answer is usually people just want to hand wave away the critique for one reason or another. “People already do that” is an easy truism for stifling discussion.
> Now one dude in India can flood multiple sock puppet media accounts with right wing content/images (actual example) at a scale previously unimaginable.
I have the faintest possible hope that such things are going to be the death knell of social media. Yeah, a lot of credulous idiots are happily giving AI thirst traps their money for stroking their confirmation bias, but that's just who's left at this point. It feels like every social media app I use is gradually bleeding the users who aren't hopelessly addicted to the dopamine treadmill, because what's left is just plain unappealing to them. That selects for the people who are most vulnerable to AI shit, which is far from ideal, but it also means those platforms are composed ever more of that vulnerable population and nobody else. And the problem for all these businesses going through that is that without a diverse, growing audience, you just become InfoWars: slinging the same slop to the same people every day, and every ounce of said slop is great for what's left of your audience but absolute garbage for getting anyone new into it. It just goes on that way until you sputter out and die (or harass the wrong group of parents, I guess).
I wish all social media sites a very haha die in a fire.
Mate, you're on a social media site right now that often has AI-generated content displayed at the top of what's "trending". Sure, the general user base does a better job here of flagging that sort of stuff, as AI seems to be a shared interest in much of the community, but it still sneaks its way by.
You’re technically right but I think we can all agree HN is significantly different from the major players. The vast majority of us see the same posts and comments, for starters. The churn of posts is also much slower. You log on 2-3 times spread out in a day and you see 90% of the main posts. Top posts linger for 24-48hrs regularly.
No media uploading, memes are few and far between (usually punished), etc.