
In all seriousness, what is the game plan for society moving forward as AI takes more jobs? The government doesn't seem to care. The AI labs don't seem to care.

What happens when more and more people can't afford housing, kids, food, health insurance, etc.? Nothing more dangerous than a man who has no reason to live...

I don't advocate for violence, but I do foresee more headlines like this as things get worse.


Nobody has one. If labor stops having value, the economy will stop working and society will break down long before the infrastructure necessary for the promised AI abundance gets built.

I like the idea of being "post-scarcity" as much as the next guy, but I don't understand how we get there. It's a project in itself; it doesn't just happen by magic, and nobody is actively trying to make it happen or has any logistical idea of what it involves.

We’ll also lose a huge number of jobs as soon as true AGI comes on stream, by which I mean the kind of AI that no longer acts like somebody who has read all the world’s books but can’t figure out that you always need to drive to the carwash.

We’ll lose these jobs and there will be no super abundance at that point, and not even government support.

There is the option of passing laws requiring companies to retain human employees. That to me is about the only viable stopgap measure.


It is not impossible to think that many people will just be served a UBI and won't expect much more in life. After all, if we have AI + family + housing + food (assuming government robots would take care of providing us free food in some form), I bet millions of people would be contented with it.

PS: I include AI as an important one for the future because it will be a direct way to get educated and, for example, replace college without having to pay (or paying very little).


You’ve addressed a different question, which is how satisfied with life will people be post scarcity. That’s a fine conversation to have, but it’s not the one I was having. My point is: how do we get there?

It's more likely that the people are just left out to dry instead of getting UBI.

Seeing as how austerity governments have campaigned on reducing social benefits and achieved considerable success over the past few decades, I don't see how your solution of granting people even more social benefits will ever happen. Unless law and order is about to break down, there is no reason for the rich to leave all of that money "on the table".

It made me kind of angry when I saw Dario repeatedly claiming that AI would be taking all the programming jobs any minute now. His company supposedly is working for a better future, but he's giddily talking about something that could cause millions of people to lose their homes if it were true.

Our governments have a habit of being reactive rather than proactive. People have floated the idea of UBI, but if UBI happens, it will probably be because it's the only way to avert a crisis, and the amount people get might only be enough to rent a bedroom and eat processed food.

I think in the medium term, the reaction is overblown. Even though LLMs can make software engineers more productive, you still have a competitive advantage in having more software engineers. Medium to long term though, the goal is obviously to replace human jobs.

I'm not a communist, but Karl Marx understood that the labor force gets its bargaining power because they are necessary to produce value. What do people imagine happens when the human labor force becomes essentially completely replaceable? They imagine the government will be forced to take care of the population to prevent an uprising, but they forget that the police and the army can be replaced by machines too.


You can look up what tends to happen when human labor isn't needed anymore by reading about the resource curse, which is also about not needing human labor. Only the least corrupt countries seem able to resist it, and none of those countries has a very large population, so chances are that you don't live in one of them.

It's not surprising, Dario is an absolute ghoul. Exactly the same as Altman, peas in a pod.

a one bedroom and processed food sounds frickin amazing sign me up

There isn't much compelling economic data showing that AI has been the cause of any recent layoffs or job losses, yet you speak as if we are already in the throes of an AI takeover. Sam Altman is a salesman; he sells products, that's all he is and ever has been. If you are looking for answers to why people can't afford housing and food, you should look at the politicians in power.

Irrelevant when such things are clearly the dream of Altman and his ilk.

I think, like other disruptive inventions of the past, there will be pain for many, but it will pass. Society will grow and adapt. There's some statistic somewhere I will paraphrase and/or botch that goes like: 90% of the jobs people have today didn't exist 50 years ago. I think no one can imagine what possible opportunities will manifest in the future. It's a lot easier to imagine everything that might go wrong because we evolved to see a sabertooth in the rustling leaves.

>90% of the jobs people have today didn't exist 50 years ago.

We also have 100% more people on the planet than we did 50 years ago.


> I think, like other disruptive inventions of the past, there will be pain for many, but it will pass

I agree. We can only hope that it'll be folks like Sam Altman who'll be feeling the pain, and not the 99%.


Why do you think so in the specific case of hypothetical improved LLMs that can do a large fraction of the kind of intellectual work humans are tasked with?

I think in such a state there will be no way up, no way to success, no way to real autonomy for ordinary people. Maybe you'll even have actual oligarchic rule, since so few people will contribute anything to the economy with their labour.


Really, I don't know. But there is that underappreciated concept of "elastic demand", which I'll gesture at even though I'm only casually acquainted with it. It's related to the popular fallacy of the economic pie, as if the pie were a static thing that doesn't grow or shrink. I suspect that, as the cost of producing things, including intellectual, knowledge-based products, goes down, we may just end up demanding more of those things, or better kinds.

I might look at the example of AI art. Artists were/are freaking out about it, worried that they'd lose business. I think they probably have, for some of the more utility cases for art like promotional material. However, a lot of the new consumers of AI art were not buying human art before. Some of the people making little personal projects, posting YouTube videos, making indie games, would never have paid artists to make assets for their things because it wouldn't be worth the money. I have personal experience with this on the consumer side.

Of course, when AI can do what you do for a job, it won't just be attracting currently unpaying, potential customers. Still, I'm not too confident in our predictive skills as a society to say what will or won't happen. As has happened before, many situations and opportunities will arise that are utterly unanticipated.


A few thoughts:

- Either we'll slowly become the Expanse universe (basic UBI, very few jobs, you win them via lottery)

- Or we'll go back to simpler times. Economics is supply and demand: if there is more demand for human-generated work (the same way there is demand for handmade art, vinyl, paper books, and vintage furniture), people will flock back to family and community. Think something between moving to the suburbs and the Amish. If people "ban" some products generated by AI, or prefer products made by humans, then AI will have a harder time taking their jobs. It's unlikely to happen, but think about the organic food industry, the high-end products industry, the farm-to-table / buy-local movement, and "support local artists" (farmers markets): these will likely just grow. It won't help at scale, but it's a possibility

- Or, the Dune way, banning of thinking machines altogether on the state level, I assume some countries might go that way, for religious or other reasons, but again unlikely

- Or, current AI technology will plateau just short of full AGI, and the centaur period will last longer. As long as a human + AI can do things slightly better than AI alone (in my book, that means it's not full AGI), there is an economic incentive to hire a human instead of replacing them.

- Or full apocalypse, the matrix / skynet, idiocracy, hunger games, red rising. I hope for the ignorance is bliss option...


The end game is like the Asimov world which had only a few people and everyone else was robot servants.

The trillionaires will survive, everyone else will be exterminated. This is the world that Musk and his kind dream about.


The game plan is the same as it was for globalization and previous rounds of automation: gaslight workers into thinking that they are the problem. Push all the taxes into the labor economy and all the money into the capital economy and use the inevitable budget shortfall to justify skimping on social services. That'll work until it doesn't, at which point the Ellison strategy will be employed: pay 10% of the poors to keep the other 90% in line.

Out of curiosity... why do you think this?

I think this is complete madness. I'm not someone who is in a job, so I have the luxury of thinking critically about what is going on, and... I just don't see it.

What I see is that LLMs will complement labour, and the excess returns of model producers will be very minimal (if any at all) due to the intense competition keeping switching costs to a minimum (close to zero). That's before mentioning open-source models, which I expect to continue improving.

There is no specialisation re. models at this moment in time so it is very likely to be the case.

OAI and Anthropic have to generate enough after-tax cash flows from operations to cover their reinvestment needs to continue going on. If they can't cover reinvestment then they will obviously lose as their offering will not be competitive.

There's no certainty they generate this amount of cash profit either. They still have a high chance of going bust; of course that chance gets lower IF they can keep ramping up revenues.


No. I assure you, the cost of retaining labor + AI access to augment it further is far less attractive than downsizing, then augmenting cheaper laborers to bring quality approximately up to that of the old headcount. This is exec math, and execs get paid on how much value goes to shareholders, not on keeping people employed.

How about the economic impact of all the over-investment in AI? It'll all be dumped on us, I'm afraid.

That's a separate issue. Let's stick to the issue re. labour.

Labor looks like it’s going to become more and more commoditized and AI will turbocharge all that.

what do you think is going to happen to the general laborer market when all that money goes bye-bye?

I've reread your post a few times and I can't make heads or tails of it. I don't even disagree with anything you've said, it just seems like a total non-sequitur; nothing you've said gives any reason to disbelieve that AI will put (many) people out of work.

Sounds like you have a gap of knowledge and understanding if you're not getting it.

If you can't explain your idea, I doubt it possesses any merit. A commoditization of AI as you're describing does not in any way rule out mass unemployment.

I think what you’re describing is a more general race to the bottom where everyone loses, including the AI companies.

This won’t happen because the AI companies will collude to prevent it from happening, meaning they’ll drop out of that race leaving the rest of us to claim victory.

Generous of them, really.


No, I'm not describing a race to the bottom. I'm saying that it's in Google's best interest to ensure Anthropic and OAI cannot continue to operate as going concerns and generate enough cash flow to finance reinvestment, by providing a very competitive offering.

The price of tokens is one competitive instrument for them to achieve that, but not the only one: they offer a whole lot more to enterprises that OAI and Anthropic don't.

By doing so, Anthropic's and OAI's valuations go crashing into the ground, along with their future prospects of raising funding externally.


Yes. They won't become genuinely important themselves, but they will still upset the balance between workers and capital owners, creating a more extreme situation than we have now.

soon after humans are economically irrelevant (unemployable) they will be existentially irrelevant (dead)

a system that can allocate the atoms and energy better than all of mankind won’t exist eternally to coddle hairless apes


There is no plan, besides the government using police to keep people in line.

AI will not take anyone's jobs. I, for one, don't consider AI something serious, it's still a toy, a curious tech demo, and will always remain one, outside of niche applications like NLP (there's no denying that LLMs are really good at this). The idea that anyone at all treats it seriously is just appalling to me.

Mass production and other optimizations that use economies of scale to their benefit do take jobs. There's a serious problem in the world's economy: there simply aren't as many jobs as there are people, because the need for work doesn't scale linearly with the population. AI has nothing to do with this. It's a fundamental problem we'll have to deal with either way as our society develops, AI or not. It started ages before the current tech hype cycle.


Whether you or I or any other normie thinks the tech won't leave people jobless is irrelevant. The C-suite in every company is foaming at the mouth to replace their most expensive asset, people, and companies like OpenAI are marketing to them on the premise that the tech allows them to do that. Whether it actually can or cannot do it is basically irrelevant, there's untold billions going into this bubble, so either way we're all fucked.

Either the bubble bursts spectacularly and the global economy is in the shitter because everyone is overleveraged and heavily invested into it, or it doesn't and the psychotic C-suite replaces people anyways so they can see the line go up a quarter of a percentage point.


I mostly agree. In a technological society, jobs and money are kind of virtual. The productivity gained by technology in the last 150 years made lots of work redundant, and we've been managed by economists to still organise around wage labour. This is nothing new with AI. We could have abandoned wage labour 50 years ago, during the '70s, and we got neoliberalism instead. So we'll get more of the same with AI, I guess.

> what is the game plan for society moving forward as AI takes more jobs

> What happens when more and more people can't afford housing, kids, food, health insurance, etc.?

What about when the opposite of all this happens, society massively benefits, and unemployment rates stay about where they have always been?

Will people still be yelling about the doomsday of societal collapse that has failed to materialize every single time?


How would society benefit if all the benefit collects at the top of the pyramid? The same old trickle-down? The technology isn't inherently bad, but if it comes with massive unemployment and creates social unrest while a few at the top profit... That's what makes me uncomfortable.

> How would society benefit

The same way that society has already benefited and continues to benefit from everything.

Wealth across all income brackets continues to go up.

> That's what makes me uncomfortable.

Then you are uncomfortable about something that isn't happening.


[flagged]


I doubt you’ve ever actually seen any real societal collapse into violence, or you’d know how stupid you sound.

You already know the game plan and what will happen (hint: see this very article), but speaking it out loud will get you into trouble.


