This is temporary. What is the SKILL.md equivalent going to be in five years? In ten? Don't you already see a pattern emerging around solutions that encode that "professional experience" into the tools themselves?
These LLMs can already incorporate our entire cultural corpus yet your "professional experience" is the threshold they won't cross?
The word “incorporate” is doing some very heavy lifting in your assertion. These LLMs already have access to the whole corpus of architectural knowledge and software best practices, and yet they’re unable to reliably implement those best practices. Why not? Why do they often make completely unintuitive decisions, even when repeatedly prompted to ask clarifying questions?
To be clear, by that and "cultural corpus" I meant their skill with natural languages. It is well known, for instance, that early LLMs were curiously better at composing sentences in English than at doing basic math.
Regarding such formal reasoning we have already seen marked improvement in the last year or two alone. The question is how this weighs on your prediction re their capabilities in the next two, five, ten, etc. years.
What are the properties of LLMs that have convinced you that there remains emergent complexity (e.g. the “ability” to formally reason) that we have not yet seen?
There may be gains to be had in such emergence but that is not where I see the gains in the next five years. Those gains will be made by connecting LLMs more robustly with formal reasoning, which computers are already very good at. Continued iteration on connecting these right/left brain faculties could then lead to further emergence down the line.
The present notions of harnesses, structured output, or looping the LLM in with some external state or sandbox, be it debugger output or embedding into a runtime, already show early promising results along these lines. I see no reason to believe these gains will not continue over the next five years.
If you have some theories to the contrary, I am all ears.
Extraordinary claims require extraordinary evidence, not the opposite. There’s no current evidence to suggest limitless progress, or even superlinear progress with regard to compute and energy. My guess would be sublinear or even logarithmic progress vs. linear growth in compute and energy, as that’s how most physical systems behave.
No one said unlimited progress. Let's not resort to straw man claims.
If you think the potential of LLMs is overblown feel free to short the market. I don't pretend to know the future. But if I may, I don't think you are framing the debate in the correct terms. Evidence is an important facet of human affairs. So is risk. Best of luck with your predictions.
I really don't like this framing - it's hard to short a market at the best of times, let alone when governments have a vested interest in tech being too big to fail to compete in the global economic arms race - see Intel's stock in the past few months.
I agree with you both: undoubtedly there are still massive gains to be made with the frontier models we have today through tooling and iteration, yet I do not believe there's sufficient evidence to claim we are rolling towards AG/SI on an exponential curve without some additional breakthroughs, given the jagged edges and the fundamentally linear nature of the data used to train models.
> Why do they often make completely unintuitive decisions
Most likely because you haven't constrained their behavior in your prompt. You're making the assumption that they "understand" that using best practices is what you want. You have to tell them that, and tell them which practices they should use.
They already fail to consistently follow very simple and concrete instructions like “Please do not ever mock this object, always properly construct it in your tests”, so I’m not sure how they’re going to adhere to more vague and conceptual architectural paradigms. This is a problem with generative AI in general - image generation has similar limitations.
The capacity of the person prompting it to understand is the threshold they won't cross. They can squeeze the gap as much as possible by dumbing down answers or slowly ramping up information complexity but there is a limit to comprehension.
This is an interesting answer for questions about human agency and accountability/personhood questions but I don't see how it leads to increased confidence in the role of human as SWE.
If LLMs get good enough, one might be tempted to ask so what if most humans can't understand the output? Human civilization has by and large been a constant exercise in us collectively accomplishing more and more while individually comprehending less and less.
Our ancestors likely understood more about hunting live game or murdering each other than we do. Most of us do not consider that a great loss. Most of us living in the modern world depend on things we don't fully comprehend. I'm just not sure how this would lead to being reassured re the human as SWE.
We don't need as many hunters because we've domesticated sources of meat. We still need ranchers, butchers... an entire supply chain to get meat to consumers. We didn't remove humans from the loop, we just created specializations.
Software specialization might look very different in 10 years but I doubt that technically specialized humans will be completely removed from their professions. We might not be carrying bows and arrows anymore but we will be carrying the equivalent of a rope and a Stetson.
Ranchers, butchers... and factory farms. Most meat Americans consume has had very little interaction with a person until it is being devoured on the plate.
I appreciate your points. I agree with you that not all "technically specialized humans will be completely removed" but let's not pretend the comparison is going from a caveman with a spear to a cowboy with a lasso. If you concede it is likely to be very different at some point calling it SWE is no longer useful.
I think SWEs would be better off realizing they have enjoyed a relatively extreme level of privilege, and rather than trying to hold onto it, use what time they still have to advocate for a more egalitarian society, even if that means giving up some of their gains. Otherwise, speaking of farming, the mass layoffs to come, after software has been disrupting blue-collar jobs for decades, will really be a chickens-coming-home-to-roost moment.
Now you're arguing against your own analogy? Hunter was a ubiquitous position in human society prior to the domestication of animals: 50% of the workforce in hunter-gatherer societies. Today, 12 millennia after the domestication of wildlife, that number is down to 9-14% of the global workforce dedicated to the production, distribution, processing, and sales of meat (not including cooked food), according to Opus.
Considering that only 1% of the US workforce was a software engineer I expect similar workforce optimization to occur in software engineering specializations over the next 12,000 years. /s But seriously, it's never going to zero.
Do you really want to live in a world where nobody understands the software that manages a nuclear power plant? Or medical devices? Or financial software? Or radio transceiver firmware? Even something as boring as a database, left un-understood, could have disastrous effects if it were the government database managing people's IDs. And even if it worked fine for years, what would happen if a bad actor influenced models to generate code with security issues? If nobody can comprehend the output, how would anybody be able to reason about the danger? This is even grimmer than this:
https://www.citriniresearch.com/p/2028gic
We live in a world with nuclear weapons. Somehow we all cope and get up every morning. I think you are missing the point - the world is already grim. It always has been. What about human affairs say in the last century alone makes you think human oversight is some panacea? The impetus for civilization was not some innate desire for financial systems or medicine. It was not having other humans murder you. The Leviathan is already here.
The article you shared has little to do with this. Questions of how to divide up the gains technology creates are separate from questions about the technology itself. Tbh I found what you shared so boring I could barely finish it. I have already, in this thread, made an exhortation to support politicians who commit to erasing inequality. The idea that LLMs can only exist with inequality is nonsensical. The only thing grim about what you shared is the lack of political imagination. It's boring.
Let's also not forget a lot of the market edge of SWEs comes from knowing how to navigate these parts. The fact you needed to be reasonably fluent in a language was already a barrier to entry, which meant that in better times new grads could earn six figures at their first job just for putting in that effort.
Maybe you will still be needed. That is one question. How well you will be paid and treated when the barrier to entry is now "I can think" is another. As the parent indicates, most people doing software are not doing things akin to pure math. I don't think most SWEs want that lifestyle anyway.
It's ok. You shouldn't fight the coming change. Instead use the time we still have to fight for more equal outcomes (vote for politicians that support UBI, Medicare for all). The longer you delude yourself that you are uniquely needed in an increasingly mechanized world the worse all our outcomes will be.
The barrier to entry to generating code may be "I can think", but the barrier to entry for solving hard, distributed/multi-faceted engineering problems still remains quite high - agents can't really do this still to a decent level of efficacy reliably.
The progress models have made in the last 5 years aren't convincing me they'll bridge that gap too soon, although I can see how some people are convinced by how decent agentic harnesses make things. I know it's really easy to get very hyped with the current state of the technology, but try to have a bit of skepticism.
Qualified immunity is a stain on American jurisprudence and an insult to the idea of America as a free society.
Demand of people who want your vote in the coming elections that they support a legislative correction to this judicial activism. This country was founded in large part because 250 years ago the British sent soldiers into American cities and American homes, with powers to detain, arrest, and deprive of life and liberty with no accountability. If a colonist was wronged by a soldier, the Crown would force adjudication in favorable courts back in Britain, effectively making its soldiers immune from accountability.
The fact our judicial system has seen fit to independently replicate this injustice that none of us voted for is a crime against the very notion of what it means to be an American. Hold your leaders accountable.
The parent said American corporations. No one with any sense wants a dependency critical to their state or private company sitting under the direct control of America any more.
I think it's intellectually dishonest to use false equivalencies to dismiss the absolute accumulation of humanity's knowledge under very specific brands for profitability. When I build something using ChatGPT, especially if I was unable to build it before, I arrive at a result I could previously have reached only through "hard work", by skipping the "hard work" part.
Now, many will argue that you wouldn't have poured time and energy into that endeavour anyway, so it's fine. But the crucial part missing here is the effort. We're about to witness the side effects of society-wide reliance on LLMs, the same way we're still paying the price for the social media boom: misinformation, propaganda, echo chambers and algorithmic bubbles.
Notice that none of the above actually invented misinformation; they just magnified an existing problem. LLMs magnify the need to "get it done, fast", but I don't see the engineering excellence everyone promised me at any level.
In the US, much of the woods are owned by corporations too. Those that aren't are, in theory, owned by the public, but the oligarchs work hard to hollow that out so that practically public lands are owned by them too.
It's an adversarial economy. Using an LLM at work doesn't mean the work is challenging. A lot of jobs are "bullshit jobs". People are using LLMs because it gives them back time. If they don't use it, their colleague will and make them look bad.
The company might fire you tomorrow. Fundamentally, if an LLM can do the job, it's not just employees at risk; it is also the company. There is actually a lot of symmetry between how companies delegate to employees and how employees delegate to LLMs. You can follow the logic to conclude a lot of companies are then bullshit companies. This is not a problem for the individual to solve. Your job at work is akin to the company's: earn the best return while you still can. Wasting your time for essentially the same output at a slower pace is a bad return.
When people get laid off en masse this incentive structure will have to be altered. But telling an individual to ignore their basic economic incentives until then is unlikely to work.
I have also come to the conclusion independently that a lot of companies are bullshit companies, maybe that is closer to the core issue. For the individuals who do have some choice in the matter, I think it is important to hold on to their skills by continuing to use them. It sucks that our work culture is so competitive, but from that angle I believe they will stand out eventually as more competent.
Most companies are real; it's just that a good fraction of the work is mostly unnecessary: partly because of the overhead of business activities that are unneeded most of the time, partly because we don't know what work will be useful, and partly for silly social reasons.
I keep coming back to the idea that all the upheaval combined with all the new tools at our disposal will empower and motivate people to start businesses that challenge the status quo. I've lived long enough to see that play out at scale, it is basically how we got Google. That might not sound encouraging, but Google was once a really inspiring company and one of the best places to work.
They will never release them. The distraction will morph into all the electoral subterfuge they will attempt as they increasingly fear losing power at the polls. They know what's in those files and what will happen to them if they lose in 2028. Thus they will be even more incentivized to behave badly.
If gas prices double from here it will be less stupid distraction and more overt authoritarianism... the ICE question has not been settled. ICE is still violating your neighbors and making a mockery of what is supposed to be a society of free people. They merely thought the overt city takeovers and shooting Americans in the head had become a bad look that wasn't worth it politically. The persistence of this calculus is not inevitable.
As of March 18, 2026, Immigration and Customs Enforcement (ICE) reported that 46 people died while in their custody or detention facilities since the start of the second Trump administration in January 2025. The number of deaths of people in detention during 2025 exceeded the highest seen in over two decades, and deaths in 2026 are on track to meet or exceed that number. President Trump implemented immigration policy changes focused on increasing interior enforcement efforts to support mass deportation, which increased the number of immigrants detained by ICE to over 68,000 as of February 7, 2026, an increase of over 70% from the 39,000 immigrants held in detention at the end of the Biden administration in December 2024.
An export ban wouldn’t really help much: US oil production is (now) predominantly light crude, while US refinery capacity is oriented towards heavy crude from the gulf or Venezuela.
We produce more oil than we use, but we can’t refine it all.
Refining light crude is essentially the same process as heavy crude with fewer steps. US refineries are designed to handle virtually any kind of crude and are highly configurable. That flexibility is part of what makes their refinery business so successful. US refinery capacity is ~50% larger than their domestic oil production; it is a major export business for the US.
The real cost to not processing heavy crude oil is that many refinery assets will be sitting idle because they aren't needed to process light crude.
> An export ban wouldn’t really help much: US oil production is (now) predominantly light crude, while US refinery capacity is oriented towards heavy crude from the gulf or Venezuela.
That's not too much of a problem. A refinery tooled for heavy sour crude technically can process light, heavy, sour and sweet crude - the other way around would be an issue because you'd need to construct hydrocracker and desulfurizer stages first.
The issue is a financial one. A refinery is often a multi-billion dollar asset, and having significant parts of its value sit around unused for prolonged times means write-offs which means stonk number go down, and as we all know there is nothing more important for the economy than the stonk market.
Another, but smaller, problem is that running a refinery on different crude compositions means that the volume ratio of the various oil products changes, and the refinery may find itself sitting on more, say, heavy fuel oil than it can store, sell and ship. And once the tanks are full, production has to stop.
It could help in the long term by underwriting refinery retooling. The problem is you'd almost certainly need public support for those investments, given they could be undone by the lifting of such a ban. (An export ban would also trash America's reputation with our import partners.)
It may be a bad idea (for various reasons), but it is one already being floated. Here is a press release just today from a California congressman who is proposing a bill to this effect.
If you agree with the parent that Americans are going to feel more energy market pain in the coming months I would imagine the pressure for this will only increase.
It’s actually harder (requires more advanced technology) to refine heavy and sour crude. The US refining industry processes this type of oil mainly because it’s more profitable, not because of some limitation.
American oil on the other hand (As in extracted out of the ground) is actually too high quality for domestic consumption therefore gets shipped overseas and sold at a premium. The weird economics of this are made possible by globalization. While it’s not fungible on a dime it’s easy to solve and the US really does hold all the cards when it comes to the petroleum industry.
Fake numbers, but I have heard it is something like: the US produces 100 units of light crude, exports it all, and imports 50 units of heavy. Net exporter, but the stuff our gasoline refineries use domestically comes from elsewhere.
Technically, the refineries can be retooled to take a different blend, but it is expensive to do.
US crude oil is exported to foreign refineries for blending purposes. By blending low-quality crude with high-quality crude it can reduce the total costs to the refiner even after accounting for the fact that you had to buy high-quality crude to improve the properties of the domestic crude.
"U.S. crude oil and lease condensate proved reserves decreased 1% from 46.4 billion barrels to 46.0 billion barrels at year-end 2024" [1]. At February's 180 million barrel/month import rate, that's only 21 years of supply in the ground.
Reliance on oil, for America, is a long-term reliance on foreign oil.
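For what it's worth, the "21 years" figure follows directly from the two numbers quoted above; a quick sketch of the arithmetic (using only the cited reserve and monthly-rate values, which I have not independently verified):

```python
# Back-of-envelope check on the "21 years of supply" claim above.
reserves_barrels = 46.0e9       # proved reserves at year-end 2024 (cited figure)
monthly_rate_barrels = 180e6    # February's monthly rate (cited figure)

# Years of supply = total reserves divided by annualized consumption rate.
years_of_supply = reserves_barrels / (monthly_rate_barrels * 12)
print(f"{years_of_supply:.1f} years")  # roughly 21.3 years
```

So the figure is consistent, with the usual caveat that proved reserves and monthly rates both move over time.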
Polymarket is already working on a full return to the US market aided by sympathetic policy changes of the current administration.
Additionally, the claim "most of the companies registered in Delaware are not trying to dodge US federal regulations" strikes me as dubious. Every company seeks to lower its regulatory burden. If they're not finding loopholes, then often they're the ones writing the regulations and funding congressional campaigns. I'm not sure the claim Polymarket is unique re its relationship to the government in this respect is credible. They seem to be working quite intimately with the current administration on returning from their Biden era "ban".
There’s dodging and then there’s _dodging_. If you are operating in a legal gray area, that’s an unsavory business practice that is, as you say, widespread. Then there’s operating illegally in full view of everybody. I do not personally subscribe to the idea that a thing is OK just because one is not currently being prosecuted. Polymarket (and Kalshi) is bad for the country, their claims to the contrary are highly dubious, and it’s a case where not only are they actually in the wrong, they are quite specifically legally wrong.
You may feel that way, and I may sympathize. But I really think you are over-indexing on your own personal belief that they are "bad for the country". If we follow your logic then a company is doing more _dodging_ simply on the basis of one's own moral aversion. So maybe if I'm an environmentalist I think coal companies are especially dodgy. If I'm a pacifist maybe a defense contractor. If I'm an evangelical maybe a company that contracts with the government re some reproductive care.
"operating illegally in full view" vs "legal gray area" is not a determination that can be made based on your subjective view of what "makes a thing OK". The fact that you pair the accusation that they are "operating illegally in full view" with the notion that you can condemn a thing that is not "currently being prosecuted" only further undermines your argument. Your moral objection is your judgement to make, the question of what is illegal cannot be. The latter is exclusively the domain of the courts, not any individual (or collective) moral outrage. Your seeming desire to conflate the two to satisfy your personal feelings unfortunately undermines whatever cogent points you may have re their legality on the merits.
The fact is they are currently working with the government on a return to the US markets. Engaging in a government process such as they are does not resemble anything akin to "operating illegally in full view of everybody". You would be more convincing if you levied your criticism in more reasonable terms. I personally suspect there is a lot more "gray area" here than you seem to contemplate.
> The fact is they are currently working with the government on a return to the US markets. Engaging in a government process such as they are does not resemble anything akin to "operating illegally in full view of everybody".
This is a seriously tiresome argument. How about this? Feel free to cite how their recent moves will enable them to
1. satisfy regulators that they are not violating the Commodity Exchange Act;
2. satisfy other parts of the government that they are not simply illegal gambling;
3. satisfy the states that are actively suing them RIGHT NOW.
It does not matter that there are friendly people in the administration. The fact is that they were told to wind down their markets and leave. They did not do this. Even if their behavior may become legal in the future, it is currently illegal.
My personal objection is IN ADDITION to the legal problems. My personal opinion is that this business and the people who run it SUCK. There’s no conflating: both things are true. Why do you insist on sticking up for douchebags?
Because it's not illegal to be a "douchebag". I don't believe in mob rule. You proudly declare things illegal with all the confidence of someone who has never been a victim of the state's power. You must have a rather shallow view of history. I find your mild histrionics deeply unimpressive. Why do you insist on appropriating the language of law when you clearly have little respect for its precepts?
> the claim "most of the companies registered in Delaware are not trying to dodge US federal regulations" strikes me as dubious.
Why would it? Choosing the state to incorporate in has very little to do with US federal regulations. If the US wants to come after your company for some reason, they file in federal court, and the state you're incorporated in is irrelevant.
When incorporating, you choose the state based on its business-related laws and how they might apply to your company. You choose based on the experience of their judicial system in handling business matters. You might choose because there are a ton of other businesses incorporated in that state, and that's created a lot of court cases and a lot of precedent that can give your own legal team more confidence in how different sorts of legal challenge might play out.
If you were trying to avoid US federal regulations, you might incorporate in Delaware for the simple reason that Delaware is a safe default, given how common it is for companies to incorporate there. Incorporating in an unusual state could raise an eyebrow here or there. But ultimately it's not going to matter all that much. And even if it's true that a federal-regulations-skirting company would have a measurable benefit to incorporating in Delaware, there's no reason to believe that lots of companies incorporated in Delaware are trying to skirt federal regulations. That's just an unfounded assertion.
As an aside, it's not true that every company wants to decrease its regulatory burden. Once a company gets large enough, lobbying for extra regulation can be a barrier to entry for possible competitors. Also consider that "reducing regulatory burden" doesn't necessarily mean doing something illegal. In the case of Polymarket, they probably are, but plenty of other companies find ways to reduce their regulatory compliance needs in perfectly legal ways.
My comment was unclear. That quote was intended to connect with the parent claim that Polymarket was unique in "trying to dodge US federal regulations". The chain was:
> I don't get it. Most companies registered in the state I live in, for example, are not actually located here. They simply receive mail through their registered agent there. Why would this be news?
>> On the other hand, most of the companies registered in Delaware are not trying to dodge US federal regulations.
What I found dubious was predicated on this "on the other hand" - that is the notion that Polymarket is really doing anything unique re its dealings with US federal regulators.
As per your last paragraph that touches on this, I already addressed this in another thread, but I'm simply not convinced that Polymarket is unique here. It is common for new enterprises creating new industries to come into conflict with the law, and for both to evolve. Obviously Polymarket is not some large incumbent. The point was I find the notion that they are doing something singularly illegal or out of step with how most businesses operate dubious.
I detect that in comments around this chain you and some others seem to want to create a hard barrier between the law and enterprise. That's not how reality works. Regulations change. Policies are modified. New laws are passed. Governments and businesses often collaborate in this process. To get back to what I was trying to respond to, I am simply asserting that indeed this should really not be "big news".
>the claim "most of the companies registered in Delaware are not trying to dodge US federal regulations" strikes me as dubious
Huh? You aren't making a coherent argument. Registering in any US state, you are still subject to the same federal regulations; Delaware is not different, it offers no shelter from federal regulations.
In fact, if it is not your primary state of operation, then it subjects you to federal regulations for interstate commerce where you might not otherwise be.
This would be a more convincing take if reasoning LLMs didn't already exist. Given the growth in capability over the last few years alone nothing about your description "several minute explanation of how the item description and the slight differentiations of the boxes" seems beyond an artificial intelligence to solve by the time humanoid robots would be ready to physically traverse a warehouse.
Your last point is also interesting, given that a robot is perhaps more amenable to such instruction, thus creating cascading savings. Each human has to be trained and could individually be a failure. A robot can essentially copy its "brain" to the others.
Or, likely more accurately, download the latest brain trained on all the robots' aggregate experiences from the Amazon hivemind HQ.