When I see how much the latest models are capable of, it makes me feel depressed.
As well as potentially ruining my career in the next few years, it's turning all the minutiae and specifics of writing clean code, which I've worked hard to learn over the past years, into irrelevant details. All the specifics I thought were so important are just implementation details of the prompt.
Maybe I've got a fairly backwards view of it, but I don't like the feeling that all that time and learning has gone to waste, and that my skillset of automating things is becoming itself more and more automated.
The real question is what kind of pay that work will demand.
It will be great to still be employed as a senior dev. It will be a little less great if a $110k salary, a 5-day commute, and mediocre benefits become the norm.
That's always the 10-20 year cycle, though. The dot-com crash led to a major downgrading in the status of "tech" people for a few years, and then a slow recovery until it was insane again.
However, I'm not eager to be living through this again. It feels very spring/summer 2002 to me right now. That was the lowest point for the local market back then.
I don't think this latest contraction has much to do with AI though. It's more about higher interest rates, recessionary economy, trade wars, etc etc.
In most countries outside the US, this is the norm even for highly skilled workers. I know some very good engineers (e.g. owning core systems dealing with large revenues, $1B plus) on this kind of money. Not everyone gets the lucky break.
At least many on this forum got a chance to earn good money while the sun was shining. AI threatens even the people who didn't.
Regardless of whether $110k is good money (it is basically everywhere except a few metro areas), your salary cap will be whatever the models can deliver in the same time as you. It follows that you want to be good at managing models (ideally multiple dozen) in your area of expertise.
Do you actually disagree with the "minutiae was always borderline irrelevant" part or that it comes along with "making somebody money"? I pretty strongly agree with the original quote including the "possibly with software" part.
Minutiae such as tabs vs spaces and other formatting choices are pretty clearly "borderline irrelevant," and code formatters have largely stopped programmers arguing about them. Exactly how best to factor your code into functions and classes is also commonly argued about but "borderline irrelevant." Arguments about "clean code" are a good example of this.
Broadly, the skills I see LLMs making useless to have honed are the minutiae that were already "borderline irrelevant." Knowing how to make your code performant, knowing how to design good APIs that can stay stable long term, and in general having good taste for architecture are still very useful. In fact they are more useful now.
How is enshittification (the gradual degradation of services and products for commercial gain) even related to what's being discussed (the gradual obsolescence of a certain set of SWE skills)?
All senior devs know what a project looks like when it has only juniors and no tech leadership: one big mess. The project stalls. The team spends 98% of its time on bugs and emergencies, and still can't get a grip on curbing the drama. Why? All the points you make about AI are true for juniors as well: when do you tell someone to redo (part of) a project/feature? That same intuition works when collaborating with AI.
Super well said - right. “Try again with quick feedback” vs “try again with significant feedback” vs “try again, but only a subset of the original task” vs “let’s have someone else do this”
I've been deep into AI full-time professionally for some months now, and for the first 4+ weeks I felt the exact same way you describe. It is a form of existential crisis: after spending the bulk of the past 25 years honing my coding-fu algo ninja skills, my identity was totally wrapped up in it.
Keep at it and keep leaning in to embrace it, I promise it gets better! It's just a big adjustment.
Don't be so grim! This will just free you from worrying about writing clean code as much as you did in the past - you can focus on other parts of the development lifecycle.
The skill of writing good quality code is still going to be beneficial - maybe less emphasized on the writing side, but critical for shipping good code, even when someone (or something) else wrote it.
This seems broadly correct? Industrialization was amazing for people's standard of living, but it absolutely meant that the average physical good became detached from their craftsmen's learned and aesthetic preferences.
And contrariwise, the argument against tools like these sounds like:
"I never use power tools or CNC, I only use hand tools. Even if they would save me an incredible amount of time and let me work on other things, I prefer to do it the slow and painstaking way, even if the results are ultimately almost identical."
Sure, you can absolutely true up stock using a jointer plane, but using a power jointer and planer will take about 1/10th of the time and you can always go back with a smoothing plane to get that mirror finish if you don't like the machine finish.
Likewise, if your standards are high and your output indistinguishable, but the AI does most of the heavy lifting for the rough draft pass, where's the harm? I don't understand everyone who says "the AI only makes slop" - if you're responsible for your commits and you do a good job, it's indistinguishable.
I’d actually argue that we already have some absolutely fantastic tools that are the equivalent of CNC machines and power tools.
Dev tooling has gotten pretty solid these days: LSPs and debug protocols, massively improved type-system UX, libs and frameworks with massively improved DX, deployment tools that are basically zero-touch, fantastic observability tooling, super powerful IDEs.
The CNC machine doesn’t wander off and start lathing watermelons when you’re not looking, and your planer doesn’t turn into a spaghetti monster and eat your dog if you accidentally plane some wood on the wrong day of the week.
Realistically, though, even if AI doesn't only make slop, the amount of effort it takes to ensure that it's not slop is even harder to justify than maintaining a "clean" codebase manually used to be. More and more you'll see that "rough draft pass" ending up as shipped product.
Why? Well, it happened that way when manual tradecraft gave way to automated manufacturing in just about every other industry, so why should ours be exempt?
You were likely happy to be automating other people out of a job, now it's happening to you. This is the creative destruction that is critical to a healthy and prosperous economy.
You can enjoy automating tasks and not be destroying others. At my job I’ve been the main person automating tasks, which has allowed us to be more accurate and more efficient and grow the team’s headcount by 50 percent. You could argue we’d have grown more, but the entire company has had 20 percent layoffs since I joined, so I would push back on that.
I used to think so. But given what I've seen from others, now I think it's not very useful. Maybe if you do frontend... The people I see vibe coding with no actual programming experience? It is completely useless for them. It can only do the simplest tasks; anything beyond that and it constantly makes critical errors and random mistakes. That was using what was the latest Claude version before this one. I also haven't really used AI coding tools myself, so take that as you will.
Even for frontend tasks it makes mistakes when you ask too much of it...
What will it create for me? A basic react/nextjs frontend and a <popular website> clone? Anything that requires more careful planning and esoteric functionality it can't do.
Oh yes, and the code I've seen it write... It can take what should be 20 lines of code and turn it into 600!
But systems level thinking, taste, technical creativity and all other “soft” skills have never been more relevant. I can do some pretty awesome things with my aider. I can implement things which I thought were cool and useful but couldn’t be bothered to without AI.
In general, a good rule of thumb is to only write code "clean" enough that you / your team / someone else can figure out what the hell you were doing in that particular area of the source code.
_Clean Code_ is an extremely well-known book on programming by Robert “Uncle Bob” Martin from the 2000s. Posts about it have come up on HN as recently as this year.
Maybe it’s a sign of the times, but I’m surprised you’ve never come across it. I say this as someone who doesn’t agree with many of the suggestions.
The fact that he capitalized both Cs indicates he's talking about the book, which is famous enough that I learned about it and its influence when I was in school ~15 years ago.
GP wrote clean code (lowercase), which most people would take to mean the general practice of hygienic, well-maintained code.
Clean Code is over-abstraction and spaghetti code. The people who are part of this cult just point to the source material and its title, never critically thinking about why it might be bad (it’s super slow; search YouTube for “clean code performance” to see why) or entertaining alternatives.
I’ve been using AI coding tools (Cursor, Claude Code) for React/React Native side projects. I have experience with these frameworks so I could guide the AI with individual tasks and catch mistakes, and overall it worked pretty well.
Recently I tried building a native iOS app with zero Swift experience, giving the AI just a markdown spec. This was basically vibe coding, I didn’t understand much beyond general software principles. It quickly broke down: hallucinated method signatures, got stuck on implementing extensions, and couldn’t recover. I would run the app on my device and give it feedback and logs. After hours wasted, I spent some time reading the docs and fixed the issues myself in 30 minutes.
My takeaway: AI will accelerate developers but won’t replace them. Still, acceleration means fewer engineers will be needed to ship the same amount of work.
Eh, I’ve gotten over that. I’ve been using Claude recently on a personal project for a friend who wanted to take a known export file format and turn it into a list of good households for local political candidates to hit when knocking on doors. And I did that. But it’s been a while since I used pandas and numpy, so I told Claude to swap out my loops for efficient code. And he did. Then, just for fun, I said, “Hey, since I am providing you with street lengths and lat/longs, use K-means clustering to group high-scoring houses into walkable routes and then plot the whole thing on a map from OpenStreetMap.” Five minutes later I had all of that. I could have done the latter, but doing any “real CS” thing would take me days. There’s not a lot of value in me taking days to do something, but there is value in knowing about K-means clustering, knowing OpenStreetMap exists, and having a feel for efficient code. Plus more high-level things like what good code does and doesn’t look like.
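For anyone who hasn't run into it, the clustering step described above fits in a dozen lines. This is my own illustrative sketch with made-up coordinates and deliberately deterministic seeding (one starting center per neighborhood), not the commenter's actual code; real implementations like scikit-learn's KMeans pick initial centers randomly and rerun several times:

```python
import math

def kmeans(points, centers, iters=10):
    """Naive K-means: assign each point to its nearest center,
    then move each center to the mean of its assigned points."""
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: label each point with its nearest center.
        for i, p in enumerate(points):
            labels[i] = min(range(len(centers)),
                            key=lambda c: math.dist(p, centers[c]))
        # Update step: move each center to the centroid of its members.
        for c in range(len(centers)):
            members = [p for i, p in enumerate(points) if labels[i] == c]
            if members:
                centers[c] = [sum(xs) / len(members) for xs in zip(*members)]
    return labels

# Hypothetical (lat, lon) pairs for high-scoring households:
# three tight pairs in three separate neighborhoods.
houses = [(40.7128, -74.0060), (40.7130, -74.0055),
          (40.7300, -74.0000), (40.7305, -74.0010),
          (40.7500, -73.9900), (40.7502, -73.9895)]

# Seed one center per neighborhood so the demo is deterministic.
labels = kmeans(houses, centers=[list(houses[0]), list(houses[2]), list(houses[4])])
print(labels)  # [0, 0, 1, 1, 2, 2] -- each pair of houses becomes one walkable route
```

Each label is a route ID; a real version would hand each cluster to a canvasser and plot it on an OpenStreetMap layer.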
I 100% agree with you, but to play devil's advocate, what would stop an LLM from telling you all about K-means clustering and OpenStreetMap and everything else when you ask about an efficient way to cluster deliveries on a map?
Also... One of the more dangerous things that can happen with Claude is this: it goes to implement your K means clustering (or whatever) and runs into difficulties, and actually builds something else, but calls it K-means, or slips it by you in a long conversation ("This is getting complicated, so I'll just..."). And it's only if you actually know the algorithm and review what it did that you can be confident in really publishing the work it produced into the public sphere.
I think clean code is more important than ever. LLMs can work better with good code (no surprise), and they are trained on so much shit code they produce garbage in terms of clean code.
They also don't have good taste or a deeper architectural understanding of big codebases, where it's even more important.
What you learned over the years, you can just scale up with agents.
Likewise, a lot of what we learn at school or university is superseded by new knowledge or technology (who needs arithmetic when we all have a calculator in our pocket?), but having an intimate knowledge of those building blocks is still key to having a deeper and more valuable aptitude in your field.
Trust me, these “vibe coding” tools don’t speed up productivity much in the real world. At the end of the day these systems need to be maintained by humans, and humans are the ones held accountable when stuff breaks. That means humans need to understand how the systems (code, infrastructure, etc.) work. You can automate the code, even some decision-making about how the program should be organized, but you can’t automate the process of a human developing their mental model of how and why the system works. That was always the bottleneck, and it still is to this day.
When everyone else has given up on software dev as a career path, you’ll be one of the few who the CEO can call upon to explain why X or Y broke, and fix it. That will make you insanely valuable.
Scrolling through the many comments here, a lot of people seem entirely dependent on chatbots to vibe code, and some are even unable to write a function by hand anymore, which is concerning.
Perhaps your comment is the only one so far talking sense about the true side effect of over-reliance on these vibe coding tools: the real cost is maintenance.
Characterize it in terms of truth, clarity of truth, simplicity, and correctness. I think we should always evaluate things along those dimensions: is it true, does it produce truthful things? That makes the evaluation very objective.
And now wait till you realize it's all built on stolen code written by people like you and me.
GOFAI failed because paying intelligent/competent/capable people enough for their time to implement intelligence by writing all the necessary rules and algorithms was uneconomical.
GenAI solved it by repurposing already performed work, deriving the rules ("weights") from it automatically, thus massively increasing the value of that work, without giving any extra compensation to the workers. Same with art, translations and anything else which can be fed into RL.
It's not that it was uneconomical, it's that 1) we literally don't know all the rules, a lot of it is learned intuition that humans acquire by doing, and 2) as task complexity rises, the number of rules rises faster, so it doesn't scale. The real advantage that genAI brings to the table is that it "learns" in a way that can replicate this intuition and that it keeps scaling so long as you can shovel more compute and more data at it.
In a way, yes, you'd be paying the people not just to write down the rules but to discover them first. And there's the accuracy/correctness/interpretability tradeoff.
But also, have there been any attempts on the scale of the Manhattan project attempting to create a GOFAI?
Because one idea I ran into is that we might be able to use genAI to create a GOFAI soon. And it would be as hard as using genAI for any kind of large project. But I also can't convincingly claim that it's somehow provably impossible.
You can’t “write down the rules” for intelligence. Not for any reasonable definition of “writing”. The medium of writing is not rich enough to express what is needed.
Do you believe intelligence can be achieved using ANNs? If so, ANNs can be serialized, therefore writing is rich enough.
It might not be an easy format to work with, though. If you believe the broad LLM architecture is capable of reaching true intelligence, then writing is still enough, because all LLMs are is the written training data and the written training algorithm. It's just that it was impossible to pay people to write enough training data and provide enough compute to process it before.