
This mindset is fine (it's essentially mine, too).

But it absolutely has to be combined with verification/testing at the same speed as code production.


I generally do have that mindset, but over the past year of using Claude Code I’ve noticed that I’m clearly losing my understanding of the internals of projects. I do review the LLM-generated code and understand it; no problem reading and following through. But then someone asks me a question, and I’m like… wait, I actually don’t know. I remember the instructions I gave and reviewing the code, but I don’t have a fine-grained model of the actual implementation crystallized in my mind. I need to check: was that thing implemented the way I thought it was, or not? Wait, it’s actually wrong, not matching what I thought at all! It’s definitely becoming uncomfortable, and it makes me reconsider my use of Claude Code pretty significantly.

> I’m like… wait, I actually don’t know.

Reminds me of the experience of reading a math text without doing the exercises, thinking you've understood the material, and then falling flat on your face when you attempt to apply your "understanding" to a novel problem. There's a significant difference between passively reading something and really putting active effort into it. Only the latter leads to actual understanding, in my experience.


I've had this issue too, and I feel it was an important lesson, kind of like getting your first hangover.

On the other hand, LLM-generated code is commented better than code I write myself, so over a long enough time horizon it could end up more understandable than code I've written by hand (we've all had the experience of forgetting how things work).


It's not. Invariably, the code is locally fine and globally nonsense.

Same experience. I've been writing code for many decades, but that experience doesn't mean I can remember what I read when reviewing generated code. I write small, focused commits, but I have to take a day off each week to make changes by hand just to mentally keep up with my own codebase, and I still find structures that surprise me. It's not necessarily that the code quality is poor, but it's not the way I (thought I) had designed it. It's led to a weakening of my confidence when adding to or changing existing architecture.

I do think that this is natural. When you use LLM coding tools, you're becoming a lot more like an architect/staff/manager, rather than the direct coder. You're setting out the spec, coming up with the design, and coming up with the high level structure of the project.

However, this comes at the cost of losing track of the minute details of the implementation because you didn't write it yourself. I find it a bit analogous to code I've reviewed vs code I've written.

That said, I've found that using AI to summarize and answer questions about the code structure tends to be a good way to get around this. I might forget faster, but I also pick it back up faster.


I've found that for non-trivial features, I typically benefit from 3-4 rounds of questions along these lines (sketched as a loop at the end of this comment):

- Are you sure this isn't tech debt?
- Are you sure this is thoroughly tested for [the applicable cases]? (Insert these manually; models aren't great at enumerating them, even when explicitly asked.)
- Are you sure this isn't reinventing wheels, or adding unnecessary complexity by not using existing infrastructure it should, or building something that other existing code would benefit from moving to?
- Are you sure you can't find any bugs?
- In hindsight, are you sure this is the best design?

Then, after it says "yes, I'm sure this is production-ready and we're good to move on," you have Codex and Gemini both review it one last time, and ask it to address whatever parts of their feedback are actually valuable.

Only after all this do I look at the code myself, review it, and make sure it's coherent.

Until then, I assume it's garbage.

I'd estimate this still improves velocity by 10x, and more importantly, allows me to operate at a pace I couldn't without burning out.
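
For the curious, here's a minimal sketch of that loop in Python. It's my own illustration, not any particular tool: ask_model() is a hypothetical placeholder for whatever agent CLI or API you actually drive.

    # Sketch of the review loop described above.
    # ask_model() is a hypothetical stand-in; wire it to your agent of choice.
    REVIEW_PROMPTS = [
        "Are you sure this isn't tech debt?",
        "Are you sure this is thoroughly tested for: {cases}?",
        "Are you sure this isn't reinventing wheels or bypassing existing infrastructure?",
        "Are you sure you can't find any bugs?",
        "In hindsight, are you sure this is the best design?",
    ]

    def ask_model(prompt: str, model: str = "default") -> str:
        raise NotImplementedError("hypothetical: connect your agent CLI/API here")

    def review_feature(cases: str, rounds: int = 4) -> None:
        for _ in range(rounds):
            for prompt in REVIEW_PROMPTS:
                ask_model(prompt.format(cases=cases))
        # Final pass: independent reviews before any human looks at the code.
        for reviewer in ("codex", "gemini"):
            feedback = ask_model("Review the current change.", model=reviewer)
            ask_model("Address this feedback where it's valuable: " + feedback)

In practice I do this interactively rather than scripted, but the structure is the same.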


One-off tasks and parts of the stack that already have lots of disposable code don't need the same scrutiny as everything else. Just as there is a broad continuum of code importance, there is a broad continuum of testing requirements, and this was the case before AI. Keeping this in mind, AIs can do some of the verification and testing, too.

What's an example of data that might have been stolen?

This sounds horrible to me.

100% agree.

I want to buy from a company whose goal is to make the best products, not make the most money.

You optimize differently for each.


His comment is more of a general observation that East African countries are notorious for doping.

Like, if we found out the top two finishers here doped, very few would be surprised.

That said - it's still an amazing accomplishment.


I’d be surprised, given how outspoken Sawe is about doping. He invited the AIU to test him before Berlin, and Adidas paid for it.

> Determined to prove he is competing clean, Adidas provided $50,000 (£36,900) to the Athletics Integrity Unit, the sport's anti-doping body, to frequently test Sawe over a 12-month period.

> That began with a reported 25 out-of-competition tests in the lead-up to Berlin in September, continuing at a similar rate as he prepared for London.

> Sawe said on Monday: "It's very important to me because it gets out the doubt in my career of athletics and yesterday's performance.

> "It shows Sabastian Sawe is clean. It shows running clean is good, and we can run clean and we can run faster.


Armstrong never failed a blood test, and he was tested without warning hundreds of times a year.

This proves nothing, absolutely nothing at all. It's just a PR move by Adidas, and it actually seems to be working on you.


I find con men are the first to protest their innocence, and even suggest well-curated "proofs" of the same.

It's not evidence either way, in the arms-race of high-tech doping.


It absolutely is a fair comparison.

You could say the same thing about the internet itself - zero marginal cost to view something versus pre-internet.

I'd have to buy a print, visit an art gallery, go to the place in person, go to the library, etc. That's all friction and cost to "ingest" art. Some of it costs money directly, and the rest costs the effort of going.


> It absolutely is a fair comparison.

It's not a fair comparison because it's wrong. Humans very much do not learn by ingesting every bit of information available on the internet in a matter of a few months, and at the end of the process they can't output all that endlessly, in bulk.

No, humans learn by painstakingly taking in a few examples over years and decades, processing them in their brains in ways we don't fully understand, enriching all of it, and at the end of those years maybe they're able to slowly output some similar, hopefully better or more original, works. But most humans won't manage it even after decades of trying.

Everything in our laws, regulations, and common sense revolves around what humans are capable of, and has only slowly been expanded to account for external assistance. The capability of the "system" matters in every other field, except when it comes to AI, because those companies bought their way into carte blanche for anything they do.


Can you explain some of these alternatives that are so bad?

One bad possibility is that AI and robotics advance to the point where they can do every job better and more cheaply than humans; humans are then no longer employable, and anyone with insufficient capital to survive the period between mass unemployment and post-scarcity dies.

Another possibility is that, once AI exceeds human performance in all economically useful activities, including high-level planning, governance, law enforcement, and military actions, it discovers that the benefits of keeping humans around aren't worth the costs and risks.


Bad: let tech (now "AI") companies, built on the collective (often in theory IP-protected) output of humanity, own and mediate an ever increasing proportion of the value created in society. Intellectual rent-seeking, if you will.

Bad: the above but also their power and influence grows so much and governments are so ineffective (or corrupt) against them that the tech companies also become de facto governments and people rely on them to survive. Also they destroy earth even faster with nobody left to stop them. The full fat cyberpunk dystopia.

Bad: the above but with lots more fascism and war. Too many people seem to want this.

Bad: regulate AI to such an extent as to cede all growth and technological leadership to whoever doesn't

...


If I see art and get inspired by it, then paint my own thing and make millions do I owe my inspiration money?

If you end up creating something sufficiently similar, yes, in fact you do. Or rather, you have committed copyright infringement, and retroactive payment may be one of the remedies.

This also applies to AI, just worse because:

A) AI is not a human brain, and pretending that the process of human authorship is the same as AI is either a massive misunderstanding of the mechanics and architecture of these systems, or plain disingenuous nonsense.

B) AI has no capability for original thought. Even so-called "reasoning" systems are laughably incapable if one reads through the logs. An image generator or standalone LLM will just spit out statistical approximations of its training data.

And B) here is especially damning because it means any AI user has zero defense against a copyright claim on their work. This creates enormous legal risks.

The model for copyright trolling is trivial. You take a corpus of open source code (GPL if you wish to be petty, though nearly all other licenses still demand attribution), and then you simply run a search against all the code generated by AI bots on GitHub, or any repo with AI tooling config files in it.

Won't be long before the FSF does something similar.
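
The search step itself is well-trodden ground. Here's a minimal sketch of my own, assuming MOSS-style k-gram fingerprinting (not any existing tool); identifiers are canonicalized so that simple renames don't evade the match:

    # Illustrative sketch: k-gram fingerprinting for near-verbatim code reuse.
    # All identifiers are collapsed to "ID" so renaming alone doesn't evade it.
    import hashlib
    import re

    def fingerprints(source: str, k: int = 8) -> set:
        tokens = re.findall(r"[A-Za-z_]\w*|\S", source)
        tokens = ["ID" if re.match(r"[A-Za-z_]", t) else t for t in tokens]
        grams = (" ".join(tokens[i:i + k]) for i in range(len(tokens) - k + 1))
        return {hashlib.sha1(g.encode()).hexdigest()[:12] for g in grams}

    def similarity(a: str, b: str) -> float:
        fa, fb = fingerprints(a), fingerprints(b)
        return len(fa & fb) / max(1, len(fa | fb))  # Jaccard overlap

Fingerprint the license-encumbered corpus once, then flag any generated file whose overlap crosses a threshold for human (or lawyer) review.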


But open models are only about 8 months behind closed models. So even aggressive copyright-enforcement would only create an 8 month delay.

This is essentially a LimeWire problem. And OpenAI is essentially Spotify.

Even with revenue sharing, 99% of artists will get nothing (just like streaming), and revenue will be much lower than before (just like streaming compared to record era).

Only IP giants like Disney would see any real income.


Yes, you do owe the inspiration money if the result is close enough. Welcome to intellectual property laws!

I'm amazed that people see America as different from any other country in terms of who should be allowed in and what constitutes bad behavior.

Being in America is a privilege that can easily be taken away. Guests of America should walk a narrow path.

Same as being in any other country.


How is this insane?

The US isn't some global free zone where everyone has a right to come and go - do as they please.

If you came to the US legally with a visa, great. When you signed your visa documents, there were some questions they asked you and some fine print that basically made you liable for "bad behavior."

I'm an American living in the UK, and I'm under no illusion that if I start doing dumb stuff here, it's possible they tell me to leave. (Though apparently the UK government has a pretty lax attitude about whom it asks to leave.)

If someone wants to come to my country and behave in any way outside their best, then yes, I support the government kicking them out.


I don't think protests in general are "behaving outside your best". Now what those protests contain is an entirely different matter. I read an article about the arrest of a foreign student recently who attended numerous "death to America" protests. I can support deportation in a case like that (even if only for the complete lack of self awareness), but not for all protests.


Protesting against ethnic cleansing is a bad thing, that’s what you’re saying?

No matter what kind of mental gymnastics you try to do, this is just an obvious case of a foreign government having a huge influence and control over internal US affairs.

