
But it is also rare that a few percentage points actually make a huge difference. Remember that when reviewers are doing benchmarks they're generally using a standardised test suite with uncapped framerates. Most people would be perfectly happy to hit a target framerate, or if they really want to play uncapped they would first reduce a few graphical settings to achieve good performance (most of the time with imperceptible changes in the graphics). It is rare that the performance of a game is so tight on a given piece of hardware that a few percentage points actually matter.

To give a particular example, I started playing GTAV on Windows after building a new PC, since I had no spare drives. After finally installing Linux I decided to try GTAV on Linux just to see how well it would run. And it runs amazingly well. Yes, it runs a few percentage points slower than on Windows, but the only tradeoff I made was slightly increasing FSR4, and the game still looks amazing. I didn't really notice any graphics issues, especially not during actual gameplay (if I stayed in the same place and started to nitpick I could notice differences).


Not sure how reliable this site is, but if it is correct it looks like 10: https://www.cvedetails.com/vulnerability-list/vendor_id-72/p....

Maybe coreutils is so old that most security vulnerabilities were fixed before CVEs even existed. But I think this is also a good argument against replacing a solid piece of C code with Rust just because it is "memory safe", only to then have lots of CVEs related to things like TOCTOU (which Rust will not save you from).
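To make the TOCTOU point concrete, here is a minimal Rust sketch (file name invented for illustration) contrasting the racy path-based pattern with the handle-based one; note that nothing in Rust's memory-safety model flags the racy version:

```rust
use std::fs::{self, File};
use std::io::Write;

// Racy pattern (in any language, Rust included):
//   let meta = fs::metadata(path)?;  // check by path...
//   let f = File::open(path)?;       // ...then open: the path can be
//                                    // swapped for a symlink in between.
//
// Safer pattern: open once, then query the handle you actually hold,
// so the check and the use refer to the same file object.
fn size_of_open_file(path: &str) -> std::io::Result<u64> {
    let f = File::open(path)?;
    Ok(f.metadata()?.len()) // fstat on the descriptor, not the path
}

fn main() -> std::io::Result<()> {
    let mut tmp = File::create("toctou-demo.txt")?;
    tmp.write_all(b"hello")?;
    drop(tmp);
    assert_eq!(size_of_open_file("toctou-demo.txt")?, 5);
    fs::remove_file("toctou-demo.txt")
}
```

The borrow checker has nothing to say about the check-then-use race; only discipline around "query the handle, not the path" does.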


I'm not against rewriting it in Rust, because I believe it really may help with a certain class of bugs, but indeed it should not instantly replace the old version for that reason. Both could coexist, even though you still need some guinea pigs to test it out and find issues.

Other than security, Rust brings major improvements to the tooling and may help bring in fresh members who wouldn't want to contribute to C code. I understand why some projects go that route.


> Other than security, Rust brings major improvements to the tooling and may help bring in fresh members who wouldn't want to contribute to C code. I understand why some projects go that route.

But it loses old members who don't program in Rust, who already know the project and all the reasons why "this thing" was done "that way". It also introduces a new set of bugs, plus now you have two versions of the same thing to maintain.


People thinking that using a (on paper) superior tool automatically enables them to write better tools than the ones that have been battle-tested over the years baffles me to no end.

Yes, you can go further, possibly faster. OTOH, nothing replaces experience and in-depth knowledge. GNU Coreutils embodies that knowledge and experience. uutils has none of it, and just tries to distill it with tests against the GNU implementation.

...and they got 44 CVEs as a result in their first test.


There was an article posted to HN recently that enumerated bugs in the Rust rewrite.

IIRC the bugs had to do with Linux system details like filesystem TOCTOU and other things you'd only find out about in production.

Ideally we'd have a better way of navigating platform idiosyncrasies or better system APIs, so that every project doesn't have to relearn them at runtime. But the rewrite isn't pure downside.


I'm personally not against Rust rewrites in principle. But doing them in this drive-by hostile manner, especially with non-GNU licenses, smells like a "hostile takeover" to me, and dismantling core free software utilities is not nice in general.

> Ideally we'd have a better way of navigating platform idiosyncrasies or better system APIs

I believe trying to make something idiot-proof just produces better idiots, so I prefer having thinner abstractions at the lower level for maintenance, simplicity and performance reasons. The real solution is better documentation, but who values good documentation?

Graybeards and their apprentices, mostly, in my experience. I personally still live with reference docs rather than AI prompts, and it serves me well.


My read on those was basically that the classic filesystems are hopelessly broken and that we have needed ACID guarantees in next-gen filesystems since, like, 20 years ago.

Not saying all of them were about FS TOCTOU bugs but once I got to these, that was my takeaway.

Obviously just using Rust cannot fix _all_ bugs, and I reject any criticism of Rust rewrites that tears down that particular straw man (its goal being to make it impossible to argue against). That's toxic, and I get surprised every time people on HN try to argue in that childish way.

But if we can remove all of C's memory-safety foot guns, then that by itself is already worth a lot.

Losing decades-old knowledge of how the dysfunctional lower-level systems work would be regrettable and even near-fatal for any such project. That I'd agree with. But it also raises the question of whether those lower-level systems don't need a very hard, long look and -- eventually -- a replacement.


I like rust as a language, but boy, the violent, zero-sum proselytising gets on my nerves. It's not enough for Rust to win, but C must be beaten to a pulp and its head mounted on a pike.

New projects wearing another project's skin have always bothered me, regardless of language. Ubuntu did a similar thing way back with libav masquerading as ffmpeg.


How dramatic. I'll ask you as well: any proof for those colorful pictures you're drawing? Or are the people advocating for Rust a convenient target to vent other, very likely completely unrelated, frustrations?

I'm very happy to work with multiple programming languages without getting religious about any of them. They all have drawbacks, Rust included of course.

However, my mere skepticism about the existence of "violent proselytizing for Rust" of course immediately had me put in some imaginary group of fanatics. Which is, of course, normal. People love their binary camps, and nuance and discussion of merits be damned.


As another data point, I have gone through enough flame wars, including the usual ones and the Rust ones.

There's certainly a fanatic group of Rust developers who really want to eradicate C and C++ from people's knowledge and from all codebases in this universe, going so far as to openly hate the developers and designers of said languages.

Same was (or still is) true for some LLVM/clang people w.r.t. GCC.

This is why I use neither.

I'm always happy to discuss PLT and merits of programming languages with neutral parties, even in lively fashion, but when open-mindedness gets thrown out of the window, I do leave the room.

These kinds of healthy discussions will benefit both parties. Hubris, ego, closed-mindedness and fanaticism won't.

Related: What Killed Smalltalk Could Kill Ruby, Too: https://www.youtube.com/watch?v=YX3iRjKj7C0


Well, I don't see them on HN is what I am saying. Obviously I'm not scanning 24/7, but every time I enter an HN thread where Rust is even loosely mentioned, I brace for the inevitable bullies imagining they are victims. And this thread is exactly the same, sadly.

I am genuinely curious where this fanatic group is. Where are you witnessing them?


> I brace for the inevitable bullies imagining they are victims.

As a person who was bullied physically, verbally and emotionally for years, I'd not throw around words like bully/victim so lightly. Moreover, I'd never bully anyone. I'm not that.

> I am genuinely curious where this fanatic group is. Where are you witnessing them?

Discord servers, mailing lists, issue threads, discussions, here and there. They are a very vocal and abrasive minority, but it's enough to make me stay away from them. A special-ops group of these people claims that Rust needs no official specification and that they can just develop the language and spec ad hoc as the compiler evolves, as a side product of the compiler itself (i.e. the spec is the compiler).

The last time I encountered them, it was as functional programming fanatics in the mid-2000s to 2010s. They successfully made me dislike the community so much that I haven't touched any functional programming language to this day.

Make no mistake: my favorite languages have the same fanatics, and I stay away from them, too. For example, C++ fanatics are an interesting bunch. They don't bully other languages, but rather new C++ developers who don't code like them or the way they like.

Maybe one day I'll start writing Rust, after gccrs stabilizes (they're doing well), or really start writing Lisp, but I'm sure that I'll never ask a mere mortal a question about programming in either language.


> As a person who was bullied physically, verbally and emotionally for years, I'd not throw around words like bully/victim so lightly. Moreover, I'd never bully anyone. I'm not that.

I was bullied as well. Knowing karate and aikido helped, but not much; those people just hated me for reasons I never quite understood, and they even came in groups. Some days I wondered whether I'd get back home from school alive. However, entering middle age has me almost not caring anymore about why they were like that, so I've got that going for me, which is nice.

I am not "throwing" words. I believe I know what I am talking about because I witnessed a few bullies wisening up to losing prestige and status for being rightfully called out and learning to pretend they are the victims... and it worked in part. It was sickening then, it's sickening now, wherever I spot it. HN is one of those places.

And by the way, I was not talking about you. You seem more reasonable than, for example, this poster under my comment here: https://news.ycombinator.com/item?id=48123734

> Discord servers, mailing lists, issue threads, discussions, here and there. They are a very vocal and abrasive minority, but it's enough to make me stay away from them.

OK, I'll admit ignorance because I don't go to any of those places or at least it's very rare.

One thing jumps out at me: you are avoiding those people, which is 100% fair and I would as well. But why avoid Rust itself? Why look down on any rewrite-in-Rust initiative? Why do you allow yourself to be emotionally manipulated? Would you stop believing in your favorite alternative-energy or alternative-engine approaches if they had 0.1% toxic zealots screaming for attention at events dedicated to those areas?

I can somewhat relate, mind you. One example: I hated how everyone was trying to make me read certain classic books, and I basically made it a point to avoid them just because of that. I was fully aware that was an irrational reaction that was likely robbing me of enjoying good art. I take great pride in finally overcoming this some 2-3 years ago and starting to go through those books. They were nothing special, mind you, and I still can't see why people deem most of them classics, but at least now my opinion is my own, built with my own two eyes and brain.

> Make no mistake: My favorite languages have the same fanatics, and I stay away from them, too.

Well, that by itself seems to close the discussion. You are aware of this nuance.

> Maybe one day I'll start writing Rust, after gccrs stabilizes (they're doing well), or really start writing Lisp, but I'm sure that I'll never ask a mere mortal a question about programming in either language.

I refuse to feel shame about wanting to learn and absorb other people's expertise. If somebody is being an arse about it, then it's them who are embarrassing themselves, not me. But I do agree it's a waste of time, and I'll admit nowadays I start with an LLM session and only then branch out to people if I feel unsatisfied. But that's a function of how awfully busy I am, not of me becoming more antisocial. (Which also explains why I dissociated for 1-2h and preferred to read HN or a book.)


But removing all the memory footguns while introducing hundreds of syscall footguns where Rust won't help you at all might not be better at all.

I agree, absolutely. Hence my adjacent thought that maybe all this should just be thrown away and we should invent an FS with ACID semantics.

I'm all for gradual improvements, but at some point we should zoom even further out and pick our battles well.


> maybe all this should just be thrown away and we should invent an FS with ACID semantics.

You're describing WinFS, which Microsoft looked into and ultimately abandoned 20 years ago. I'm sure other groups have looked into this as well, but there's no such thing as a free lunch.

> I'm all for gradual improvements, but at some point we should zoom even further out and pick our battles well.

That sounds a lot like picking more battles, yet we all still have only 24 hours in a day. Recursively trying to perfect lower layers will have you like Hal changing the lightbulb: https://youtu.be/AbSehcT19u0


Well, recursively trying to perfect lower layers is what I am advocating for us to not do.

As a guy who prefers to stop and think before coding, a lot of the older UNIX/GNU primitives seem broken to me (like the env var process-inheritance discussion that was here a while ago) and should be completely rethought. I also think people overreact and believe "everything will break". We have libraries and runtimes that implement only small parts of libc, and the deployed apps that use them have been running mostly fine for years.
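To illustrate the env var point, a small Rust sketch (variable name made up): the inheritance is implicit, and opting out has to be done explicitly at every spawn site:

```rust
use std::process::Command;

fn main() {
    // An environment variable set in the parent...
    // (wrapped in `unsafe` because newer Rust editions require it)
    unsafe { std::env::set_var("DEMO_VAR", "leaked") };

    // ...is silently inherited by every child process by default:
    let child = Command::new("sh")
        .arg("-c")
        .arg("printf %s \"$DEMO_VAR\"")
        .output()
        .unwrap();
    assert_eq!(child.stdout, b"leaked");

    // Opting out is possible, but must be explicit, per spawn site:
    let clean = Command::new("sh")
        .arg("-c")
        .arg("printf %s \"$DEMO_VAR\"")
        .env_clear()
        .output()
        .unwrap();
    assert_eq!(clean.stdout, b"");
}
```

The default is leak-by-default with opt-out, rather than the other way around, which is the kind of legacy primitive being questioned here.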

My broader point was: shall we not start breaking away from all this legacy? Must we always rely on corporations to lead the charge?

But yes, I do of course agree with the only 24h a day thing. And likely nobody would want to pay for such a trail-blazing work anyway. Sad world.


If we are going so far as to only guarantee correctness when using an FS that implements ACID semantics, why not just reinvent the whole kernel and remove all the footguns, including memory safety? We could have an OS where every memory allocation syscall can only be done through a safe API.

Otherwise, it doesn't really make sense. The only reason we have things like Rust and other memory-safe languages is that we want to create safer programs on the existing, imperfect OSes that we currently have.


Why not indeed? It would bring me a lot of hope.

Some time ago I loved the idea of Fuchsia... but then I learned it's made by Google. Sigh.


Yes, this is why I am saying your idea of just reinventing the FS doesn't make sense. You get neither the wider ecosystem you get from having an OS compatible with, e.g., POSIX semantics, nor all the benefits you could get by reinventing the whole OS.

Wait, what? Ptyxis is not the default GNOME terminal. It is the terminal of choice for both Ubuntu and Fedora, but the default terminal in GNOME is Console, internally known as kgx: https://en.wikipedia.org/wiki/GNOME_Terminal.

GNOME Console seems to be intended for people who don't use terminals. I quickly install GNOME Terminal for real use.

I was using GNOME Console in a postmarketOS install on my Chromebook. The fact that it is lightweight compared to, say, Ghostty (my main terminal everywhere else) made a performance difference on such a constrained device.

And I didn't really miss any features, to be honest; it has the basics you'd expect (things like tabs). It is less customizable than other options, but the defaults were good enough for me.


I did not know about this. I used it on Fedora, and I thought Fedora was as close to "default GNOME" as possible.

> but then, like, what do you actually want?

As the author of some homebrewed Go software in the past who tried to distribute it on all 3 big OSes, I completely understand the blog post author's points. The problem is not Gatekeeper per se, it is the combination of things that makes everything infuriating:

- I could justify going for the whole "Apple Developer Program", even with all the bullshit things you need to do to get certified, if it were a one-time payment like in the Google Play Store. But it is yearly. Like the author, I would probably get 0 (or close to 0) dollars in recurring revenue for those apps; I could justify a one-time payment, but a yearly one is ridiculous. It is not like Apple needs this money to be profitable (they probably get much higher margins selling things on the App Store)

- The Gatekeeper UX is infuriating. The equivalent on Windows (SmartScreen, as the author also cited) is still basically the same as Gatekeeper as far as I understand (e.g., you need a valid certificate on your app or SmartScreen will deny execution until you clear the safety bit). But SmartScreen, unlike Gatekeeper, has an actually good UX, as the error messages are clear and actionable (and also don't require a command line invocation to bypass)

- The author was still on a happier path than me, since their app seems to be CLI-only. In that case just removing the quarantine bit with `xattr` works fine. In my case I was trying to distribute a desktop app, and I needed special permissions to show notifications. This means I need to package my app in a proper `.app` bundle, include the required XML requesting the permissions, and I am now required to sign the app. And since I am required to sign my app, I either pay the yearly fee to Apple to get a signing certificate, or I ask users to re-sign the app with a self-signed certificate before launching it

So really, I don't want that much, actually. I can definitely handle all the bullshit Apple wants, but I want at least a cheaper way to develop apps in their ecosystem. Maybe a new basic certification program where you pay a one-time fee and can sign your apps but not notarize them. That way Gatekeeper would still complain, but at least my app would work without re-signing.

Or limit notarization to X number of users (non-stapled notarized apps talk to Apple's servers during the app's first run, so they could just limit the number of allowed tickets to X users). If my app ever passes X users, I will gladly pay the Apple tax, but 99USD/year for something that I will never see back is too much.

Edit: BTW, I know, maybe 99USD/year doesn't seem like much to some. But Apple also doesn't do any regional pricing as far as I know, and 99USD/year is crazy expensive in the country I come from, for example.

Edit 2: I am sure things are better nowadays with Claude/ChatGPT, but trying to understand how to do the correct thing for your app is also very difficult, especially if you're not using Xcode, since Apple assumes you are, so all documentation refers to Xcode.


I think there is a very specific niche that this notebook is targeted at, and this definitely doesn't seem to be for you: the kind of person for whom having a cheaper laptop is more important than some of the unique features this one or a Framework 13 Pro has.

As for the unique part of this laptop that AFAIK a Dell XPS won't have: the Coreboot BIOS, which also probably means better long-term support for BIOS updates.

To be clear, this is also not a laptop for me (but I did pre-order a Framework 13 Pro), but saying "nerd tax" or "anyone who buys one is either giving a donation or an idiot" like the other comment did is just focusing on one part (the price) and ignoring the rest.


The Framework Pro is much more competitive price-wise. I'm actually interested, but I'll let you review it first.

The framework doesn’t support Coreboot though.

Different markets have different price ranges.

Linux nerds have money; thus, this starts at $3,500.

Intel Core Ultra 9 275HX 2.1GHz Processor; NVIDIA GeForce RTX 5070 Ti 12GB GDDR7;

https://system76.com/laptops/serval-ws

Same CPU and GPU, as a gaming laptop: $2,199.

https://www.microcenter.com/product/691610/legion_pro_7_16ia...

It's not like System76 is developing special Nvidia drivers or anything.

Keep in mind most of these niche laptop brands, aside from Framework, just resell hardware. You definitely can get a better deal if you put in the work.


System76 does not develop special Nvidia drivers, but they do work on integrating GPU switching etc. into the Linux desktop: https://github.com/pop-os/system76-power/

I don't think this is necessarily about Linux nerds. For any kind of work laptop, the time saved tinkering is easily worth the extra $$$.


Worth a $1,400 price difference?

Plus Pop!_OS hasn't been doing great lately, and COSMIC has issues.

Linux must remain a FREE alternative to Windows. If I need to pay an extra $1,400 for a specific Linux laptop with the same specs, it's vastly less competitive.


There is a difference between making a mistake like this one and being humble about it (e.g., lessons learned, keeping a daily external backup of the database somewhere else, or maybe asking the agent not to run commands directly in production but to write a script to be reviewed later, or anything similar) and just blaming the AI and the service provider and never admitting your mistake, which is all this article is about.

The fact that this seems to be written by AI makes it even more ironic.


Indeed. I swear reality gets stranger and more implausible by the day.

"That isn't backups. That's a snapshot stored in the same place as the original — which provides resilience against zero failure modes that actually matter (volume corruption, accidental deletion, malicious action, infrastructure failure, the exact scenario we just lived through)."


Agreed that this person seems to be trying to shift blame, but I still think he's right that Cursor and Railway also have glaring weaknesses. Yeah, it was somewhat of a perfect storm of mistakes, with blame to go all around.


I don't think this is a minor point. It seems clear by this point that the author is clueless about how even an API works and is just trying to shift blame to third parties instead of admitting that they're just vibecoding their whole product without doing proper checks.

Yes, sure, there seem to be lots of ways this issue could have been mitigated, but as other comments said, this mostly happened because the author didn't do their homework on how the service their whole product relies on works.


It's also moot.

If the API replied "Are you sure (Y/N)?", the AI, in the mode it was in, with the guardrails completely pushed off the side of the road, would have just said "Yes" anyway.

If you needed to make two API calls, one to stage the delete and the other to execute it (i.e. the "commit" phase), the AI would have looked up what it needed to do, and done that instead.
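A staged delete could be sketched roughly like this (hypothetical names, in-memory only), and it shows why the ceremony buys nothing against a fully privileged agent that simply chains the two calls:

```rust
use std::collections::HashMap;

// Hypothetical two-phase delete API: staging returns a token, and only
// confirming with that token actually removes the resource.
struct DeleteApi {
    resources: HashMap<String, String>,
    pending: HashMap<u64, String>, // token -> resource name
    next_token: u64,
}

impl DeleteApi {
    fn new() -> Self {
        Self { resources: HashMap::new(), pending: HashMap::new(), next_token: 0 }
    }

    fn stage_delete(&mut self, name: &str) -> Option<u64> {
        if !self.resources.contains_key(name) {
            return None;
        }
        self.next_token += 1;
        self.pending.insert(self.next_token, name.to_string());
        Some(self.next_token)
    }

    fn confirm_delete(&mut self, token: u64) -> bool {
        match self.pending.remove(&token) {
            Some(name) => self.resources.remove(&name).is_some(),
            None => false,
        }
    }
}

fn main() {
    let mut api = DeleteApi::new();
    api.resources.insert("prod-db".into(), "data".into());

    // An over-privileged agent defeats the ceremony by chaining both calls:
    let token = api.stage_delete("prod-db").unwrap();
    assert!(api.confirm_delete(token));
    assert!(api.resources.is_empty()); // the database is still gone
}
```

The fix lives in the credentials handed to the agent, not in extra round trips.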

It's a privilege issue, not an execution issue.


Exactly. That just reinforces the fact that the author is just blaming others instead of extracting any valuable insights from this "postmortem analysis".


He also seems to be lying: he wrote on Twitter that the agent was in plan mode. That part has to be exaggerated.


I can't say for sure, but I think Claude's plan mode is nothing more than part of the system prompt. I don't think it actually takes away the web request or file write tools. I say this because I could swear I've seen Claude go ahead and make changes even while we're in plan mode. Web requests certainly, because it can fetch docs and so forth.


You’re not alone, I’ve absolutely seen the same behavior occasionally with Opus in OpenCode where it takes actions it shouldn’t be able to in plan mode.


That sounds like OpenCode has a privilege bug too?


Considering it happens across both OpenCode and other apps like Claude and Codex, as well as across models, it seems like something inherent to the models themselves and not necessarily a bug in the apps wrapping them. But maybe there's more OpenCode et al. could be doing to prevent it.


The harnesses are the part of the stack responsible for tools, so it would be a bug there, not the model. The model itself isn’t doing anything but generating tokens. The harness gives it a blob of text telling it which tools exist, and the model may choose to tell the harness to call one.
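For illustration, if plan mode were enforced in the harness rather than suggested in the prompt, it might look like this sketch (tool names made up): mutating tools are simply never offered to the model in plan mode, so a tool call for them has nothing to bind to:

```rust
#[derive(PartialEq)]
enum Mode {
    Plan,
    Execute,
}

struct Tool {
    name: &'static str,
    mutates: bool, // does this tool write files / run commands?
}

// Enforcement in the harness: filter the tool list before handing it to
// the model, instead of asking the model nicely not to use write tools.
fn tools_for(mode: &Mode, all: &[Tool]) -> Vec<&'static str> {
    all.iter()
        .filter(|t| *mode == Mode::Execute || !t.mutates)
        .map(|t| t.name)
        .collect()
}

fn main() {
    let all = [
        Tool { name: "read_file", mutates: false },
        Tool { name: "web_fetch", mutates: false },
        Tool { name: "edit_file", mutates: true },
        Tool { name: "run_shell", mutates: true },
    ];
    assert_eq!(tools_for(&Mode::Plan, &all), vec!["read_file", "web_fetch"]);
    assert_eq!(tools_for(&Mode::Execute, &all).len(), 4);
}
```

Whether any given harness actually gates tools this way (versus only in the prompt) is exactly the open question in this subthread.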


"Plan" vs "execute" modes seem more like suggestions the models _mostly_ follow. I have absolutely had models (Codex and Sonnet/Opus) perform actions in plan mode they should never have been able to take, like editing files or starting to work on a plan that was just created.


I completely disagree. I think the author makes a fair point about safety concerns regarding AI tooling. The author sounds knowledgeable enough to me. Even if some of their suggestions are a bit crass, most of them aren't. Railway should most definitely not be putting backups on the same volume (even if documented). The AI should not have performed that operation when it had explicit rules not to. The industry has a lot of work to do in this department. I would be extremely pissed off too.

The whole "vibecoding" argument is stupid. Everyone is pissed because it's taking their jobs, and they say, "welp, you shouldn't have vibe coded then" when issues like this occur. Issues like this occurred and still occur without vibe coding, probably much more often by actual people than by AI. I'm frustrated too; I love coding. I've been doing it for 15 years. But either way, we have to get used to the idea that we won't be coding in the future. The whole industry is moving that way, and moving fast. You can't do anything to change it. You can't deny that you can complete projects 1000000x faster when coding with agents than by your own hands. Adapt. Stop complaining.


> The industry has a lot of work to do in this department

The “industry” has an answer to this problem. It’s called a blameless post-mortem.

Don't blindly externalise the blame onto everyone else; assume we work in an imperfect world and build safety around the process such that this doesn't / can't happen again.

If all you do is point fingers to shift the blame, then you'll have an infinite number of avoidable incidents to show for it.

> Issues like this occurred and still occur without vibe coding

Right and so you focus on fixing the elements of the process you can control.


> AI should not have done that operation when they have explicit rules not to.

How much experience do you have with LLMs?

One of the first lessons developers learn after working with LLMs a bit, is that the LLM will hallucinate, and you need to be alert and competent enough to recognize when it happens. Sort of like a car with steering assist requires you to pay attention and take personal responsibility for anything that happens.

As a consequence of that, one of the second lessons developers learn after working with LLMs a bit, is that there is no such thing as "an explicit rule" for LLMs. "Explicit rules" can still be ignored by an LLM under many different circumstances. The sooner the developer learns this fact, the sooner they can be productive with LLMs, and the less likely they are to delete their own production database and blame it on their tools with which they're unfamiliar.


> The author sounds knowledgeable enough to me.

Nope. Their complaint about having the API ask whether you should delete or not clearly shows the author has no idea how an API works. They could have said that a deletion API should require 2 different requests, one for the deletion request that returns a token and another for confirmation with the token returned by the first request, but that is not what they said.

Also, as others have said, this wouldn't have helped anyway, because the AI could just call both APIs one after another and the result would be the same, especially if the first request returns "call this other endpoint with this token to confirm your deletion request".


If that is true, this is a malicious complaint. Unless Safari has the same restrictions, of course.


> As an example, nixos keeps state around regarding user id/username mappings, to avoid giving the same user id to different users across time. So a fresh install of nixos might leave services unable to read their data files, because the file might be owned by a different user id.

One reason to set `mutableUsers = false`: https://mynixos.com/nixpkgs/option/users.mutableUsers.

> And if you activate and enable incus, for instance, it will probably create a bridge device: the device will remain in place after you remove incus, which will have implications for how your network/firewall works that your configuration will depend on but will not enforce or be able to reproduce.

Impermanence: https://github.com/nix-community/impermanence.

To be clear, I use neither. But you can get NixOS to be almost completely stateless (if this is something you care about) with a few changes. The power is there, but it is disabled by default because it is not the pragmatic choice in most cases.


> One reason to set `mutableUsers = false`: https://mynixos.com/nixpkgs/option/users.mutableUsers.

That doesn't help. Mutable users is about the lifecycle of the /etc/passwd file. What I'm referring to is /var/lib/nixos/uid-map.


I think macOS makes some trade-offs to give a supposedly better user experience as long as you're part of the 80%. If you're not, though, yes, it is painful.

For me the macOS display management experience is absolutely dreadful. I had the same issues as the author, and I even had to pay actual money for a third-party application (BetterDisplay) to fix some of them.

The most infuriating one for me is that I can't disable the internal MacBook display when I am connected to an external monitor without closing the lid. Why, you may ask? Because I want to keep using TouchID. However, this is impossible in macOS without an external app.


Which external app even allows that?


BetterDisplay allows you to disable the internal monitor while keeping the lid open; this way I can still use TouchID.

