Hacker News — tempest_'s comments

Plenty of options for putting auto steer on a dumb tractor already exist.

Cheap ones too -- aliexpress has them.

But there's more to agtech than driving a tractor around, a lot of what these big integrated systems do (at the high end) is very data driven -- determining where and how to plant, irrigate, fertilize, etc. There's a lot of integration work beyond just making the tractor drive.


> But there's more to agtech than driving a tractor around, a lot of what these big integrated systems do (at the high end) is very data driven -- determining where and how to plant, irrigate, fertilize, etc.

How difficult is this to implement outside of big ag-tech? I feel that a community of experienced farmers and programmers (or programmer-farmers) could tackle this.


I suspect most farmers would prefer the DIY add-on version of these to the single-manufacturer integrated one. A modern smartphone and a set of I/O sensors seem like they could do pretty much the entire job.

Right, but that has nothing to do with a vendor making a dumb tractor. Why do we need to dismissively move the conversation away from TFA? The data-driven approach is made up of several parts, and we're looking at a specific part.

Making a dumb tractor for the dumb-tractor use case is obviously a winning idea.

I just don't think you're going to effectively compete with big agtech by putting a bunch of parts in a box, shaking it, and hoping you end up with a beautifully integrated solution. Integration hell is the reason big commercial firms dominate when it comes to large integrated systems.


Why not? They sell telematics systems separately from cars. It’s possible to do this and it might not be too difficult depending on how the system is composed.


Admittedly, I'm not a farmer nor an expert in data-driven farming, but giving a farmer the ability to precisely drive a tractor in a field, so that planting seeds, applying fertilizer, and all the other steps are precise, would be a huge win. The settings used when doing that can easily come from bigFarmData gained from other sources. Can it be used even more precisely when everything is gathered/integrated by one company? That's a question I'm not by default saying yes to, but it seems like you do think it's true. Even if it is true, is that difference enough to make a farmer go broke because his DIY tractor behaved slightly differently than your solution? I'd posit that a farmer only being allowed to play the bigFarmData game by buying from one expensive vendor, which also forces any repairs to be expensive, will cause farmers to struggle financially unnecessarily.

Scale is a huge factor. It makes the most sense to invest in precision ag tech when you have enough acres that the investment pays off. At 5000+ acres, farms are using integrated systems that combine satellite data, on-tractor sensors, soil sensors, drone sensors, and in-field weather sensors with a lot of science to squeeze the most out of the land. At that scale, there's a lot of money invested in a season and you aren't looking for a DIY project; you need a production-quality product with proven scientific rigor. You probably don't have the manpower for a DIY project anyway; you are relying heavily on automation and outsourcing. And at the low end, it is more effort to implement any of this than you'll get out of it.

So a DIY solution is aiming for somewhere in the middle of the market -- enough at stake that it makes sense to bother, but not enough money to avoid the headache of DIY. It might make sense for some mid-sized farms in developing economies, but it seems like a narrow window to me.


The economics of farming (at least in the US) are brutal. Scaling up is really the only way to make a living long term. Some of this is due to equipment cost (look up how much a combine costs), and some is due to competition. It's not unusual for a farmer to be land rich and cash poor.

If you want to see a couple of guys learning how to farm from scratch, visit https://www.youtube.com/@spencerhilbert. Spencer and his brother made a bit of money off games and YouTube and have been starting out on corn and hay, as well as raising beef. It gives pretty good insight into how pervasive tech is in farming, and how, despite that, much of farming still relies on hard physical work.


I'm not really in the space, but all the CAD things I see lately are browser-based "cloud offerings".

I'm not sure if CAD stuff is just served by a basic graphics card at this point or if there is some server-side work going on.

The OS doesn't mean that much when every industry decided that Chrome was going to be their VM.


No one is using that cloud crap professionally. The bread and butter of the CAD world is Windows PCs with tons of RAM and certified GPUs.

> No one is using that cloud crap professionally.

I would bet there are at least some people using Onshape at their job. https://www.onshape.com/en/resource-center/case-studies/


Hardcore CAD systems like Solidworks or CATIA still aren’t browser based.

I suppose it depends on what you use it for (it doesn't look 1-to-1 between cloud and local), but it looks like both have offerings in the space:

https://www.3ds.com/cloud

https://www.solidworks.com/product/solidworks-xdesign

but like I said, I just see what gets advertised at me in YouTube ads


A carrier battle group can easily be seen and tracked by commercial satellite constellations.

At minimum they travel with 6 or 7 ships, leave a wake a mile long, and only go tens of miles an hour; it isn't a speed boat.

Here is an Indian carrier (formerly Russian) on Google Maps, and the US ones are even larger: https://www.google.com/maps/place/14%C2%B044'30.3%22N+74%C2%...

I think people forget how many satellites are pointed at all parts of the planet. They are used for crop reporting and weather and all sorts of shit. It isn't the 1960s, when only the superpowers had them and they dropped rolls of film.


Satellites aren't pointed at "all parts of the planet". They're generally taking regular photos of known locations, when the right type of satellite passes over. That's where you get lucky shots like the one you noticed. Then that satellite has to orbit, and there isn't another one nearby just ready to take another photo. Then the carrier changes direction...

Sure, any single one, but there are many companies, some with hundreds of satellites in orbit at any given time, who will point one wherever you like if you pay them enough.

Which is why you get things like this https://www.cnbc.com/2026/04/05/satellite-firm-planet-labs-t...

An aircraft carrier is not that fast; if you see it once, you know roughly what radius of circle it is going to be in for a while (ignoring the fact that they are likely going somewhere for a reason; it's not as if their job is to stay out of sight).

edit: aha that company literally lists it on their website https://www.planet.com/industries/maritime/
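The "radius of circle" point is easy to put numbers on. A rough sketch (the 30-knot top speed and the elapsed times are my assumptions, not from the thread):

```python
import math

KNOT_KMH = 1.852  # 1 knot = 1.852 km/h

def search_area_km2(hours_since_sighting, speed_knots=30):
    """Worst-case area a ship could occupy some hours after a last
    known fix, assuming it may steam in any direction at top speed."""
    radius_km = speed_knots * KNOT_KMH * hours_since_sighting
    return math.pi * radius_km ** 2

for h in (1, 3, 6, 12):
    r = 30 * KNOT_KMH * h
    print(f"{h:2d} h -> radius {r:6.0f} km, area {search_area_km2(h):10.0f} km^2")
```

Even 12 hours blind leaves only a ~1.4 million km^2 disc to search -- large, but nothing like the whole Pacific, which is roughly the point being made.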


This is literally the point: it's easy to tell them to point a satellite at Beirut and get pictures every 3 hours or whatever; it's much more difficult to tell them to point at a location in the middle of the Pacific Ocean... because you don't know the location in the first place.

Beirut doesn't move around a lot. Carriers do. While there are a lot of satellites pointing at the earth at any one moment, this isn't some kind of Hollywood super screen showing a real time image of the entire pacific. You just see whatever small patch the satellite happens to be pointing at.

And again, that's ignoring the part where America would probably start shooting down satellites.


>because you don't know the location in the first place

Do you seriously think China doesn't track US carrier movements?


Do you seriously think the US Navy doesn't avoid Chinese tracking? What kind of a question is that? Like, there's probably a magazine that lists the cruising destinations of most of the carriers, what ports they're going to stop at next, etc, because, you know, they're not at war and trying to maintain secrecy.

> Do you seriously think the US Navy doesn't avoid Chinese tracking?

How would they avoid having a Chinese satellite continuously track their movement? They have the capability to do that, and there is nothing the USA can do about it except shoot down all the Chinese satellites.

https://defencesecurityasia.com/en/china-three-satellites-tr...


US carrier groups probably pose the #1 strategic threat to the PRC in the Pacific. You can safely assume they throw whatever resources are necessary at the task of knowing their whereabouts.

I mean, you can try all you want, but there are limits to hiding a fleet of ships on the open sea. They are huge, emit immense heat signatures, and produce miles-long wakes while moving. As long as there are satellites overhead, they will be able to find them.

I suspect we might be talking past one another because we have different degrees of precision in mind: I'm not saying the Chinese could have a missile target lock on a carrier whenever they wanted, much less in wartime. Far from it. But I highly doubt you can reposition a carrier group without them catching wind of it within hours.


This is the sort of arms race that is going to change every year. I just read an article that claimed China has launched a system of satellites that use non-visual means to track ships in the Pacific (via... emissions or radar or something?), and China can certainly afford to put a bunch of them in orbit.

It's not impossible to track a carrier group via satellites, but it's not trivial either. You can't just open up a GUI, click on a satellite, and click the button that says "follow this carrier": satellites orbit the earth, ships can alter course when you don't have eyes on them, and so on and so forth.

And yeah, as you point out, there's a big difference between having a satellite picture showing a probable carrier group at X and Y coordinates and being able to actually strike the thing.



Now I’m contemplating just how small and light of an instrument could be carried on a Starlink-style satellite that could detect a large ship. A smallish COTS telescope, e.g. a Celestron 8SE ($1700 retail) could easily see a ship from the Starlink constellation altitude.

Never mind that the Starlink radio arrays are, well, radio arrays that quite effectively cover the whole planet. If you think of each satellite as a radio telescope, its resolution is crap and probably cannot disambiguate a carrier group from anything else (at least according to disclosed specs). But it would be quite interesting to build a synthetic aperture array out of multiple satellites. This would rely on emissions from the ships themselves, but I bet it could be done and could locate ships quite nicely.
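The "could easily see a ship" claim checks out against the Rayleigh diffraction limit, which gives the best-case optical resolution. The numbers below (203.2 mm 8SE aperture, 550 km Starlink-shell altitude, 550 nm light) are my assumptions:

```python
# Rayleigh criterion: theta ~ 1.22 * wavelength / aperture diameter.
APERTURE_M = 0.2032      # Celestron 8SE primary mirror
ALTITUDE_M = 550e3       # typical Starlink shell altitude
WAVELENGTH_M = 550e-9    # green light

theta_rad = 1.22 * WAVELENGTH_M / APERTURE_M
ground_resolution_m = theta_rad * ALTITUDE_M  # smallest resolvable feature at nadir

print(f"angular resolution: {theta_rad:.2e} rad")
print(f"ground resolution:  {ground_resolution_m:.1f} m")
```

That comes out to roughly 2 m per resolution element (ignoring atmosphere and pointing jitter), so a ~330 m carrier would span well over a hundred elements along its length.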


Seeing it isn't the issue. Scanning the oceans is the issue.

Carrier groups also don't emit radio when they aren't interested in being detected!


Citation needed.

All of those can be achieved with replaceable batteries.


Are you claiming it's not cheaper to embed batteries?

Citation needed. It seems pretty clear that a mechanism to allow a user to access a battery will increase complexity, making all the other properties harder to achieve.

Fairphone managed to do it, I'm sure companies with more budget than them can figure it out.

Not waterproof, and definitely big for its capacity.

Yes, hence why I'm sure companies with 100x the budget can do better.

You're asking for proof that effective waterproof phones with removable batteries exist?

https://m.gsmarena.com/results.php3?chkRemovableBattery=sele...


You're proving the point.

1) iPhones, for example, are IP68-rated, while those are just IPX8/9.

2) Do you want to be limited to the universe of those search results? Do you want to buy a Sony Xperia?

You can't make batteries directly replaceable at the same quality and price; there are tradeoffs. Obviously waterproof non-embedded batteries exist, just like you could make a removable battery as slim as an embedded one -- with massive tradeoffs: its capacity will be terrible. No one is surprised a removable battery can be waterproof, but the point is there are tradeoffs.


I don't see those options in the search results either way

In any case, we heard the same sort of rationalization for getting rid of the headphone jack, so color me extremely skeptical -- yes, of course there are going to be trade-offs, but what a coincidence that headphone jacks, replaceable batteries, and SD card slots have all gone by the wayside, which just so happens to allow for upselling Bluetooth headphones and cloud storage.


> just ipx8/9

Do you actually need it? For what?


Kinda weird to argue for longer life via battery replacement and against longer life via contaminant protections. My phone is regularly covered in chalk dust, sawdust, water, …

1 mm thickness is a fine trade-off

No, the list was "Cheaper, higher battery capacity, water proof, smaller, stronger". I don't think it's all that controversial to say that there are engineering tradeoffs to be made here. You can make a waterproof phone with a removable battery, but you can't make a waterproof phone with a removable battery that is as good or better than an iPhone in every other respect too. If you could, iPhones would already have removable batteries.

> If you could, iPhones would already have removable batteries.

A crazy take since apple has very clearly made anti-consumer moves in the past.

If having a baked-in battery caused there to be 1% more iPhone sales, which would they choose?

You were likely nodding along when Jobs was out there telling people they were holding the phone wrong.


My point is that if it's all of those things (crucially, including cheaper), then it's a Pro-Apple move to manufacture iPhones that way. There would be no downside. To the extent they make anti-consumer moves at all (which I'll cede for the sake of keeping this brief), they do so because those moves are pro-Apple.

The crazy take is thinking that a design choice that causes there to be 1% more iPhone sales is an anti-consumer move.

Planned obsolescence is anti-consumer and increases sales. So yes, anti-consumer design can increase sales volume; that is often the point.

Replaceable batteries let you use your phone longer, which means people will take longer to buy a new phone, reducing iPhone sales. Such anti-consumer moves require regulation to fix, since there is no incentive for the company to be pro-consumer here.


That relies on the questionable assumption that consumers don't understand the overall value proposition.

The point is that the incentives are not pointing towards "make a better phone"; they are pointing towards "sell more phones".

Sometimes "better phone" drives "sell more phones"

Sometimes it doesn't.


Very often it does, certainly more often than a government regulation results in a better product.

Can you explain your reasoning? Is there some minimum sales threshold required, and 2 million iPhones wouldn't meet it?

If people buy more of a product, that's because it's better in some way. Maybe it's cheaper, or maybe it's better quality.

Oh yes, the famous Galaxy XCover 7 Pro. People are camping out in the rain waiting for their release because replaceable batteries are under such high demand.

So we're moving the goalposts from "these features can coexist" to "such a phone has to be popular"? Why don't you skip to the end and tell me where they're going to end up?

If phones with those features are not for sale, how can you draw any conclusion about their popularity? I've yet to meet a single person who says, "I sure am glad I can't use fingerprint unlock on my iPhone anymore", but obviously it's not worth leaving the entire ecosystem.

Recall also that building Android phones barely makes any money, so it's not exactly a business teeming with disruption


It'll increase the size of the case by a small amount but a battery cell is a battery cell... Rip open an old device and you'll see.

It really really really depends on how you are using it and what you are using it for.

I can get LLMs to write most of the CSS I need by treating them like a slot machine and pulling the handle until they spit out what I need; this doesn't cause me to learn CSS at all.


I find it a lot more useful to dive into bugs involving multiple layers and versions of 3rd-party dependencies. Deep issues where, when I see the answer, I completely understand what it did to find it and what the problem was (so in essence I wouldn't have learned anything diving deep into the issue myself), but it was able to do so much more efficiently than me referencing code across multiple commits on GitHub, docs, etc...

This allows me to focus my attention on important learning endeavors, things I actually want to learn and are not forced to simply because a vendor was sloppy and introduced a bug in v3.4.1.3.

LLMs excel when you can give them a lot of relevant context and they behave like an intelligent search function.


Indeed, many if not most bugs are intellectually dull. They're just lodged within a layered morass of cruft and require a lot of effort to unearth. It is rarely intellectually stimulating, and when it is as a matter of methodology, it is often uninteresting as a matter of acquired knowledge.

The real fun of programming is when it becomes a vector for modeling something, communicating that model to others, and talking about that model with others. That is what programming is: modeling. There's a domain you're operating within. Programming is a language you use to talk about part of it. It's annoying when a distracting and unessential detail derails this conversation.

Pure vibe coding is lazy, but I see no problem with AI assistants. They're not a difference in kind, but of degree. No one argues that we should throw away type checking just because it reduces the cognitive load of inferring the types of expressions in your head, as you must in dynamic languages. The reduction in wasteful cognitive load is precisely the point.

Quoting Aristotle's Politics, "all paid employments [..] absorb and degrade the mind". There's a scale, arguably. There are intellectual activities that are more worthy and better elevate the mind, and there are those that absorb its attention, mold it according to base concerns, drag it into triviality, and take time away from higher pursuits.


I agree with your definition of programming (and I’ve been saying the same thing here), but

> It's annoying when a distracting and unessential detail derails this conversation

there are no such details.

The model (the program) and the simulation (the process) are intrinsically linked, as the latter is what gives the former its semantics. The simulation apparatus may be noisy (when its own model blends into our own), but corrective and transformative models exist (abstraction).

> No one argues that we should throw away type checking,…

That's not a good comparison. Type checking helps with cognitive load in verifying correctness, but it increases it when you're not sure of the final shape of the solution. It's a bit like pen vs pencil in drawing: pen is more durable and cleaner, while pencil feels more adventurous.

As long as you can pattern match to get a solution, an LLM can help you, but that does require having encountered the pattern before in order to describe it. It can remove tediousness, but any creative usage is problematic as it has no restraints.


> there is no such details.

Qua formal system, yes, but this is a pedantic point as the aim - the what - of a system is more important than the how. This distinction makes the distinction between domain-relevant features and implementation details more conspicuous. If I wish to predict the relative positions of the objects of our solar system, then in relation to that end and that domain concern, it matters not whether the underlying model assumes a geocentric or heliocentric stance in its model (that tacitly is the deeper value of Copernicus's work; he didn't vindicate heliocentrism, he showed that a heliocentric model is just as explanatory and preserves appearances equally well, and I would say that this mathematical and even philosophical stance toward scientific modeling is the real Copernican revolution, not all the later pamphleteer mythology).

Of course, in relation to other ends and contexts, what were implementation details in one case become the domain in the other. If you are, say, aiming for model simplicity, then you might prefer heliocentrism over geocentrism with all its baroque explanatory or predictive devices.

The underlying implementation is, from a design point-of-view, virtually within the composite. The implementation model is not of equal rank and importance as the domain model, even if the former constrains the latter. (It's also why we talk about rabbit-holing; we can get distracted from our domain-specific aim, but distraction presupposes a distinction between domain-specific aim and something that isn't.) When woodworking, we aren't talking about quantum mechanical phenomena in the wood, because while you cannot separate the wood from the quantum mechanical phenomena as a factual matter - distinction is not separation - the quantum is virtual, not actual with respect to the wood, and it is irrelevant within the domain concerning the woodworker.

So, if there is a bug in a library, that is, in some sense, a distraction from our domain. LLMs can help keep us on task, because our abstractions don't care how they're implemented as long as they work and work the way we want. This can actually encourage clearer thinking. Category mistakes occur in part because of a failure to maintain clear domain distinctions.

> That’s not a good comparison. Type checking [...]

It reduces cognitive load vis-a-vis understanding code. When I want to understand a function in a dynamic language, I often have to drill down into composing functions, or look at callers, e.g., in test cases to build up a bunch of constraints in my mind about what the domain and codomain is. (This can become increasingly difficult when the dynamic language has some form of generics, because if you care about the concrete type/class in some case, you need even more information.)

This cognitive load distracts us from the domain. The domain is effectively blurred without types. Usually, modeling something using types first actually liberates us, because it encourages clearer thinking upfront about the what instead of jumping right into how. (I don't pretend that types never increase certain kinds of burdens, at least in the short term, but I am talking about a specific affordance. In any case, LLMs play very nicely with statically-typed languages, and so this actually reduces one of the argued benefits of dynamic languages as ostensibly better at prototyping.)

> As long as you can pattern match to get a solution [...]

Indeed, and that's the point. LLMs work so well precisely, because our abstractions suck. We have lot of boilerplate and repetitive plumbing that is time-consuming and tedious and pulls us away from the domain. Years of programming research and programming practice has not resolved this problem, which suggests that such abstractions are either impractical or unattainable. (The problem is related to the philosophical question whether you can formalize all of reality, which you cannot, and certainly not under one formal system.)

I don't claim that LLMs don't have drawbacks or tradeoffs, or require new methodologies to operate. My stance is a moderate one.


Yes but that’s why you ask it to teach you what it just did. And then you fact-check with external resources on the side. That’s how learning works.

> Yes but that’s why you ask it to teach you what it just did.

Are you really going to do that though? The whole point of using AI for coding is to crank shit out as fast as possible. If you’re gonna stop and try to “learn” everything, why not take that approach to begin with? You’re fooling yourself if you think “ok, give me the answer first then teach me” is the same as learning and being able to figure out the answer yourself.


I would consider this a benefit. I've been a professional for 10 years and have successfully avoided CSS for all of it. Now I can do even more things and still successfully avoid it.

This isn’t necessarily a bad thing. I know a little css and have zero desire or motivation to know more; the things I’d like done that need css just wouldn’t have been done without LLMs.

This exactly. My CSS designs have noticeably gotten better without me, the writer, getting any better at all.

But were you trying to learn CSS in the first place?

Has Apple been a serious development platform in the last 20 years?

I know a lot of devs like Apple hardware because it is premium, but OSX has always been "almost Linux", controlled by a company that cares more about iTunes than it does about the people using their hardware to develop.


At least 9 out of every 10 software engineers I know do all their development on a Mac. Because this sample is from my experience, it's skewed to startups and tech companies. For sure, there are lots of devs outside those areas, but tech companies are a big chunk of the world's developers.

So yea I would say Apple is a “serious development platform” just given how much it dominates software development in the tech sector in the US.


I have the feeling a lot of people take Macs because the other option is a locked down Windows, and Linux is not offered.

This. I ran Linux at work until last year, when it was finally disallowed. I went with locked-down Mac over locked-down Windows.

The hardware for a Linux laptop right now is not great, especially for an arm64 machine. Even if the hardware is good, the chassis and everything else is typically plastic and shitty.

That is a surprising sentiment. Most Dell and Lenovo laptops work just fine and are usually of reasonably good build quality (non-plastic chassis, etc.).

arm64 is, however, mostly bad. The only real contender for Linux laptops (outside of Asahi) was Snapdragon's chips, but the HW support there was lacking, iirc.


They give us Dell Linux machines at work. They suck so bad and we have so many problems: overheating, the camera is terrible, and performance is bad relative to the huge weight of the device. Everything is a huge step down from Macs.

Whenever I see Linux people comparing Linux and Mac I'm amazed at the audacity. They are not in the same league. Not by a mile. Even the CLI is more convenient on the Mac which is truly amazing to me.


How is the Mac CLI more convenient? There isn't even a package manager in the box, they ship loads of old outdated tools too. Plus there's the whole BSD/GNU convention thing you have to watch out for.

I don't find my ThinkPad running Linux overheats, nor is it particularly heavy. And performance is comparable to the similarly priced MBP at the time. Camera sucks, but compared to my Surface so do the Macs...


Prefer my Konsole setup on KDE and I use both interchangeably all day tbh. Camera yea. The irony is heating issues become less of an issue with arm.

We are lucky in that we can choose our machines (within reason, and no real support if things get broken) and run an arch flavour.

I use ThinkPad X1 Carbons and have nearly zero issues. The hardware is not quite as nice as a MacBook, but it does the job and is nice enough.


Recently an article on the HN front page was about a guy who had to file down his MBP because the front edge of it was too sharp and resting his wrists on it hurt his hands. At least two people in the comment section noted how the sweat on their hands over time caused the sharp edge of the MBP chassis to pit and turn into a sharp serrated edge that actually cut their hands.

You can say other laptops are "plastic and shitty" all you want, but Apple's offerings aren't necessarily the best thing out there either. I personally like variety, and you don't get that from Apple. I can choose from hundreds of form factors from a lot of vendors that all run Linux and Windows just fine, plastic or not.


[flagged]


Well they do have the Max+ 395 - 128GB beast https://frame.work/desktop

Which is non-trivial. The laptop scene is particularly difficult, though.


I have a personal Framework 13 and a work-issued MacBook Pro. I love Framework’s mission of providing user-serviceable hardware; we need upgradable, serviceable hardware. However, the battery life on my MacBook Pro is dramatically better than on my Framework. Moreover, Apple Silicon offers excellent performance on top of its energy efficiency. While I use Windows 11 on my Framework, I prefer macOS.

Additionally, today’s sky-high RAM and SSD prices have caused an unexpected situation: Apple’s inflated prices for RAM and SSD upgrades don’t look that bad in comparison to paying market prices for DIMMs and NVMe SSDs. Yes, the Framework has the advantage of being upgradable, meaning that if RAM and SSD prices decrease, then upgrades will be cheaper in the future, whereas with a Mac you can’t (easily) upgrade the RAM and storage once purchased. However, for someone who needs a computer right now and is willing to purchase another one in a few years, then a new Mac looks appealing, especially when considering the benefits of Apple Silicon.


>>At least 9 out of every 10 software engineers I know does all their development on a mac

I work in video games, you know, an industry larger than film -- 10 out of 10 devs I know are on Windows. I have a work-issued Mac just to do some iOS dev, and I honestly don't understand how anyone can use it day to day as their main dev machine; it's just so restrictive in what the OS allows you to do.


It makes sense that you use Windows in a video game company. We use windows as well at work and it's absolutely awful for development. I would really prefer a Linux desktop, especially since we exclusively deploy to Linux.

I work as a consultant for the position, navigation, and timing industry, and 10 of 10 devs were on Windows. Before that I worked for a big Hollywood company, and while scriptwriters and VP executive assistants had Macs, everyone technical was on Windows. Movies were all edited and color graded on Windows.

>it's just so restrictive in what the OS allows you to do.

The people using them typically aren't being paid to customize their OS. The OS is good for if you just want to get stuff done and don't want to worry about the OS.


Weird... macOS is still completely open in my experience. Can you give an example?

I compile a tool we use and send it to another developer; they can't open it without going through System Settings because the OS thinks it's unsafe. There is no easy blanket way to disable this behaviour.

We also inject custom dylibs into clang during compilation, and starting with Tahoe that started to fail -- we discovered that it's because of SIP (System Integrity Protection). We reached out to Apple and got the answer that "we will not discuss any functionality related to operation of SIP". Great. So now we either have to disable SIP on every development machine (which IT is very unhappy about) or re-sign the clang executable with our own dev key so that the OS leaves us alone.


If it's being sent to another developer then asking them to run xattr -rd com.apple.quarantine on the file so they can run it doesn't seem insurmountable. I agree that it's a non-starter to ask marketing or sales to do that, but developers can manage. Having to sign and then upload the binary to Apple to notarize is also annoying but you put it in a script and go about your day.

But Apple being "completely open", it is not.


If SIP is kicking in, it sounds like you're using the clang that comes with Apple's developer tools. Does this same issue occur with clang sourced from homebrew, or from LLVM's own binary releases?

Yes, it kicks in even with non-Apple-supplied clang (most notably, with the clang supplied as part of the Android toolchain, since we sometimes build Android on macOS, and re-signing the Google-supplied clang with our own certificate is now a regular thing every time an update is released).

> We also inject custom dlibs into clang during compilation

I am curious what you are doing


We use Unreal Build Accelerator, which injects a custom dylib into clang to intercept the compilation process and distribute it to worker machines.

I’m curious why this needs to be code injection and not, like, a shell script?

Because... it's official behaviour that is fully supported by clang. If you want to add a hook on compilation start, it's literally the documented way - you include your own dylib with the necessary overrides and then you can call your own methods at each compilation step. I'm not even sure how you'd do it with a shell script: you need knowledge of all the compilation and linking units, which... you have from within clang.

What's the interface this uses? I don't think I am familiar with it

There's a very good explanation in this presentation :-)

https://static.linaro.org/connect/yvr18/presentations/yvr18-...


It is a weird situation. Apple products are consumer products but they make us use them as development hardware because there is no other way to make software for those products.

Making software for other Apple products is pretty low on the list of reasons I use a MBP.

128GB of RAM and an M4 Max makes for a very solid development machine, and the build quality is a nice bonus.


An artificial limit on the number of VMs you are allowed to launch doesn't make it solid.

macOS* VMs. And if you don’t care about that, is it no longer solid?

> Has apple been a serious development platform in the last 20 years?

This is one of those comments that is so far away from reality that I can’t tell if it’s trolling.

To give an honest answer: Using Macs for serious development is very common. At bigger tech companies most employees choose Mac even when quality Linux options are available.

I’m kind of interested in how someone could reach a point where they thought macs were not used for software development for 20 years.


> I’m kind of interested in how someone could reach a point where they thought macs were not used for software development for 20 years.

If you work with engineering or CAD software then Macs aren't super common at all. They're definitely ubiquitous in the startup/webapp world, but not necessarily synonymous with programming or development itself.


Most "serious" companies do not support Linux in their IT infrastructure. I've begged to run Linux, but it's a hard no from IT. They only support Windows and macOS, and that's all. So I choose a Windows desktop, because I am not a fan of Apple. Having been forced to use Macs in past jobs, I'll choose Windows every time. I liked being able to dual-boot Windows on a MBP in the past, but that is no longer an option.

Anything being developed for the Apple ecosystem requires use of the Apple development platform. Maybe the scope could be called "unserious," but the scale cannot be ignored.

I am aware.

However, having used Xcode at some point 10 years ago, my belief is that the app ecosystem exists in spite of it and that people would never choose it given the choice.


For me at least, not being Linux is a feature. Linux has always been “almost Unix” to the point where now it has become its own thing for better or worse. OS X was never trying to be Linux. It would be better if we still had a few more commercial POSIX implementations.

That is fair, but in my experience most devs are targeting Linux servers, not BSD (or any other flavour), which is helped by OSX. If OSX were Linux-derived it would suit them just as well.

edit: I suppose I should also note that the vast majority of people developing on MacBooks (in my experience anyway) are actually targeting Chrome.


> I suppose I should also note that the vast majority of people developing on MacBooks (in my experience anyway) are actually targeting Chrome.

Point taken. Most developers probably make do with Linux containers rather than macOS VMs.


There is no reality that macOS could be based on Linux.

Turns out, an operating system is more than just a kernel with some userspace crap tacked on top, unlike what Linux distros tend to be.


> Turns out, an operating system is more than just a kernel with some userspace crap tacked on top, unlike what Linux distros tend to be.

This is also my opinion of OSX; let's not pretend that the userland mess is the most beautiful part of OSX.

Apple has great kernel and driver engineering for sure, but once you go up the stack it's duct tape upon duct tape, and you'd better not upgrade your OS too quickly before they fix the next pile they've just added.


Heterogeneity is the feature. The Linux ecosystem is better off for it (systemd, Wayland, dconf, epoll, inotify are all based on ideas that were in OS X first), and not being beholden to Linux is a competitive advantage for Apple; everyone wins.

> Has apple been a serious development platform in the last 20 years?

i dont think anyone asks this question in good faith, so it may not even be worth answering. see:

> I know a lot of devs like apple hardware because it is premium but OSX has always been "almost linux" controlled by a company that cares more about itunes then it does the people using their hardware to develop.

yea fwiw macs own for multi-target deployments. i spin up a gazillion containers in whatever i need. need a desktop? arm native linux or windows installations in utm/parallels/whatever run damn near native speed, and if im so inclined i can fully emulate x86/64 envs. dont run into needing to do that often, but the fact that i can without needing to bust out a different device owns. speed penalty barely even matter to me, because ive got untold resources to play around with in this backpack device that literally gets all day battery. spare cores, spare unified mem, worlds my oyster.

i was just in win xp 32bit sp2 few weeks ago using 86box compiling something in a very legacy dependent visual studio .net 7 environment that needed the exact msvc-flavored float precision that was shipping 22 years ago, and i needed a fully emulated cpu running at frequencies that was going to make the compiler make the same decisions it did 22 years ago. never had to leave my mac, didnt have to buy some 22 year old thinkpad on ebay, this thing gave me a time machine into another era so i could get something compiled to spec.

these techs arent heard of, but its just one of many scenarios where i dont have to leave my mac to get something done. to say its a swiss army knife is an understatement. its a swiss army knife that ships with underlying hardware specs to let you fan out into anything.

for development i have never been blocked on macos in the apple silicon era. i have been blocked on windows/linux developing for other targets. fwiw i use everything, im loyal to whoever puts forth the best thing i can throw my money at. for my professional life, that is unequivocally apple atm. when the day comes some other darkhorse brings forth better hardware ill abandon this env without a second thought. i have no tribalistic loyalties in this space, i just gravitate towards whoever presents me with the best economic win that has the things im after. we havent been talking about itunes for like a decade.


Apple had real Unix a decade before the Linux crap, a bad Unix copy, was made. NeXTSTEP was much better than the Linux crap. "A budget of bad ideas" is what Alan Kay said about Linux [1], and he invented the personal computer.

My 1987-1997 ISP was based on several different Unixes running on Apple hardware, probably long before you were born.

Apple built several supercomputers.

[1] https://www.youtube.com/watch?v=rmsIZUuBoQs

[2] Founder School Session: The Future Doesn't Have to Be Incremental https://www.youtube.com/watch?v=gTAghAJcO1o


Alan Kay invented a dead end (smalltalk). Meanwhile Linux became the future.

Apple had a terrible Unix until they bought NeXT.


Are you talking about A/UX? That was one of the first Unix systems I was exposed to.

Yes but I had others too. BSD on both 68000 and PowerPC

Did you ever experiment with MachTen? I'm thinking of putting it on an old Mac.

Yes, I ran it too, on 68000 and PowerPC Macs. I preferred MacOS with the whole MPW environment and tools on top; the GUI was much better: a full WYSIWYG text editor that was also the command line, so you could compose text, copy and paste, and also execute it. But that was invented with the workspace in Smalltalk-76 and recreated with MPW.

Email me if you need help restoring it on your Mac, or if you need parts to revive your hardware. I have at least one of every Mac since 1982 (yes, I know the Lisa was introduced in January 1983) including all floppies, CD-ROMs, books, screens, keyboards, mice, AppleTalk. Although some parts have rusted or decayed beyond repair. I hope someday somebody will buy the whole museum from me.

The best quality Unix we ran was BSDi, you'll find some of that still in NetBSD, OpenBSD and maybe FreeBSD.

The coolest Unix was IRIX though, but that was because of the graphics code, not the Unix kernel.


Yeah, they were that, and for the last 20 years they have been the iphone company.

It's interesting how "real Unix" is still thrown around as a badge of prestige when Linux basically runs the world now.

> fault tolerant distributed systems

I mean, there were mainframes which could be described as that. IBM just fixed it in hardware instead of software, so it's not like it was an unknown field.


Even if that were actually true (it’s not, in important ways), Google showed you could do this cheaply in software instead of expensively in hardware.

You’re still hand-waving away things like inventing a way to make map/reduce fault tolerant, automatic partitioning of data, and automatic scheduling - none of which existed before, and which made map/reduce accessible. Mainframes weren’t doing this.

They pioneered how you durably store data on a bunch of commodity hardware through GFS - others were not doing this. And they showed how to do distributed systems at a scale not seen before because the field had bottlenecked on however big you could make a mainframe.


I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.

I am willing to hear arguments for other approaches.


Not the OP, but: -march says the compiler can rely on the features of that particular CPU architecture family, which is broken out by generation. In the worst case the compiler could in theory generate code that does not run on older CPUs of the same family or on CPUs from different vendors.

-mtune says "generate code that is optimised for this architecture", but it doesn't enable arch-specific instructions.

Whether these are right or not depends on what you are doing. If you are building gentoo on your laptop you should absolutely -mtune=native and -march=native. That's the whole point: you get the most optimised code you can for your hardware.

If you are shipping code for a wide variety of architectures - and, crucially, shipping it in binary form - then you want to think more about what you support. You could do either: if you're shipping standard software, pick a reasonable baseline (check what your distribution uses in its CFLAGS). If, however, you're shipping compute-intensive software, perhaps you load a shared object per CPU family, or build your engine in place for best performance. The Intel compiler quite famously optimised per family, included all the copies in the output, and selected the worst one on AMD ;) (https://medium.com/codex/fixing-intel-compilers-unfair-cpu-d...)


> Not the OP, but: -march says the compiler can assume that the features of that particular CPU architecture family, which is broken out by generation, can be relied upon. In the worst case the compiler could in theory generate code that does not run on older CPUs of the same family or from different vendors.

Or on newer CPUs of the same vendor (e.g. AMD dropped some instructions in Zen that Intel didn't pick up) or even in different CPUs of the same generation (Intel market segmenting shenanigans with AVX512).


Just popping in here because people seem to be surprised by

> I build on the exact hardware I intend to deploy my software to and ship it to another machine with the same specs as the one it was built on.

This is exactly the use case in HPC. We always build with -march=native and go to some trouble to enable all the appropriate vectorization flags (e.g., for PowerPC) that don't come along automatically with the -march=native setting.

Every HPC machine is a special snowflake, often with its own proprietary network stack, so you can forget about binaries being portable. Even on your own machine you'll be recompiling your binaries every time the machine goes down for a major maintenance.
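A habit that helps on such machines (assuming GCC; the exact output format varies by version) is asking the compiler what -march=native actually resolved to on the current node, since login and compute nodes can differ:

```shell
# Query what -march=native selects on this host (GCC-specific -Q flag;
# guarded so the snippet degrades gracefully where gcc is absent).
if command -v gcc >/dev/null 2>&1; then
    gcc -march=native -Q --help=target 2>/dev/null | grep -m1 -- '-march=' \
        || echo "could not query target flags"
else
    echo "gcc not found on this host"
fi
```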


If you get enough of them they can start to look like cattle.

Still, they are all the same breed.


I'm willing to hear arguments for your approach?

it certainly has scale issues when you need to support larger deployments.

[P.S.: the way I understand the words, "shipping" means "passing it off to someone else, likely across org boundaries" whereas what you're doing I'd call "deploying"]


So, do you see now the assumptions baked in your argument?

> when you need to support larger deployments

> shipping

> passing it off to someone else


On every project I've worked on, the PC I've had has been much better than the minimum PC required. Just because I'm writing code that will run nicely enough on a slow PC, that doesn't mean I need to use that same slow PC to build it!

And then, the binary that the end user receives will actually have been built on one of the CI systems. I bet they don't all have quite the same spec. And the above argument applies anyway.


So I take it you don't do cloud, embedded, game consoles, or mobile devices.

Quite hard to build on the exact hardware for those scenarios.


What?! seriously?!

I’ve never heard of anyone doing that.

If you use a cloud provider and a remote development environment (VSCode Remote/JetBrains Gateway) then you’re wrong: cloud providers swap out the CPUs without telling you, and can sell newer CPUs at older prices if there’s less demand for the newer CPUs; you can’t rely on that.

To take an old naming convention, even an E3-Xeon CPU is not equivalent to an E5 of the same generation. I’m willing to bet it mostly works but your claim “I build on the exact hardware I ship on” is much more strict.

The majority of people I know use either laptops or workstations with Xeon workstation or Threadripper CPUs— but when deployed it will be a Xeon Scalable datacenter CPU or an Epyc.

Hell, I work in gamedev and we cross compile basically everything for consoles.


… not everyone uses the cloud?

Some people, gasp, run physical hardware, that they bought.


So you buy the exact same generation of Intel and AMD chips for your developers as for your servers and your customers? And encode this requirement into your development process for the future?

No? That would be ridiculous. You’re inventing dumb scenarios to make your argument work.

It’s more like: some organizations buy many of the same model of server, make one or two of them their build machines, and use the rest as production. So it’d be totally fine to use march=native there.

You just wouldn’t use those binaries anywhere else. Devs would simply do their own build locally (why does everyone act like this is impossible?) and use that. And obviously you don’t ship these binaries to customers… but, why are we suddenly talking about client software here? There’s a whole universe of software that exists to be a service and not a distributed binary, we’re clearly talking about that. Said software is typically distributed as source, if it’s distributed at all.

There’s a thousand different use cases for compiling software. Running locally, shipping binaries to users, HPC clusters, SaaS running on your own hardware… hell, maybe you’re running an HFT system and you need every microsecond of latency you can get. Do you really think there are no situations ever where -march=native is appropriate? That’s the claim we’re debunking, the idea that "-march=native is always always a mistake". It’s ridiculous.


We use physical hardware at work, but it's still not the way you build/deploy unless it's for a workstation/laptop type thing.

If you're deploying the binary to more than one machine, you quickly run into issues where the CPUs are different and you would need to rebuild for each of them. This is feasible if you have a couple of machines that you generally upgrade together, but quickly falls apart at just slightly more than 2 machines.


And all your deployed and dev machines run the same spec- same CPU entirely?

And you use them for remote development?

I think this is highly unusual.


Lots of organizations buy many of a single server spec. In fact that should be the default plan unless you have a good reason to buy heterogeneous hardware. With the way hardware depreciation works they tend to move to new server models “in bulk” as well, replacing entire clusters/etc at once. I’m not sure why this seems so foreign to folks…

Nobody is saying dev machines are building code that ships to their servers though… quite the opposite, a dev machine builds software for local use… a server builds software for running on other servers. And yes, often build machines are the same spec as the production ones, because they were all bought together. It’s not really rare. (Well, not using the cloud in general is “rare” but, that’s what we’re discussing.)


There is a large subset of devs who have worked their entire career on abstracted hardware which is fine I guess, just different domains.

The size of your L1/L2/L3 cache or the number of TLB misses doesn't matter too much if your python web service is just waiting for packets.


Because might makes right and any entity with the power to legally put up a fight is in on the game (or wants to be)


It isn't that weird. Just look at the gemini-cli repo. It's a gong show. The issue isn't just that LLMs can sometimes be wrong; it's that existing SDLCs were never meant to iterate this quickly.

If the system (the code base, in this case) is changing rapidly, it increases the probability that any given change will interact poorly with any other given change. No single person in those code bases can have a working understanding of them because they change so quickly. Thus when someone LGTMs an LLM-generated PR, they likely do not have a great understanding of the impact it is going to have.

