Hacker News | past | comments | ask | show | jobs | submit | ezoe's comments

What you describe already happened when programming became a task of using search engines, passing data between libraries, and delegating coding to offshore workers.

I doubt the data in Atlassian is anywhere close to clean or organic. It was designed as a hell that forces shit down the throats of real programmers, who do their real work outside of Atlassian.

Programmer-adjacent data can already be consumed from git repos; Atlassian has the PM data.

Umm? Is there a single step Atlassian got right? It's a cancer of software development that the suits force us to swallow, while the real development and the useful documents live outside their service because it's so stressful to use.

Typical American company behaviour, I guess.

I guess the traditional moratorium period for vulnerability publication is going to fade away as we rely on AI to find vulnerabilities.

If a publicly accessible AI model with a very cheap fee can find it, it's natural to assume attackers have already found it by the same method.


It’s the wrong way to look at things. Just because the CIA can know your location (if they want to), would you share your live location with everyone on the internet?

An LLM is a tool, but people still need to know: what, where, how.


Not sure if that's a great example. If there's a catastrophic vulnerability in a widely used tool, I'd sure like to know about it even if the patch is taking some time!

The problem with this is that the credible information "there's a bug in widely used tool x" will soon (if not already) be enough to trigger massive token expenditure by various others, who will then also discover the bug, so this will often effectively amount to disclosure.

I guess the only winning move is to also start using AI to rapidly fix the bugs and have fast release cycles... Which of course has a host of other problems.


> "there's a bug in widely used tool x"

There's a security bug in OpenSSH. I don't know what it is, but I can tell you with statistical certainty that it exists.

Go on and do with this information whatever you want.


I think in the context of these it’s more of “we’ve discovered a bug” which gives you more information than “there is a bug”. The main difference in information being that the former implies not only there is a bug but that LLMs can find it.

If you're a random person on the Internet, I can indeed not do much with that information.

But if you're a security research lab that a competing lab can ballpark the funding of and the amount of projects they're working on (based on industry comparisons, past publications etc.), I think that can be a signal.


Wrong argument: since it's not just available to "the CIA" but to every rando under the sun, people should be notified immediately if "tracking" them is possible, and mitigation measures should become standard practice.

You and I would need to know "what where how".

There are many attackers who are just going to feed every commit of every project of interest into their LLMs and tell them: "determine if this is patching an exploit and, if so, write the exploit". They don't need targeting clues. They're already watching everything coming out of upstream.

Do not make the mistake of modeling the attackers as "some guy in a basement with a laptop who decided just today to start attacking things". There are nation-state attackers. There are other attackers less funded than that who still may not particularly blink at the plan I described above. Putting out the commit was sufficient to tell them, even today, exactly what the exploit was, and the cheaper AI time gets, the less targeting info they're going to need as they just grab everything.

I suggest modeling the attackers like a Dark Google. Think of them as well-funded, with lots of resources, and this is their day job, with dedicated teams and specialized positions and a codebase for exploits that they've been working on for years. They're not just some guy who wants to find an exploit maybe and needs huge hints about what commit might be an issue.


>Do not make the mistake of modeling the attackers as "some guy in a basement with a laptop who decided just today to start attacking things". There are nation-state attackers.

The parent's point is that even if those capable attackers can exploit it anyway, that doesn't mean it should be handed on a silver platter to every script kiddie and guy in a basement with a laptop. The former are a much smaller group than the latter.


This ignores that publicly releasing the patch is itself what motivates the attack.

> LLM is a tool, but people still need to know — what where how.

And the moment the commit lands upstream, they know what, where, and how.

The usual approach here is to backchannel patched versions to the distros and end users before the commit ever goes into upstream. Although obviously, this runs counter to some folks' expectations about how open source releases work.


No. You operate AS IF they know your location.

In other words, it becomes part of your threat model.


> what

> we rely on AI to find it

> where

> the upstream commit

> how

> publicly accessible AI model with very cheap fee


I guess it's time to consider ditching GitHub. Everything purchased by Microsoft is destined to rot.


Even aside from this, their reliability has been absolutely terrible since they took it over. It's down so often we had to set up Slack notifications directly to the devs to try to take some of the pressure off our ops teams.

They must be migrating it to Hyper-V or something. Brutal.


Even aside from that, what are we doing centralizing FOSS project hosting on a closed source Microsoft platform?


It was much better than the closed-source SourceForge that existed before it. A lot of small projects don't have the energy to self-host. Plus, for small projects the barrier to entry is an issue. I recently found a typo in an error message in Garage, but since they run their own Forgejo instance and OpenID never really became a thing, I never created a PR.

It is only now, with Codeberg, that there is a credible alternative. Of course large projects don't have this issue, but for small projects GitHub delivered a lot of value.


Well - my perspective is the KDE project, which has a team of capable admins who take care of hosting. The project has always been more or less self-hosted (I remember SUSE providing servers) and even provided hosting for at least one barely associated project, Valgrind. I think Valgrind bugs are still on KDE Bugzilla.

It's admittedly not really practical for most projects, but it could be for some large ones - Rust, for example.


I mostly work on PostgreSQL, which has always self-hosted, but PostgreSQL is a big project; for smaller projects it is much less practical. Even for something decently large like Meson, I think the barrier would have been too big.

But yes, projects like Rust could have self-hosted if they had wanted to.


KDE uses Phabricator, or at least did the last time I contributed. It worked pretty well for the collaboration aspect of submitting a change, getting code owners to review, and giving feedback. I was able to jump in as a brand-new contributor and get a change merged. The kind of change that would have been a PR from a fork on GitHub.

However, I got the distinct feeling the whole stack would not fit as well into an enterprise environment, nor would the tooling around it work well for devs on Windows machines who just want to get commits out there. It's a perfect fit for that kind of project, but I don't think it would be a great GitHub replacement for an enterprise shop that doesn't have software as its core business.


KDE uses GitLab now, the change-over was mostly in 2020 with some less commonly used features staying on Phabricator a while longer.

I use a self-hosted GitLab instance in a commercial setting (with developers on Linux and Windows) as well. It's a software department of a non-software company. Fairly small. The person or persons in charge of GitLab have set up some pretty nifty time-savers regarding CI and multi-repo changes - I'd prefer a monorepo, but the integration makes it bearable.


We still use KDE's Bugzilla. One of the reasons Valgrind was initially developed was to help with KDE, back when many developers didn't really understand how to use new and delete.

These days sourceware.org hosts the Valgrind git repo, buildbot CI and web site. We could also use their bugzilla. There isn't much point migrating as long as KDE can put up with us.


> It is only now, with Codeberg, that there is a credible alternative.

There is no credible alternative, because 3rd-party hosting of the canonical repo is a bad idea to start with. By all means use 3rd-party hosting for a more public-facing interaction, but it's about time developers understood that they need to host their own canonical repos.


We understand, and say no thanks. The benefits don’t outweigh the costs


The benefits now don't outweigh the costs. No doubt, totally agree.

The benefits down the road, when your chosen 3rd party host has been enshittified up the wazoo ... they far outweigh the costs.


Strong agree on this. I think a lot of people who've entered software development in the past decade or so don't appreciate just how bad the available options were when GitHub launched.

If you blanch at the thought of one line in a pull request, just wait until you see what SourceForge looked like: release download pages where you had to pay keen attention to what you clicked on, because the legit download button was surrounded by banner ads made to look like download buttons that instead took you to a malware installer. They then doubled down on that by wrapping the Windows installers people published in their own Windows installer, which would offer to install a variety of things you didn't want before the thing you did.


What are some good alternatives for closed source codebases that people have been using and enjoying?

I only ask because I already know of good alternatives for FOSS, but it's the private / work projects that keep me tethered to GH for now.


If it's for work, why do you need GitHub at all?

To me, GitHub only makes sense as a social media site for code. If you are publishing to GitHub with no intent to be open in your code, development process, and contributor roster, then I don't see the point of being on GitHub at all.

Because it's not like their issue tracker is particularly good. It's not like their documentation support is particularly good. It's not like their search is particularly good. Its CI/CD system is bonkers. There are so many problems with using GitHub for its own sake that the only reason I can see to be there is for the network effects.

So, with that in mind, why not just set up a cheap VPS somewhere with a bare git repo? It'll be cheaper than GitHub, and you don't have to worry about the LLM mind virus taking over management of your VPS and injecting this kind of junk on you.
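For what it's worth, the bare-repo setup really is just a couple of commands. A minimal sketch, using a local path as a stand-in for the VPS (the `vps.example.com` address below is hypothetical):

```shell
# Stand-in for the VPS: a bare repository at a local path. On a real
# server the same init would run over SSH, e.g.
#   ssh you@vps.example.com 'git init --bare repos/project.git'
git init -q --bare /tmp/project.git

# Clone it and push to it exactly as you would any hosted remote
git clone -q /tmp/project.git /tmp/work
cd /tmp/work
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "first commit"
git push -q origin HEAD
```

Over SSH the remote URL would just be `you@vps.example.com:repos/project.git`; no server-side software beyond git itself is required.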


What do you use for code review and CI/CD then?


You can do it with Forgejo; you just have to self-host the runners.


I am excited about its potential integration with jujutsu: https://codeberg.org/forgejo/discussions/issues/325


Very true. We have a private git repository running on a server that serves as our master. Works fine for us. We back up to GitHub, but it isn't used in any way in the dev workflow.


GitLab is quite good; the organizational features and CI are also mostly on par with GitHub. You can use the gitlab.com SaaS or self-host.


But compared to GitHub, it's much more complicated in terms of UX, as it covers more enterprise use cases than GitHub does.


I'm a bit confused about what you mean. I have to use GitLab for work and don't see much difference. Some UI elements look a bit more complex than on GH, but other than that it works the same way. Less buggy as well.

Personally I host forgejo for my private apps and have had no issues with that either.


Why do you think this? It really isn't.


It really is… I’ve worked with GitLab for years, and moving to GitHub was like a breath of fresh air; everything is much less cluttered. Not saying it’s perfect, but GitHub just feels simpler.


Self-hosting. If you really need to push remotely, push to a bare repo on your own cloud VM, or set up Gogs or Forgejo.

I now start with local repos first, and whatever I deem OSS-useful I mirror-push from local to GitHub, or anywhere else with Forgejo.

Github was never really needed to use git for private projects.
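The mirror-push workflow described above can be sketched with `git push --mirror`, which copies every branch and tag to a secondary remote. A local bare repo stands in for the GitHub mirror here; on GitHub the remote URL would be something like `git@github.com:you/project.git`:

```shell
# The local repo is canonical
git init -q -b main /tmp/canonical
cd /tmp/canonical
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git tag v0.1

# A second remote receives a full mirror of all refs
git init -q --bare /tmp/mirror.git
git remote add mirror /tmp/mirror.git

# --mirror pushes all branches and tags, and prunes refs deleted locally
git push -q --mirror mirror
```

Because `--mirror` force-updates everything, the mirror stays an exact copy but should never be pushed to by anyone else.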


I've been thinking about this. If you have any kind of home network with attached storage at all, setting your local Git to just use that seems like a logical step.

And then if you're still paranoid do a daily backup to like Dropbox or something.


Sourcehut.

Uses the same email-based patch workflow as Linux. Takes an hour to learn, and they have helpful guides: https://git-send-email.io/. No JavaScript.
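A rough sketch of the first half of that workflow: `git format-patch` turns commits into mail-ready files, and `git send-email` (left commented out, with an illustrative list address) would then send them:

```shell
# A throwaway repo with two commits to generate a patch from
git init -q -b main /tmp/sr-demo
cd /tmp/sr-demo
echo "hello" > README
git add README
git -c user.email=demo@example.com -c user.name=demo commit -qm "base"
echo "feature" >> README
git -c user.email=demo@example.com -c user.name=demo commit -qam "add feature"

# Turn the latest commit into a mail-formatted patch file
git format-patch -1 HEAD -o /tmp/patches

# Sending it would look like this (list address is made up):
# git send-email --to="~owner/project-devel@lists.sr.ht" /tmp/patches/*.patch
```

The generated file is a plain-text email with the diff inline, which is exactly what the mailing list archive and the maintainer's `git am` consume.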


Azure DevOps <shudder/>


Forgejo is super easy to set up on a 1-2 core VM. Make a compose file and put Caddy in front for TLS. The whole thing is less than 50 lines and costs about $10-$15 a month.
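A hypothetical compose file along those lines might look like the following; the image tags, `git.example.com` hostname, and port are illustrative assumptions, not taken from any official guide:

```yaml
# Minimal sketch: Forgejo behind Caddy, which obtains TLS certs automatically
services:
  forgejo:
    image: codeberg.org/forgejo/forgejo:9
    restart: always
    volumes:
      - forgejo-data:/data
  caddy:
    image: caddy:2
    restart: always
    ports:
      - "80:80"
      - "443:443"
    # Caddy terminates TLS and proxies to Forgejo's web port (3000)
    command: caddy reverse-proxy --from git.example.com --to forgejo:3000
volumes:
  forgejo-data:
```

The DNS record for the hostname has to point at the VM before Caddy can provision a certificate.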


Self-hosted Gitea is my recommendation. It has everything one needs and is super lean and resource-efficient. You can run it easily on a 1GB VPS; I even ran it for a while on 512MB.


I really like the end-user experience when I stumble on Gitea repos online, too.


GitLab. We self-host ours and it's rock solid.


Codeberg seems to have legs. The license is different; best to read it.


Gitlab.


For personal stuff I hopped over to SourceHut and it's fine.

Simple, direct, and I really like the email based workflows.


Funny that people said the exact same thing back when GitHub was originally acquired [0]. I wonder how many actually went through with their words and ditched it. I bet GitHub has more users today than ever before, though.

[0] https://news.ycombinator.com/item?id=17227286


>Why MS cares your private repositories? give a reason? Maybe using your code to train their programming robot, lol

>Whether they will abuse the trust of having complete and total access to every private repo and all of the code inside or not remains to be seen

>MS is pushing their ads within their own OS more and more, will GitHub get the same treatment[...]?

Funny.


Just speaking anecdotally, Codeberg today feels like the Gitlab of yesteryear, except that Codeberg has projects on it. Someone who is contributing to open source will eventually need to create a Codeberg account.

The top comment of the linked thread ("If Microsoft shares SSL certs with NSA they could do MITM attacks") is something that I find much more likely today than back in 2018.


They can have the new users pushing out sloppy projects.

The serious users leaving will definitely dent profitability. And with GitHub being a social network, that could start a death spiral.


What profitability? I'm pretty sure GitHub is a loss leader to push people to Azure and cloud services. I also don't know anyone who actually uses GitHub as a social network even though it ostensibly has such features.


The social features were GH's early secret sauce that contributed heavily to its stickiness and why it eventually dominated. IMO.

I should have said "will dent whatever profitability." I'm not sure it exists either. From the outside, it would seem crazy that it wouldn't be profitable with all the Enterprise stuff (and it's not like you can throw 10k engineers at whatever GH is doing).


I am surprised it took them this long to destroy GitHub. Usually they manage to turn acquired companies into garbage pretty fast.


Is there any obvious successor to GitHub yet?

There are a few alternatives, but none have the critical mass of users yet.


For open source, I would say Codeberg looks the most promising. There is also SourceHut, but it seems like Codeberg has the mind share.


> The native NVMe driver (nvmedisk.sys) replaces the legacy storage path that has routed NVMe commands through a SCSI translation layer since before NVMe SSDs existed.

What? What were Microsoft doing for the decade after NVMe became available on consumer-grade motherboards?


Seriously, that was my thought too. Even if we were to stretch credibility and suggest that general consumers don't care about this sort of thing, they just released this for Windows Server in the past year?

Windows really is a toy of an OS. It continues to blow my mind that people want to use it as a server OS.


Because it offers VMS niceties that UNIX clones still don't, and stuff like AD and SMB without manually going through configuration files stored somewhere, which differ across UNIX flavours.

Although I do concede UNIX has won the server room, and Windows Servers are mostly about AD, SMB, IIS, SharePoint, Dynamics, and SQL Server.

Naturally some of those can be outsourced into Azure services that Microsoft will gladly provide.


And to run Windows-only apps like some embedded toolchains. Although that gives us motivation to move to GCC, because Windows is annoying to use in CI/CD and GCC is good enough compared to that other toolchain.


Which VMS niceties does it offer?


Proper file locking, asynchronous operations across everything, ACL-based security, a proper ABI.

Not being an OS from C to C as the main programming model.

And then on top, multiple levels of sandboxing, including virtualization of drivers and kernel modules.

Ah and RDP is much nicer than X Windows or VNC.


Other than possibly the proper ABI, and yes, a tiny handful of file operations that could theoretically block and aren't available through io_uring, like ioctl and splice, Linux has the rest.


In security? Not really, unless you are doing immutable deployments with rootless containers and no shell access, which at the end of the day isn't UNIX any longer.

And which Linux exactly? Plus unless you're doing C or C++, most likely aren't using those APIs.

Anyway, the differences of bare metal servers don't matter in the days of cloud where the actual nature of the kernel running alongside a type 1 hypervisor hardly matters to userspace.


Your fanboi attitude would be very welcome on /.

And billions spent and earned clearly shows where the moniker 'toy' doesn't apply.

BTW year of Linux Desktop when?


  What were Microsoft doing for the decade after NVMe became available on consumer-grade motherboards?
They were adding Copilot to everything, and implementing advertising tiles, and making sure it won't work without the appropriate TPM DRM, and forcing sign-in with a MS account to install it, and so on.

But they weren't ignoring NVMe entirely, they've got Rohan the intern working on it, and as soon as someone replies to his StackExchange questions he can start coding up the driver.


I am guessing that, like NTFS, it's a huge legacy spaghetti codebase that nobody understands and thus nobody wants to touch.


I hope so. I prefer my evil to be ineffective.


There is so much to get angry about in the world at the moment... I'm surprised this one even registered with me.


They all feel like they're parts of a single expansive pattern.


So Weave claims AI-based development increases git conflict frequency.

Given that most git conflicts are easy to solve by a person who wasn't involved in the changes, even a person who doesn't know the programming language, it's natural to let AI handle git conflicts.

Resolving a git conflict is most often simple text manipulation that doesn't need much context. I see no reason current AI models can't do it.
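As a concrete illustration of how little context a typical conflict carries, here is a sketch that manufactures one; the file name and values are made up:

```shell
# Two branches edit the same line, producing a classic conflict
git init -q -b main /tmp/conflict-demo
cd /tmp/conflict-demo
echo "timeout = 30" > settings.conf
git add settings.conf
git -c user.email=demo@example.com -c user.name=demo commit -qm "base"

git checkout -q -b feature
echo "timeout = 60" > settings.conf
git -c user.email=demo@example.com -c user.name=demo commit -qam "raise timeout"

git checkout -q main
echo "timeout = 10" > settings.conf
git -c user.email=demo@example.com -c user.name=demo commit -qam "lower timeout"

# The merge fails and leaves plain-text markers in the file
git merge feature || true
cat settings.conf
```

What's left in `settings.conf` is just the `<<<<<<< HEAD` / `=======` / `>>>>>>> feature` block, which is exactly the kind of small, self-contained text an LLM can be asked to resolve.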


When you start seeing diffs in terms of entities instead of lines is what interests me; you get much better semantic info.

If you have a language-specific parser, you can make a merge algorithm like Weave's. But the bigger win isn't resolving the conflicts git shows you; it's catching the ones git misses entirely. Weave is much better in those cases, and there are also other things like confidence-scored conflict classification. You should try it out; it improves agent performance, especially if you are a power user.


Probably the old habit of batch processing.


It seems to me that this is just an issue of diff features. Git can be extended to show semantic diffs of binary files; it doesn't technically need a completely new VCS.

As git is the most popular VCS right now, and will continue to be for the foreseeable future, I don't think incompatibility with git is a good design choice.
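Git does already have a hook for this: a custom diff driver with `textconv` can render binary files as text in `git diff`. A small sketch (the `octal` driver name is arbitrary):

```shell
# Map *.bin files to a diff driver that converts them to a hex dump
git init -q -b main /tmp/textconv-demo
cd /tmp/textconv-demo
printf '\000\001' > data.bin
echo '*.bin diff=octal' > .gitattributes
git config diff.octal.textconv "od -An -tx1"
git add .
git -c user.email=demo@example.com -c user.name=demo commit -qm "base"

# Change the binary file; git diff now shows the od output for both
# versions instead of "Binary files differ"
printf '\000\002' > data.bin
git diff
```

A semantic diff would swap `od` for a format-aware converter, which is the kind of extension the comment is pointing at.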


Indeed, if Lix were to target code version control, incompatibility with git would be a “dead on arrival” situation.

But Lix's use case is not version-controlling code.

It’s embedding version control in applications. Hence the reason Lix runs within SQL databases: apps have databases, and Lix runs on top of them.

The benefit for the developer is a version control system inside their database, and the ability to expose version control to users.

