What you describe already happened when programming became a matter of using search engines, passing data between libraries, and delegating coding to offshore workers.
I doubt the data in Atlassian is anywhere close to clean or organic. It's hell by design, built to force garbage on real programmers who do their real work outside of Atlassian.
Umm? Is there a single thing Atlassian did right? It's a cancer of software development that the suits force us to swallow, while real development and useful documentation live outside their service because it's so stressful to use.
That's the wrong way to look at it. Just because the CIA can learn your location (if they want to), would you share your live location with everyone on the internet?
An LLM is a tool, but people still need to know: what, where, how.
Not sure if that's a great example. If there's a catastrophic vulnerability in a widely used tool, I'd sure like to know about it even if the patch is taking some time!
The problem with this is that the credible information "there's a bug in widely used tool X" will soon (if it isn't already) be enough to trigger massive token expenditure by various others, who will then also discover the bug, so this will often effectively amount to disclosure.
I guess the only winning move is to also start using AI to rapidly fix the bugs and keep fast release cycles... which of course has a host of other problems.
I think in this context it's more "we've discovered a bug", which gives you more information than "there is a bug". The main difference is that the former implies not only that there is a bug but that LLMs can find it.
If you're a random person on the Internet, I can indeed not do much with that information.
But if you're a security research lab whose funding and number of active projects a competing lab can ballpark (based on industry comparisons, past publications, etc.), I think that can be a signal.
Wrong argument: since it's not just available to "the CIA" but to every rando under the sun, people should be notified immediately if "tracking" them is possible, and mitigation measures should become common standard practice.
There are many attackers who are just going to feed every commit of every project of interest to them into their LLMs and tell them "determine if this is patching an exploit and if so write the exploit". They don't need targeting clues. They're already watching everything coming out of upstream.
Do not make the mistake of modeling the attackers as "some guy in a basement with a laptop who decided just today to start attacking things". There are nation-state attackers. There are other attackers less funded than that who still may not particularly blink at the plan I described above. Putting out the commit was sufficient, even today, to tell them exactly what the exploit was, and the cheaper AI time gets, the less targeting info they're going to need as they just grab everything.
I suggest modeling the attackers like a Dark Google. Think of them as well-funded, with lots of resources, and this is their day job, with dedicated teams and specialized positions and a codebase for exploits that they've been working on for years. They're not just some guy who maybe wants to find an exploit and needs huge hints about which commit might be an issue.
>Do not make the mistake of modeling the attackers as "some guy in a basement with a laptop who decided just today to start attacking things". There are nation-state attackers.
The parent's point is that even if those capable attackers can exploit it anyway, that doesn't mean it should be handed on a silver platter to every script kiddie and guy in a basement with a laptop. The former are a much smaller group than the latter.
> An LLM is a tool, but people still need to know: what, where, how.
And the moment the commit lands upstream, they know what, where, and how.
The usual approach here is to backchannel patched versions to the distros and end users before the commit ever goes into upstream. Although obviously, this runs counter to some folks' expectations of how open source releases work.
Even aside from this, their reliability has been absolutely terrible since they took it over. It's down so often we had to set up Slack notifications directly to the devs to try to take some of the pressure off our ops teams.
They must be migrating it to Hyper-V or something. Brutal.
It was much better than the closed-source SourceForge that existed before it. A lot of small projects don't have the energy to self-host. Plus, for small projects the barrier to entry is an issue. I recently found a typo in an error message in Garage, but since they run their own Forgejo instance and OpenID never really became a thing, I never created a PR.
Only now, with Codeberg, is there a credible alternative. Of course large projects do not have this issue, but for small projects GitHub delivered a lot of value.
Well - my perspective is the KDE project, which has a team of capable admins who take care of hosting. The project has always been more or less self-hosted (I remember SUSE providing servers) and even provided hosting for at least one barely associated project, Valgrind. I think Valgrind bugs are still on KDE Bugzilla.
It's admittedly not really practical for most projects, but it could be for some large ones - Rust, for example.
I mostly work on PostgreSQL, which has always self-hosted, but PostgreSQL is a big project; for smaller projects it is much less practical. Even for something decently large like Meson I think the barrier would have been too big.
But, yes, projects like Rust could have self-hosted if they had wanted to.
KDE uses Phabricator, or at least did the last time I contributed. Worked pretty well in the collaboration aspect for submitting a change, getting code owners to review and giving feedback. I was able to jump in as a brand new contributor and get a change merged. The kind of change that would have been a PR from a fork in GitHub.
However, I got the distinct feeling the whole stack would not fit as well into an enterprise environment, nor would the tooling around it work well for devs on Windows machines who just want to get commits out there. It's a perfect fit for that kind of project, but I don't think it would be a great GitHub replacement for an enterprise shop that doesn't have software as its core business.
KDE uses GitLab now, the change-over was mostly in 2020 with some less commonly used features staying on Phabricator a while longer.
I use a self-hosted GitLab instance in a commercial setting (with developers on Linux and Windows) as well. It's a software department of a non-software company. Fairly small. The person or persons in charge of GitLab have set up some pretty nifty time-savers regarding CI and multi-repo changes - I'd prefer a monorepo, but the integration makes it bearable.
We still use KDE's Bugzilla. One of the reasons Valgrind was initially developed was to help with KDE, back when many developers didn't really understand how to use new and delete.
These days sourceware.org hosts the Valgrind git repo, buildbot CI and web site. We could also use their bugzilla. There isn't much point migrating as long as KDE can put up with us.
> Only now, with Codeberg, is there a credible alternative.
There is no credible alternative, because 3rd-party hosting of the canonical repo is a bad idea to start with. By all means use 3rd-party hosting for a more public-facing interaction, but it's about time developers understood that they need to host their own canonical repos.
Strong agree on this. I think a lot of people who've entered software development in the past decade or so don't appreciate just how bad the available options were when GitHub launched.
If you blanch at the thought of one bad line in a pull request, just wait until you see what SourceForge looked like: release download pages where you had to pay keen attention to what you clicked on, because the legit download button was surrounded by banner ads made to look like download buttons that instead took you to a malware installer. They then doubled down on that by wrapping the Windows installers people published in their own installer, which would offer to install a variety of things you didn't want before the thing you did.
To me, GitHub only makes sense as a social media site for code. If you are publishing to GitHub with no intent to be open in your code, development process, and contributor roster, then I don't see the point of being on GitHub at all.
Because it's not like their issue tracker is particularly good. It's not like their documentation support is particularly good. It's not like their search is particularly good. Its CI/CD system is bonkers. There are so many problems with using GitHub for its own sake that the only reason I can see to be there is the network effects.
So, with that in mind, why not just set up a cheap VPS somewhere with a bare git repo? It'll be cheaper than GitHub, and you don't have to worry about the LLM mind virus taking over management of your VPS and injecting this kind of junk on you.
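If you've never done it, the whole thing is about three commands. A minimal sketch (the hostname, user, and repo name here are made up):

    # on the VPS (any ssh-reachable box with git installed works)
    ssh git@myvps.example.com 'git init --bare myproject.git'

    # on your machine: add it as a remote and push
    git remote add origin git@myvps.example.com:myproject.git
    git push -u origin main

That's the entire "forge": plain ssh handles auth and transport, and git does the rest.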
Very true. We have a private git repository running on a server that serves as our master, and it works fine for us. We back up to GitHub, but it isn't used in any way in the dev workflow.
I'm a bit confused what you mean. I have to use GitLab for work and don't see much difference. Some UI elements look a bit more complex than on GH but other than that it's working the same way. Less buggy as well.
Personally I host forgejo for my private apps and have had no issues with that either.
It really is… I've worked with GitLab for years, and moving to GitHub was like a breath of fresh air; everything is much less cluttered. Not saying it's perfect, but GitHub just feels simpler.
I've been thinking about this. If you have any kind of home network with attached storage at all, setting your local Git to just use that seems like a logical step.
And then if you're still paranoid do a daily backup to like Dropbox or something.
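For what it's worth, git treats a plain directory as a perfectly good remote, so the sketch is short (assuming the storage is mounted at /mnt/nas, a made-up path):

    # a bare repo on the network share works as a plain file-path remote
    git init --bare /mnt/nas/backups/myproject.git
    git remote add nas /mnt/nas/backups/myproject.git
    git push nas main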
Forgejo is super easy to set up on a 1-2 core VM. Make a compose file and put Caddy in front for TLS; the whole thing is less than 50 lines and costs about $10-$15 a month.
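Roughly this shape, for anyone curious. A minimal sketch only; the image tags, domain, and port are assumptions to check against the current Forgejo docs:

    services:
      forgejo:
        image: codeberg.org/forgejo/forgejo:9   # tag is a placeholder
        volumes:
          - ./forgejo-data:/data
      caddy:
        image: caddy:2
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./Caddyfile:/etc/caddy/Caddyfile
          - ./caddy-data:/data

    # Caddyfile: reverse proxy with automatic TLS in two lines,
    # pointing at Forgejo's default internal HTTP port:
    # git.example.com {
    #     reverse_proxy forgejo:3000
    # }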
Self-hosted Gitea is my recommendation. It has everything one needs and is super lean and resource-friendly. You can run it easily on a 1 GB VPS; I even ran it for a while on 512 MB.
Funny that people said the exact same thing back when GitHub was originally acquired [0]; I wonder how many actually went through with their words and ditched it. I bet GitHub has more users today than ever before, though.
Just speaking anecdotally, Codeberg today feels like the GitLab of yesteryear, except that Codeberg has projects on it. Someone who contributes to open source will eventually need to create a Codeberg account.
The top comment of the linked thread ("If Microsoft shares SSL certs with NSA they could do MITM attacks") is something that I find much more likely today than back in 2018.
What profitability? I'm pretty sure GitHub is a loss leader to push people to Azure and cloud services. I also don't know anyone who actually uses GitHub as a social network even though it ostensibly has such features.
The social features were GH's early secret sauce that contributed heavily to its stickiness and why it eventually dominated. IMO.
I should have said "will dent whatever profitability." I'm not sure it exists either. From the outside, it would seem crazy for it not to be profitable with all the Enterprise stuff (and it's not like you can throw 10k engineers at whatever GH is doing).
> The native NVMe driver (nvmedisk.sys) replaces the legacy storage path that has routed NVMe commands through a SCSI translation layer since before NVMe SSDs existed.
What? What were Microsoft doing for the decade after NVMe became available on consumer-grade motherboards?
Seriously, that was my thought too. Even if we were to stretch credibility and suggest that general consumers don't care about this sort of thing, they just released this for Windows Server in the past year?
Windows really is a toy of an OS. It continues to blow my mind that people want to use it as a server OS.
Because it offers VMS niceties that UNIX clones still don't, and stuff like AD and SMB without manually going through configuration files stored somewhere, which differ across UNIX flavours.
Although I do concede UNIX has won the server room, and Windows Server is mostly about AD, SMB, IIS, SharePoint, Dynamics, SQL Server.
Naturally some of those can be outsourced into Azure services that Microsoft will gladly provide.
And to run Windows-only apps like some embedded toolchains. Although that gives us a motivation to move to GCC, because Windows is annoying to use in CI/CD and GCC is good enough compared to that other toolchain.
Other than possibly a proper ABI, and yes, a tiny handful of file operations that could theoretically block and aren't available through io_uring, like ioctl and splice, Linux has the rest.
In security? Not really, unless you are doing immutable deployments with rootless containers and no shell access, which at the end of the day isn't UNIX any longer.
And which Linux exactly? Plus unless you're doing C or C++, most likely aren't using those APIs.
Anyway, the differences between bare-metal servers don't matter in the days of cloud, where the actual nature of the kernel running alongside a type 1 hypervisor hardly matters to userspace.
> What were Microsoft doing for the decade after NVMe became available on consumer-grade motherboards?
They were adding Copilot to everything, and implementing advertising tiles, and making sure it won't work without the appropriate TPM DRM, and forcing sign-in with an MS account to install it, and so on.
But they weren't ignoring NVMe entirely, they've got Rohan the intern working on it, and as soon as someone replies to his StackExchange questions he can start coding up the driver.
So Weave claims AI-based development increases git conflict frequency.
Given that most git conflicts are easy to solve by a person who wasn't involved in the changes, even a person who doesn't know the programming language, it's natural to let AI handle them.
Solving a git conflict is most often simple text manipulation without needing much context. I see no reason current AI models can't do it.
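For anyone who hasn't stared at one lately, a typical conflict is literally both versions laid out between markers (a made-up example):

    <<<<<<< HEAD
    timeout = 30
    =======
    timeout = 60
    >>>>>>> feature-branch

Picking one side, or combining the two, is usually obvious from a little surrounding context.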
What interests me is when you start seeing diffs in terms of entities instead of lines; you get much better semantic info.
If you have a language-specific parser, you can build a merge algorithm like Weave's. But the bigger win isn't resolving the conflicts git shows you; it's catching the ones git misses entirely. In those cases Weave is much better, and there are also other things like confidence-scored conflict classification. You should try it out; it improves the agent's performance, especially if you are a power user.
It seems to me that this is just an issue of diff features. Git can be extended to show semantic diffs of binary files, and it doesn't technically need a completely new VCS.
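Git's textconv diff drivers are one existing hook for exactly this; the classic exiftool example (the *.png pattern is just illustrative):

    # .gitattributes: route PNGs through a custom diff driver
    *.png diff=exif

    # tell git how that driver turns the binary into diffable text
    git config diff.exif.textconv exiftool

After that, git diff shows changes in the extracted image metadata instead of "Binary files differ".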
Git is the most popular VCS right now and will continue to be for the foreseeable future, so I don't think incompatibility with git is a good design choice.