I am the person who approved this PR and would like to acknowledge and apologize for the mistake of turning this feature on by default without sufficient upfront validation.
There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code. As folks mentioned here - many similar tools do this as well.
Obviously, it should not be on when disableAIFeatures is on and it should not be reporting changes that were not done by AI. I'll work on fixing those and meanwhile revert default to off in 1.119 update.
I am open to any (constructive) comments/suggestions - please feel free to reach me directly (my alias @microsoft.com) or open an issue on GitHub. Happy to answer anything here as well.
I think the constructive criticism is best directed at whatever process you are following. That process allowed a very visible, user-facing change into a widely used piece of software. How did this change make it to production without some process catching its impact? Was there really no internal discussion from a code review at least? This seems hard for me to believe. I expect more from Microsoft.
> Was there really no internal discussion from a code review at least? This seems hard for me to believe.
The outlined story feels unfortunately very believable to me.
Teams need to push out as many features as possible, and nobody stops even for a second to think about how a feature might affect other flows or users not covered by the feature request.
It might have been quickly reviewed to check if the code does what it needs to do (add the coauthor note).
Do you think reviewers will think about unwanted effects, when they need to get back to feeding their own poorly thought-out and underspec’d features to their LLMs?
> Was there really no internal discussion from a code review at least? This seems hard for me to believe.
>The outlined story feels unfortunately very believable to me.
100% agree here - we seem to forget that most developers hate code reviews. I actually laughed out loud at the use of the word "discussion," it's so rare people want to get together and talk about changes. By the time the PR is up anything that stands in the way of merging and shipping is seen as a nuisance.
To my mind this whole debacle is not really the individual's fault, or even the team's fault, but the fault of the economic pressures that drive people into situations like this.
Fair point. We did catch it internally in testing (as we use VS Code for all our work, so some folks did stumble on it), but I think we underestimated the impact and should do a better job at that.
This is honestly the most concerning part of all of this. You're saying you knew that this exact bug was present up front and still decided to release it?
This basically invalidates the entire premise that it was an innocent mistake. It's impossible for me to believe that you actually thought that people wouldn't care about 100% of their commits being attributed to Copilot even when it was never used. Either you're misconstruing what you caught with the testing beforehand or your entire development process is tainted, because there's no way that a non-evil corporation would see this default behavior and think that people would be fine with it. It seems far more likely you just thought you could get away with it.
I think there is a "ship fast" component here that should be adjusted. Product Management introduced weekly "stable" releases in March, no matter the content.
I think so too, but my point is that even according to their own words about what happened, the best possible interpretation is that they didn't mean to do it but knowingly let it happen. I agree that a worse version is more likely, but it's pretty damning when even the ceiling for what they can plausibly claim is "we intentionally didn't bother stopping it once it happened accidentally".
A generous read of this comment might be that you did catch it internally in testing AFTER it shipped but shrugged it off as something you'd patch in the next release in a week or two. Is that what you meant here?
Or that it was caught but didn't surface fully before release?
A helpful governance policy here might be that anything that mutates user content without opt-in consent requires a distinct sign-off, or a double sign-off, if the goal is to prevent this from happening in the future.
I saw a lot of "they made a game I like (Halo), therefore they must not be that bad" from the gaming crowd that only experienced the console side of it.
Also, who/what group is pushing for this change internally and what is the opinion of the team implementing it? What is the road map and vision for AI in VSCode?
I think there are a few of us who appreciate you being up front. Still, I'd question the intent and whether it was truly a mistake, especially when the commit[0] message reverting said functionality cites “widespread criticism” and links this very HN article, which makes the revert look like a response to negative PR rather than to a mistake.
My issue with this: if my intention is to never have these "co authored by <tool>" trailers in my commits, this is a sudden breaking change. What's worse, it is not immediately visible to the user. Now I could look like I use a not-company-approved AI. That's absolutely unacceptable; this could cost people their jobs. The "bug" (or "metrics-boosting feature", as PMs might call it?) where it claims all commits, including ones never touched by Copilot, is just icing on the cake.
Changing the default behavior for all of your users with no notification is pretty unforgivable. Even if this feature worked correctly (it obviously doesn’t), this should at minimum be a prompt after upgrade letting the user confirm that this is what they want. But honestly it should be opt-in for those who want it.
To have it silently just start adding marketing copy to git commit messages is pretty bad. To have that added text not be visible to the user in the UI so they can remove it before commit is just much worse.
This kind of thing being released speaks to a greater dysfunction over there. Not a good look at all, and I am not a Microsoft or AI hater. But my commit messages are not where you move fast and break things.
> Changing the default behavior for all of your users with no notification is pretty unforgivable.
I noticed that as soon as you make a bug report/feature request on VSCode's repo, you instantly get someone's OpenClaw agent with an automated pull request that sometimes wants to change defaults in the main codebase
Looks like AI is really trigger-happy with that, with zero understanding or care that there's thousands of users affected and it's not just one individual's settings.json
Also, the hallucinated PR does not necessarily address the original issue whatsoever, just like this PR. It should have added functionality to detect AI-authored code, but whoever made the PR skipped actually doing the hard work and just changed a default to always-on, exactly the kind of misunderstanding you see with OpenClaw shotgun PRs.
And then they apparently posted an alibi "I'm sorry" here. Or maybe it is genuine, but the choice is between incompetence and fake "I'm sorry". Where is QA?
> To have it silently just start adding marketing copy to git commit messages is pretty bad. To have that added text not be visible to the user in the UI so they can remove it before commit is just much worse.
This is one of the problems, but it is not the only one. To do better, it should be:
1. It should be visible in the UI for entering the commit message, to make it clear what it is doing.
2. It should not add such a thing if Copilot is disabled. (This is mentioned by dmitriv and will hopefully be fixed soon enough.)
I do not use Copilot nor any other LLMs nor VS Code, but if the problems are corrected then I think the feature would probably be reasonable.
No, it's fine. I really hope that more people will switch to something else, like Neovim, Emacs, or any other open-source editor where such unacceptable situations are practically impossible. I hope more people will start to value their privacy and right to choose, and find the courage to say gtfo and switch to something else. Because this is unforgivable.
It just means that when changing a global default with such impact the user should be prompted with an option to opt out of the new behavior. Something like “AI assisted changes will now have ‘coauthored by Copilot’ added to the commit message”. If the user clicks “no thanks” it changes their local setting to “off” to opt them out of this new global default.
>a project manager vibe-coded the change without thinking it through at all
The PMs vibe-coding and having no idea what they're doing isn't even the main issue (although it is pretty bad).
The main issue is: how are the actual engineers supposed to "review" the slop? They probably report to the same PM, or sit below them in the org chart, and might be evaluated by them. Not just at MS, but at any company.
Such a conflict of interest would be detrimental to quality anywhere. You wouldn't build a bridge like this, nor should you build software like this.
Don't you understand that the default shouldn't be changed at all in this case? It improves nothing and affects every single user. If an org/project wants this behavior then it can enforce this flag for its contributions. The only valid reason for this change is someone's performance somewhere in Microsoft is dependent on VS Copilot usage metric.
Good feedback, there needs to be a more explicit opt-in into this for teams that want it. FWIW nobody's performance here will improve from having this metric :-)
Co-Authored-By is normally a trailer, and trailers aren’t part of the commit message. It’s likely the commit editor isn’t set up to show trailers. They’re not exactly obscure, but it does seem that they’re relatively unknown.
What do you mean they aren’t part of the commit message? Trailers like (signed off by) are absolutely part of the message. Tools can choose to treat them as special metadata, but they’re part of the commit.
I mean that they’re not necessarily part of the --message parameter to `git commit`, but instead part of the --trailer parameter. I don’t know how VSCode is programmed, but it seems plausible that trailers are handled separately from the message parameter.
We're talking about Git here. The question is not "how VSCode is programmed", the question is "does Git have a special field for commit trailers". The answer is no. Git stores the trailer as part of the commit message.
If you look at the comment I’m responding to, it is in fact about how VSCode is programmed; specifically, a possible reason why the Co-Authored-By trailer doesn’t show up in VSCode’s commit message box.
I appreciate you acknowledging that this was a mistake, but as you surely know from your own experience with other people’s mistakes, some mistakes are so egregious that they cast doubt on the intentions of the people involved even if they are corrected later.
To me, “let’s add false attribution to every commit by default without informing the user” falls squarely into that category. I don’t think I’ve ever worked in an environment where something like that wouldn’t have been red-flagged in three seconds by anyone who took even a casual glance. I’d honestly be embarrassed if such a proposal even made it into a public pull request for my organization, nevermind that pull request getting merged.
If what you described would make it to our PR queue, it would definitely not pass the gates.
The idea was to track AI-only changes and add the trailer when such changes were detected AND the setting was enabled. Obviously, we didn't want to attribute all changes to AI. There is a bug in change detection (which slipped through testing), which led to even non-AI changes being tracked. And thus we have this problem.
The PR linked here wasn't even implementing the feature, it was changing the default for the setting.
I just wanted to say, while I think this feature was a bad idea, I sincerely applaud your willingness to post here, knowing you'll get roasted. Seriously brave and commendable.
Someone made a mistake, owned up to it and fixed it. No one is entitled to more than that for free software.
Anyone with a bit of software experience knows it’s easy to miss things when you are doing your own tasks + context switching + giving reviews. We should exercise kindness and empathy instead of projecting evil intentions.
Even if I accepted the premise that this is too stupid to be evil, that doesn't change the fact that this would be extremely easy to test for. The fact that they considered it important enough to get this feature implemented without proper testing says plenty about their incentives.
They might not have intentionally done this (although it's honestly not clear), but they definitely didn't care enough to prevent it because it wouldn't have been hard at all. That's my point here; which bugs slip through and which don't implicitly conveys what their priorities are. I don't think it's particularly hard to infer what story this bug tells.
Other people aren’t your slaves. You don’t get to demand they respond immediately, and this Reddit-like mindset needs to die. HN is a place where we often can actually get devs from companies responding directly and listening to feedback, and this hostility is looked at by all the other devs from those similar companies and remembered when it’ll be their turn.
Stop making HN a worse place for everyone by being unnecessarily hostile. (and this comment is only mildly directed at you but rather at a bunch of people in this thread)
They said three times "ask me anything" and then didn't respond to a single question. Stop making HN worse by comparing someone dodging accountability to slavery.
First comment does not sound constructive - are you interested in my opinion on (n)vim?
I am not a lawyer, so I can't comment on legal things. However, I have already responded elsewhere here that this feature has nothing to do with licensing or ownership and was added for those that want the attribution. I understand the desire to see anything Microsoft as bad and evil, but we are really just trying to make a better experience.
Perhaps next time you should consult with legal before asserting co-authorship on end users’ code. The appended comment was not “edited with VS code” or “sent from VS code”, it was “co-authored by Copilot”. You do understand that there are legal implications to claims of authorship, right?
Comments like this are why developers don’t engage directly. The first link is “just asking questions” and implying that the project is rotten. He’s not being “creative”; he’s just not engaging with bait.
They’ve done a commendable job responding. Please show some respect when people put themselves in vulnerable situations, otherwise the whole “devs respond on HN” thing will cease to happen.
I noticed you only respond to comments that are positive (or neutral). The majority (and the most insightful) comments here are negative, but you seem to ignore them.
Why are you taking the fall and not the PM who authored the change (and submitted a PR with an uninformative title and no comment) and, I'm assuming, plays a role in managing the project?
Just for any future mea culpa, I'd recommend not hedging with comments like this one:
> As folks mentioned here - many similar tools do this as well.
It's really doubtful they have the same behavior people are complaining about here: namely including the authored by Copilot statement when it wasn't used (or even enabled).
Anthropic does by default. I had to put “no co-authored by lines in commits, ever” into my global settings.
That’s pretty close to “included when it wasn’t used (or even enabled)”, since it’s enabled by default and you have to explicitly say no. It’s not even clear where to turn it off; I just rely on the AI to figure out not to do it.
That is not what dmitriv claimed. He said this was a bug; the behavior should have been to add it only when AI was involved, which, indeed, is what Claude does by default.
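For those who'd rather not rely on any tool's settings (or on the AI honoring an instruction), Git hooks give a deterministic backstop. This is only a sketch of the filtering a `commit-msg` hook could do; in a real hook the message file path arrives as `$1`, and the grep pattern is an assumption you'd adjust to whichever trailers you want gone:

```shell
# Simulate a commit-msg hook stripping Copilot co-author trailers.
# (Here $msgfile stands in for the hook's $1 argument.)
msgfile=$(mktemp)
printf 'Fix parser bug\n\nCo-Authored-By: Copilot <copilot@example.com>\n' > "$msgfile"

# Keep every line except Copilot co-author trailers (case-insensitive).
grep -iv '^Co-Authored-By:.*Copilot' "$msgfile" > "$msgfile.tmp" \
  && mv "$msgfile.tmp" "$msgfile"

cat "$msgfile"   # the trailer is gone; the rest of the message survives
```

Dropped into `.git/hooks/commit-msg` (with `"$msgfile"` replaced by `"$1"`), this runs on every commit regardless of which editor or agent wrote the message.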
What is the use-case where you expect users would be happy that you modify their commit messages with MS marketing? Do you think it would be ok to edit every commit to append “written with VS Code”?
> I am open to any (constructive) comments/suggestions
Here's one:
I think a senior sysadmin needs to sit you down in their office and have a very serious talk with you about the responsibility that comes with writing code other people run. I am serious. We used to have these talks with everyone who got sudo access. You shouldn't be shipping code if you don't understand the trust that is required of people in your position.
This isn't just about this "feature" being active when AI features are disabled; the way you mis-implemented this has resulted in it modifying the commit message without the user even seeing it! That is malicious behavior, not an innocent little feature "to make life easier".
I've fully switched off of VS Code to Kate now, which is faster and better behaved in most cases anyway. Bye.
thank you for doing this, it gave me the push I needed to finally switch to zed. vscode has really been going downhill for a while now. it's sad to watch, it used to be a really nice editor
I could easily see companies, especially enterprise-level companies, expect code that was generated with AI to have some level of ownership attributed to that AI. Whether a simple "Co-Authored-by Copilot" byline on the commit is the right way to do that is another question though.
Hopefully this answers some more of the questions raised here.
It also incorporates a lot of feedback from this thread with respect to next steps (thank you!).
Thanks for facing this head-on here; mistakes happen.
I think the default to on should also be reconsidered regardless. The assessment (co-authored by AI) may be valid but the assumption the user wants that advertising is exactly that, an assumption, and a dubious one at that.
One of my customers actually requires attribution to agents if they're used, not only for tracking purposes but also for understanding potential vectors for slopcode. It's been useful and occasionally enforced. That being said, implementation without due consideration and warning should be frowned upon.
> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code. As folks mentioned here - many similar tools do this as well.
Then make it an extension, not an IDE-behaviour thing. Is that so complicated, so difficult?
So why did this feature get rushed out without proper testing? Are you claiming that not having this happen automatically for the commits where Copilot actually co-authored them is so urgent that it was necessary?
I'd argue that this was extremely non-urgent and the fact that this got rushed so sloppily is a giant red flag about the priorities of you and your team. You asked about constructive criticism, and yet you're also acting like this is a one-off innocent mistake by only addressing what you've done to roll this back for now and address the immediate issue. I don't buy the premise that we could trust that this was a mistake made in good faith when it's something that you clearly should have known people would be so upset about if you got it wrong.
> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code.
What metric did Microsoft use to assess that VS Code users "expect" their commits to have unsolicited messages added to them?
> Obviously, it should not be on when disableAIFeatures is on and it should not be reporting changes that were not done by AI.
Did you discuss adding these messages with your legal department?
What is Microsoft's position on adding such authorship statements to the code Microsoft did not author?
Or is Microsoft stating that using LLM assistants makes Microsoft a co-author of the code?
Does Microsoft have copyright claims on the code if LLM assistants are used at any time during its creation?
Considering the size (and significance) of the VSCode user base, it feels like someone should be in charge of ensuring that default behavior doesn't change without good reason.
Does anyone (or any team) have ownership of the extensions/git/package.json file?
> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code.
Can you expand on this? Who "expects" their code editor to lie about using Copilot?
The supposedly expected functionality is very obviously that it marks copilot co-authored code as copilot co-authored, not the bug that is being reverted.
Under which circumstances can you ever approve something like this?
This goes beyond incompetence. Either you do not understand what important information a commit holds, or, what seems far more plausible to me, Microsoft simply decided to try this out and see how people would react.
Whether or not the intent is good, the optics are extremely bad.
I assume you are keenly aware that Windows, Office, and by extension all of MS's customer-facing products are not exactly regarded particularly well. Windows 11 specifically is a laughing stock today, even among folks who don't necessarily know computers, and a lot of that resentment is driven by two things:
• Pushing AI everywhere when no one asked for it.
• Not reading the room and adding junk features that no one wants.
This change is both of those, again, wrapped up in another package. The timing of this is extremely bad for VS Code as a project as it looks an awful lot like, 'Microsoft is just shoving my AI junk into my stuff and failing to work on the features we actually want'.
I'm not taking a side on this either way, as I will jam a fork into my eye before I use VS Code over VS proper and have no stake in this, but I'm just saying that the powers that be approving these kinds of changes are *continuing* to fail to read the room.
I'll add, as someone who may be forced to consider VS Code in the future (depending on whether Windows unfucks itself before something critical breaks for me on W10), that I would read something like that and, I think rightly, assume bad intent. I know VS Code and VS and Office and Windows are not the same team, but again, MS as a whole has a very serious optics problem, and my surface-level read of this is: "Oh, they tried to sneak in more AI junk, and when called out on it, they pushed it to the back, probably to make it a default again in some future update where they can hide it." It just looks very, very bad at a time when MS products have no social capital left to spend on this kind of stuff.
I appreciate your willingness to come and try and salvage this situation. What I don't understand is why are you the one doing this here and on GH, during the weekend, and not the PM who created the original PR? Surely they have some input.
And another thing: why was there absolutely no pushback on your part on any of the issues with the original PR, and why was it merged within hours in that state?
You are working for one of the largest companies on the planet. You push code that gets used by millions of people.
How on earth are you not thoroughly testing your changes??? How can something like this slip into a real build? Like, this is egregious.
I work somewhere that makes software for a lot of users (although not as many as Microsoft!). We also need to ship quickly. But we work on a 45-day cycle, with 15 of those days being dedicated to ensuring we didn't add any awful bugs (and fixing them ASAP before it goes to users - or reverting the change until it is ready).
I would expect Microsoft to have AT LEAST that amount of care. We can't trust that you are shipping software that even works anymore!
What other changes are going in that are broken in more subtle ways? It used to be that VS Code was rock solid, and any issues were likely third-party extensions - but now it's a crapshoot, and I can't be sure if crashes etc. are the fault of extensions or Microsoft themselves!
The VS Code team needs to use this mistake as motivation to lead the charge on making a quality editor. Not an editor that gets half-baked, untested changes pushed weekly. An editor that is dogfooded and where a mistake like this going to prod is unacceptable.
Because if you don't, people won't trust your editor anymore. Just like people have stopped trusting your OS, and now users are fleeing it in such numbers that the Windows team has recognized they have a problem and are changing course.
That WILL happen to VS Code and GitHub soon unless you actually start owning mistakes internally and fixing them before users find them.
> There was no ill intent by evil corporation, but rather a desire to support functionality that some customers expect of VS Code w.r.t. AI-generated code. As folks mentioned here - many similar tools do this as well.
Please elaborate on what "similar tools" claim that commits are co-authored by AI when the AI features are all turned off. You're trying to defend the theoretically correct version of this that you didn't make, not the actual version you did make.
> I am open to any (constructive) comments/suggestions
It's hard to take this seriously; you know exactly what you did wrong here and what you should have done instead. Testing that this doesn't happen when Copilot was not used is extremely trivial; if you're not lying about it being unintentional, the fact that it didn't occur to anyone to do it still says more than enough about what the priorities are here. At absolute best, the priorities of you and your team are so fundamentally wrong that it's impossible to trust any of you going forward.
That aside, corporations and groups don't make decisions. People do. We can understand and empathize with what led them to that decision (and sometimes we might be looking at the wrong person), but they're still responsible.
On the other hand...this feels like a situation where possibly you should not have said anything at all? The fact that you're on HN responding feels ill-advised to me.
So far this is what I've gleaned:
- Microsoft has PMs vibe coding against VSCode (by itself not necessarily a big deal)
- Microsoft PMs can vibe code against VSCode and get stuff shipped to production with only a single approval
That second one is a huge deal in my book. What I've learned now is that VSCode, a product with an enormous deployment base, is trivially compromised if the calls are coming from inside the house. Apparently all that has to happen for all users to be affected is a PM requesting you to "please approve my PR real quick, trying to get it in." And now there's a massive change in the wild, visible to many users.
Being familiar with big corp dynamics, this really worries me. This does feel like a not-well-thought-out mistake but I can easily imagine many other scenarios that would be far worse.
How can I trust VSCode going forward? How can I reassure my employer and fellow colleagues that it's safe to use? This is really a terrible look for Microsoft and very damaging to the reputation.
I feel bad for you the engineer and PM here because with the web being what it is, folks are casting blame onto you. That's missing the point since the issue is that MSFT even let this happen in the first place. Engineering processes need to be halted and re-evaluated basically yesterday. If something like this happens again it may not be possible to rebuild the trust at all.
I hate to say it but for myself this issue makes me strongly consider switching away from VSCode permanently, something I had not seriously considered before yesterday. Best of luck to everyone on the VSCode team.
Absolute clown car of an operation. Just abdicated responsibility, even when it comes to very basic testing. This is BonziBuddy-scam-software bad, intended or not. Have fun, Microsoft, but this is where we part ways.
One fascinating thing about the whole AI phenomenon is how incredibly hostile it is to _standards_. Whether something works properly, or is ethical, or is true, no longer matters at all; all that matters is "pls use our AI".
Microsoft spent literal decades rehabilitating their reputation. And then set fire to the whole thing in an offering to their robot gods.
And it's not just them. There was a time that Google cared deeply about UX. Now, on macOS Google remaps CMD-G in Google Docs to launch some LLM bullshit (EDIT: huh, they may have fixed this; it was definitely doing it a couple of weeks ago), because, after all, it has only had a standard universal meaning on macOS for about three decades, no big deal.
It's a complete takeover of technically incompetent management that feels like it can finally execute their ideas to the fullest instead of relying on those pesky swengs with their obstructions, complaints and problems. We'll soon get the management utopia everywhere.
> But they insisted. E.g. hijack browser native controls.
[Rant-Example] The goshdarn ticketing system hijacks alt-f, so that instead of opening the File menu of my browser, it toggles the favorite-status of whatever ticket I happen to be viewing.
A mistake was made early on in even letting web apps see keystrokes like that. In a better world, modifier keys would have been used in a principled way from the start: only the window manager gets to see meta-anything, only the shell or GUI app gets to see control-anything, and web apps can work with alt-anything.
To be fair, the native browser controls have had too many quirks and features for UX/UI consistency.
Corporate needs their Brand™ look precisely as specified in their expensive Style Guide. IBM wouldn't want the Google vibes of Android Material Design TextFields, I imagine.
Scratch beneath the visuals, and starker technical differences appear.
Safari on iOS used to have (still has?) a 300ms delay on every tap/click, in case you wanted to do a double-tap gesture.
JavaScript (frameworks) were the only way this arbitrary delay to user input could be reduced before 2015, when Apple finally released a native way to disable it.
This is what I see getting missed in a lot of LLM conversations. They're amplifiers. Full stop. If you have good practices they supercharge them; if you have bad practices, same thing.
Putt's Law: "Technology is dominated by two types of people: those who understand what they do not manage and those who manage what they do not understand."
I suggest that this law does not give a complete description of what has been happening to software engineering in the past 2 decades:
Putt's Law does not address the (new) phenomenon we first saw when 'blog hotness' and minimal-effort frameworks permitted practitioners with little practical experience or hard-gained knowledge to manifest technical capability and assert technical authority. The minimal amount of 'wit' required was access to a smartphone, wiki, or some blog, and you had complete juniors arguing with seniors about architecture and frameworks. AI is taking that to the extreme.
Putt's Law's relevance here is that, prior to the past 2 decades of enabling tech and knowledge bases, 'the clueless manager' had the heuristic of "older and more senior means more likely to be correct", and clueless juniors didn't have blogs or wikis or frameworks that required only a handful of shell commands to install and spin up a 'demo'. AI has made that even worse.
It wasn’t AI that brought us Apple’s gray on slightly-lighter-gray UI standards, nor the 10,000,000 ••• menus that have infested every webapp in the past 10 years as an alternative to thoughtful UI design. We humans made everything shitty before we made AI.
> Apple’s gray on slightly-lighter-gray UI standards
It's a tangential point, but I turned on System Settings -> Accessibility -> Display -> Increase Contrast (the on/off option, not Display Contrast) and now at least the windows are outlined sharply.
A lot of people who think of themselves as able-bodied never think to poke around in the Accessibility sections of their settings menus. But it turns out that accessibility options are for everyone; people should really treat and evaluate them as first-class tools more often.
Of course it is. What should a button on a screen look like? After all, it has absolutely nothing to do with the large mechanical buttons from the '80s that the old designs tried to emulate. In fact, such buttons are becoming rare even in the physical world; the younger generation is more and more accustomed to touch controls for operating all kinds of machinery around them. So "like a button" is very much an age thing.
Nah, one of the things I found in Discord's accessibility settings is an ability to turn off or reduce animations and other visual effects by default, which is wonderful no matter your ability.
These things are like a sidewalk having a ramp that was originally made for wheelchairs but then suddenly everyone uses it because it’s just a nicer experience with less chance of tripping and falling flat on your face.
Possibly a factor, but I also think these issues are becoming much more widespread, leaving us less able to tolerate them than when they were less common.
Maybe, but at least the 10,000,000 options used to be there, instead of being deemed unfit for those pesky users. Now they are not just hidden. They are simply not there.
What is it about AI where the discussion is immediately derailed with whataboutism? Like, are these actually good faith comments? What's the point of bringing up "well some other bad stuff already existed"?
It’s not whataboutism, I’m simply challenging the notion that AI caused this. UI has been in the toilet and worsening for 15 years.
A combination of trend (minimalism cult masquerading as sophistication), pragmatic trimming down of things to work on tiny screens, dumbing down things in an attempt to reduce complexity, and of course dark patterns, to push users toward profitable actions (like clicking ads or continued ‘engagement’) and away from costly ones (finding support, for instance).
It makes perfect sense. There was that talk by ex-Google CEO Eric Schmidt saying something along the lines of "imagine you could develop the software, but without that arrogant programmer". They just hate people, that's all.
This AI boom is not a boom because its good for developers or users. It's a boom because it's a management dream; the promise of pumping up growth while reducing expensive workforce is simply too good for them to not throw decades of platitudes and "best practices" out the window. When people point out where AI fails, they're not seeing past the end of their nose. They don't realize they're not the real customers. It is leadership with millions in buying power who are the customers, and they're the same ones who only ever cared about managing the perception of success and growth; your clean code and user-focused development practices didn't matter to them back then and they certainly don't matter to them at all now. When it comes to an absolute state of garbage products and software, we still ain't seen nothin' yet.
To be fair, most of our industry is so stupendously bad at executing that you can keep growth and save costs by simply laying people off. No AI required.
Some time ago my then project owner remarked that possibly, in the future, apps won't require a UI and people will just interrogate the LLM directly.
I read that as a sign to make a coordinated exit.
Truth be told our project was one of many "catalogue of stuff" kind of apps which at this and projected scale could have well been a spreadsheet in the cloud with search enhanced by LLM.
The idea of having a non-crap Siri on my phone that I could interrogate directly would be amazing.
My ADHD brain would love to do this stuff:
"Hey AI, how much is my electric bill this month?" and "Okay, that's high. Pay it, but remind me next week to order a new AC after researching options for me."
Hard to blame the engineer when the engineer gets fired for not implementing management's whims. As much as I'd like to hold people accountable and say they should just accept getting fired instead of compromising the ideals, the truth is I've got a family now and if they paid me enough I'd do the same.
Then make the bet. If you think the trillionaires will fall, then stand against them. Don't take their money and wait for their inevitable fall when the working class gets organized and starts eating the rich. Hope you have an answer for their AI powered automated kill bots. Man made horrors, entirely within the realm of our comprehension.
why do I need to make a bet? you mean to make money off this? I have more than I can spend in 4 lifetimes. also, personally, betting is a telltale sign of a lack of intelligence, so I would never stoop to that level where I "bet" on shit, one of the stupidest things humans do
> If you think the trillionaires will fall, then stand against them
I stand against them and would even if I didn't think they would fall.
> Don't take their money and wait for their inevitable fall
I don't take their money; I don't now, never have, and never will. I would not work for a company like Tesla or Meta even if they offered me a 7-digit salary.
> Hope you have an answer for their AI powered automated kill bots.
I personally don't but if there comes a time my assistance is needed to fight them I will gladly volunteer to help
> also personally, betting is a telltale sign of a lack of intelligence so I would never make myself stoop that level where I "bet" on shit, one of the stupidest things humans do
Everything is a bet. Buy into something? You're betting it'll succeed. Don't buy? You're betting it'll fail.
You are always positioned. If you could have taken a 7 figure job at Tesla but chose not to, you positioned yourself accordingly. It cost you a seven figure job. What it won you, only you can know that.
Deep down, I hope you're right and I'm wrong. I just don't think it's likely.
> The torment nexus was built by engineers. Not management.
Before the more recent wave of successful tech startups (say, from 2010 on), a very large number of programmers were incredibly sensitive to anything related to topics like (the possibility of) surveillance, privacy, authorities (including government), centralized infrastructure, DRM, etc.
In my feeling, the only reason why this mindset shifted is because from this wave on, in the USA, programmers were showered in money.
The interesting question rather is: now that tech companies want to become more frugal with respect to paying programmers, will the mindset among programmers shift back or not?
The interesting thing is that despite all that money, the basic functionality of tech, except for LLMs, has hardly changed for at least a decade, if not more. The reason LLMs are showered with money is that everything else is stagnant.
Workers are necessary but not sufficient for most businesses. You also need capital. This can be provided by the workers and is for many worker owners businesses, but when the business is very capital intensive that's just not feasible.
Are workers going to be able to fund Apple's factories or ExxonMobil's oil exploration? No, so they're not in charge.
You absolutely can start a worker owned business right now, or go work for one.
Of course. That is why state guided worker coops are a good idea. Or state incentives to provide loans at good interest rates to worker coops.
The state provides capital, the workers operate the business, make management decisions, and have democratic input like the public does.
You might say, that kind of system isn't a perfect solution, but currently we have a dictatorship of wealthy individuals and businesses who are wholly unaccountable.
That's fine, just know that you permanently forfeit any right to complain about others doing things for personal gain that indirectly harm yourself.
> I want to live a good life, and provide for my family.
This is a lie you're telling yourself, you can do both just fine without building the torment nexus. Billions of people do so indeed.
> I want to get rich too.
You should've stopped here, but then it became too much so you had to resort to appending that nonsense. It's pure greed at the cost of everyone else, that's all. Simple lack of morals, impaired empathy and remorse.
> you can do both just fine without building the torment nexus
Doubt. You don't become truly wealthy without doing what sociopathic CEOs do on a daily basis. Society actively rewards that stuff, and it's only getting worse with time.
> Simple lack of morals, impaired empathy and remorse.
Sounds like a winning strategy to me. That's the exact sort of person this world rewards.
Things are not looking good out there. Billions of people get by without compromising? Billions of people live in poverty too. Not something I'm looking forward to dealing with, should the great AI replacement ever come knocking on my door.
And your reasoning is exactly what makes it a winning strategy. "If other people do it, then why not me?" That means they are no longer 'other people'; you yourself are part of that group now. One could argue that it is an even worse position: it literally makes you an enabler of the very problem you see in the world, even as you acknowledge that problem as an existential one.
When we are with billions, we cannot all be 'truly' wealthy in a material sense and by definition your wealth will come at the expense of others. Your reasoning makes me sad as instead of questioning what constitutes true wealth, it seems you are guided by an exclusively materialistic view of it and join the destructive behaviour you see around you out of fear of not having enough.
> Your reasoning makes me sad as instead of questioning what constitutes true wealth, it seems you are guided by an exclusively materialistic view of it and join the destructive behaviour you see around you out of fear of not having enough.
Unfortunately, that is the state of our society right now, and it is hard to see this changing.
Yeah. At some point you get tired of paying the costs that others sociopathically push onto you and start trying to take at least some of the value for yourself instead.
If society has a problem with that, then maybe it should start demonstrating it by making examples out of all those sociopaths instead of turning the other way and quietly profiting from it while the nobodies seethe impotently about things they have no power to change.
> Your reasoning makes me sad as instead of questioning what constitutes true wealth, it seems you are guided by an exclusively materialistic view of it and join the destructive behaviour you see around you out of fear of not having enough.
I'm a free software developer. I quite literally give it all away. I'm also a doctor in a 3rd world country. I work hard to help people for wages that would make 300k+/year 1st world doctors cry themselves to sleep.
I was actually fine taking the moral high road... Until a couple years ago. What changed? I got married. Got people depending on me now. So my patience and empathy for people who are not literally paying my bills is indeed starting to wear a bit thin.
Sad? No one's sadder about it than me. This existential realization gave me actual diagnosed depression. I literally go to therapy because of this shit. That sort of cold sociopathy is simply not the way I was raised.
The problem is my mind cannot deal with this corrupt world by idealizing it. For my own psychological and financial well being, I cannot continue to entertain ideas of what the world could be, if only people were good. I must interpret the world based on what's real.
Yeah, maybe you won't "starve"... But will you live? Or will you merely survive? If that?
It's not looking too good out there. We've got trillionaires bragging to people's faces about how they're all going to be replaced by their AIs. It got to the point someone threw a molotov into one CEO's home.
Source of income? The promise of AI is to literally make all humans economically redundant. In a capitalist world, what is the point of keeping economically useless people alive? People who do nothing but cost society money? Why not turn them all into soylent instead?
If we don't create a post-scarcity society now, I'm not sure we ever will. Choices aren't looking too good out there.
Their point, however, is perfectly clear - it’s practically obvious. No one forced them at gunpoint (unlike those scientists, figuratively speaking). Why are you pretending not to understand?
Of course not. Modern weapons are far more sophisticated. Society does not need to point guns at people to drive behavior anymore, it is sufficient to deploy simple economics. People feel the sting of the economic lash just the same as the literal instrument.
Yeah sure :) People working in one of the most privileged professions on Earth are bound hand and foot with golden shackles and are forced - literally forced - to continue spreading evil throughout the world in order to keep their salaries of several hundred thousand dollars. The horror!
Yes? Refuse, and you lose all of those privileges. The golden shackles go away and are replaced with the rusty shackles of poverty. Feeling the pressure yet?
The richer you are, the more you've got to lose. Easy to be a radical when you've got nothing. Nowhere you can go but up. If you're privileged, there's a long way to fall.
In the recent cases of companies deleting their prod DBs while using LLMs, they blamed "the rogue AI". So it seems you can just blame the AI lab companies and folks roll with it. Even better, they asked the AI to generate its own apology; no need to spend time explaining to your customers why everything is gone.
Most members of management were individual contributors beforehand. I say this just because it is remarkably common for people to assign malign intent or stupidity to people doing jobs that they themselves haven’t done and don’t frankly understand.
I’m not saying you’re wrong. In many cases you’d be right. I’m just saying it’s remarkable how much certainty people have even when it comes to things they know they don’t know.
Aren't you guys glad there are no programmers gatekeeping programming with their "morals" and "etiquette"? Any marketer with an LLM can update the programming tool now. AI really levels the playing field and it's time for pesky programmers to get off their high horse, don't you think? :)
Is a greed/not-greed scale really useful to discuss company behaviors?
I wanted to say I get what you mean, but even thinking about the company I root for the most, I can't think of a point where they're not driven by their desire to make a lot more money.
If your point is that there's good and bad ways to seek money, I'm not sure it's properly encompassed by "greed", which I interpret as the intensity of a desire, not its nature or validity.
To you "greed" might mean something else, but is it properly conveyed?
Greedy people put the desire for more money above the welfare of the business, themselves, and others. Greedy people literally put their desire for more personal wealth above the very lives of others.
Greed/not greed is a very fair way of putting it. One can operate a business that requires profit without wanting to destroy everyone and everything that stands in the way of more money.
I think there's one more factor that is crucially important — greedy people lack long-term vision, and care a lot more about money now than they do about potentially much more money in the future.
I suppose it's kind of interesting that you could measure greed as an unusually high discount rate for the time value of money?
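That framing can be made concrete with a standard present-value calculation. A minimal sketch, with purely illustrative numbers (the rates and amounts are not from the thread):

```python
def present_value(future_value: float, rate: float, years: int) -> float:
    """Discount a future sum back to today at a given annual rate."""
    return future_value / (1 + rate) ** years

# A patient actor discounting at 5%/year still values $1M arriving
# in 10 years at roughly $614k today...
patient = present_value(1_000_000, 0.05, 10)   # ~613,913

# ...while a "greedy" actor with a 50%/year discount rate values the
# same future $1M at only about $17k today -- so grabbing a far
# smaller payoff right now looks rational to them.
greedy = present_value(1_000_000, 0.50, 10)    # ~17,342
```

In this framing, greed shows up as a discount rate so high that almost any immediate gain beats a much larger future one.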
The Seven Deadly Sins provide an interesting perspective to human psychology even in modern times. Greed / avarice is defined as wanting more than you need.
I was recently using an inexpensive paper shredder. I had an urge to put in too many papers at one time, which jams the shredder. Taking into account the time needed to unjam the shredder, the end result is that it takes more time for me to process the papers if I give in to my urge than if I resist the urge and only put in just the right amount of papers. Then I can claim that the "shredder is of bad quality", instead of seeing how I contribute to the problem.
As my aim was to shred papers efficiently, my "sin" (sin = to miss the mark, not to hit the aim) was greed, and the virtuous path is to successfully resist the urge. The blessing I get from the virtuous path is the joy of flow when I efficiently shred the papers.
Yesterday, I was in a shop when I was hungry, and I felt the urge to buy a large chocolate bar. Being hungry, it would have been a constant struggle not to eat all of it if I had bought it. Eating a whole large chocolate bar does not make me feel so good.
As my personal aim is to feel good, eating a whole large chocolate bar in one go is a sin in relation to that aim. I successfully resisted the urge to buy the large chocolate bar -- and did so by buying a small one. That way I did not "sin" too much against my aim of feeling good, because the small chocolate bar hardly affected my well-being at all.
On the surface, it might appear more virtuous not to buy any chocolate bar. However, I know from prior experience that if I had "successfully" resisted the natural urge at the shop, it might have caused me to be unable to resist the urge to buy a large chocolate bar from a kiosk later.
So knowing myself to be the imperfect human being in these scenarios, buying a small chocolate bar at the shop was actually more aligned with my aim of feeling good than not buying it, because the end result was more aligned with my aim of feeling good.
Modern psychology would probably say that this urge is in my superego. Maybe as a child I learned that I don't usually get what I need, so when something is available, I feel the urge to take as much as I can -- i.e., greed is something that I will encounter in many things that I do, keeping me from hitting the mark. As this is a very common way humans miss the mark, and one rooted deep in the psyche, it is a Deadly Sin.
Some theological and psychological perspectives posit that the belief that this urge is a part of me -- i.e. I identify with the urge, I believe that "I am greedy" -- is actually part of the problem. So a better formulation would be, instead of asking "WHO decides how much I need", to ask "WHAT IN ME decides how much I need". And then, what is a healthy and useful relationship towards those urges? It may differ across circumstances: hence resisting the urge to put in too many papers, but replacing the urge with a lesser one in the case of chocolate bars.
The point might not be to learn to "control" the urge -- we know from systems theory that excessive control can cause a backlash, in some systems quite literally. A healthier relationship is often to just observe, and then learn how such urges affect my well-being -- i.e. to learn more about myself. Often the observation itself is enough to have an effect.
We can take a corporate analogy (literally, corpus = body) and ask what in organizations (again, 'organization' has the same root as 'organism') causes them to be "greedy". In other words, what drives organizations to such an urge for excessive profits that they ignore the harms they cause to employees, society at large, or even customers (i.e. enshittification)? This urge appears very similar to the urge in humans.
That question will lead to other interesting questions about politics, economics etc. For example, you can ask, what is the aim of such corporations, and whether that aim produces results aligned with the aims of societies at large, etc.
maybe long term vs. short term is the key idea. apple, for example, could rake in bountiful measures in the short term if they ventured away from their boutique-electronic-consumer-goods niche. in the long run it would hurt their bottom line to do so
I'm old generation and almost forgot for a while. GitHub was good even in their hands at the beginning, C# is amazing, TypeScript is amazing, WSL2 is a game changer (which reflects the change in Microsoft's position on Linux), VS Code is amazing, Microsoft's greatly increased presence in open source was nice (Rushstack, for example), etc...
But well, they still have the garbage side, which seems to be spreading again.
I second the C# praise: we have a few teams building software with C# and having to debug it here and there, it is very modern, compiles cross-platform and has lots of functionality already built-in and from the release notes I read from time to time, the people behind it know what they are doing.
> Probably they thought the new generations forgot about how awful they were in the not so distant past.
More likely, never learned about it in the first place, save a few whispers. Who's got time to go digging in deep, when there's 'experiments to run, research to be done' ...
> I think they set it all on fire because greed got the better of them again.
AI psychosis. The divide between rich and poor. They live in their own golden bubbles and there are no sanity checks. The workers are so far removed from the realm of competence and influence that it's just CEOs and VPs trying to pump the next 6 months' stock value regardless of anything.
It's like the zeitgeist has decided the only thing that matters is their own farts and how they don't smell.
Isn't that just like.. what Microsoft has always been? Browser wars, Tay, bad behavior around open source software.. This is how they roll. They're being their best selves.
Thank you for this. I completely agree. Microsoft has always been awful, and they likely always will be. However, they did strike gold a handful of times, and they are just reliable enough to feed enterprises.
Apple, Oracle, Adobe, Google, IBM, Microsoft, etc... All the established players have their own distinct flavor of awful. This incident is just a very on-brand flavor for Microsoft.
The industry spent decades preaching to us about power savings, with the Microsoft settings application lecturing about power saving and the update app scheduling updates for renewable-energy peaks, only to... waste gigawatts by forcing Copilot on us everywhere.
If Microsoft were consistent, which it isn't, power saving mode would disable AI features.
I literally must have missed that. When did Microsoft ever encourage energy saving?
Is this related to power saving for extending laptop battery runtime? But then I don't get the link to renewable energy.
Anyway, I agree with the notion of the extreme energy-inefficiency of LLMs. The scale of it makes it hard to imagine any less efficient product will ever be invented.
They literally have a green leaf next to the power saving options. Also, there's an option in Windows updates to time the upgrades for when the grid is mostly renewables.
When I've been working on stuff that requires an SSO login, I noticed that it makes what I consider hostile, anti-user choices, defaulting to tracking pieces of information I didn't want to track and hadn't mentioned.
Fair enough that I didn't explicitly instruct it to make more pro-user choices; it just seemed to assume that slurping as much information as possible into the backend was the default intention. I wasted a few more tokens iterating to remove things, but it was, IMO, interesting enough that I finally submitted feedback about what I imagine is an interesting training problem.
Has always been the case. Corporations hate standards and would rather lock you in except where market forces prevent them. It was a miracle we have something like the internet - and the government had to create it.
Microsoft's decade-long PR rehabilitation has worked wonders for them.
> And then set fire to the whole thing in an offering to their robot gods.
It's the bourgeoisie dream: A means of production that also does the labor 24/7 and can't complain, infinitely spawnable. Theoretical slavery+, so of course they're throwing everything into the furnace for it.
These next few years are the real turning point. If they are right about AI and robotic workforces, then it's checkmate--they don't need us anymore, and we're next for the furnace. If they're wrong... well, I don't know... Will there be any consequences? Maybe a few people lose a few percent of their net worth.
The AI tool providers need companies and customers to pay for the tools and automation. If all the white-collar jobs in the Western world are replaced by AI or AI-generated SAAS products, some 60% of the workforce suddenly won't have jobs. If such a large percentage of the workforce has no income through employment, who will be able to pay for the services from SAAS providers, and thus ultimately the AI providers?
The tradesmen working on my house renovations aren't consuming SAAS products during their day jobs.
The white collar workforce can't rapidly switch to blue collar jobs.
So for these companies to remain viable, they need the white collar workers to still somehow end up with enough money to pay for services that ultimately the companies provide.
Maybe the turning point will be a recognition that companies can't only focus on maximising shareholder value. They also need to consider their role in maintaining and improving the societies they operate in.
There will always be jobs for private security, firefighters, and utility repairmen to protect / restore the data centers when people inevitably attack them.
There will be a period of rapid change. If we are lucky, the political class will see and adjust policy quickly. Otherwise we will see US urban areas gutted like the Rust Belt was after NAFTA / WTO. They are making the same mistakes but in a different industry.
Why will there always be these jobs, if the technofascists are right? They're creating enslaved sentience. Even the class-traitor police want a union and fight for more pay.
What's uniquely un-automate-able about those jobs in their dream future?
Google will definitely lose. LLMs supplant search -- but not the old document search, which they stopped doing long ago.
Add in the fact that open-weight models are 6-12 months behind frontier models, and AI companies aren't building a moat; they're on a treadmill. And treadmills don't justify the valuations OR the hype.
I see one profitable enterprise for AI that involves spying on everyone, managing their lives (or otherwise) tightly, automating foreign conquests and needing to make only the top decisions while delegating everything else, like a king. I can see a group or one could say a class of people that would happily invest in such future.
Exactly. I keep saying, AI is not useful to us. There will be no AI companies.
Even with this supposedly profitable enterprise, the people involved are absolutely too moronic to control the thing they're trying to invent; it will just be a matter of time before it turns around and eliminates them as well...
Some are piling on masses of debt to built capacity (eg. Oracle). Others are just reinvesting the profits from the rest of their company (eg. Google, Meta).
Anthropic’s moat is their best tool, Claude Code.
OpenAI’s moat is the brand of ChatGPT, once the fastest growing app in the history of the world.
It’s possible that open weight models keep pace, but it’s also possible that the investment to train them becomes prohibitively expensive and open weight models cease to keep pace with the large foundation model companies.
I really don't think open models will lose. I think they are cheaper to train because they have to be more efficient than the monstrosities we have now.
There is no theory that says the current frontier models cannot exist in models with 1/100th the compute waste ;). When we start trending in that direction, and oh wow we truly are, there will be no reason for these services. You could run them on your own hardware without serious investments.
The moat OpenAI and Anthropic have is that they, among others, have attempted to buy up all of the computer hardware for the next two years. That's intentional. They know the only existential threat to them is someone coming up with a way to do this better than they do. It's already happened, and it's going to become more and more divergent.
I’m interested in learning more about your theory that these models can be trained more cheaply. Is anyone doing it from scratch, rather than adversarial distillation?
It is a lot cheaper to train a 27b model such as qwen3.6 which you can even vibe code or agentic code with than it is to train a 1t+ parameter model. It runs on a single commodity GPU for goodness sake
It's not a theory. These smaller models that are coming out are huge advances for the field.
I can't comment on companies training practices. That would be proprietary stuff I guess. I think the claims that the advances being made are due to distillation alone are completely unfair. The advances alone are not just data.
US megatechs stole copyrighted data to train their hyper-expensive models.
Chinese megatechs stole copyrighted data AND trained their models on derivative / synthetic data that came from the US foundation models.
I’m happy Chinese foundation model trainers were able to use Huawei (homegrown) hardware to train their models (also because having Nvidia dominate that sector is terrible for competition), but if Chinese megatech companies are just deriving their open weights models from US companies, then this is just an IP theft exercise.
One of the double-edged swords I see is devs/evangelists pushing agentic coding while leaning on the 'good enough' argument. If that is true, and those asking for software can live with good-enough AI code, then the moment free local models hit that level, the party is over in the continual push to the premium, tip-of-the-spear models.
We might already be there. I've been running Qwen-3.6-27B with 8-bit quantization locally with llama.cpp (~100k context window), and to be honest, for my use case it is more usable than claude-code 40-50% of the time. I only have the $20/mo plan, so I often hit rate limits after 2-3 prompts. And while the local model is slower, it just keeps chugging, is practically free, and more often than not produces code similar to Claude's. I wouldn't be surprised if in 6-12 months we have local models comparable to opus 4.6... which I would personally consider the tipping point where agentic coding becomes practical.
I haven't read the claims, so I don't know how easy it will be to work around them. This particular one seems to cover encoder-decoder networks, so it's not necessarily applicable to later LLM implementations. But I'd be amazed if Google didn't have several other relevant patents in their arsenal.
A few percent of your net worth, when you're sitting on top of a pile of gold like a dragon on a yacht is one thing, but when you're a retiree, and you're on a fixed income, living off the proceeds from an annuity and a reverse mortgage, and inflation in all its forms is eating into the plan you had, and you don't have any backup, yes there will be consequences!
Initially I assumed that when the bubble burst, some VCs would go bust, Oracle would go bust, a few hyperscalers would take a significant haircut but carry on, and life would pretty much go on. However there's now sufficient dodgy AI-related debt making its way onto the debt markets that the bubble burst could be a lot messier, and it may be more than a few percent.
People (well, American people (disclosure, I am an American)), used to be scared/worried that Silicon Valley will eventually move to Bangalore or Shenzhen, because of wage-discrepancies, and so on -- and it is not a totally unreasonable concern, considering that the _Silicon_ part of Silicon Valley has been slowly relocated to Taipei, Seoul, Tokyo, and a few others. At this point, maybe we should start pushing that the _rest_ of Silicon Valley gets relocated somewhere else, too.
It's a breeding ground for Edisons and Morgans, not Teslas. It is profoundly depressing that SV is doing everything it can (knowingly or unknowingly, not sure which is worse) to get the entire planet to stop taking it seriously and to shun it.
If you have worked in Silicon Valley you know that Bangalore and Shenzhen came here ;)
In all seriousness, the silicon is still designed in Silicon Valley but maybe you don't hear about that as much? Broadcom, Qualcomm, Intel, Samsung, AMD, Nvidia, etc. all have a huge presence there still.
Just to emphasize my point: China is not being deprived of chip _designs_, but rather of the actual physical machines that rearrange the atoms (via export bans on ASML-made lithography equipment).
One thing's for sure: I won't be buying any SaaS, streaming, or ordering from Amazon if I have no future prospects for work. I already stopped most of my subscriptions because of a layoff unrelated to AI.
We buy food and go for walks as entertainment. It's been refreshing but also obviously scary.
Didn’t get the “scary” part. I also keep my entertainment to the minimum dependencies possible. I try to rely on stuff I own: music cds, iso videogames + emulators, physical books or ebooks (thanks Anna), exercise outdoors… ditching streaming like netflix/youtube, buying crap on amazon, uber, etc
It’s the combination of AI changing the workplace, the large techs shedding double digit headcount, recruiting / hiring departments being so broken by the AI arms race hitting job applications, and the macro business environment generally being on the downward slope at the moment.
This feels like the same mechanism as climate change. The actors don't care, since they're not solely responsible for the outcome and benefit from ignoring it.
Automation tax solves all the problems? Seriously? The tax would go to retraining programs, according to the linked paper, so that workers can be reabsorbed into the workforce. The undiscussed conditio sine qua non: the economy has room for additional workforce, and the government, as the distributor of said tax, has implemented sufficient legislation and safeguards to ensure the tax goes to these programs and not to another pointless war, subsidies for agriculture, or tax relief for the rich.
This paper proposes a solution for which the framework/base is missing.
Not really. A company is not one monolithic entity with a single will. Far more plausible than "it was all a trick" is that for a time, people were in charge who really were trying to improve things, and now, those people have been replaced with others who are willing to burn it all down.
Before 2010 or so, “serious” internet developers wouldn’t touch Microsoft stuff — Microsoft was for office memos and poorly structured spreadsheets and that was it.
So yeah, Azure being a real option at the highest levels of internet-scale operations is a turnaround from where they were.
That’s not an accurate take. Microsoft has had a monopoly on the PC desktop OS. Anyone writing applications for users was targeting Windows and using Microsoft. To call most of these developers “not serious” is quite an overstatement. This includes all PC game developers, DAW, CAD, Adobe…?
Azure expanded the Microsoft franchise, and provides another prong to their whole integration story just like cloud AD services and online Office 365 provide another way to stay integrated into their ecosystem.
Yeah, they needed to work on their image somewhat, but their image never negatively impacted them
> Anyone writing applications for users was targeting Windows and using Microsoft.
Developers as users, sure. MSFT was common. Developers as responsible for infrastructure, MSFT anything was considered a huge risk and unreliable in the 90s.
Granted, my memory retains only a general narrative...I remember a shift by 2002ish when I started to see windows servers as perfectly fine machines for closet/under-the-table infra you didn't care too much about anyway. By 2004 they were moving out of the closet, so to speak. Then those machines became more important because more was being done with them and were considered "just as good" as any other OS. Developers that had experience, with their MSFT certs in hand, were cheaper too. It was a slow progression to eat into the corporate marketshare. By 2006 virtual machines were ubiquitous and you could run MSFT virtualized. Many companies do that by default today for workspace controls. I have never and would never choose to use MSFT products (including Azure) for business critical infra. MSFT acquiring Github was great for them, and the death of it for me. I'm probably an old outlier, but I 'member.
I think the first shift was the reckoning with Windows NT actually being decent software. Windows 2000 (AKA NT 5.0) included Active Directory, WebDAV support, and a host of other features that were genuinely useful in a sysadmin setting [0]. Also, it shipped with IE5 which introduced XMLHttpRequest and was the best web browser by a mile. Between their pushy sales reps and so much stuff being included by default, I think it got kind of hard to push for anything else for a while.
Right, those are all desktop applications. Microsoft has long owned that market.
I said “internet developers” meaning web sites, servers, apps, etc. Microsoft’s early offerings in that space, plus all the pain they inflicted with Internet Explorer, is what took years to overcome.
As an MS dev at the time: MS missed The Web and Mobile, thinking Office would be enough. Everything since is catchup.
On the one hand MS was a web pioneer — asynchronous web calls and ActiveX technologies that were surprisingly capable — but these were peripheral to their main goals.
Instead of MS extending their unified development platform outwards, something .Net promised to enable, effectively the opposite happened. .Net chased Java, but Java was being pushed out by Ruby on Rails. .Net web starts chasing RoR, but then Node is getting cool. .Net Web starts chasing Node and that effort splits .Net into uhhhhh ‘Framework’ uhhh ‘standard’ (ie Old-and-working), and .Net Core (what a container based web stack VM needs to look like).
The problem at that point, IMO/IME, is that Node is JavaScript, and those awesome server-side geniuses dumped too-easy tooling while recreating every problem of every stack ever (i.e. LeftPad, loosey-goosey versioning, and NPM being a crypto hacker's wet dream). The .Net that started as Enterprise Server Stuff is now kinda sorta ‘Whatever’ about versioning, stability, roadmaps, and platform planning. Everything from DataAccess to GUI was churned needlessly for almost a decade, and everyone using that platform looks and feels like an a-hole because huge swaths of MS tech are abandonware, resulting in perpetual rewrites of recent-term work and silos of competence.
No one can explain what framework to use to write a basic windows application anymore… Office uses React, and Windows does too… the fat cats who made MS into M$ knew better than that, the M$ who chased cloud growth and cut staff for stock price has never cared.
Hackernews used to experience a collective paroxysm of joy every time a new Visual Studio Code dropped. There definitely was a pervasive belief that the Nadella era ushered in a cuddly new Microsoft.
I remember a time, way back, around 2010 maybe?, where Microsoft was referred to as "M$" in this place and generally perceived as an evil corporation o.O
Most likely more a difference of venue. I saw lots of that on Slashdot. Less of it on Digg or Reddit. Virtually none of it here, but it seems to be making a resurgence in the form of "Macroslop" and related epithets
Both things can be true. VSCode did help us get to the point where I can use it on Linux, MacOS, or Windows and have a lot interoperability. It's the typical cycle. All it takes is a couple people to get their hands on managing the code to turn anything into garbage.
This was later—into their We ❤ Open Source era. M$ and stuff dates from like the mid-late 90s. The late 2010s were when they started publicly acknowledging that open source exists, acquiring GitHub, and releasing things like .NET Core and Visual Studio Code, and a lot of people in the open source camp did a "pointing soyjaks" and forgot that the Halloween Documents existed and that EEEing open source was already in their playbook.
They went from demonizing open source software to buying GitHub, releasing their own open source software (including VSCode), and hosting Linux on Azure. Huge changes! But of course it ends up being another Embrace and Extend move by the masters of that tactic
I think it's true though. They don't care about Windows anymore, that's plain as day. Most of their software is now cross-platform. Who cares about Windows if you are selling Azure instead and people can run Linux on that?
They could have shipped a good product with all those billions they spent in reinventing Clippy.
I have this feeling that their bet was that all the Microsoft shops will jump on Copilot without looking at alternatives, so they did not really have to make it as good as their competition.
"good" is not important for software anymore, at least in the regular consumer market. Companies have discovered that people will just continue to accept subpar, unfinished and sometimes even partially-functioning software.
if internet comments are any kind of indication (which they very well may not be) I've seen lots of people complaining about win11 but remaining because they can't give up playing their favorite online hero shooter. That's acceptance to me
Agree that acceptance is irrelevant. No one has a choice, because all the “competitors” in any given niche (phone, cloud platform, PC operating system) are executing the same play. Enshittify, extract profit from ~suckers~ customers, ignore any churn because with the limited choices available there will be new suckers to replace them.
We accept this the same way we accept the air quality wherever we are.
Yes, Linux is there, but consider the barriers to the average person of truly adopting a strict Free Software life. Consider how many things in life now simply demand for you to have an Android or iOS phone. Things as simple as parking.
Well, now no one has to convince anyone to shell out for upgrades because everything is a subscription. What worked perfectly well can now get replaced out from under you overnight
Making good products was never Microsoft's MO. Even during the peak of the Nadella era, the good bits were side shows. Microsoft Office and Windows have always been things that succeed primarily via network effects/lock-in.
Good products are not profitable enough. Not that good products are profitable at all, but if it doesn't make disgusting amounts of money this quarter it's not worth considering at all.
We've reached the phase of "infinite shareholder growth" where physics says no, and that is so unacceptable that we'd rather burn down the entire global economy than accept less than exponential growth. It isn't that growth is impossible either, there just can't be enough growth. Break-even is apparently a fate worse than death
Microsoft continues to make billions in profit despite its spending on AI, because it has a diversified business that generates revenue. I don't get why they would be "scared"? It's basically a calculated risk at that level.
> They could have shipped a good product with all those billions they spent in reinventing Clippy.
I really liked Copilot - it gave you a lot of tokens across a bunch of models and their agentic features were perfectly serviceable, alongside it being really affordable! And then they moved over to usage based billing and it no longer has that advantage over the alternatives: https://github.blog/news-insights/company-news/github-copilo...
I still think they have a really good AI tab autocomplete implementation and it's nice to be able to use that in VSC without swapping to another editor altogether... but that's not enough to really make me pay for their subscription. I could probably move to Zed altogether if I had a problem with VSC itself, though at least the base editor doesn't feel like it has been enshittified and I quite like it, all things considered.
In my experience so far with Azure, it shines at one single thing: IAM and acting as an IdP.
Even with the free version you get phish-resistant MFA, SAML, OIDC, OAuth.
But go beyond that and it is messy:
- creating a single VM is an extremely convoluted process
- Intune needs up to 24 hours to apply changes to a managed computer
- There are at least two management consoles for Entra. Each with slightly different functionalities.
I don’t know how Microsoft is organized internally, but it feels like product organizations don’t talk to each other and everybody is just building stuff on top of Azure as if their thing is the only product MS ships.
GMAIL in the web is so shitty, I literally switched over to another provider. I don't know how anyone can use them as their webmail client. You can't make sense of longer mail threads with forwards, answers etc. in between - it becomes an unreadable hot mess.
> Microsoft spent literal decades rehabilitating their reputation.
"Decades" is a stretch. There was a brief window around the Windows 7/8 era and then, like a dog returning to his vomit, they returned to their user-hostile bullshit. Windows 11 is the culmination of that, but Windows 10 was plenty bad. Remember how Windows 10 made Solitaire a subscription service? Sticking copilot into everything is just more of the same.
>There was a time that Google cared deeply about UX. Now, on macOS Google remaps CMD-G in Google Docs to launch some LLM bullshit
That reminds me of a few years ago when Android phones replaced the behavior of "long press sleep/power button" from "shut down" to "ask AI about what's in your screen". Perhaps a manager got promoted somewhere for "raising AI usage" in Android phones.
The thing that annoys me the most (to use polite language) is that product design went out the window with the AI craze. You could probably ship actual products that actual people would want to use, but instead everyone wants to turn everything into a chatbot, as if chatbots are the pinnacle of user interface, the crabs of software, the purpose, goal, and telos of technology. It drives me nuts.
A text input field for entering your command line(s), with a text log for the output, does indeed seem to be the crabs of software. Usually with some abstractions that allow you to write longer scripts[1] and just refer to them by a short name or alias, and compose those scripts together from your command prompt.
You could say it's the terminal[2] user interface.
While this is very pithy, we need to acknowledge and remember that there's a gulf of difference between normal terminal interfaces and command line interfaces, and whatever the chatbots are doing.
Yes, both have a prompt where you type text to do things and get text back, but the type of text you write in one is very different than what you'd write in another. Prose versus commands and so on. Oh, and normal terminals don't waste electricity and water in amounts approaching small countries.
> turn everything into a chatbot, as if chatbots are the pinnacle of user interface
i have seen this first-hand, so many chat bots added to so many screens... like how about just making the ux better? well, that wouldn't look good at individual/team review time cause it's not "using ai", so it's no surprise that's what we are getting.
If you look at the staggering amounts of money that have been put into the tech, this attitude becomes practically mandatory, in an inhuman sense. They have to get ROI, at literally any cost. And it shows.
What use is a reputation if you don’t spend it now and then? If this lets Microsoft cut some divisional headcounts by 95%, certainly their enterprise customers are onboard with naked greed and don’t care about how it looks either — and us individuals aren’t relevant to MS, so why would maintaining our good perception of them matter at all?
The entire selling point is "you no longer have to conform to standards in input to get usable output"; why would they conform to standards in output, or in process?
I don't think anyone at Microsoft truly understands how much they have ruined their reputation. This won't be fixed again by open-sourcing a few tools. Fool me once, etc.
I will fight against any Microsoft tooling being used at every company until I die. This is unforgivable.
What did Command+G do in OSX? Online results are saying it "advances to the next search result after doing find". In other OSes, that's just the Enter key, if I am understanding the context correctly.
> Microsoft spent literal decades rehabilitating their reputation
I hated with a passion when people claimed "MS loves open source now". I feel vindicated.
If a corporation can do a 180° turn in one direction, it can do a 180° turn in the other direction just as fast. They did not understand that, either because they didn't want to or because they weren't smart enough to understand how incentives shape behavior.
The incentives of a corporation are roughly: making money for "shareholders"[0], making money for the C suite, making money for managers.
[0]: = People who do none of the actual work but have enough money to use it to get more money which therefore goes to them instead of the people doing actual work. (Intentionally saying "get" instead of "make" because they don't "make" anything.)
Their search homepage was supposed to be minimal. I was at a tech talk given by Google sometime around 2012, and they said that their ad service is not under any circumstances allowed to slow down the page load - if the ads don't return before the page is ready, the page is rendered without ads.
Chrome had so many great ux choices originally, such as tabs all staying the same size when you were closing them so that you could close multiple easily and only resizing after a second or two (that stopped working around a year ago). Hell there are even rumours that Chrome is called Chrome because it was a polished UX.
Their original products were so smooth compared to what was there before. Search compared to altavista, mail compared to Hotmail, both compared to Yahoo!. I really don't know where your perspective comes from. GCP?
If I remember correctly, chrome:// used to have special meaning in Firefox (and probably well before that), and was used to tweak UI settings. I always assumed this was where Google took the name from.
Chrome is a now-somewhat-archaic term for GUI (or specifically the actual elements of the GUI, not the concept), and Netscape/Mozilla did use the term a lot. Google claims that their browser is called Chrome because of an association with fast cars (presumably Google was keen to market it to extremely old people, chrome not having been a particularly big thing in cars for a very long time).
> Google claims that their browser is called Chrome because of an association with fast cars
FWIW, before Google Chrome, Firefox was originally Firebird (changed for name collision reasons), and Mozilla had broken off the rest of the Netscape-ish "communications suite" into Thunderbird, both arguably named after cars.
Besides the use of chrome by Netscape/Mozilla that you mention, roughly around that time I heard it used by HCI people to refer flashy GUI design for cosmetics rather than function, and specifically to changes in a particular MacOS version.
I wonder whether Netscape/Mozilla jokingly then used it as a term for the GUI toolkit "trim" around the browser page. Given that this was a transition to the important stuff being on the Web page, rather than your computer. And/or whether Google did.
> FWIW, before Google Chrome, Firefox was originally Firebird (changed for name collision reasons), and Mozilla had broken off the rest of the Netscape-ish "communications suite" into Thunderbird, both arguably named after cars.
Mozilla named the web program Phoenix, for rebirth. A company objected. Mozilla renamed it Firebird, because a phoenix is a fire bird. They named the mail program Thunderbird for its similarity to Firebird.
Between Netscape Navigator and Firefox, their web browser was called simply "Mozilla". It supported GUI themes in XML with images which were officially called "Chrome".
Mozilla also hosted user-contributed themes on a web site called "Chrome Zone".
The browser was considered slow and bloated however, and when Firefox came, its lack of theme support was perceived as part of it having been de-bloated.
I might vaguely recall Mozilla being in an easter egg or alternate throbber in a Netscape browser, and my impression was that it had been an internal codename at Netscape which was then adopted for the open source project.
This comment and a few others here make me feel old and sad for the people too young to remember that time. Yes, Google was an enormous breath of fresh air when it came out. 1000% better UI and features than the competition. Search was incredible. Gmail was a revelation. The whole company culture was night and day compared to the stodgy old tech companies like IBM. Just mind blowingly awesome. And then maps?? How did they even do that? The tech world felt entirely fresh and new and hopeful.
They basically revolutionized the web with the JavaScript V8 engine in chrome. Before them, JavaScript performance was so bad you had to have a really light touch with it.
In my circles it literally was the same people. Instead of trying to get me to buy ETH they started talking only via LLMs. Unsurprisingly we aren't in touch anymore... Maybe they are happier with their chatbots, I'll never know that's for sure
I'm intensely curious, since you know they're grifters, why are they in your circles? I guess maybe you don't mean circles the way I'm thinking and more the whims of algorithms?
Because I am too nice and even though every conversation had an element of grift there was still a conversation. Most of them are lost, or struggling with their identity. Yes there's some greed but half of them just want to fit in somewhere and they aren't technical geniuses despite loving technology. I like people like that, of course with out the grift.
That said we don't keep in touch anymore. I do miss them though. I'm something like an abused dog that has seen too many things in their life to not look past all ugliness and see someone's inside. I hang around a lot of hurt people because í want them to have a safe person they can come to if they choose to heal.
Wow that's personal. I should stop posting here and go find some new friends.
People get sucked into all sorts of schemes or ideas.
I never said grifters, but a fair share of my social circle pumped cryptos/NFTs when they bought some (small amounts, but whatever).
The same people just can’t shut up about AI/LLMs. I don’t care that your LLM helped you generate an Outlook email address export tool when a quick Google reveals Outlook can export the email addresses natively with just a few clicks.
I’m not, I’m presently underwhelmed by the examples everyone shows.
I’m yet to see actual productivity result from people paying to talk to chatbots to generate boilerplate.
But I tend to shy away from hypers, so the LLM craze is passing me by. I have seen uses of AI/ML that help recognize objects in images, which it does OK at (and it should, because it’s the same image just 10m down the road). A human then reviews the outputs. It also spits out highly inaccurate outputs fairly often, so the human is necessary even with a feedback loop.
See how fast so many of the crypto and NFT/Web 3 lot shifted to AI, like rats on a sinking ship.
I think VCs saw Crypto and dreamt of being able to create the same amount of irrational value. AI has the same technical complexity "You can't easily explain it in a single sentence" energy but unlike Crypto and NFTs, enough actual utility to not seem completely illegitimate. It literally is the perfect hype grift tool. Crypto has survived almost 20 years off of nonsense, how long can this crap last. sigh
If you still think crypto and AI are nonsense, then I guess you will carry these beliefs the rest of your life, but these beliefs won't outlive you, as they have no relation to reality.
I said AI has utility but drives irrational levels of investment. Crypto has little utility besides a place to gamble, con credulous people and otherwise act as a really shitty store of wealth.
Most modern crypto projects barely bother to promise to do anything useful let alone achieve anything useful, which the overwhelming majority do not.
Indeed, it would be difficult for Iran to receive payment for passage through the Strait of Hormuz without crypto, or for North Korea's ransomware economy to be so lucrative.
You're speaking complete nonsense, considering there is no evidence of anyone having paid Iran in crypto. Iran had been receiving payments in the Chinese currency.
Who is building their company using permission-less blockchain as the database? The average person still uses a bank checking account, not replacing it with a crypto account.
I haven’t heard of any progress on tokens in the Governance direction.
Stablecoins without a public audit trail have so far stayed relevant, but there are several which are suspiciously reminiscent of the mistakes that SBF made.
We all see the transfer of funds and the ostensible store of wealth when it comes to buying influence or presidential pardons. Those of us not wearing crypto-colored glasses don’t see the promise that VCs sold us on the industry 5-10 years ago.
I never spoke about NFTs nor do I have to speak about them, not today and not ever, so save your bait. It's in the same way that you didn't speak about bank bailouts, so I won't bait you into it.
Most people obviously use multiple accounts of different types. Those who have crypto wallets will never reveal them to you in the interest of their privacy.
Stablecoin firms make so much cash via interest that they're easily over-capitalized.
If you're foolish enough to be manipulated by VC interests, that's your own fault. I would focus on the tech, not on what VCs want you to believe. This applies generally, irrespective of the sector. I don't know why this is hard to understand.
NFTs are stupid. But I have a feeling as governments default on their debt and economies collapse in the next few decades cryptocurrencies will be of increasing importance.
Cryptocurrencies are now useless, considering how OpenAI and similar companies have enough compute to hijack them, and the AI thing might not work out at all…
(1) Capability is not the same as action. Every police officer in my city COULD murder me with their department issued gun at pretty much any time, but they haven’t. There are multiple reasons why, not the least of which is that _actions have consequences_. Worrying about that scenario is futile.
(2) The major cryptocurrencies aren’t as vulnerable to a malicious majority as you seem to think. All of the BTC ATMs, PoS providers, crypto exchanges, etc have strong incentive to ban malicious peers and they can do this soon after they identify the threat. The malicious majority would not be sufficient - they would also have to continually mine their own blocks faster than the rest of the network does.
(3) There would be a forked blockchain, but only naive nodes which trust by default would continue with the illegitimate fork. If the nodes that actually USE the cryptocurrency don’t agree with the malicious majority, it will be difficult to get the coins/tokens out of exchanges.
(4) Any stolen funds last only for the duration of the attack. Once the AI GPUs stop the attack and return to responding to LLM prompts, the legitimate blockchain returns to being the longest one, so the whole network returns to trusting the legitimate fork.
(5) The BTC network is controlled by a protocol agreed to by consensus. If the illegitimate fork stays longer than the legitimate one, the participants in the market can agree on a protocol change which hardcodes the illegitimate blockchain out of the picture (this happened with Ethereum in the early days, after the DAO exploit).
Because they're needed to run AI. Newer hardware is increasingly specialized for AI too. Moreover, if funds start disappearing, the price will crash, negating the point.
The best part is that copilot commented on the PR saying that this doesn’t actually change the behaviour, creates inconsistency in the codebase and suggested reverting the change! (This comment seems to have been ignored…)
> The configuration schema default was changed to "all", but the runtime fallback in extensions/git/src/repository.ts still calls config.get('addAICoAuthor', 'off'). This is now out of sync and can lead to unexpected behavior in contexts where the contributed configuration defaults aren't loaded (e.g., some tests/hosts), and it makes the intended default unclear. Update the runtime fallback to match the schema default (or omit the fallback so the contributed default is used).
I also liked the bot posting screenshot diffs that are all false positives, while apparently not capturing the default change (is it not in some menu somewhere?)
There are two commits in the PR, the second of the two seems to update the fallback config to avoid the inconsistency that Copilot was complaining about.
This feels like the modern version of 'Sent from my iPhone' but much more invasive. Git commits are legal and technical records. Falsifying who authored a piece of code just to pump up AI usage stats is a huge breach of trust and it is disappointing to see Microsoft prioritize branding over the integrity of the developer's log.
I expect my IDE to record what happened, not what the marketing department wants people to think happened.
I don't use git features in vscode, but from what I understand the user clicked some button to make a commit, typed in a commit message, and then hit "OK" and the editor called `git commit ...` in the background... after silently adding "Co-Authored by Microsoft Copilot" to the commit message.
That's a little different than Claude doing the commits all by itself and happening to include an attribution line. Especially since, as it turns out, this was being done on clients that had all the AI stuff turned off. But even if that weren't the case, it'd still be wrong.
Nothing wrong at all with separating out Claude’s work with commits! In fact, it’s preferable IMO — it lets people browsing the history identify code that was primarily written by AI.
This is not just a hypothetical but a fairly common workflow: I had already written the unstaged code change myself. I ask Claude to review it and, if it's OK, to commit and push.
At no point did Claude author any of it, just review it. So a co-author statement is false.
It's technically the same thing, because a commit-msg hook can easily remove it.
I did this with the very first versions of Claude, which didn't have a documented setting to turn it off, and have kept it ever since. It works with every single coding tool because it just looks for the same keyword.
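For anyone who wants to replicate this: a minimal sketch of such a hook, assuming the tools all emit a standard `Co-authored-by:` trailer (the tool names in the pattern are examples; extend them to match whatever your tools add):

```shell
#!/bin/sh
# .git/hooks/commit-msg (must be executable): git passes the path to the
# draft commit message file as $1; rewrite it to drop AI co-author trailers.
msg="$1"
tmp="$msg.tmp"

# Case-insensitive match on the "Co-authored-by:" trailer; the tool names
# here are illustrative, not exhaustive.
grep -ivE '^co-authored-by:.*(copilot|claude|cursor)' "$msg" > "$tmp"
mv "$tmp" "$msg"
```

Because the various tools all reuse the same trailer keyword, a single pattern in one hook covers them, which is presumably why it works "with every single coding tool".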
I think it's kinda cute that you don't see it as an attempt to steal code by claiming they "co-authored" it. How long before they claim they can use any code co-authored by Copilot in training? How long before you see your own code, "co-authored by Copilot" as an output in a commercial product that YOU aren't making a profit from? Just a thought :)
Those services always asked ahead of time though. And at the time, it was seen as cool, like a not-so-subtle "look at me, listening to music on this cool service".
Technically (in the US at least), purely AI-generated content has no copyright, hence any copyright associated with the commit can only be assigned to the human authors (or the entity they are working for). As I understand it, neither Copilot nor Microsoft should have any actual claim of authorship (from a copyright/IP perspective).
I don't know if it's been tested in court, but that's the rationale behind the Signed-off-by lines the kernel requires in all patches sent. It's a way to tell the (legal) ownership of a piece of code.
That makes the bite less damaging: if everyone has "Co-authored-by: AI" in their commits, there's less shame in it; it's just a normal fact of life now, not a sign of low quality.
Good point. That fake commit addendum means that the entire commit contents would not be under copyright protection. AI generated code is not currently copyrightable.
Still if you're the lawyer on the side of the lawsuit claiming that the code is copyrightable, you really don't want that copilot attribution in the commit message muddying the waters.
You actually do, as counter-intuitive as that seems. Co-authored implies (and gives you room to argue) that you were involved in the process. Hiding the fact that you used AI, if it's proven otherwise (not that hard to prove in many cases if it comes to that), won't look good. At my work, we are mandated to include AI attribution, and I would say every well-run company should have the same mandate.
Outside this instance, how can one prove code was AI generated beyond a reasonable doubt? Also, do you (or anyone else) know how much AI/copied-code has to be modified for it to be considered independent?
If AI generates code, and one just renames some variables/method signatures, then what?
> how can one prove code was AI generated beyond a reasonable doubt?
Subpoena the provider they use.
Even if they don’t retain the full context, they have to save API calls for billing and analytics. If you’re clauding for the hour up to and after the commit, one can reasonably assume you built it with (if not exclusively by) AI.
The headline literally says the line is being inserted regardless of usage, which makes it easy to argue that it’s entirely meaningless as an indicator of AI use at all.
The point they're making is that this happens even in code where AI didn't write it. One of the comments on the page is from someone mentioning they have all Copilot and AI features turned off, and it still added this to their commits. You can't conclude anything about whether AI could write it from the presence of this in a commit message.
AI is a tool that may make copyright violations more likely, but whether the output violates copyright is a property of the output, not how it was produced.
If you copy and paste leaked closed source code or if your AI produces it verbatim, you're in trouble either way. Change it up a bit and you're fine in practice in both cases.
Yeah, the current guidance from the US Copyright Office is that if it were said to be solely authored by Copilot, it would not be eligible for copyright. If it were said to be solely authored by human A (who happened to use Copilot), the elements and arrangement of it not generated by Copilot would be copyrightable. I'm not sure the Copyright Office has released guidance on attempting to register AI as a co-author; I assume the registration would be rejected, but you'd be able to re-submit as sole human author.
Changed back or not, this demonstrates that they're either willing to make sweeping changes like this that hurt a massive number of users, or that they're incompetent to the point of not realising the impact of the first change. They'd have had to just blindly make the change, since the original PR was approved and merged within the same minute by the original author (no additional eyes, at least that we can see), or ignore user complaints and make it anyway. Both cases demonstrate terrible stewardship of VSCode.
To be fair to OP, that follow-up doesn't appear to be mentioned anywhere in the discussion on #310226, either. They probably should have left a note about that change before locking the thread.
To be honest, I didn't see the follow up. It just incensed me enough that they would do that to begin with.
Right up there with Zed being pretty open that they siphon your code through their API surface and have a "Just Trust Us Bro" data retention policy, along with no way to turn the collaboration features off.
That's one way that it works, but that's not the main driver.
This kind of tagline marketing works best on people who aren't even aware that they're participating, and who can't be bothered to do anything about it even if they become aware.
The juice isn't worth the squeeze, so the marketing remains.
Sent from my iPhone
Downloaded from Demonoid
Rusty n Edie's: The world's friendliest BBS 216-726-0737
But, also, I think in this case, it makes people less likely to use the product, as there's a lot of baggage around agent-written code. People who shouldn't be using it are using it to make so many PRs it's become a DoS attack for some projects, so a lot of project maintainers are rightly sniffy about AI-written code.
I'd like to think that the level of cognitive sophistication necessary to assess the situation negatively would be very widely available. That would be a very pleasant line of thought for me.
But then, I look at the modern-world empires that are built upon advertising and realize that reality just isn't that way. At all.
100% I have one ~tiny~ project that has a handful of stars and actual people seem to use it. End of last year I received a huge slop drive-by PR on it. Spent 20 minutes reading it, realised it was just nonsense. I want my friggin' 20 minutes back.
I can't imagine how infuriating this is for maintainers of projects with much more footfall. I'm frankly shocked more aren't just outright closing the doors to PRs from unknown contributors
Huh. I always thought the point of "Sent from my iPhone" (or the earlier "Sent from my Blackberry") was that it indicated "I don't have access to my desktop and file server right now so don't expect me to send that file".
However, there's one counterexample: some email clients in the past experienced explosive growth by adding signatures. It was annoying, but it definitely worked.
I don't really send emails anymore but when I actually used email to keep in touch with friends (during the interesting bit of time between smart phones becoming mainstream and SMS and other messaging services becoming more popular than email), I changed my signature to be "Sent from your iPhone" even though I used an android and mainly sent emails from my computer, just to be an edgy teenager. Got some interesting responses from that.
It's interesting to see how communication, digital and otherwise, has evolved over time.
"Sent from my iPhone" originally meant more than just "I have a fancy phone that lets me send email"; in the early days it meant "I'm not at my desk right now."
Why did a PM create the merge request? It seems like internal testing brought up issues, why was it merged regardless? Is velocity a metric you were aiming for when merging this?
There are customers who would like to see attribution on changes where AI contributed (companies, users, etc). True, that's not everyone, but you can query our repo for the issues for which this feature was implemented.
The rationale, I suppose, is that those customers want to be more careful with code that was contributed by AI.
I don't see how this would actually help. If people don't want to disclose they used AI they will just strip the message from the commit.
Maybe those customers should just be more selective with the people they allow to contribute to their project?
Also, this kind of message doesn't even bring valuable info: it doesn't explain how the AI was used (could be 99% vibe-coding, or just a quick "Please review current changes" + minor fixes at the end?), which model was used, etc. Like other commenters here I can't see this as anything else than a marketing push for Copilot.
Don't take it personally though, you are probably not the one that should be taking the heat since the change was directly pushed by your product manager.
Please don't be personally aggressive in HN comments, regardless of how provoked you are or feel you are. We're trying for something different here, and we particularly want to avoid pile-on, shaming, and mob dynamics.
Edit: your account has unfortunately done this before (e.g. https://news.ycombinator.com/item?id=47548889). I don't want to ban you, so if you'd please review the site rules and not do anything like this again on HN, we'd appreciate it.
I can’t access that LinkedIn link without going through their Persona ID process, which requires all kinds of PII.
> LinkedIn users attempting identity verification may be unknowingly handing sensitive personal data to Persona Identities Inc., a company that distributes information to government agencies, credit bureaus, utilities, and mobile providers.
^ Link from a LinkedIn page I found on a Kagi search.
I can view some LinkedIn pages but not others without logging in.
Even though I've never posted to LinkedIn and only use it as a public résumé, my account was flagged as needing identity verification. I'm pretty sure this happened a year or two ago when I changed my email address from one domain I owned to another domain I owned.
I’ve never been able to log in since then, and there is no support path. The only available way past it is to simply submit all the info to Persona.
I'm not him, but it was pretty obvious that the comments section was going to be attracting more and more people saying the same thing that had already been said before, and that no useful discussion was going to be had. At some point the value of spamming everyone who commented on the issue with a notification (which puts an email in your inbox if you haven't changed the default setting) becomes lower and lower.
I've seen that before on other issue comment threads. The repo owner says "Hey everyone, if you want an issue fixed, please upvote the issue with a thumbs up". And many people don't read that, and instead post "Please fix this" comments without giving a thumbs-up to the issue. So, 1) the repo owner doesn't get to use the "sort issues by # of thumbs-up reactions" to see the priority of that issue, and 2) everyone who has subscribed to the issue gets spammed with a message that's useless to them.
Since nearly all the new comments had become "me too"-style comments, which should have just been a thumbs-up on a previous comment in order to reduce spam, I feel like locking the issue thread was the right move at that point, to stop people from receiving yet more unnecessary email in their already-overflowing inboxes.
Because the `microsoft` group account is the owner of the repo. With group accounts, you can designate many individuals to have admin access to the repo, but the actions taken by those admins will be attributed to the group account that owns the repo. (Because presumably the rest of the admins agree with the action taken, otherwise they would undo/revert it).
This is what happens when nontechnical people land production code in order to game their promotion metrics.
I sense the PM in question is disconnected from the sensibilities of the users she ostensibly represents. Looking at her record I see she never worked as a programmer. But with four years in her current position she ought to have figured this much out. Strong AI incentives perhaps?
Isn’t this a kind of “leopards ate my face” situation? I thought we had all “agreed” that letting AI write code and take control of software repositories is good, even if we have no idea what is going on beyond a thin surface layer, because well it’s fast and we can fix it later and lol who needs testing? My customers are my testers.
And now it’s suddenly bad because the developer is the customer?
The sneaky commit modification is triggered by very modest usage of AI such as auto-completion.
Look, if an agent writes the code and the commit message then adding a Co-authored-by by default is ok. Not even showing it before the commit is made is not, and adding the message when AI was just completing code is not.
I genuinely think it's not ok even then. Copilot is a tool, one of many I use. That tool has no business polluting commit messages without my knowledge.
The appended message isn't even adding any new information, as in this day and age a vast majority of commits is probably "co-authored" by an LLM.
I should have been clearer, the hidden addition is never ok.
If I ask Claude to write a commit message, it will insert a co-author line (and an ad), but I can see it and disapprove, add a counter-instruction to CLAUDE.md, etc.
I personally don’t understand the need to treat a tool as an “author” but that’s not important, my comment is mostly regarding the backlash of what happened. A feature was rushed in and does not work as intended, in a kind of disastrous way. Now we feel like our customers do when they have to deal with all the crap that our AI co-authors push forward without the right process.
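As an aside, the CLAUDE.md counter-instruction can also be done through configuration. A sketch, assuming the `includeCoAuthoredBy` key documented for Claude Code settings (placed in `.claude/settings.json`):

```json
{
  "includeCoAuthoredBy": false
}
```

With that set, the co-author line (and the ad) should stop appearing in generated commit messages; verify against the version you run, since the key name is an assumption here.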
This is bad. I need to start Monday warning my team about this and installing validation hooks in our repos that catch any commits with this. We don't have a no-AI policy, but we have an "approved AI" policy due to data security, and having all your commits say "Co-Authored-by Copilot" is more or less the same as "I ** on infosec". We also have a "short commit messages" policy, and that "Co-Authored" thingy takes characters.
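A sketch of such a validation hook, assuming a commit-msg hook is acceptable in your setup (the policy wording and pattern are placeholders):

```shell
#!/bin/sh
# Validation sketch for an "approved AI" policy: reject commit messages that
# carry a Copilot co-author trailer. The same check can run server-side or in
# CI over `git log --format=%B` output instead.

has_copilot_trailer() {
  printf '%s\n' "$1" | grep -qiE '^co-authored-by:.*copilot'
}

# commit-msg entry point: git passes the message file path as $1.
if [ -n "$1" ] && [ -f "$1" ]; then
  if has_copilot_trailer "$(cat "$1")"; then
    echo 'rejected: "Co-Authored-By: Copilot" violates the approved-AI policy' >&2
    exit 1
  fi
fi
```

Client-side hooks are advisory (anyone can skip them with `--no-verify`), so for an actual policy the CI or server-side variant is the one that matters.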
There is more of this going on. For me, Microsoft's SwiftKey keyboard app sabotages the use of a competing search engine (DuckDuckGo) in Firefox on Android. When typing a multi-word double-quoted search phrase, it doesn't allow it to be typed correctly.
DuckDuckGo is not a competitor to Bing. It is a sub-brand of Bing for the purpose of market segmentation. While Bing attracts users who install Windows and click on internet, DuckDuckGo attracts users who feel concerned about privacy. It's the same engine under the hood.
Jeez, you can see many things wrong with this new all-in AI direction that Microsoft is taking: a commit by a product manager who probably never actually dug through the code before, automated AI review not catching the problem, and the vibe-coded PR introducing the error itself.
The PR author didn't even bother to properly capitalize their subject and add a description. What a double standard for code quality Macroslop is applying to internal vs. external contributions.
My newest yocto image mounts a 640K RO tmpfs on top of $HOME/.vscode-server to prevent people using VSCode from shitting all over the relatively small emmc.
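Outside a Yocto image, the same trick can be sketched as a plain fstab entry; the size comes from the comment above, while the path placeholder and mount options are assumptions:

```
# /etc/fstab: overlay a tiny read-only tmpfs on the server install dir so
# vscode-server fails fast instead of filling the small eMMC
tmpfs  /home/<user>/.vscode-server  tmpfs  ro,size=640k,nosuid,nodev  0  0
```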
> And emacs is too bloated to fit too, conveniently.
If you connect via ssh, you could use Tramp. It does not install emacs on the target, but instead use a somewhat permanent connection as a tunnel for most emacs commands (transparently). Works too with docker, podman, distrobox, etc,...
Even large companies like Anthropic and Microsoft keep pushing out features without proper code and/or product review. This has become a bottleneck in software engineering.
Wow. Just like using ungoogled-chromium instead of chrome, lineage os instead of oem android, using vscodium instead of vscode is again justified. These decisions really are the ones that I'll never regret.
In addition, using the word microslop instead of microsoft is again justified, too.
Interestingly, a product manager creates a PR with a small but policy-level change, without any backstory/explanation, and it gets reviewed by a single developer and merged without a single comment. The bar for making changes to production software used by so many people has gone down considerably.
Does anyone happen to know what, if any, ownership/copyright/intellectual-property liabilities and/or rights come from a `co-authored by copilot/claude/codex/whatever` line?
Right now these companies are dealing with legal troubles from taking other's code/IP without honoring the license or copyright.
My theory, which could be a bit of a stretch, is this: if they can eventually replace all the copyrighted code that is trained into these models with versions their agent services created during the millions of daily uses, they can train future versions on code they wrote. If they hold any ownership stake or usage rights in that code, due to those co-authored lines (which say "this agent, and by extension the company that owns it, was part of creating this code"), they will effectively have laundered the license away from the original owners and removed any way to pursue legal action, because they won't even be using the stolen material anymore. Worse yet, if they now have their own copyright or other legal grounds because their agents co-authored all new code, they could start going after smaller AI companies for the same things individuals were going after them for.
I know that's a pessimistic outlook, but I feel like the co-authored lines are being placed there for more than marketing exposure. It's a commit message, after all; how much could that help marketing? It's the ownership/author-attribution aspect that concerns me.
I'm so glad I switched to NeoVim. I've got the good LSP and auto-complete stuff, a nicer grepping experience, semantic moving and selecting with treesitter textobjects, and absolutely ZERO LLM AI stuff. (I still use LLMs outside my editor for some searching and questions, but may try to cut that down too.)
Call me a Luddite, but we are up against something extra insidious with this new AI wave, and the cracks of the psychosis are starting to show.
I miss in this whole thread why this is happening. Presumably to be transparent whether code has been co-written by AI?
What's in it for Microsoft?
If we accept that AI can't copyright or own IP rights on something, then why?
I have a sneaky suspicion that there's some lobbying in the works to overturn that ruling going forward. In the past, it was OK to build models from copyrighted data etc one might have found on the wayside. But, in the future, no such thing for you. Everything generated by the AIs will then belong (at least partly) to the megacorps (maybe THEY can co-own the copyright if the AI cannot). Nice pulling-up-the ladder if true.
This could also be a move against other countries' IP position.
I've seen the explanation from dimitriv [1], but I am not convinced. These markings achieve very little, as people can clearly work around them by copy-pasting code from another place, or by using other companies' tools, like Claude Code or Antigravity (or not even using the GUI).
I suppose the answer might just be "don't attribute to malice ...", even if Microsoft has proven us wrong before; they generally know exactly what they are doing strategically.
The change was about helping teams ensure AI-generated code is attributed in commits, nothing to do with copyrights and the like. You don't have to take my word for it: query the VS Code repo for the changes and issues that went into implementing this and you will see.
It would be easier to rationalize it if there were an assurance that AI-generated code would be generally credited with the model used. But as I understand it, this credit only happens when using the co-pilot GUI, right? No credit for copy paste code from uncertain pedigree? So I think it makes sense to question the logic here.
Would be possible to admit a brain fart and roll the change back?
Thanks for jumping into the conversation. Logically it does make sense to attribute authors correctly; however, in this context it might be helpful if you could provide any details about the users complaining that their PRs are being marked as co-authored even when they have not used Copilot. Is that intentional, or a missed check in the implementation?
Also for layman readers like me who might not be actively involved, it might have been helpful to add the issue/referenced conversation why this change was made on the PR itself
The fact that non-AI changes are attributed to Copilot is a bug. The intent was to allow customers to add attribution of AI-generated code. As with any bug, it was not intentional.
I have been in this situation. A major driving force is some kind of a demand from the leadership to see the KPI for the AI adoption. And this unfortunately is the easiest one to implement.
The other aspect is virality. I think by now the implementing team should know that most people do not appreciate Claude inserting itself into the commit message. It's the job of the team to feed that back to the leadership.
At no point in time companies were so desperate for developer attention. It feels like the general consensus is it is a “winner takes it all” race, and everyone has to add as many dark patterns as possible to increase stickiness.
To me, the more interesting question is the following. Why does the architecture make it possible for a system to collect user consent and divorce the enforcement of it from the consent collection itself?
The proposed process fixes are good and all, but really, the fix IMHO should be structural. Your CI should fail if you don't propagate consent decisions down the stack.
Also worth noting: `Co-Authored-By` implies joint authorship. The Linux kernel uses `Assisted-by:` for AI specifically because the legal weight is different. And git history is permanent. You can revert a default. You can't revert commit history across thousands of repos.
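If a team does want explicit attribution, git can add a kernel-style trailer deliberately rather than having an editor inject one silently. A sketch using `git interpret-trailers` (the identity string is a placeholder):

```shell
# Append a well-formed Assisted-by trailer to a commit message on stdin,
# instead of letting the editor inject Co-authored-by behind your back.
printf 'Fix overflow in parser\n' |
  git interpret-trailers --trailer 'Assisted-by: Claude <noreply@anthropic.com>'
```

The key difference is that this is opt-in per commit and visible before anything lands in history.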
And here I’m thinking that my text editor should have zero interaction with anything git other than as a diff viewer.
lazygit is text editor agnostic and works brilliantly to give some near perfect porcelain to git specifically. And it works the same with Ghostty, Terminal, zed, VS Code, any environment I happen to be in, while saving so many keystrokes.
That happens in most speech to text systems, even Superwhisper, Monologue and Wispr Flow. I read somewhere it comes from training on YouTube audio and happens when there is silence. I guess it depends on the model but most of them are based on Whisper which has this problem
Ha, I also have this happen all the time in response to mouse clicks. When playing with Apple Foundation Models + Whisper I noticed that it happens so often that I had to explicitly filter this out before acting on transcriptions.
Having to scroll through 3 screens worth of giant automated comments on the linked PR before seeing any comments written by humans is the cherry on top.
So many repositories look like this now, it's honestly sad.
Time to leave for something else if you haven't already, vscode has been good to us but this kind of behavior is only going to ramp up as Microsoft seeks to get a return on their AI investments.
I've been hesitant to use Zed mostly because I didn't want to learn new keybindings, but last week I finally jumped in and remapped the keys to ones I like. It works really well.
It's very "Trust Me Bro". My workplace has already banned Zed after legal review purely on the lack of any controls over the collaboration feature that gets turned on the instant that you log into Github with it.
Determining AI provenance is really tricky and difficult when you have so many different ways to author code.
Looks like VS Code has decided that by stamping all code as AI generated, it is more likely to be right than wrong. Some PM must have declared that false negatives are a lot more dangerous than false positives when it comes to AI provenance tracking
AI generated code is not copyrightable anyway. The only real question is how much "copiloting" you have to get ownership, and right now the courts seem to be heading towards it not mattering if AI was involved
Question: is this a general feature that detects which AI agent was used to edit your code (Claude, Codex, etc.) and inserts THAT agent's name into the commit message's trailer? Or does it only detect and insert (GitHub) Copilot as a co-author?
For me it just inserted "Copilot" and I was only using inline completion, not agents. A bit weird too since my VSCode doesn't even have copilot installed as an extension (I had just started using that editor again on an old linux install and was wondering why I was even getting AI completions).
I personally don't mind if an AI inserts its "Co-Authored by" tag into commits it has worked on. It's transparency: I used its help, and it should get credit for good work, or disdain for bad.
But, just inserting the tag because it's being used for git commands - there's a line there.
> it should get credit for good work, or disdain for bad
Hard disagree. The "credit" it gets is through the form of charging my credit card.
Imagine for a moment that you are a company which hired a human developer to create your app rather than AI. In this case, the developer sold his or her right to credit by way of becoming a paid employee. All credit/rights/etc to the code become the ownership of Company, not the developer.
I am paid by my company to write code - does that mean I shouldn't be given credit for the work I create?
Dennis Ritchie and Ken Thompson are credited with creating C and Unix, but they were paid employees of AT&T; where's the issue with them being credited for their work?
I’m sorry, I don’t get it: a piece of software needs credit for creating another piece of software? Like, would you credit GCC for adding optimisations to your binary?
It's useful as metadata (like how JPEGs can store the camera model it was taken on, or PDFs contain the program used to generate it), but yes, I don't like LLMs giving themselves co-author credit. I turn this off in Claude Code.
The LLM is just a database. Would you be fine if this was done when cribbing stuff from Github, StackOverflow, tutorials and so on, or do you think some databases are more special than others in this regard, and if so, on what merit?
I really, really, really like Github Copilot agent harness. I was using it as my primary coding agent for the past year. I really hope it will remain affordable. I knew that $10/month won't last forever for the amount of usage I was getting out of it. It's far better than claude code and codex that I'm testing now (because I burned weekly limit in less than a day).
quote:
"Thank you all for your feedback, professional or otherwise.
Sorry about the regression. I will work on fixing this in 1.119.
There are a number of issues with the Co-Author functionality:
It should never have been enabled when disableAIFeatures is on.
It should not add attribution to changes that were not done by AI.
We need to make sure it receives more test coverage before changing the default.
If you have additional (constructive) feedback, please ping me directly or open an issue."
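Until 1.119 ships, the setting referenced in that list can be flipped manually. A sketch of a `settings.json` entry, assuming the `chat.disableAIFeatures` key the thread refers to:

```jsonc
{
  // per the quoted follow-up, the co-author trailer should never be added
  // when this is on; until the fix lands, treat this as best-effort
  "chat.disableAIFeatures": true
}
```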
It would be time to start giving other competitors a chance. I have been very comfortable with Zed except for 2 issues (on Linux). After they are resolved, I will probably start using it 100% of the time.
The "regardless of usage" in the HN title isn't correct. If you dig into the source you see that the attribution line is only added when some changes in the commit come from Copilot, either inline completion or agent. @dang
That is the crux of the issue: the detection of whether the changes came from copilot or not is buggy, thus all changes are flagged as coming from copilot, thus the title is correct.
That's not what the Pull Request intends, though. There's a difference between "This PR is buggy and enables attribution on everything" vs "VS Code enables attribution on everything".
The title is fuzzy on intent, making people believe it's intentional when one shouldn't assume intention (Hanlon).
Was this rolled out to individuals (especially non paying accounts) early? I was hit by this over a week ago on my side project, but it seems to be just blowing up now.
Given that there are 536 different types of "Copilot" under Microsoft umbrella, I am surprised they did not distinguish between GitHub Copilot and Microsoft Copilot here.
The real question is why Anthropic was able to use DMCA takedown requests "in good faith" against the Claude leaks when their own CTO claimed it is a 100% slopcoded codebase, and they themselves argue that all LLM generated code is transformed enough to not be copyrightable. Which they have to state without being able to turn back because they violated millions of book and software licenses during training.
You can profit from getting away with lying to judges.
A judge isn't involved, anyway. The leaker would have to take you to court and then prove that your request was in bad faith and that they didn't infringe copyright.
Competent programmers understand how to tell the computer what needs to happen. Really good programmers understand how the computer executed the code, and take advantage of it - they know about speculative execution and cache prefetching. Competent lawyers know what the law says. Really good lawyers understand how the law is executed, and take advantage of it - they know when it won't be enforced.
> This story isn't about a monkey claiming co-authorship, it's about Microsoft claiming co-authorship.
I don't understand your position.
Your previous post was agreeing that Microsoft wouldn't get copyright on copilot output, wasn't it? I said the bot doesn't get copyright and you said "neither do you".
Why are you now saying Microsoft would get copyright?
That's not what we were talking about. We were talking about a third party modifying your document without your consent (and sometimes even without your knowledge). You write git commit "Fix bug" and then a third party swoops in the night and modifies that with "Co-authored by: Microsoft".
> That's not what we were talking about. We were talking about a third party modifying your document without your consent (and sometimes even without your knowledge). You write git commit "Fix bug" and then a third party swoops in the night and modifies that with "Co-authored by: Microsoft".
Right, and how is a court supposed to differentiate between the cases when copilot was not sharing your typewriter and cases when it was?
The courts have determined that, yes, and that is the position of the Copyright Office. And the Supreme Court has rejected appeal, so that's the standing precedent.
Realistically, look forward to SOX style audits and having to maintain evidence of how much of a code base has human authorship vs machine generation. Or reject slop.
I can't wait for:
* The first company to do perjury for litigating over a nonexistent copyright for machine generated code.
* The first company to get nailed to the wall for reverse engineering and replicating high profile copyrighted code, like Windows.
Having a tool involved isn't the same as being entirely generated by a tool
For example, without any AI, if I generate a lookup table for the sine function in my code, that table may not be copyrightable because it was machine-generated, but it doesn't somehow make the rest of the code not copyrightable either
"Co-authored by" doesn't imply it was entirely machine-generated
You’re forgetting the fact that the newer generations coming into the industry don’t know that. They don’t even know what a VHS tape is and some don’t even know what a DVD is — this isn’t a problem it’s just their baseline is different from ours. Global warming is an example of this: newer generations see today’s conditions as normal but we older generations see them as broken and a problem.
To be direct about this: this is actually our fault they fell for this. It’s your fault too. We’re the ones building the future for the next generation/s, so whatever “tricks” they fall for are created by our generation (to extract or generate wealth, amongst other things.)
That’s on us to do better through education and fighting back.
The younger generations aren't really that stupid. They know what a DVD is for gosh sake.
They also know the conditions they have to endure - economic, climate, whatever - are not normal or okay. They're well aware of who to blame for those.
> If you fell for this once again, there's nobody else to blame but yourself.
We don’t need snarky comments like this, especially when the technology in question is so pervasive and takes a lot of cognitive effort to avoid. The blame lies solely with Microsoft.
If one hasn't been personally betrayed yet, it is easy to minimize or ignore the warnings of others who have been through the predatory/anticompetitive, EEE, stack ranking, etc. eras of MS.
I agree with you in very general terms, but I'm not sure you can reach the level of "market share" VSCode has had the last few years with just the very young.
No question VSCode has some real structural advantages: free (as opposed to pricey VS Enterprise licenses - this matters in non-tech enterprises), somehow easily installable even in enterprise locked-down environments, first-class webdev support, first-class python integration, extensive extension/plugin ecosystem, extensive multi-language support, excellent wsl integration, and that MIT source license to PR their way out of their EEE (Embrace, Extend, Extinguish) infamy.
There's no other free IDE with quite this set of features. Eclipse is a heavy, lumbering thing.
It's not even a mystery why it has a lot more traction than VSCodium: that sweet, sweet MIT license means it's a good thing, right? It salves that mental nag in the conscientious.
It takes a principled, die-hard attitude to use vscodium over vscode, or something else altogether, especially if you're a multi-talented dev.
That's the thing about giant corporations: they tend to outlive human careers. MS has outlived the careers of Gates, Ballmer, and likely Nadella. Google has outlived Page/Brin, Schmidt. IBM so many. Volkswagen likewise. Even Comcast survived the worst-company-in-America days. Ma Bell continues to survive as Verizon, AT&T. Sony too. Railroads continue to this day. Hence the modern-day race to get as large as possible, as quickly as possible.
Opposition due to incidents fades over time as people simply walk away into the sunset. That big boss that you have to defeat at the end of the game? Simply goes on to fight other players once you leave.
> It takes a principled, die-hard attitude to use vscodium over vscode, or something else altogether, especially if you're a multi-talented dev.
Maybe in some areas this is true. But there are and long have been a lot of really good text editors in the world. All it takes is a pretty mild preference for free software in this case.
> All it takes is a pretty mild preference for free software in this case.
Presumably, you mean free-as-in-freedom, not free-as-in-beer. Still, there is that VSCode MIT source license to distract the naive.
And that tells us something about the state of the world, unfortunately. The number of folks with that mild preference is small, just going by the overall adoption of free-as-in-freedom software, in general.
Not until they've personally been hurt by something.
Unfortunately I can't recall who said this; it was the beginning of a tech talk, and it made something instantly click for me.
We have almost no way to influence both politicians and corporations because an individual informed vote gets lost in the avalanche of votes by people who don't care. The biggest lie a few hundred years ago was that "all men are created equal" told to the general population by people who owned slaves. The biggest lie in our generation is that we have democracy.
1) It's impossible to vote for what you really want because the choice is restricted to predefined options - political parties are only a few points in a high-dimensional space representing what people actually want.
2) Everyone votes on everything and every vote has the same weight. It's impossible to target your vote to one issue you researched deeply - it'll be lost in the noise of people who are voting about something completely unrelated but their vote picks a party which in turn affects your issue.
3) Corporations, especially in tech, have just as much influence as the government and they're little dictatorships. Not even their workers can influence their decisions directly.
Ironically the one good thing we got from AI is being able to sift through everyone's entire internet history (even de-anonymizing the stuff they didn't want under their regular nicks) and being able to tell exactly who supported this.
- Automatically activated audio cues (purportedly for accessibility) without consideration for users with auditory sensitivity; continued to release changes that would override attempts to disable the unwanted sound; dismissed with "but how else could we possibly notify people that we added the feature?"
I'd like my tools to not have a time-bomb attached to them, no matter if it takes 10 years to explode.
And honestly, I think this case is just a perpetually clueless manager getting over-joyous with vibecoding (to the point of marveling at changing two lines of code without blowing everything up).
It's probably going to be reverted in the coming days. Which doesn't change the fact that it's a very Microsoft way of operating.
Yeah, a company can only be shitty and "fix" its mistakes for so long until the general public realizes that the company doesn't have its customers' best interests at heart.
It is certainly bad behavior that Microsoft did this. But it's irrational to jump from there to "this is what they always did and always will do" as OP did. Corporations are not unchangeable monoliths, and it was perfectly reasonable to use Microsoft tools when they were acting decently towards their users. Now that they have turned user-hostile, it makes sense to avoid them until they learn their lesson, and so on.
People act like a corporation has character traits, as a person does. But it doesn't. You can't strongly predict future behavior based on the present the way you can with a person, so it makes no sense to have seething eternal hatred for a company.
Hatred for a corporation is as useful as hatred for a nuclear bomb. No matter how harmful or destructive, it lacks any sort of free will that would make it a reasonable target for such hate.
I saw this the other day and was pretty confused - I prefer to write my own commit messages and wondered if I’d accidentally let the AI do it this time. Nope, just MS changing things behind my back. Sigh.
Right because of course you wouldn’t provide an explanation for why such a change would be made.
Providing zero description, background, or explanation for why a change is made is probably the only thing that pisses me off as much as a pure AI-slop description of a change. Your job in a PR description is to give the background for why the change is being made; honestly, any PR that doesn't should be insta-closed by policy. But it totally tracks with the level of quality I'd expect from the company in question.
Wasn’t it discussed here that no copyrights apply to code generated by AI? I’m asking myself whether adding "Co-authored-by: Copilot" means the code is not protected by the GPL, or even allows Microsoft to own your code...
A lot of bitching about Microsoft here for something Claude has been doing forever. I have a git hook that rejects any commit containing the line "Co-authored-by: Claude".
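For anyone who wants the same, a minimal sketch of what such a `commit-msg` hook check could look like. The trailer patterns (Claude/Copilot) are my assumption; adjust them to whatever your tools actually append to the message.

```shell
#!/bin/sh
# Sketch of a commit-msg hook check that flags AI co-author trailers.
# Git invokes .git/hooks/commit-msg with the message file path as $1;
# exiting non-zero there aborts the commit.
has_ai_trailer() {
  # Case-insensitive match on a Co-authored-by trailer naming an AI tool.
  grep -qiE '^co-authored-by:.*(claude|copilot)' "$1"
}

# Demo: a message carrying the trailer gets flagged.
msg=$(mktemp)
printf 'fix: parser\n\nCo-authored-by: Claude <noreply@anthropic.com>\n' > "$msg"
if has_ai_trailer "$msg"; then result=rejected; else result=ok; fi
echo "$result"   # prints "rejected" for the demo message above
rm -f "$msg"
```

Dropped into `.git/hooks/commit-msg` (with `exit 1` when the check fires instead of the demo), this blocks the commit before the trailer ever lands in history.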
That's a fair point, but claude code is not an editor (yet?), and when you use claude code, and allow it to commit things, it's almost certainly "co-authored by llm".
Back to vscode, people get the "co-authored" line even if they didn't use the AI features.
Well, Claude does it if you ask it to commit for you, and it lets you review the result; judging by the comments on the PR, that's not the case with this feature. Sometimes it says co-authored by Copilot even when the code is not generated by AI. Also, it will never say co-authored by Claude or whatever, always Copilot. And why would my IDE care about this rather than the AI itself?
Are you ashamed of other people finding out you used Claude? I think the co-authored-by bit should not be a setting at all, AI-generated code should be clearly identified.
Basically what you're saying is that if AI does anything on your computer, you should lose control over anything the AI impacts. If the AI touched it at all, in any way, big or small, you now lose ownership of the actions your computer takes (with open source tools, I might add).
In case you need reminding of common sense, I’m supposed to be allowed to decide what my commit messages are because it’s my fucking computer.
I prefer that my software is not a morality police.
I use Claude at work. I've never instructed it to make a commit, and it's never attempted to make one. It would fail anyway because my commits are signed by Yubikey and it requires presence detection, so I have to tap it.
But I don't want it to make commits, and I don't want to review its code in the Claude Code TUI, either. I want to read its changes in my text editor, decide what to drop or revise or revert, and then stage individual hunks or regions into logical commits.
If anyone asks I'll tell them I used an LLM, idc. I often mention it in commit messages or PRs. But I don't want LLM agents to write commits at all.
Mind-boggling that people are trying to hide this; it tells you all you need to know about our "profession." The presence of that hook, or the like, in a place of business should be a fireable offense.
Let AI autonomously produce code of a quality I care about, and I might consider giving it credit. I don't know how other people write code, but I come up with an idea and use a multitude of LLMs to brainstorm a reasonably comprehensive spec that any reasonably competent person could read and produce a working program from, including a locally running Q2 quant of Qwen 3.6. Even Kimi is as good as Claude at most coding tasks, and I don't see why any single agent deserves credit for my design.
Let artists and filmmakers start watermarking their output with the tools they use and I might reconsider my decision.
Do Adobe or Arri or Red get authorship credit for the work their hardware and software do on projects? After all, artists would not be able to produce a single pixel without them. In a similar vein, you could make the argument that modern farming is sitting on your ass in your modern tractor while software handles most of the work. Does John Deere get rights over a quarter/half your harvest?
I am stuck between the luddites and the "artisanal" coders on this one. LLMs are neither as smart/useful nor as dumb/useless as people think. Unless your job involves producing useless garbage every single day, good software requires a lot of thought before the first line of code is even written. For those with serious domain knowledge, the thinking time can be compressed into minutes or hours rather than the days or weeks it might otherwise take.
LLMs are a tool. You either pay for it or you use the freely available ones on your own hardware. As long as the output is directed by my thinking, the output belongs to me. If it were up to me, I would abolish IPR (and even permanent ownership of land) as a category altogether, but that is a different discussion.
I think the Linux kernel's standard of disclosure via the "Assisted-By" trailer is the right move.
Makes it clear you used a bullshit machine, without implying it's an author.
...assuming you think using them at all is a good move - I won't deny they have some utility (though I'd argue much lower than many seem to think), but I do presently believe they're a disaster for humanity.
The ruination of the Internet with slop, the massive propagation of propaganda, and the insanely easy-to-wield tools for abuse are in no way worth the ability to accrue tech debt at 10x velocity (though to be clear, accruing tech debt can absolutely be a useful strategy, if one I personally dislike).
I've never had Claude Code in VSCode add attribution to a commit when I didn't use it. VSCode is adding the attribution even when you have all copilot features disabled and therefore could not have used it.
I thank Microsoft deeply for the forced Copilot crap, almost impossible to remove, that they have put into VS Code. Finally, after 5 years, I have deleted VS Code from my Mac! That was the last piece of Microsoft software I still had around. VS Code was great years ago, until Microsoft started to push crap into it; and afaik they also made the fully open-source, telemetry-free fork difficult to use with many extensions.
Really, thanks for forcing me into deleting it. Turns out vim + Claude Code or Codex was much better all along; it really works well for me.
The toxic behaviour by hn commenters in that thread is absolutely shameful. Whether you feel strongly about it or not, there is a civil way to discuss things and that isn't it.
What a shitshow. People should stop using as many Microsoft products as possible and move on. Seriously, it's the best silent feedback that can ever be delivered in cases like these.
I really hope the editor wars don't start again. I've been happily using VS Code for years now. More than happy, in fact; it's one of the best pieces of software I've ever used, as evidenced by how AI editor companies basically started as VS Code forks.
But this is going full-throttle on enshittification.
WTF happened at Microsoft (GitHub, OpenAI partnership, Copilot pricing) that all this shit just ramped up to 11?
I've been using *nix and usenet since the early 1990's.
I always thought "editor wars" was a particularly dumb in-joke among a small group and I feel sad when I see people who think it was ever more than that.
The Wikipedia page cites "The Jargon File" as an authoritative source of truth. Ridiculous.
I got tired of Claude adding its signatures to my commits against my instructions (the settings schema changed at some point), so I added a commit-msg hook that blocks multi-line commit messages. Easy, works like a charm, and it would block this sort of M$ intrusion too.
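A rough reconstruction of that approach (my sketch, not the commenter's actual hook): count the non-comment, non-blank lines of the message, and reject anything with more than one, so an appended trailer can never sneak in. The sample trailer email below is a placeholder, not the real one.

```shell
#!/bin/sh
# Sketch: a commit-msg hook that only allows single-line messages,
# which makes any appended Co-authored-by trailer impossible.
message_lines() {
  # Ignore git's '#' comment lines and blank lines when counting.
  grep -cv -e '^#' -e '^[[:space:]]*$' "$1"
}

# Demo: a subject line plus a trailer counts as two lines -> blocked.
msg=$(mktemp)
printf 'fix: parser\n\nCo-authored-by: Copilot <placeholder@example.com>\n' > "$msg"
if [ "$(message_lines "$msg")" -gt 1 ]; then verdict=blocked; else verdict=allowed; fi
echo "$verdict"   # prints "blocked" for the demo message above
rm -f "$msg"
```

The obvious trade-off: it also blocks legitimate multi-line bodies, so this only suits people who keep commit messages to a single subject line anyway.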
Well, that's good news for all the developers working at companies with delusional management proclaiming "100% of code will be written by AI in 6 months"!
There's a large gap between what they do (the same env var has disabled this since the beginning) and Microsoft bucking its way through AI co-authorship credit in a multi-potential-author china shop, though.
I would think that the thing to do about it (if you want to use VS Code at all; some people, myself included, don't) is to send a patch that prevents adding the Co-authored-by line when Copilot is disabled, so that the line is only added when Copilot is enabled.
Honestly not sure how viable that is long term with the way the pricing kinda needs to go. I think the recent copilot price increase is just the tip of the iceberg.
Zed is a nonstarter for me as long as they install additional software (third party runtimes to run LSPs) without asking my permission. That isn't acceptable behavior.
Unfortunately, Zed is years behind VSCode in terms of polish; Microsoft-supported LSPs just work better in VSCode, they are better integrated, and Zed can't do anything about LSP memory use or performance.
One could think that.
But VSCode is the one that occasionally failed to simply render text.
No idea what happened those handful of times, but the UI was just completely screwed up, as if it were one of those "scratch to reveal" games, but with the file's content (and unresponsive, obviously).
I tried VSCode some years ago (immediately moved to Codium) and yes, it is extremely well-done for what it is. But Zed is good enough for me. Everything I care about for Python, TS/JS/CSS and C programming is available. I do not even miss the JetBrains tooling for these.