After you see it skip reasoning so many times and say "actually the simplest fix is" before doing the laziest thing ever, you get kind of tired of babysitting it.
`jj describe` sets a commit's description. In jj, nearly every command rewrites history, so there's no real point in calling that out in the command name; it's just the default behavior.
It's not true, in that sense. Commits in jj are basically the same as commits in git as far as mutability is concerned. But in jj you normally work with changes, rather than commits, and open changes are mutable (by altering which immutable commit they point to in the backing store). And there is effectively an append-only audit trail of these alterations (which is what makes `jj undo`/`jj redo` simple).
Some comments here are confusing the issue by saying ‘commit’ when they mean ‘change’ in the jj sense.
Re the grandparent comment, `jj describe` provides a change description, analogous to `git commit --amend --edit` in git terms.
it is true. some history is marked immutable by default; in git, everything is mutable by default and you have to add branch protection on the server side. (granted, you can change what is immutable in jj relatively easily, so you shouldn't ignore branch protection if you're using jj exclusively with a git repo, either.)
describe is also the command you can use to edit the commit message of the change you're currently drafting. In jj there's no staging area, every modification to the working tree immediately gets integrated into the current commit. (This means if you have no diff in your working tree, you're actually on an empty commit.)
Not really familiar with it either, but jj has everything committed by default (no index, no staging area, no uncommitted changes). You use `jj new` to stop adding changes to the current commit.
`jj describe` lets you add a message to a commit, since one isn't there by default.
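The workflow the comments above describe can be sketched as a short session (a minimal sketch; the repo name and commit message are made up for illustration, and it assumes a recent jj with the git backend):

```shell
# Create a repo with the git backend.
jj git init myproj && cd myproj

# Edit files; the working copy IS the current change -- there is no staging step.
echo "hello" > readme.txt
jj st                        # shows the diff already captured in the current change

# Give the current change a description (roughly `git commit --amend` message editing).
jj describe -m "add readme"

# Start a new empty change on top; further edits land there, not in "add readme".
jj new

# Every operation is recorded in an append-only log, so mistakes are reversible.
jj undo                      # rolls back the `jj new` above
```

Note how `jj describe` and `jj new` split what `git commit` does into two independent steps: naming the current snapshot, and deciding to stop amending it.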
So far they haven't changed it, and none of this applies to business and enterprise accounts. My guess is that it can still be viable, since most businesses will have plenty of minimally used licenses with just a few power users abusing the request model.
Perhaps for managers. But for everyone actually doing something, you used to need technical proficiency with tools. Now AI is becoming the universal tool.
Opus 4.6 also just got dumber. It's dismissive, hand-wavy, jumps to conclusions way too quickly, skips reasoning... The bubble is going to burst; either some big breakthrough comes along or we are going to see very fast enshittification.
productivity (tokens per second per hardware unit) increases at the cost of output quality, but the price remains the same.
both Anthropic and OpenAI quantize their models a few weeks after release. they'd never admit it out loud, but it's more or less common knowledge now. no one has enough compute.
There is no evidence, TMK, that the models' accuracy changes due to release cycles or capacity issues. Only latency. Both Anthropic and OpenAI have stated they don't do any inference compute shenanigans due to load or post-release optimization.
Tons of conspiracy theories and accusations.
I've never seen any compelling studies (or even raw data) to back any of it up.
but of course, this isn't a written statement by a corporate spokesperson. I don't think breweries make such statements when they water down their beer either.
I think that the idea is each action uses more tokens, which means that users hit their limit sooner, and are consequently unable to burn more compute.
I feel they will go token-based at some point. Currently, if you only use it with precise prompts rather than random suggestions, and switch between models 5.4 and 5.4 mini depending on the work, it is the best deal.
Admittedly I didn't follow the announcements, but isn't that a matter of UI? It doesn't seem like something that should be baked into the model, but rather into the tooling around it and the instructions you give. E.g. I've been playing with GitHub Copilot CLI (which, despite its bad reputation, is absolutely amazing), and the same model completely changes its behavior with the prompt. You can have it answer a question promptly, or send it on a multi-hour multi-agent exploration writing detailed specs with a single prompt. Or you can have it stop midway for clarification. It all depends on the instructions. This is also particularly interesting with GitHub's billing model, as each prompt counts as 1 request no matter how many tokens it burns.
It depends honestly. Both are prone to doing the exact opposite of what you asked. Especially with poor context management.
I’ve had both $200 plans and now just have Max x20 and use the $20 ChatGPT plan for an inferior Codex.
My experience (up until today) has always been that Codex acts like that one Sr Engineer that we all know. They are kind of a dick. And will disappear into a dark hole and emerge with a circle when you asked for a pentagon. Then let you know why edges are bad for you.
And yes, Anthropic is pivoting hard into everything agentic. I bet it’s not too long before Claude Code stops differentiating models. I had Opus blow 750k tokens on a single small task.
If you are on a US ANSI keyboard and switch to an ISO layout (most European layouts are ISO), you have, I believe, two unreachable keys. And since the arrangement of the other keys is slightly different, you will have to adapt your muscle memory anyway.
AltGr-intl is pretty good when you code and write English most of the time and occasionally need accented letters. If you need to write a lot in your native language, it's better to get a keyboard with a local layout.