I skimmed the issue. No wonder Anthropic closes these tickets out without much action. That’s just a wall of AI garbage.
Here’s what I’ve done to mostly fix my usage issues:
* Turn on max thinking on every session. It saves tokens overall because I’m not correcting it or having it waste energy on dead paths.
* Keep active sessions active. Caches seem to expire after ~5 minutes (especially during peak usage). When the caches expire, it seems like the entire context has to be rebuilt, and this gets especially bad as token usage goes up.
* Compact after 200k tokens as soon as I reasonably can. I have no data, but my usage absolutely skyrockets as I get into longer sessions. This is the most frustrating thing because Anthropic forced the 1M model on everyone.
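The long-session pain falls out of simple arithmetic: every turn replays the whole conversation as input, so cumulative input tokens grow roughly quadratically with the number of turns. A back-of-envelope sketch (the per-turn token count is a made-up illustrative number, not measured Claude Code behavior):

```python
# Back-of-envelope: why long sessions burn tokens so fast.
# Assumption (illustrative, not measured): each turn appends ~2,000
# tokens, and every turn replays the entire prior context as input.

def total_input_tokens(turns: int, tokens_per_turn: int = 2_000) -> int:
    """Sum of the context sizes replayed across all turns."""
    return sum(turn * tokens_per_turn for turn in range(1, turns + 1))

# Doubling the number of turns roughly quadruples cumulative input:
print(total_input_tokens(50))   # 2,550,000
print(total_input_tokens(100))  # 10,100,000
```

Which is why compacting early matters: it resets the base that every subsequent turn replays.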
Haha, yeah, my eyes glazed over immediately on the issue. This was absolutely someone telling their Claude Code to investigate why they ran out of tokens and then open the issue.
Good chance it's not real, or is misdiagnosed. But it gives me some degree of schadenfreude to see it happening to the Claude Code repo.
It's your Claude speaking to their Claude, which is fair, but it makes this whole discussion a bit dumb since we are basically talking about two bots arguing with each other.
This was part of Sam Altman's (supposed) concerns about AI not being open and equally available. In a dystopian future it might be their cluster of 1,000 agents burning a GWh of power to argue against your open-weights agent, which has to run on an M5.
The actual problem is that their cache invalidates randomly, which is why replaying inputs at 200k+ tokens sucks up all your usage. This is a bug in their systems that they refuse to acknowledge. My guess is that API clients evict subscription users' caches early, which would explain this behavior; if so, then it's a feature, not a bug.
They also silently raised how much usage input tokens consume, so it's a double whammy.
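To put a number on why an early cache eviction hurts so much: cached input reads are billed at a small fraction of the base input rate, so a cold 200k-token replay costs an order of magnitude more than a warm one. A rough sketch; the per-MTok prices below are assumptions based on published Sonnet-class rates (base input $3, cache read $0.30), so check current pricing before trusting the exact figures:

```python
# Rough cost of one 200k-token turn, cache-warm vs. cache-expired.
# Prices are assumed Sonnet-class per-MTok rates and may be stale.
BASE_PER_MTOK = 3.00        # uncached input, $/million tokens
CACHE_READ_PER_MTOK = 0.30  # cached input read, $/million tokens

def turn_cost(context_tokens: int, cache_hit: bool) -> float:
    """Input cost of replaying `context_tokens` on one turn."""
    rate = CACHE_READ_PER_MTOK if cache_hit else BASE_PER_MTOK
    return context_tokens / 1_000_000 * rate

warm = turn_cost(200_000, cache_hit=True)
cold = turn_cost(200_000, cache_hit=False)
print(f"warm: ${warm:.2f}, cold: ${cold:.2f}, penalty: {cold / warm:.0f}x")
```

At those assumed rates, each eviction makes the next turn ~10x more expensive, which compounds fast if the cache keeps dropping mid-session.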
It depends on your account and seems to be random.
On my personal Max 5x account it’s not the default, and if I force it, it says I’ll pay API rates past 200k. On my other account that I use for work (not an enterprise account, just another regular Max 5x account), the 1M model has been the default since that rollout. I’ve tried updating and reinstalling etc., and I can’t ever get the 1M default model on my personal account.
Based on other comments and discussion online as well as Claude code repo issues, it seems I’m not the only one not getting the 1M model for whatever reason and the issue continues to be unresolved.
Can confirm. Max effort helps; keeping context at or below ~20-25% is crucial these days.
> * Keep active sessions active. Caches seem to expire after ~5 minutes (especially during peak usage). When the caches expire, it seems like the entire context has to be rebuilt, and this gets especially bad as token usage goes up.
Is this as opaque on their end as it sounds, or is there a way to check?
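Partially checkable, at least over the raw API: the Messages API reports `cache_creation_input_tokens` and `cache_read_input_tokens` in each response's `usage` object (those field names come from Anthropic's prompt-caching docs; whether Claude Code surfaces them per-request is another question). A sketch that tallies hit behavior from logged usage dicts:

```python
# Tally cache behavior from logged Messages API `usage` objects.
# Field names are from the Anthropic prompt-caching documentation;
# the log itself is a hypothetical example, not Claude Code output.

def cache_summary(usages: list[dict]) -> dict:
    read = sum(u.get("cache_read_input_tokens", 0) for u in usages)
    created = sum(u.get("cache_creation_input_tokens", 0) for u in usages)
    uncached = sum(u.get("input_tokens", 0) for u in usages)
    total = read + created + uncached
    return {
        "cache_read": read,
        "cache_created": created,
        "hit_rate": read / total if total else 0.0,
    }

# Two turns: the first builds the cache, the second reads it back.
log = [
    {"input_tokens": 50, "cache_creation_input_tokens": 180_000},
    {"input_tokens": 60, "cache_read_input_tokens": 180_000},
]
print(cache_summary(log))
```

If the cache were expiring mid-session, you'd expect to see `cache_creation_input_tokens` spike again on turns where it should have been a cheap read.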