
It’s abundantly clear to a uniformed soldier that they have a lot of rules to follow and “can a senator do it” couldn’t matter less.

Not the OP, but I have some… similar experience. When you run a high-availability service without a full ops team, reliable infrastructure is non-negotiable. Burnout has to be managed.

In my “repo os” we have an adversarial agent harness running gpt5.4 for planning and implementation and opus4.6 for review. This was the clear winner in the bake-off when 5.4 came out a couple months ago.

Re-ran the bake-off with 4.7 authoring and… gpt5.4 still clearly winning. Same skills, same prompts, same agents.md.


I know over a dozen 1-2 person SaaS, not including my own. Some of them have hired some help now but they are still more on the "lifestyle business" side. They are in many different spaces, and founders from around the world. I am not a big networker, but this is my niche and it's big enough that I just know a slice.

I don't know any. I know a couple of people who had ideas that became bigger startups, but only 1 who was a friend rather than via networking. And I know a few people who did try the small saas or other small software based business but they all failed and now have jobs.

There’s one on the front page right now - healthchecks.io

Ok? My point wasn't that they don't exist. I was just pointing out that anecdata from one person deeply embedded in a group where it's a thing has strong counterpoints. And I thought my example worth mentioning because my group is into it, but nobody in it has made a success of it.

Just an FYI

As a point of reference, I’m a heavy Claude Code user and I’ve had a few bugs, but I’ve never had terminal glitches like this. I use iTerm on macOS Sequoia.


To offer the opposite anecdotal data point -- Claude scrolls to the top of the chat history almost capriciously often (more often than not) for me, using iTerm on Tahoe


I've had it do it occasionally in all of Ghostty, iTerm2 and Prompt 3 (via SSH, not sure what terminal emulator that uses under the hood)


I thought I was the only one who had this problem - so annoying, along with the frequent UI glitches when it asks you to choose an option.


Wow I thought it was tmux messing up on me, interesting to hear it happens without it too


Not tmux related at all; I've had it happen in all kinds of setups (alacritty/Linux, VS Code terminal on macOS)


Scrolling around when claude is "typing" makes it jump to the top


To be fair, iTerm is likely the single most common terminal emulator among Claude Code developers, so I'd hope that it would work tolerably well there.


I will note that they really should have used something like ncurses and kept the animations down. TTYs are NOT meant to do the level of crazy modern TUIs are trying to pull off; there are just too many terminal emulators out there that don't like the weird control codes being sent around.


Pair programming works best when you are tasked with a problem that’s actually beyond your current abilities. You spend less time in your head because you are exploring a solution space for the first time.


Yes I’ve had a lot of success with this too. I found with prompt tightening I seldom do more than 5 rounds now, but it also does an explicit plan step with plan review.

Currently I’m authoring with codex and reviewing with opus.


Good reminder: don't forget the plan review!


Your most autistic and senior engineer is now named Claude. Point him at nearly any task, pair-program with codex, and review the results.


I wonder if you've ever worked on a web service at scale. JSON serialization and deserialization is notoriously expensive.


It can be, but $500k/year is absurd. It's like they went from the most inefficient system possible to create, to a regular normal system that an average programmer could manage.

I have no idea if they are doing orders of magnitude more processing, but I regularly crunch through 60GB of JSON data in about 3000 files on my local 20-thread machine, using nodejs workers to do deep and sometimes complicated queries and data manipulation. It's not exactly lightning fast, but it's free and it gets through any task in about 3 or 4 minutes or less.

The main cost is downloading the compressed files from S3, but if I really wanted to I could process it all in AWS. It also could go much faster on better hardware. If I have a really big task I want done quickly, I can start up dozens or hundreds of EC2 instances to run the task, and it would take practically no time at all... seconds. Still has to be cheaper than what they were doing.


Curious about the workload, as I'm trying to make a tool around JSON: what are those files compressed with? What is the average file size? What is their structure (NDJSON? A dict with some huge data structure a few levels deep?)


In S3 the JSON is stored in plain-old .zip files. While downloading to local the files are unzipped to plain old JSON. It's basically an object containing tons of data about each website I manage including all fragments of HTML and metadata used on the sites. It can get quite large, some sites have thousands of pages. We often need to find things stored many levels deep in the JSON that may be tricky to find, it isn't usually a specific path, and lots of iterable arrays and objects are involved. The files range from ~20MB to ~400MB, depending on how much content each site has. And we have ~9000 total sites.
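The "no fixed path, many levels deep" kind of query described above usually comes down to a recursive walk with a predicate. A minimal sketch — the function name, predicate, and site document are illustrative, not from the real tool:

```javascript
// Recursively walk a JSON value, collecting every node (with its path)
// that matches a predicate, regardless of where it sits in the tree.
function findDeep(node, predicate, path = [], hits = []) {
  if (predicate(node, path)) hits.push({ path: path.join('.'), value: node });
  if (node && typeof node === 'object') {
    // Object.entries works for both objects and arrays (array keys are indices).
    for (const [key, child] of Object.entries(node)) {
      findDeep(child, predicate, [...path, key], hits);
    }
  }
  return hits;
}

// Example: find every HTML fragment containing an <h1>, wherever it lives.
const site = {
  pages: [
    { slug: 'home', blocks: [{ html: '<h1>Welcome</h1>' }] },
    { slug: 'about', meta: { fragments: { hero: '<h1>About</h1>' } } },
  ],
};
const hits = findDeep(site, (v) => typeof v === 'string' && v.includes('<h1>'));
// hits[0].path is 'pages.0.blocks.0.html'
```

Returning the dotted path alongside the value is what makes this workable when the match could be under `blocks`, `meta.fragments`, or anywhere else.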


They got a 1000x speedup just by switching languages.

I highly doubt the issue was serialization latency, unless they were doing something stupid like reserializing the same payload over and over again.
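The "reserializing the same payload over and over" anti-pattern mentioned above is easy to illustrate. A hypothetical sketch (the payload and function names are made up) — both versions produce identical bytes, but one pays the serialization cost once per recipient and the other pays it once total:

```javascript
// A payload that is identical for every recipient.
function buildPayload() {
  return { items: Array.from({ length: 1000 }, (_, i) => ({ id: i })) };
}

// Anti-pattern: serialize the same payload once per recipient.
function sendAllNaive(recipients) {
  return recipients.map(() => JSON.stringify(buildPayload()).length);
}

// Fix: serialize once, reuse the resulting string for every recipient.
function sendAllCached(recipients) {
  const body = JSON.stringify(buildPayload());
  return recipients.map(() => body.length);
}
```

With N recipients the naive version does N serializations where one would do — the kind of waste that can dwarf the parser's actual speed.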


Well, for starters, they replaced the RPC call with an in-process function call. But my point is that anybody who's surprised that working with JSON at scale is expensive (because "hey, it's just JSON!") shouldn't be.


Well, everything is expensive at scale, and any serialization/deserialization step is going to be expensive if you do it enough. However, yes, I would be surprised. JSON parsing is pretty optimized now; I suspect most "JSON parsing at scale is expensive" is really the fault of other parts of the stack.


Would it be better or worse if I had that experience and still said it's stupid?


You didn't say it was stupid. If you had, I would have just ignored the comment. But you expressed a level of surprise that led me to believe you're unfamiliar with how much of a pain in the ass JSON parsing is.


I think OP’s point was surprise that a company would spend so much on such inefficient json parsing. I’m agreeing. I get that JSON is not the fastest format to parse, but the overarching point is that you would expect changes to be made well before you’re spending $300k on it. Or in a slightly more ideal world, you wouldn't architect something so inefficient in the first place.

But it's common for engineers to blow insane amounts of money unnecessarily on inefficient solutions for "reasons". Sort of reminds me of SaaSes offering 100 concurrent "serverless" WS connections for like $50/month - some devs buy into this nonsense.


Because of cost basis step up at death, you can just defer forever.

