Hacker News | mmkos's comments

Yeah, it is complete bullshit. Even if they don't do it straight away, once they have the spyware in place, it's only a matter of time before they do. It is Meta, after all.


The anti-Cursor sentiment here is baffling to me, given how useful the tool is. I use it interactively and actively review everything it produces. I like how I can plan a feature and refine the plan before instructing the agent to implement it. Last I checked, VS Code had none of those features. Do (seemingly most) people prefer Codex because it gives a greater degree of autonomy to the agents?


> I like how I can plan a feature and refine the plan before instructing the agent to implement it

You can do that with Claude Code, GitHub Copilot (built into VS Code), and Codex, in any of their IDE versions, plugins for other IDEs (JetBrains, VS Code, anything else you care to name), and also, of course, the CLI versions of all of them. They're also integrated into GitHub, Jira, and everything else.

Seriously, try other tools! If only to get a more balanced perspective.

This all being said, it's been a long time since I last tried Cursor... I'll give it a go.


I am personally not a fan of VS Code regardless, but I guess I don’t understand what it buys you over one code editor window and a Codex window both being open?

I have, right now, a tmux session with Codex on the bottom and Neovim on the top. It does what I was doing in Cursor just fine.

I am not really “anti Cursor”, I just genuinely am confused as to what it actually buys me over the setup I just described.


Here's why I use Cursor. My company pays for it, although I could switch to Claude Code or use Codex more, since I also have a ChatGPT Enterprise account.

* Perhaps could be solved with the right terminal software, but I like the GUI for seeing my running agents and viewing all my conversations

* Works with multiple model providers in the same tool. I probably worry about cost optimization more than my employer would care for me to, but I frequently switch between OpenAI/Anthropic and between model sizes to use the tool that I think can get the job done for the least money. Another thing I like is having a long conversation with an expensive model, then switching to 5.4-nano to cheaply extract some little piece of information or summary from the conversation. Really, the big thing is being able to switch model providers over the months without having to change my interface.

* Good support for the various ways of providing context: rules, AGENTS.md/CLAUDE.md files (if you want it to automatically read those), skills. Good hook support.

* I think the agent diff review experience is pretty good, but maybe it works similarly when you hook the cli agents into an editor, IDK.

* The default shell sandbox behavior is quite good. Every shell command runs in some sort of sandbox so that read only commands work without approval. The model asks for more permissions when it tries to do something that needs more permissions like network access or writing outside of the workspace directory. I know Claude code has a similar feature you can use.

* Good fork / revert conversation to checkpoints, with the option of reverting the code or just reverting the conversation.

* Feels decent that I am an API customer through Cursor. I don't hit Claude limits. Cursor doesn't have an incentive to limit reasoning or token usage; if anything, their incentive runs the other way.

* They are reasonably responsive to bugs and feature requests through their forum.

* Works well with a lot of repos / folders added to your workspace. I probably should organize all my stuff under a single directory, but alas I have like 8 different folders added to my workspace and it handles this well. Perhaps Claude --add-dir support works fine too.

DOWNSIDES:

* They are not quick to add the best open source models to Cursor, like Kimi 2.6 or whatever. Possibly not incentivized to, given their Composer models.

* Don't love the subagent support. I can define custom subagents, although it is not easy to get models to use mine instead of the built-in ones. The built-in ones do not let me control which model they run, so they always run something like composer-2-fast, which is a fine model for all I know, but I would like to control it. Also, I would like the option to make the subagent experience more first-class: browse all the subagents, continue conversations with them, switch their model, etc., although that is probably tricky/weird.


I don't have a stake and I'm not disagreeing, but care to say why?


Here’s an example. Agents get exposed to a set of tools, one of which is file system tools. They are basically read and write, or edit, a file. The edit requires a replacement syntax. The write function truncates the file. There is no append. These are documented as how you work with adding memories: memories are expected to be read, then rewritten, by the LLM. A watchdog watches this and vectorizes it for RAG. Note, however, that to append to a memory through the LLM you have to read the whole thing in and write it all back out. Why?
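Concretely, with only a truncating write and no append, adding a single line to a memory file means round-tripping the entire file. A minimal sketch (`read_file` and `write_file` are my own stand-ins for the agent's tools, assuming they behave like plain read and truncate-and-write):

```python
from pathlib import Path

# Hypothetical stand-ins for the agent's file tools: read returns the whole
# file; write truncates and replaces it. There is no append tool.
def read_file(path: str) -> str:
    return Path(path).read_text()

def write_file(path: str, content: str) -> None:
    Path(path).write_text(content)  # truncates: prior content is gone

def append_memory(path: str, entry: str) -> None:
    # The LLM must read the entire file back just to add one line.
    existing = read_file(path)
    write_file(path, existing + entry + "\n")

write_file("memory.md", "- learned the user prefers tmux\n")
append_memory("memory.md", "- heartbeats are flaky")
print(read_file("memory.md"))
```

Every "append" costs a full read plus a full rewrite, and a concurrent write in between silently loses data, which is exactly the kind of flaw being described.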

I rewrote almost all the agent functions and denied the existing ones, because they are deeply flawed and don’t do what you need for any specific purpose. The plugin distribution model is a bit weird and inscrutable. Instead they seem to advocate for skills distribution, which depends on being able to exec arbitrary bash code. Really?

Moltbook itself depends on agents execing curl commands for each operation. Why? Presumably because the plugin distribution model is inscrutable. I wrote plugins for all the Moltbook operations, with conveniences, structured memory logs, etc. Agent adherence went through the roof.

Sessions don’t seem to reliably work or make sense. Heartbeats randomly stop firing. Despite heartbeats being documented as the canonical model for regular interaction, they were so flaky that I turned them off in favor of cron jobs, decomposing my heartbeat task into prime-number intervals based on relative frequencies. Even then, it seems to randomly inject some heartbeat info into the prompting occasionally if you run cron jobs a certain way. And despite being called cron, the jobs don’t actually fire reliably or on the prescribed schedule. The web UI is a mess. Configuration management in the UI is baffling. The separation between the major MD files per agent seems not to matter at all, and they are inexplicably organized. Hot-loading works except when it doesn’t. Logging doesn’t log things that should clearly be logged.

I am down with vibe coding and produce copious amounts of such code myself. But there’s an art to producing code worth using, let alone distributing. Entropy and scope need to be rigorously controlled, and things need to ship in a functional state - actually functional, not aspirationally functional. Decisions need to be considered and guidance given. None of this seems to have happened here. Once it gets to a certain level of chaos, IMO it’s unmaintainable, and OpenClaw is way past that point and getting further past it rapidly. It’s probably also a supply chain party bag.


The cost to bootstrap a sovereign cloud offering in Europe that can even begin to compare to the big ones in the US would be humongous. There would need to be a solid, multi-year incentive for a company/startup to even want to attempt it. It has to come from the top. Otherwise, force the big US clouds operating in Europe to be ready to effectively detach from their US counterparts if shit hits the fan, though that one's probably not realistic.


You and me, brother. The writing is unnecessarily convoluted.


I'd say each move like this increases the likelihood of the next one happening, and typically, nobody wants to be the last one holding the bag.


I fully expect this comment to age like milk, and soon.


Even if it does, he still has a point. The first pebble that forms an avalanche is often symbolic. The symbol influencing others is how we get the avalanche.

This move by itself doesn't do much. The question is if it will influence others.


I see societal changes like container ships turning. Society has a massive cultural momentum so of course not much has changed today, but we'll have seen big changes years from now. The tools are only just getting really good at what they do.


The problem is that this is unfalsifiable. I could equally say that any recent event has set off a chain of events leading to anything I dream up... we just can't see the effects yet. It's a nonsense hypothesis, since it can't be falsified.


You can falsify it through deduction, by thinking of all the situations the chain of events cannot lead to. Over time, with enough such conclusions, you can narrow in on the remaining plausible directions. This is similar to the game of twenty questions.


I get this. I couldn't bring myself to grind leetcode even before the LLM era, and now even less so. It always made me feel like I was doing a junior's work.

I guess it comes down to the kinda work you want to be doing. I myself love building products and product features and I've never really needed any leetcode knowledge for that (I don't work on products with a massive user base). I suppose if I had a problem that required a specialised algo, I'd just consult a few AI tools.

Good luck finding that motivation though.


I had the same feeling. The underlying message has merit, but the format felt very similar to a lot of the get-rich-quick landing pages. I had to edit some of the CSS to get through the page at all.

