
The author overestimates how much ~$5M/yr actually is. A business like Uber isn't happy about that, but it's not even in the top 10 of things they're wasting money on. Moreover, this isn't the engineer's sole fault; it's more the fault of whoever actually approved the expense.

Oh, I remember this quote. I thought it was quite a good one because he's right. At least in the US, Apple Maps is better than Google Maps for most purposes.

I'm mostly curious how much of that revenue is actual ARR, which is to say contractually recurring. It is pretty dang rare for a hardware company to have nontrivial ARR.


Automotive contracts are typically on 6-year cycles, so tech gets designed into a new car and is locked in until the next vehicle generation (5-7 years depending on the automaker). Year-to-year sales can fluctuate but are fairly predictable.


AI is less deranging than partisan news and social media, measurably so according to a recent study: https://www.ft.com/content/3880176e-d3ac-4311-9052-fdfeaed56...


meta, but the comment pattern in this thread strongly suggests inorganic support for the government's position.


> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.

https://news.ycombinator.com/newsguidelines.html


They're specifically referring to the dead comments from new users in this thread, so it's not insinuation. They're pointing out that a higher-than-normal number of shill bots flocked to this thread.

The fact that the comments are dead means the system is working as intended, but it's not unreasonable to point out the nature of the comments.


That seems shockingly naive.


The mistake is thinking that an organic entity won't reject causality when it interferes with their politics.

The interesting thing here is that this isn't an always-on feature. You can actually see the process on a person's face. I was delighted by the recent DOGE depositions because the video quality is good enough to see the guy's eyes stop moving and glaze over.


Can you be more specific? I see a lot of uninformed takes, but no specific bias. Do you mean downvotes?


If you turn on the thing that shows 'dead' comments, there is a larger than normal number here.


Indeed, and the dead comments (from new users!) overwhelmingly favor the government position.

But, this is a non-story, because those comments were correctly killed precisely so they wouldn't clog up this thread.


I wouldn't call something a non-story just because the ultimate end-goal was mitigated. The fact that it was attempted is a story, especially when it's meta commentary on a story about trying the same thing _officially_.


Eh. The actors that use these tactics take a shotgun approach. The result is that you see a bunch of dead comments and assume the system is working as intended, while a couple of the more inconspicuous comments persist. This happens frequently on specific topics.


Do you think it's more likely a government influence operation, or a single dipshit lazily pasting LLM slop?


Could be organic dipshits with little to offer the discussion. That's the most common case in my view.

Said dipshits tend to have an unnecessarily high degree of self-regard.


[flagged]


Veiled slurs aren't funny and don't contribute anything to HN.


Nor do bots and one track minded posters.


I'd argue that it depends on what the track is. But yeah, bots and slurs: bad. Let's stop using them.


Are you suggesting there is a government conspiracy to influence this dusty corner dive bar of the Internet?


Are there tech workers who don't know what HN is? It's a pretty reasonably sized social media site.


At my previous job at a well-known, established large tech company, I didn't find anyone who had heard of HN.

I'm not talking about people using HN, but about even being aware the site exists.


I'm reminded of that episode of Portlandia where the mayor was obsessed with thinking the city was bigger and more important than it actually is.


I don't think I overstated it. Tech workers are a small piece of the global population.


Portland OR has a higher GDP than Vancouver BC.


Sadly, Portland is a backwater logging outpost and no one outside the PNW gives a shit about Portland or could place it on a map. I'm sorry, it's true.


That's how I feel about Dallas, Pittsburgh, Tallahassee, etc. I think we're all just not as familiar with regions outside our own.


I had no idea what HN was until about 2 years ago. This would be 8 years into a career in tech… there are dozens of us.


Nah, I think a lot of the judicial overreach is just pissing off the regular Hacker News userbase. This fit the law to a T.

And if it's not this, it's blocking the removal of a temporary order. Tons of garbage that was implemented without any law is now all of a sudden permanent because a judge decided.


It's definitely just to get people to fly with a valid ID without ambushing the enormous number of people who have been living under a rock and don't realize they need a REAL ID. Otherwise they'll have a dozen or so people freaking out at the airport every single day for years.


Respectfully, I don't think the author appreciates that the configurability of Claude Code is its performance advantage. I would much rather just tell it what to do and have it go do it, but I am much more able to do that with a highly configured Claude Code than with Codex, which is pretty much fixed at its out-of-the-box quality level.

I spend most of my engineering time these days not on writing code or even thinking about my product, but on Claude Code configuration (which is portable so should another solution arise I can move it). Whenever Claude Code doesn’t oneshot something, that is an opportunity for improvement.


Heya, I'm the author of the post and I just wanted to say I do appreciate the configurability! As I mentioned in the post, I have been that kind of developer in the past.

> This is a perfect match for engineers who love configuring their environments. I can’t tell you how many full days of my life I’ve lost trying out new Xcode features or researching VS Code extensions that in practice make me 0.05% more productive.

And I tried to be pretty explicit about the idea that this is a very personal choice.

> Personally — and I do emphasize this is a personal decision — I'd rather write a well-spec'd plan and go do something else for 15 minutes. Claude's Plan Mode is exceptional, and that's why so many people fall in love with Claude once they try it.

For every person who feels like me today, there's someone who feels like you out there. And for every person who feels like you, there's someone like me (today) who finds it not as valuable to their workflow. That's the reason my conclusion was all about getting folks to try out both to see what works for them — because people change and it's worth finding out who you really are at this moment in time.

Anyhow, I do think that Codex is also very configurable — I was just trying to emphasize that it's really great out of the box while Claude Code requires more tuning. But that tuning makes it more personal, which as you mention is a huge plus! As I've touched on in a few posts [^1] [^2], Skills are to me a big deal, because they allow people to achieve high levels of customization without having to be the kind of developer that devotes a lot of time to creating their perfect setup. (Now supported in both Claude Code and Codex.)

I don't want this to turn into a bit of a ramble so I'll just say that I agree with you — but also there's a lot of nuance here because we're all having very personal coding experiences with AI — so it may not entirely sound like I agree with you. :)

Would love to hear more about your specific customizations, to make sure that I'm not missing out on anything valuable. :D

[1]: https://build.ms/2025/10/17/your-first-claude-skill/ [2]: https://build.ms/2025/12/1/scribblenauts-for-software/


To be quite clear, I hate configuring my environment. I hate it. The farther I get from creating things that people can use, the less I like it. I spend most of my time on Claude config not because I enjoy the experience per se but because it's SO USEFUL to do so.


To be honest that's most of my pitch for Codex in the blog post. Codex works great without any configuration, and amazingly with. If you want to spend less time configuring then maybe Codex is the right agentic system for you.

I don't want to restate my thesis too much — but I really do believe it's worth experimenting with these tools every couple of months to see if the latest updates better match your preferences.


Hey, I'm not very familiar with Claude Code. Can you explain what configuration you're referring to?

Is this just things like skills and MCPs, or something else?


Skills, MCPs, /commands, agents, hooks, plugins, etc. I package https://charleswiltgen.github.io/Axiom/ as an easily-installable Claude Code plugin, and AFAICT I'm not able to do that for any other AI coding environment.


You can do basically all of that with Codex, although Claude might have slightly more convenient tooling. The end result will be the same anyway.


That hasn't been my experience, although I'm happy to accept that I'm the problem. Apparently they've released their skills support (?), so I should try again. https://developers.openai.com/codex/skills


OpenCode and Pi are even more configurable.


I wrote my own agentic coding harness (it's quite easy), but I use Claude Code because Opus's competence with its own tools is very high.
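
For anyone curious what "quite easy" means here, below is a minimal sketch of the core loop such a harness runs. The complete() function is hypothetical (wire it to whichever chat API you use), and a real harness adds more tools, permission checks, and context management:

    import json, subprocess

    def complete(messages):
        # Hypothetical LLM call: returns {'text': ...} when the model is done,
        # or {'tool': 'shell', 'cmd': ...} when it wants to run a command.
        raise NotImplementedError("wire up your provider's chat API here")

    def run_agent(task, max_steps=20):
        messages = [{"role": "user", "content": task}]
        for _ in range(max_steps):
            reply = complete(messages)
            if "tool" not in reply:  # model produced a final answer
                return reply["text"]
            # Execute the requested shell command and feed the output back.
            result = subprocess.run(reply["cmd"], shell=True, text=True,
                                    capture_output=True, timeout=120)
            messages.append({"role": "assistant", "content": json.dumps(reply)})
            messages.append({"role": "user", "content":
                             f"exit={result.returncode}\n{result.stdout}{result.stderr}"})
        return "step limit reached"

The value of something like Claude Code is everything around this loop: the model's competence with its own tools, the built-in tool set, and the configuration surface discussed above.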


Candidly, the accusation of short-sightedness doesn't really make sense when it comes to enthusiasm for a technology that often falls short in practice today, but that is already worth tremendous business value in certain cases, and will be in more cases tomorrow than today.

If anything, you should accuse them of foolhardy recklessness. They are not the sticks in the mud.


Can a company like OpenAI be worth an estimated 1/5th of Alphabet, which offers a similar product but also has an operating system, a browser, the biggest video platform, the most-used mail client, its own silicon to run that product, the 3rd most popular cloud platform, ...?

I think that is the recklessness in question. Throw in that there is no profit for OpenAI & co. and that everything is fueled by debt, and the picture is grim (IMHO).


> and in more cases tomorrow than today is worth tremendous business value

That's a nice crystal ball you have there. From where I'm standing, model performance improvements have been slowing down for a while now, and without some sort of fundamental breakthrough, I don't see where the business value is going to come from.


The prerequisite for me to be wrong is that the technology needs to stop getting better entirely *right now* AND we need to discover ZERO new uses for what exists today.

That's a fairly tall order.


So if the plateau is unanimously declared to have been reached tomorrow, OR just one more tiny use case exists tomorrow and all others dwindle away to nothing, then you consider yourself to be correct? What a wild assertion!


If the plateau is reached at some higher level of capability, I will remain correct, yes. If use cases are discovered that do not exist today, I will also be correct. You said it in a silly way but you're directionally correct.


No. You state that this is all it would take to be considered tremendous business value. You are moving the goalposts on your own point. My point is that you are taking an absolute position that there is tremendous business value in its current form (a minuscule improvement plus one insignificant new use case does not equate to tremendous business value in itself), so that remains to be seen.


You either misread or are misrepresenting my statement and either way I am not interested in continuing this.


We don't even have good uses today. That doesn't mean there won't be good uses tomorrow, but neither does it inspire confidence.


Rushing to get on board something that looks like it might be the next big thing is often short-sighted. Some recent examples include Windows XP: Tablet Edition and Google Glass.


That's like saying that gambling is shortsighted. It depends entirely on the odds as to whether or not it's wise, but "shortsighted" implies that making the bet precludes some future course of action.


Maybe if you have near-infinite wealth like Google or Microsoft you aren't precluding future choices. For most economic actors, making some bets means not making others.

Companies that are hastily shoehorning AI into their customer support systems could instead devote resources to improving the core product to reduce the need for support.


If Google bears no role in fixing the issues it finds, and nobody else is being paid to do it either, then it is functionally just providing free security vulnerability research for malicious actors, because almost nobody can take over or switch off of ffmpeg.


I don’t think vulnerability researchers are having trouble finding exploitable bugs in FFmpeg, so I don’t know how much this actually holds. Much of the cost center of vulnerability research is weaponization and making an exploit reliable against a specific set of targets.

(The argument also seems backwards to me: Google appears to use a lot of not-inexpensive human talent to produce high quality reports to projects, instead of dumping an ASan log and calling it a day. If all they cared about was shoveling labor onto OSS maintainers, they could make things a lot easier for themselves than they currently do!)


Internally, Google maintains its own completely separate FFmpeg fork as well as a hardened sandbox for running that fork. Since they keep pace with releases to receive security fixes, there's potentially lots of upstreamable work (with some effort on both sides…)


My understanding from adjacent threads in this discussion is that Google does in fact make significant upstream contributions to FFmpeg. Per policy those are often made with personal emails, but multiple people have said that Google’s investment in FFmpeg’s security and codec support have been significant.

(But also, while this is great, it doesn’t make an expectation of a patch with a security report reasonable! Most security reports don’t come with patches.)


Shouldn't this fork be publicly available per the GPL license?


So your claim is that buggy software is better than documented buggy software?


I think so, yes. Certainly it's more effort to both find and exploit a bug than to simply exploit an existing one someone else found for you.


Yeah it's more effort, but I'd argue that security through obscurity is a super naive approach. I'm not on Google's side here, but so much infrastructure is "secured" by gatekeeping knowledge.


I don't think you should invoke the idea of naivete while failing to address the unhappy but perfectly simple reality: the ideal option doesn't exist. It is a fantasy that isn't actually available, and among the available options, even though none are good, one is worse than another.

"Obscurity isn't security" is true enough, as far as it goes, but it just doesn't go that far.

And "put the bugs that won't be fixed soon on a billboard" is worse.

The super naive approach is ignoring that and thinking that "fix the bugs" is an option that actually exists.


If I know it's a bug and I use ffmpeg, I can avoid it by disabling the affected codec. That's pretty valuable.
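
As a concrete sketch of that (assumptions: the affected decoder is the "sanm" one named later in this thread, and ffmpeg is on your PATH), you can check whether your build even ships it; FFmpeg's configure script accepts --disable-decoder=NAME for builds that want to drop it entirely:

    import subprocess

    # List the decoders compiled into the local ffmpeg build and look for
    # the one flagged in the report.
    out = subprocess.run(["ffmpeg", "-hide_banner", "-decoders"],
                         capture_output=True, text=True, check=True).stdout
    if "sanm" in out:
        print("this build includes the affected decoder; consider rebuilding "
              "with ./configure --disable-decoder=sanm")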


More fantasy. This presumes the bug exists only in some part of ffmpeg that can be disabled at all, that you don't need that part, and that you are even in control of your use of ffmpeg in the first place.

Sure, in maybe 1 special lucky case you might be empowered. And in the other 99 cases you are subject to the bug without the remotest control over it, since it's buried away within something you use: you don't even have the option of not using the surface service or app, let alone control over its subcomponents.


It's a heck of a lot better than being unaware of it.

(To put this in context: I assume that on average a published security vulnerability is known about to at least some malicious actors before it's published. If it's published, it's me finding out about it, not the bad actors suddenly getting a new tool)


It's only better if you can act on it as quickly as the bad guys. If the bad guys get to act on it before you do, or before some other good guys do on your behalf, then no, it's not better.

Remember, we're not talking about keeping a bug secret; we're talking about using a power tool to generate a fire hose of bugs and only doing that, not fixing them.


The bug in question revolves around support for a codec that has never been in wide use, and was only in obscure use over 25 years ago.


There is no "the bug". The discussion is about what to do with the power of bug-finding tools.


"The bug" in question refers to the one found by the bug-finding tool the article claims triggered the latest episode of debate. Nobody is claiming it's the only bug, just that this triggering bug highlighted was a clear example of where there is actually such a clear cut line.

Google does contribute some patches for codecs they actually consume e.g. https://github.com/FFmpeg/FFmpeg/commit/b1febda061955c6f4bfb..., the bug in question was just an example of one the bug finding tool found that they didn't consume - which leads to this conversation.


Which codec is it?


I believe it's sanm, the LucasArts SANM/SMUSH video decoder.


The bug exists whether it's reported to the maintainers or not, so yeah, it's pretty naive.


You observe that it is better to be informed than ignorant.

This is true. Congratulations. Man we are all so smart for getting that right. How could anyone get something so obvious and simple wrong?

What you leave out is "in a vacuum" and "all else being equal".

We are not in a vacuum and all else is not equal, and there are more than those 2 factors alone that interact.


Given that Google is both the company generating the bug reports and one of the companies using the buggy library, while most of the ffmpeg maintainers presumably aren't using their libraries to run companies with a $3.52 trillion market cap, would you argue that going public with vulnerabilities that affect your own product before you've fixed them is also a naive approach?


Sorry, but this states a lot of assumptions as fact in order to ask a question that only makes sense if they're all true. I feel Google should assist the project more financially given how much they use it, but I don't think it's a reasonable guess that Google is shipping products using every codec their open source fuzzer project finds bugs for. I certainly doubt YouTube/Chrome lets you upload/compile ffmpeg with this LucasArts format, as an example. For security issues relevant to their usage via Chrome CVEs etc., they seem to contribute fixes as needed. E.g. here is one, via fuzzing, for a codec they use and work on internally: https://github.com/FFmpeg/FFmpeg/commit/b1febda061955c6f4bfb...

As regards whether it's a bad idea to publicly document security concerns regardless of whether you plan on fixing them, it often depends on whether you ask the product manager what they want for their product or the security-concerned folks what they want for every product :).


> I think so, yes. Certainly it's more effort to both find and exploit a bug than to simply exploit an existing one someone else found for you.

That just means the script kiddies will have more trouble, while scarier actors like foreign intelligence agencies will have free rein.


Foreign intelligence has free rein either way. The script kiddies are the only ones that can be stopped by technological solutions.


It's not a claim; it's common sense. That's why we have notice periods.


I like how some coward downvoted with no response when my counterpoint is devastating.


> it functionally is just providing free security vulnerability research for malicious actors because almost nobody can take over or switch off of ffmpeg

At least, if this information is public, someone can act on it and sandbox ffmpeg for their use case, if they think it's worth it.
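
A minimal sketch of that idea, assuming a POSIX system and an untrusted input file (rlimits are a weak sandbox; serious deployments reach for seccomp, containers, or a separate unprivileged user, but even this bounds the blast radius of a decoder bug):

    import resource, subprocess

    def limits():
        # Cap CPU time and address space before exec'ing ffmpeg, so a
        # decoder bug can't spin forever or balloon memory unboundedly.
        resource.setrlimit(resource.RLIMIT_CPU, (60, 60))
        resource.setrlimit(resource.RLIMIT_AS, (1 << 31, 1 << 31))

    # Decode the untrusted file and discard the output, under the limits above.
    subprocess.run(["ffmpeg", "-i", "untrusted_input.mkv", "-f", "null", "-"],
                   preexec_fn=limits, timeout=120, check=False)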

I personally prefer to have this information be accessible to all users.


This is a weird argument. It's basically condoning security through obscurity: if nobody reports the bug, then we just pretend it doesn't exist, right?

There are many groups searching for security vulnerabilities in popular open source software who deliberately do not disclose them. They do this to save them for their own use or even to sell them to bad actors.

It’s starting to feel silly to demonize Google for doing security research at this point.


> It’s starting to feel silly to demonize Google for doing security research at this point.

Aren't most people here demonizing Google for dedicating the resources to find bugs, but not to fix them?


And not giving the maintainers a reasonable amount of time to fix them. This was triggered by a recent change of policy on Google's side.


The timeline is industry standard at this point. The point is to make sure folks take security more seriously. If you start deviating from the script, others will expect the same exceptions, and you lose that leverage. Sometimes it's good to let something fail loudly to show there is a problem. If ffmpeg doesn't have enough maintainers, then it should fail and let downstream customers know, so there's more pressure on them to contribute resources. Playing Superman and trying to prevent them from seeing the problem will just lead to burnout.


Is it industry standard to run automated AI tools and spam the upstream with bug reports? To then expect the bugs to be fixed within 90 days is a bit much.

It's not some lone report of an important bug; it's AI spam that puts forth security issues faster than they have the resources to fix them.


"AI tools" and "spam" are knee jerk reactions, not an accurate picture of the bug filed: https://issuetracker.google.com/issues/440183164?utm_source=...

whether or not AI found it, clearly a human refined it and produced a very high quality bug report. There's no AI slop here. No spam.


I guess the question for a person at Google who discovers a bug they don't personally have time to fix is: should they report the bug at all? They don't necessarily know whether someone else will be able to pick it up. So the current "always report" rule makes sense, since you don't have to figure out whether someone can fix it.

The same question applies if they have time to fix it in six months, since that presumably still gives attackers a large window of time.

In this case the bug was so obscure it’s kind of silly.


It doesn't matter how obscure it is if it's a vulnerability that's enabled in default builds.


This was not a case of stumbling across a bug. This was dedicated security research, taking days if not weeks of highly paid employees' time to find.

And after all that, they just drop an issue instead of spending a little extra time producing a patch.


It's possible that this is a more efficient use of their time when it comes to open source security as a whole; most projects do not have a problem with reports like this.

If not pumping out patches allows them to get more security issues fixed, that’s fine!


From the perspective of Google maybe, but from the perspective of open source projects, how much does this drain them?

Making open source code more secure and at the same time less prevalent seems like a net loss for society. And if those researchers could spare some time to write patches for open source projects, that might benefit society more than dropping disclosure deadlines on volunteers.


I’m specifically talking from the perspective of everybody but Google.

High quality bug reports like this are very good for open source projects.


Except that users can act on the information to work around the vulnerability.

For one, it lets people understand where ffmpeg is at so they can treat it more carefully (e.g. run it in a sandbox).

Ffmpeg is also open source. After public disclosure, distros can choose to turn off said codec downstream to not expose this attack vector. There are a lot of things users can do to protect themselves but they need to be aware of the problem first.


Security by obscurity. In 2025. On HN.


This is comical, because we used to have something called the Turing test, which we considered our test of human-level intelligence. We never talk about it now because we obviously blew past it years ago.

There are some interesting ways in which AI remains inferior to human intelligence but it is also obviously already superior in many ways already.

It remains remarkable to me how common denial is when it comes to what AI can or cannot actually do.


There are also some interesting ways in which bicycles remain inferior to human locomotion but they are also obviously already superior in many ways already.

Still doesn't mean we should gamble the economies of whole continents on bike factories.


I'm half joking but people who can't tell which side of a chat is an LLM aren't conscious


You are absolutely right!

But common patterns of LLMs today will be adopted by humans as we are influenced linguistically by our interactions, which will then make LLM output harder to detect.


This is an artifact of RLHF and far better human facsimiles are trivial with uncensored / jailbroken models.


I think it's that the issues are still so prevalent that people will justify poor arguments and reasons for being skeptical, because it matches their feelings, and articulating the actual problem is harder.


It's exactly the same as with the literal Luddites, synthesizers, cameras, etc. The actual concern is economic: people don't want to be replaced.

But the arguments are couched in moral or quality terms for sympathy. Machine-knitted textiles are inferior to hand-made textiles. Synthesizers are inferior to live orchestras. Daguerreotypes are inferior to hand-painted portraits.

It's a form of intellectual insincerity, but it happens predictably with every major technological advance because people are scared.


I don't completely disagree. But it's incorrect to claim that there's nothing but fear of losing jobs at the heart of the AI concern.

I think a lot of people like myself are concerned with how dependent we are becoming so quickly on something with limited accuracy and accountability.


Would your concerns be lessened or heightened if AI was more accurate? The doomsday scenario was always a highly competent AI like Skynet.


I think it would ease some of my concerns, but it wouldn't put me in the camp that believes we should race toward it without thinking about how to control it, and without plans in place to both identify and react to its risks.

There are two doomsdays. The dramatic one, where AIs control the military and we end up living in the Matrix. And the less dramatic one, where we as humans forget how to do things for ourselves and then slowly watch the AIs become less and less capable of keeping us happy and alive. Maybe the end of both scenarios is similar, but one would take decades while the other could happen overnight.

Accuracy alone doesn't fix either doomsday scenario. But it would slow some of the issues I already see forming, with people replacing research skills and informational reporting with AIs that can lie or be very misleading.


> We never talk about it now because we obviously blew past it years ago.

It's shocking to me that (as far as I know) no one has actually bothered to do a real Turing test with the best and newest LLMs. The Turing test is not whether a casual user can be momentarily confused about whether they are talking to a real person, or if a model can generate real-looking pieces of text. It's about a person seriously trying, for a fair amount of time, to distinguish between a chat they are having with another real person and an AI.

From Turing's 1950 paper:

Q: Do you play chess?
A: Yes.
Q: I have K at my K1, and no other pieces. You have only K at K6 and R at R1. It is your move. What do you play?
A: (After a pause of 15 seconds) R-R8 mate.


A: I don't know chess notation


Tbf, a machine is more likely to be versed in this ancient descriptive notation than a human who is maybe just playing casually. "R1" and "K1" have not been in common use since the 80s. (Read as the answerer's own descriptive notation, the position is roughly a bare White king on e1 against a Black king on e3 and a rook on the back rank, and R-R8 is the back-rank mate ...Ra1#.)


Try reading Turing's paper before making that assertion, because the imitation game wasn't meant to measure a tipping point of any kind.

It's just a thought experiment to show that machines achieving human capabilities isn't proof that machines "think". He then argues against multiple interpretations of what machines "thinking" would even mean, concluding that whether machines think or not is not worth discussing and that their capabilities are what matter.

That is, the test has nothing to do with whether machines can reach human capabilities in the first place. Turing took for granted that they eventually would.


> We never talk about it now because we obviously blew past it years ago.

My Turing test has been the same since about when I learned it existed. I told myself I'd always use the same one.

What I do is, after saying hi, repeat the same sentence forever.

A human still reacts very differently than any machine to this test. Current AIs could perhaps be adversarially prompted to bypass it, but so far it's still obvious it's a machine replying.


What would you expect a human to reply?

And after you have answered that question. Try Claude Sonnet 4.5.

What is Claude Sonnet 4.5's reply?


I decided to put this to the test.

What I would expect a human to reply:

"Um... OK?"

What Claude Sonnet 4.5 replied:

"Hi there! I understand you're planning to repeat the same sentence. I'm here whenever you'd like to have a conversation about something else or if you change your mind. Feel free to share whatever's on your mind!"

I don't think I've ever imagined a human saying "I understand you're planning to repeat the same sentence". If you thought this was some kind of killer rebuke, I don't think it worked out the way you imagined. Do you actually think that's a human-sounding response? To me it's got that same telltale sycophancy of a robot butler that I've come to expect from these consumer-grade LLMs.


That's mostly because of the system prompt asking Claude to be a helpful assistant.

If you try this with a human who works in a call center, with that system prompt as instructions for how to answer calls, you will likely get a similar response.
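
To make that concrete, here is a hedged sketch using Anthropic's Python SDK (the model ID and prompts are illustrative, not a claim about Claude's actual production system prompt): the "helpful assistant" persona lives in the system parameter, and swapping it changes how the same repeated-sentence test reads.

    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    msg = client.messages.create(
        model="claude-sonnet-4-5",  # illustrative model ID
        max_tokens=200,
        # Replace the default assistant persona with a terse, human-like one.
        system="You are a bored stranger in a chat room. Reply in a few words.",
        messages=[{"role": "user", "content": "Hi. I like turtles. I like turtles."}],
    )
    print(msg.content[0].text)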

But honestly, believe in whatever you wanna believe. I'm so sick of arguing with people online. Not gonna waste my time here anymore.


Maybe don't take such a maximalist interpretation of other people's comments. My point that it doesn't pass that test doesn't mean it isn't extremely useful for many things. It's just that the test is undefined, so I find it funny when people say they truly cannot tell it's not a real person. I could've been more crass and said it also doesn't reply to insults like a real person. There are so many ways in which it doesn't behave like a human, but it's still pretty useful.

What I read from your reply is that you adjoin "and therefore they are useless" to the above statement, but there's no need to read it like that.


Is this an ad for Claude Sonnet 4.5?


No, this is Claude Sonnet 4.5 recalibrating its response.


> This is comical because we used to have something called the turing test

It didn't go anywhere.

> which we considered our test of human-level intelligence.

No, this is a strawman. Turing explicitly posits that the question "can machines think?" is ill-posed in the first place, and proposes the "imitation game" as something that can be studied meaningfully — without ascribing to it the sort of meaning commonly described in these arguments.

More precisely:

> The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

----

> We never talk about it now because we obviously blew past it years ago.

No. We talk about it constantly, because AI proponents keep bringing it up fallaciously. Nothing like "obviously blowing past it years ago" actually happened; cited examples look nothing like the test actually described in Turing's paper. But this is still beside the point.

> There are some interesting ways in which AI remains inferior to human intelligence but it is also obviously already superior in many ways already.

Computers were already obviously superior to humans in, for example, arithmetic, decades ago.

> It remains remarkable to me how common denial is when it comes to what AI can or cannot actually do.

It is not "denial" to point out your factual inaccuracies.


>obviously already superior in many ways already.

And yet you didn't bother to provide a single obvious example.

