Hacker News | pingou's comments

Most batteries do not use rare earth metals. Even if they did and it was an issue, we would find alternatives if necessary, just like rare-earth-free motors were developed to avoid all the downsides that come with those metals.

Have a look at CATL’s sodium-ion batteries, they do not use anything expensive, rare, or particularly damaging to extract from the environment.


Seems promising but it would have been nice to have some figures and the estimated cost at scale (or even just costs for the prototype).

> The Nexus project, a 1.6 MW solar installation on the canals of the Turlock Irrigation District (TID) in California, is now complete and operational. The $20 million state-funded pilot is presented as a model for agricultural regions affected by water stress.

It's the web; follow the links to related pages and you'll usually find more information.



AI improving itself (or at least the architecture it runs on): the singularity is near, as they say.

Do we have other examples of AI being used to improve the LLMs, apart from the creation of synthetic data and the testing of the models?


There is an apples and oranges difference between AI improving itself (becoming more capable) and AI optimizing software that happens to be used for AI training or inference.

A more efficient transformer just costs less to run.

"AI improving AI" would be if one generation of AI designed a next-gen AI that was fundamentally more capable (not just faster/cheaper) than itself. A reptilian brain that could autonomously design a mammalian brain.

Even when hooked up into a smart harness like AlphaEvolve, I don't think LLMs have the creativity to do this, unless the next-gen architecture is hiding in plain sight as an assemblage of parts that an LLM can be coaxed into predicting.

More likely it'll take a few more steps of human innovation, steps towards AGI, before we have an AI capable of autonomous innovation rather than just prompted mashup generation.


I don't think there is a fundamental divide between implementation speedups and algorithmic/architectural optimizations.

A speedup that changes nothing else is just that: a speedup that changes nothing else.

> Do we have other examples of AI being used to improve the LLMs

Yes, last year when they revealed AlphaEvolve, they used a previous Gemini model to improve kernels used in training this generation's models, netting a 1% faster training run. Not much, but still.


I feel like the most viral lately is https://github.com/karpathy/autoresearch

Self-improving doesn't necessarily imply singularity, right?

There could still be hard constraints that make a singularity intractable, or just such a long time horizon that it's not practical, right?


> AI improving itself

This is the thing to look for in 2027, imho. All the big AI labs have big projects working on research agents, also specifically into improving AI (duh) and I expect a lot of that to get out of the experimental phases this year.

Next year they actually get to do a lot of work and I think we will see the first big effective architectural change co-invented by AI.


And then in 2028 we will be selling ice cream at the beach.

Shameless plug: https://huggingface.co/spaces/smolagents/ml-intern

It’s a simple harness around Opus, but with tight integration with Hugging Face infra, so the agent can read papers, test code, and launch experiments.


What are the benchmarks for this, in terms of costs of computation and error; cost to converge?

Re: hyperparameter tuning and autoresearch: https://news.ycombinator.com/item?id=47444581

Parameter-free LLMs would be cool


Singularities are a sign that you have a broken model.

The hard part about this is for every few 'WOW', there's a lineage of 'you dumbass'.

I mean, if you can create a harness to filter these two, sure, singularity away; it's really hard to see how someone's gonna do that.


Not sure why you are downvoted, but I agree. Additionally, perhaps LLMs are just another higher-level programming language, as the author said, and they still need someone to steer them.

I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using an LLM without any sort of learning, it changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.

I think LLMs will be (and already are) useful for many more things than programming anyway.


> they will only get better.

I don't buy that it's true. The "only" part, anyway. Look at how UX with software has evolved. This is gonna be an old-man-yells-at-clouds take, but before smartphones, there were hotkeys. And man, you could fly with those things. The computers running things weren't as fast as they are today, but you could mash in a whole sequence thru muscle memory and just wait for it to complete. Now you have to poke at your phone, wait for it to respond, poke at it some more. It's really not great for getting fast. AI advancement is going to be like that. Directionally it will generally be better, but there's going to be some niche where, y'know what, ChatGPT-4o really had it in a way that 5.5 does not. (Rose-colored glasses not included.)


> they will only get better.

Then came the new Claude update, which many people say is worse. Even Anthropic says it got worse.[1] HN discussion back on April 15th: [2]

Some of this is a pricing issue. Turning "default reasoning effort" down from "high" to "medium" was a form of shrinkflation. Maybe this technology is hitting a price/performance wall.

[1] https://www.anthropic.com/engineering/april-23-postmortem

[2] https://news.ycombinator.com/item?id=47778035


> I'm sure it was very difficult to program in machine code, but if now (or soon) anyone can just write software using a LLM without any sort of learning it changes everything. LLMs can plan and create something usable from simple instructions or ideas, and they will only get better.

Did you read the section "Power to the People?" ? In it, the author dismantles your thesis with powerful, highly plausible arguments.


I read that section but I disagree with it.

1. You don't have to be an LLM expert to get good, consistent results with LLMs.

My best vibe-code process after years of using LLMs is to have Claude Code create a plan file and then cycle it through Codex until Codex finds nothing more to review, then have an agent implement it. This process is trivial yet produces amazing results.

It's solved by better and better harnesses.

2. You don't have to write technical specs. The LLM does that for you. You just tell it "I want the next-tab button to wrap back to the first one" and it generates a technical plan. Natural language is fine.

3. Software that seems to work only to fail down the line in production is already how software works today. With LLMs you can paste the stacktrace or user bug email and it will fix it.

This is why vibe-coding works. Instead of simulating in your head how an app will run by looking at its code, you run the app and tell the LLM what isn't working correctly. The app spec is derived iteratively through a UX feedback loop.

4. I don't understand TFA's goalposts, but letting people who are only interested in the LLM process (rather than the software craftsmanship) create software would be a huge democratization of software.


This sounds like someone who has never had to write serious software.

> 1. You don't have to be an LLM expert to get good, consistent results with LLMs.

You don't get good, consistent results with LLMs, expert or not.

> 2. You don't have to write technical specs. The LLM does that for you. You just tell it "I want the next-tab button to wrap back to the first one" and it generates a technical plan. Natural language is fine.

Try this: have Claude write a section in your specs titled "Performance Optimizations" and see the gibberish it will come up with. Fluffy lists with no actually useful content specific to the project. This is a severe problem with LLM-driven speccing that I have encountered countless times. I now rarely allow them to touch the spec document.

> 3. Software that seems to work only to fail down the line in production is already how software works today. With LLMs you can paste the stacktrace or user bug email and it will fix it.

And pretty soon you have a big ball of mud. But I guess if the rate of bugs accelerates, the LLMs can also "fix" them faster.

> This is why vibe-coding works. Instead of simulating how an app will run in your head looking at its code, you run the app and tell the LLM what isn't working correctly. The app spec is derived iteratively through a UX feedback look.

I should tell you about the markdown viewer with specific features that I have wanted to build with LLM vibe-coding alone, and how none of them have been able to do it.


> This sounds like someone who have never had to write serious software.

Why the insult? You never know who you're talking to on HN.

Your points have to do with process failure, not intractable LLM limitations. Most of which already apply to human-conceived software.

Your "Performance Optimizations" bit exemplifies this since you baked in the assumption that it will have no connection with your project. Well, why not? You need to figure out how to use your source code and relevant data as ground truth when working with LLMs.

A markdown viewer is on the simpler side of things I've built with LLMs, so this too suggests that you have a weak process. A common mistake is to expect LLMs to one-shot everything (the spec, the plan, or the actual impl). Instead you should use LLMs to review-revise-cycle one of those until it's refined, ideally the spec/plan since impl is derived from it. You will have much better and consistent results.
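The review-revise cycle described above can be sketched as a simple loop. Note that `generate_plan` and `review` below are hypothetical stand-ins for calls out to two different LLM agents (e.g. a planner and a reviewer), not real APIs; the canned behavior just keeps the sketch runnable:

```python
def generate_plan(spec: str) -> str:
    # Placeholder for the planning agent (e.g. one writing a plan file).
    return f"plan for: {spec}"

def review(plan: str) -> list[str]:
    # Placeholder for the reviewing agent; returns remaining issues.
    # Flags the plan once, then approves, to keep the sketch deterministic.
    if "revised" not in plan:
        return ["tighten error handling"]
    return []

def refine(spec: str, max_rounds: int = 5) -> str:
    """Cycle the plan through review until the reviewer finds nothing more."""
    plan = generate_plan(spec)
    for _ in range(max_rounds):
        issues = review(plan)
        if not issues:  # reviewer found nothing more to fix
            break
        plan = f"{plan} (revised: {'; '.join(issues)})"
    return plan

print(refine("markdown viewer with custom features"))
```

The `max_rounds` cap is one way to keep the loop from spinning forever if the two agents never converge.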

I recommend finding an engineer you respect/trust that has found a way to build good software with LLMs, and then tap them for their process.


Thanks for your response. I did not mean to insult; my mild jab was meant to draw attention to the idea that using LLMs for serious production software is a whole different game than using them for casual software.

You said:

> Your "Performance Optimizations" bit exemplifies this since you baked in the assumption that it will have no connection with your project. Well, why not?

OK, I am talking from experience. Using LLMs for speccing is almost useless above a certain complexity level; what you get is an assemblage of the most average points you can imagine, the kinds of things almost every project in your category will address without any thought. Ask one to spec auth for a specific design, and all you'll get is: cookie-based login, input validation, password hashing, etc., etc. Which you don't need an LLM for. Nothing like an actual in-depth design. Even asking them to update specs based on discussions is hit or miss.

> A markdown viewer is on the simpler side of things I've built with LLMs, so this too suggests that you have a weak process. A common mistake is to expect LLMs to one-shot everything (the spec, the plan, or the actual impl). Instead you should use LLMs to review-revise-cycle one of those until it's refined, ideally the spec/plan since impl is derived from it. You will have much better and consistent results.

But what you are describing is NOT vibe-coding. I have no doubt I could build the viewer I want (which by the way is not your usual plain vanilla markdown viewer, but one with some very specific features) with LLM assistance. My point is: if you can't even vibe code your way to this specific viewer, how are you supposed to vibe code serious software?

Indeed, the declining quality of Claude Code is, I suspect, testament to the fact that vibe-coding any sufficiently complex piece of software does not work in the long run.


Oh, I see. I'll grant whatever you take vibe-coding to mean, since that seems to be the hang-up; "vibe-code" probably suggests there's no process at all.

My point is that the planning phase and implementing phase are basically unsupervised, and all the work goes into the planning phase.

Yet I've noticed that over time, I'm not even needed in the planning phase, because a simple revision loop on a plan file produces a really good plan. My role is mostly to decide what the agents should do next and to drive the revision loop by hand (mostly because it's the best place for me to follow what's happening).

I've been getting really good results, though I've also developed a simple process that ensures the LLMs aren't relying on their internal knowledge but rather on external resources, which is critical.


While I think the author is entirely right about 'natural language programming' in the current day, if LLMs (or some other AI architecture) continue to improve, it is easy to believe touching code could become unnecessary for even large projects. Consider that this is what software company executives do all the time: outline a high-level goal (a software product) to their engineering director, who largely handles the details.

We just don't yet know if LLMs will ever manage that level of intelligence and independence in open-ended tasks. And, to expand on that, I don't know that intelligence is necessarily the bottleneck for this goal. They can clearly tackle even large engineering tasks, but a common complaint is that they miss important architectural context or choose a suboptimal solution. Maybe with better training, context handling, and documentation, these things will cease to be problems.

I have indeed missed the arguments that are so powerful that they dismantle my thesis.

Would there even be a debate in the tech community if such unassailable arguments existed? The author is entirely entitled to his opinion, just as I am allowed to disagree with him (not sure why I am also downvoted). The good thing is, if I'm right, we will see it in less than 10 years.



This is assuming there will be no competition. But why wouldn't there be? Especially since you can use open-source models, which are not too far behind frontier models (for now).

"One moringa seed can treat about 10 liters of water, the scientists found"

What does that mean? Do you have to discard the seed after 10 liters? If so, sorry to be so negative, but it seems completely useless.

Edit: a tree can produce up to 25,000 seeds per year, so perhaps it isn't that bad.
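Back-of-the-envelope, taking the article's 10 L/seed figure and the 25,000 seeds/year upper estimate at face value:

```python
liters_per_seed = 10               # figure quoted from the article
seeds_per_tree_per_year = 25_000   # upper estimate for one tree

liters_per_year = liters_per_seed * seeds_per_tree_per_year
print(liters_per_year)                # 250000 liters of treated water per tree-year
print(round(liters_per_year / 365))   # 685 liters per day
```

So a single productive tree could, in principle, cover the drinking-water needs of a few hundred people.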


Typical planned obsolescence.


You can still buy Oligosol Lithium in Europe.


