Hacker News | et1337's comments

The saddest part about Kubernetes is… after you set it all up, you still need a hacky deploy.sh to sed in the image tag to deploy! And pretty soon you’re back to “my dear friend you have built a Helm”. And so the configuration clock continues ticking…
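That sed step can be sketched as follows (the file names, registry, and placeholder token here are made up for illustration, not from any real repo):

```shell
# Hypothetical "hacky deploy.sh": a template manifest holds a literal
# __IMAGE_TAG__ placeholder, sed pins it, and kubectl would apply the result.
set -eu

TAG="${1:-v1.2.3}"

# Stand-in template; a real script would have a full Deployment manifest here.
printf 'image: registry.example.com/app:__IMAGE_TAG__\n' > /tmp/deploy.tpl

sed "s/__IMAGE_TAG__/${TAG}/" /tmp/deploy.tpl > /tmp/deploy.yaml

cat /tmp/deploy.yaml
# kubectl apply -f /tmp/deploy.yaml   # the real script would end here
```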


Claude Code has essentially fixed this perpetual annoyance for me. Doesn't matter if it's a hacked up deploy.sh that mixes sed, envsubst and god knows what or a non-idiomatic Helm chart that was perpetually on my backlog to fix... today I just say "make this do this thing and also fix any bash bugs along the way" and it just does it. Its effectiveness for these thousand-little-cuts type DevOps tasks is underrated IMO.

Now, the actual CI/CD thing-doer tools, which all suck... I'm still stuck with those.


I agree, I'm not great at devops, but my setup.sh and deploy.py have been game changers. Just vibe coding those was good enough.

Same with build.sh, written so that I can reuse the same build.sh in my ci.yml for GitHub Actions.


I have been using Kubernetes for 7 or 8 years now, and have nearly 100% stayed away from Helm.

Some Kustomize, a little bit of envsubst and we're good to go thank you very much.


This is why we don't take advice from randos on the internet.

I manage 100+ variations on a single helm chart and 50+ such helm charts at work daily, going on 7 years, across 11 datacenters/kubernetes clusters. And I have team members who swear by kustomize. The number of kustomize typos and issues that I have to deal with is unimaginable. Whereas if I test and deploy a helm chart, I know it will work everywhere, in every variation.

Kustomize is just plain terrible and backwards as a solution. It doesn't scale, and it's half-assed. It essentially requires you to build your own compiler, parser, and transforms. With kustomize + envsubst: dear friend, you have built helm.


How do you handle cleanups and hooks? The best way to do helm, at least for me, seems to be about limiting its use to simple templating use cases; if you end up needing an if, you've probably done something terribly wrong.


That's my main gripe with Helm.

For the simple use case you're describing, Helm is not required. Plenty of other solutions around.

For use cases where it starts getting useful, we both agree that something has gone terribly wrong.

I still don't know why Helm exists. It's a solution that created lots of problems that didn't exist.


My personal theory is that Helm may be ok for distributing a pre-packaged solution to other people. Then people mistook it for a tool that should be used in-house to deploy a company’s own systems, where it makes much less sense.

It makes absolute sense. You can use no variables at all and still deploy a helm chart. It is a directory of plain old yaml objects. And you can add customization as you need it, as you evolve. Good luck doing that with kustomize.

> And add customization when you need as you evolve.

Using one of the most horrible templating languages since ASP. Helm is what happens when a devops team decides to yolo into software development.

What's the issue with kustomize? It works well for us.


You can rely purely on kubectl with something like:

    cat manifests.yaml | kubectl apply -f - --server-side --field-manager "$FIELD_MANAGER" --prune --applyset "$APPLYSET" --namespace "$NAMESPACE"


Seems to be a case of the XY problem. What do you need cleanups and hooks for?


Cleanups: I want to do a `helm uninstall` and have all the manifests go away at once instead of looking around for N different resources.

Hooks: I want to apply my database migrations and populate the database with static datasets before I deploy my application, without having my CI connect to the database cluster (at places I've worked, the CI cluster and K8s cluster were completely separate).


Regarding cleanups: I'm using Flux CD with kustomize. It tracks the resources it created. If I delete a manifest from my repository, Flux will delete the resources that were created from it. For me that's pretty much the ideal workflow.

Regarding hooks: I don't know. All applications that I've used implemented migrations internally (usually Java with Flyway), so I don't need to think about it. One possible approach could be to use Flux CD with a Job definition. I think Flux will re-create the Job when it changes, so if you change the image tag, it'll re-create the Job and trigger a Pod execution. But I didn't try this approach, so I'm not sure if that would work for you.


> Cleanups: I want to do a `helm uninstall` and have all the manifests go away at once instead of looking around for N different resources.

    kubectl delete -f <manifests.yaml>

    kubectl delete -k <kustomization_directory>
> I want to apply my database migrations and populate the database with static datasets before I deploy my application, without having my CI connect to the database cluster

A Job feels like a good fit for this. CI deploys the Job without connecting to the DB, and the Job runs migrations using the same connectivity as the application.
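A minimal sketch of such a migration Job; the image, command, and secret name are placeholders, not from any particular setup:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: registry.example.com/app:v1.2.3   # placeholder image
          command: ["./migrate", "up"]             # placeholder migration tool
          envFrom:
            - secretRef:
                name: db-credentials               # same creds the app uses
```

Since the Job runs inside the cluster, it reaches the database over the same network path as the application, so CI never needs direct DB connectivity.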


> apply my database migrations and populate the database with static datasets before I deploy my application

You could a) have the app acquire a lock in the db and do its own migrations, or b) create a k8s job that runs the migration tool, but make sure the app waits for the schema to be updated or at least won't do anything bad.


There are a multitude of cases of operations which need to be performed before and after specific actions in K8s. It depends on the resource, operator, operational changes, state, bugs, order of operations, and more.


This is a BS claim with no proof. This is the strength of helm.

Going on 10 years now for me, tried Helm a bit and yep - all I've really needed was a package.json deploy script with sed to bump the image version.


Or, if your colleagues are "smarter" than you, they make it in Clojure instead, with an EDN-but-with-subroutines config language, so that not only are yaml-aware editors useless, but EDN-aware editors cannot make heads or tails of the macros either.

Fun times.


I don't understand you.

For very simple deployments, you don't need anything at all. Just write manifests and use `kubectl apply`. You can write `deploy.sh` but it'll be trivial.

If you want templating, there are many options. You can use `sed` for the simplest templating needs. You can use `cpp`, `m4`, `helm` or `kustomize`. I personally like `kustomize`, but `helm` is probably not the worst template engine out there.

Kustomize is even somewhat included in basic kubernetes tooling, so if you want something "opinionated", it is there for you. It works.
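For anyone unfamiliar, a minimal kustomize layout looks something like this (the directory, file, and image names are illustrative):

```yaml
# base/kustomization.yaml
resources:
  - deployment.yaml
  - service.yaml

# overlays/prod/kustomization.yaml -- per-environment overrides
resources:
  - ../../base
images:
  - name: registry.example.com/app
    newTag: v1.2.3
```

Rendered and applied with `kubectl apply -k overlays/prod`; since kustomize ships inside kubectl, no extra tooling is needed.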


Does anyone remember the GitOps thingy called flux? Weave was the company name.

Git and Kubernetes configuration cannot go hand in hand. You cannot go back in the past indefinitely, because cluster state might not be that reversible. If so, git is useless.

And no, this doesn't apply to database migrations: you can mostly run migrations backwards, if each migration was written carefully.


I'm starting to use it for my self hosted services.

I have a "simple" representation of services using CUE, that generates the yaml manifests and flux deploys them.

I hesitated a while before going the k8s route, but before that I had an overly error-prone Ansible configuration and I got sick of manual templating (hence the move to CUE for type safety).

There's also the fact that I wanted my services to be as plug and play as possible, so for example automatically generated openid credentials and very easily configurable central SSO, along with the easily configurable reverse-proxy.

If anyone thinks that k8s is not the best tool for this, I'm always interested in advice.

(Also, a lot of the complexity in my setup is due to self hosting. I have Istio, MetalLB, the Proxmox CSI, and all other kinds of stuff that your cloud provider would already have, and these are the things that take up most of the configuration files in my repo.)


I've used it in the past and personally loved it. Just bumping a yaml file in a git repo to the image tag I wanted deployed was a godsend, and nearly automated. I can't speak to your experience though, which I am certain is valid and a real problem. We just never had those kinds of issues, so we could either revert to an earlier tag that worked or publish a new image with the required resolution steps.

https://fluxcd.io/ + helm + a CI pipeline that pushes the docker images to a registry means that after the setup, anytime you push a new image and tag, k8s can automatically update without you needing to do anything manual.

And if you want your Helm to run on certain deploys, and maintain a declarative set of the variables given to charts over time, thinking you can use Helmfile and some custom GitHub Actions… “my dear friend you have built a GitOps.”

(I tend to think this one is acceptable in the beginning, but certainly doesn’t scale.)


Use a CD solution like Spinnaker, BunnyShell, or Kargo.


skaffold. It'll also wait for rollout to stabilize.

If a few lines of scripting are your problem, you shouldn't be programming.


Jujutsu has a concept of mutable vs immutable commits to solve this. Usually everything in a remote branch is immutable. To work on a branch, I track it and that makes it mutable.


Thousands of keystrokes saved by not having to type “man syscall”… and millions of hours lost by confused folks like OP (and myself)


Luckily hours lost by the incompetent don't amount to much.


Needless obscurity is not a virtue


I’ve been driving Bluefin DX for a year or two. On the plus side, it works absolutely flawlessly. This is the longest I’ve ever run a Linux distro without a Nvidia driver update causing the whole thing to explode. It truly is the year of Linux on the desktop.

But I can’t say I recommend it for dev work. It wants you to do everything inside devcontainers, which I like in theory but in practice come with so many annoyances. It wants you to install Flatpaks but Flathub is pretty sparse. I ended up downloading raw Linux binaries into my home directory (which actually works surprisingly well. Maybe this is the future, hah)

I think next time I’ll just go with vanilla Fedora.


I also think there’s an interesting effect when cool functional language features like currying and closures are adopted by imperative languages. They make it way too easy to create state in a way that makes you FEEL like you’re writing beautiful pure functions. Of course, in a functional language everything IS pure and this is just how things work. But in an imperative language you can trick yourself into thinking you’ve gotten away with something. At one point I stored practically all state in local variables captured by closures. It was a dark time.


I'm actually fascinated by what you wrote. Why was it a dark time?


No encapsulation… huge functions with tons of local variables shared between closures… essentially global state in practice. I think at the time, objects with member variables felt “heavy” and local variables felt “light”. But the fact that they were so lightweight just gave me more opportunities to squirrel away state into random places with no structure around it. It really wasn’t all that horrific, and it helped me ship something quickly, but it wasn’t maintainable. These days I think the “heavy boilerplate” of grouping stuff into structs and objects forces me to slow down and think a bit harder about whether I really want to enshrine a new piece of state into the app’s data model. Most of the time I don’t.


I think the worst case is actually that the LLM faithfully implements your spec, but your spec was flawed. To the extent that you outsource the mechanical details to a machine trained to do exactly what you tell it, you destroy or at least hamper the feedback loop between fuzzy human thoughts and cold hard facts.


Unfortunately even formal specifications have this problem. Nothing can replace thinking. But sycophancy, I agree, is a problem. These tools are designed to be pleasing, to generate plausible output; but they cannot think critically about the tasks they're given.

Nothing will save you from a bad specification. And there's no royal road to knowing how to write good ones.


Right, there’s no silver bullet. I think all I can do is increase the feedback bandwidth between my brain and the real world. Regular old stuff like linters, static typing, borrow checkers, e2e tests… all the way to “talking to customers more”


Turn off your watch history. It disables the front page and shorts, but you can still watch any video you want and also follow your subscriptions. You still get recommendations next to each video but I find those much less problematic personally.


Unfortunately, with watch history off, YouTube still pushes Shorts in the subscriptions page (at least on mobile web, which is where I primarily use YouTube).


I find that a lot less problematic, as there are just very few shorts in my feed; I've never been able to scroll through more than 5 or so without just running into ones I've seen before.


The Unhook browser extension gets rid of that. And optionally other things.


This was a fun one today:

% cat /Users/evan.todd/web/inky/context.md

Done — I wrote concise findings to:

`/Users/evan.todd/web/inky/context.md`%


Perfect! It concatenated one file.


To be fair, it was very concise


Based on my experience writing many games that work great barring the occasional random physics engine explosion, I suspect that trigonometry is responsible for a significant proportion of glitches.

I think over the years I subconsciously learned to avoid trig because of the issues mentioned, but I do still fall back to angles, especially for things like camera rotation. I am curious how far the OP goes with this crusade in their production code.


Yes, for physics engines I think that's a very good use case where it's worth the extra complexity for robustness. Generally, if errors (or especially NaNs) can meaningfully compound, i.e. if you have persistent state, that's when it's a good idea to do a deeper investigation.


Your response is well-grounded: trig is trouble. Angles are often fine, but many 3rd-party library functions are not.

Have you ended up with a set of self-implemented tools that you reuse?


You can definitely handle camera rotation via vector operations on rotation matrices.


This video is a really cool dive into EUV for the uninitiated (me) https://youtu.be/MiUHjLxm3V0?si=kEPSicC2WXYhcQ6L


Or this video, which came out before Veritasium's

https://www.youtube.com/watch?v=B2482h_TNwg


https://youtu.be/NGFhc8R_uO4

Or this presentation which came out way long ago.


This is worth the (re)watch every time it comes up.


"I didn't want my name associated with this on the internet"


I didn’t, still don’t, but that’s a lost cause.

I’ll note that this video is way out of date…both in content and my skills as a speaker :P


Thanks for your presentation, I've watched it several times over the years. If your presentation skills are better now, hopefully you can make a new one.


Thanks for the informative presentation!


Thanks to the HN community - the video is how I ended up here, and it's one of the few social-media-esque sites I bother visiting. It taught me a pile of things about coding and CS that weren't in my mechanical engineering degree.


Glad to see Branch Education represented here.


I thought this video was a lot better than the Veritasium video. The Veritasium video was awkward. I think they tried to follow the formula from the (excellent) blue led video that performed so well, but it just didn't work.


Disagree, I thought the Veritasium video was fantastic. You understand how the machine works in depth, learn the history of its development and the challenges it encountered, and hear from people actively working on it. It's a science lesson and a history lesson. As usual, they keep the video engaging and focused on the story, while still keeping a lot of depth in the science. It's a great format.


Or this Asianometry video which came out even sooner.

https://youtu.be/MXnrzS3aGeM


> Thanks for mentioning ASML sponsoring this. I was about to buy an EUV machine from another vendor

lol


The whole “exploding tiny drops of metal” in the middle of this is just Looney Tunes. This machine is literally insane, and two of the companies I am long-long on would be completely fucked without it.


You forgot WITH LASERS, and IN A VACUUM


IIRC from the Veritasium video[0] there is actually some hydrogen gas flowing at quite a high speed through the laser chamber to carry away the tin debris so that it does not accumulate on the mirrors.

[0] https://www.youtube.com/watch?v=MiUHjLxm3V0


They account for every single tiny atom somehow too, but I think I fell asleep last time I watched the video.


The old SemiAccurate article https://semiaccurate.com/2013/02/13/euv-moves-forward-two-st... was very funny.


Seeing this news story made me briefly fear that they’d found a way to replace this glorious mechanism. Thankfully not. In fact, they’re going to shoot more droplets, more often!

So much more fun than LEDs.


Yes, it was crazy when I first heard about it: "wait, what? They shoot it in mid-air?" And that was before I found out they did that like 30k times a second.

But now 100k times a second apparently. Humans are amazing.


You have a machine that’s basically a clean room inside and one of the parts is essentially electrosputtering tin but then throwing all the tin away and using the EM pulse from the sputter to do work.

Oh and can you build it so it can run hundreds or thousands of hours before being cleaned? Thanks byyyyyyyyeeeeee!


The insides of those machines are far, far cleaner than the inside of any clean room ever entered by a human. They have to be molecularly clean.


Which isn't easy considering they explode tin droplets in the machine. I think that's the point the other commenter wanted to make.


Think about the purity requirements that places on the tin.


> We are going to spray expensive stuff in an extremely fine and precise line. Then we're going to shoot a laser at each droplet.

< Why?!

> To make a better laser.

< Yes, of course you are.

> 100,000 times per second.

< [AFK, buying shares.]


I have shares in one of their biggest customers, and one of their customer’s biggest customers.

We are quickly leaving the realm of dependent variables still looking anything like diversification.


> We are quickly leaving the realm of dependent variables still looking anything like diversification.

What does that mean?


It seems like you want someone to ask you what the two companies are. So - what are the two companies?


Nvidia and an AI company


Don't forget that they are hitting each droplet 3 times.


That is why each machine costs a few hundred million eurodollars.


The thing I didn't understand after watching that video was why you need such an exotic solution to produce EUV light. We can make lights no problem in the visible spectrum, and we can make X-ray machines easily enough that every doctor's office can afford one. What is it specifically about those wavelengths that is so tricky?


The efficiency of X-ray tubes is proportional to voltage, and is about 1% at 100 kV. This is the ballpark for garden-variety X-ray machines. But the wavelength of interest for lithography corresponds to a voltage of only about 100 V, so the efficiency would be 10 parts per million.

The source in the ASML machine produces something like 300-500 W of light. With an X-ray tube this would require an electron beam with 50 MW of power. Focused into a microscopic dot on the target, this would not work for any duration of time. Even if it did, the cooling and getting rid of unwanted wavelengths would have been very difficult.

A light bulb does not work because it is not hot enough. I suppose some kind of RF driven plasma could be hot enough, but considering that the source needs to be microscopic in size for focusing reasons, it is not clear how one could focus the RF energy on it without also ruining the hardware.

So, they use a microscopic plasma discharge which is heated by the focused laser. It "only" requires a few hundred kilowatts of electricity to power and cool the source itself.
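For what it's worth, those numbers are consistent with the standard empirical approximation for tube efficiency, η ≈ k·Z·V with k ≈ 1.1×10⁻⁹ V⁻¹ and Z = 74 for a tungsten anode:

```latex
\eta(100\,\text{kV}) \approx 1.1\times10^{-9}\cdot 74\cdot 10^{5} \approx 0.8\%
\qquad
\eta(100\,\text{V}) \approx 1.1\times10^{-9}\cdot 74\cdot 10^{2} \approx 8\times10^{-6} \approx 10\ \text{ppm}
```

and a ~500 W source at 10⁻⁵ efficiency indeed needs 500 / 10⁻⁵ = 50 MW of beam power.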


The issue isn't in generating short-wavelength light, it's in focusing it accurately enough to print a pattern with trillions of nanoscale features with few defects. We can't really use lenses, since every material we could use is opaque to high-energy photons, so we need to use mirrors, which still absorb a lot of the light energy hitting them. Now, this only explains why we need all the crazy stuff that ASML puts in its EUV machines to use near-X-ray light, but not why they don't use X-ray or higher-energy photons. I believe the answer is just that the mirrors they can use for EUV are unacceptably bad for anything higher, but I'm not sure.


Photoresist too. X-rays are really good at passing through matter, which is a bit of a problem when the whole goal is for them to be absorbed by a 100-nanometer-thick film. They tend to ionize stuff, which is actually a mechanism for resist development, but X-ray energies are high enough that the reactions become less predictable. They can knock electrons into neighboring resist regions or even knock them out of the material altogether.


It really is the specific wavelength. Higher or lower is easier. But EUV has tricky properties which make it feasible for lithography (although just barely, if you have a look at the optics) but hard to produce at high intensities.


Specifically, what makes x-rays easy to generate are these: https://en.wikipedia.org/wiki/Characteristic_X-ray In essence, smashing electrons into atoms allows you to ionize the inner shell of an atom and when an electron drops down from an outer shell, the excess energy is shed as high-energy photons. This constrains the energy range of X-ray tubes ("smash electron into metal") to wavelengths well below 13.5nm.

(These emission lines are also what is being used in x-ray spectroscopy to identify elements)


You can also generate broad spectrum bremsstrahlung radiation easily, this is widely used for medical X-rays.


Any source to this? I am hearing this for the first time.


It's easy to make X-rays, you just hit a metal target with electrons: https://en.wikipedia.org/wiki/X-ray_tube


You can hit metal the same way for EUV.


No you can't, or rather you only get a tiny amount in the correct wavelengths


I assume this doesn't work well, otherwise everyone would be doing it.


There is such a thing as X-ray lithography, but it comes with significant challenges that make it not really worth it compared to EUV.


I'd like to hear more about these challenges


There are no normal x-ray mirrors. The only way to focus them is to use special grazing mirrors where the x-rays hit them almost parallel to the surface.

https://science.gsfc.nasa.gov/662/instruments/mirrorlab/xopt...


As I understand it, it's primarily because, due to the high energy of X-rays, the light interacts very differently with materials[1]. Primarily it gets absorbed, so it's very difficult to make the mirrors or lenses that lithography needs to redirect and focus the light on a specific minuscule point on the wafer.

The primary method is to rely on grazing-angle reflection, but that by definition only allows you a tiny deflection at a time, nothing like a parabolic mirror or whatnot.

[1]: https://en.wikipedia.org/wiki/X-ray_optics


All of these problems, or equivalents, still exist in EUV. The litho industry had to kind of rethink the source and scanner because it went from all lenses to all mirrors for EUV. This is also why low-NA and high-NA EUV scanners were different phases.

As I hear it, the decision had a large economic component related to masks and even OPC.


100%. EUV barely works. X-ray litho takes all the issues with EUV and cranks them up to 11. It will take comparable effort to EUV, if not more, to get X-ray litho up and running, and I'm not aware of anyone approaching this with anywhere near the level of investment that ASML (and others) have pumped into developing EUV tech. We may get there eventually as a species, but we're a ways off.


If you think it barely works now, you should've seen it when we first started. Availability of a machine was "fuck you"% and the whole system was held together by duct tape, bubblegum and hope. Compared to that the current system is entirely controllable.


Oh, for sure, via herculean effort and investment we have created ourselves a functioning and economical process!

We do actually have functioning processes for X-ray litho today, but we'll need that same level (or more) of investment and effort to make it economical.


Stochastic effects become a bigger and bigger problem. At some point (EUV) a single photon has enough energy to ionize atoms, causing a cascade that causes effects to bloom outside of the illumination spot.


Here's your link without the surveillance

https://www.youtube.com/watch?v=MiUHjLxm3V0


With slightly less surveillance


Touché. Here's the link without surveillance

https://yewtu.be/watch?v=MiUHjLxm3V0


try duck player


https://www.youtube.com/watch?v=5Ge2RcvDlgw

Asianometry has lots of videos on ASML, this one is specifically about the light sources.


> https://youtu.be/MiUHjLxm3V0

PSA: the si (along with pp) parameter is used for tracking purposes:

    ?si=kEPSicC2WXYhcQ6L
consider cutting it whenever possible.
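One hedged way to do the cut in a shell, using the link above (plain parameter expansion, no external tools):

```shell
# Drop the tracking query string from a shared YouTube short link.
url='https://youtu.be/MiUHjLxm3V0?si=kEPSicC2WXYhcQ6L'

clean="${url%%\?*}"   # strip everything from the first '?' onward
echo "$clean"
# prints: https://youtu.be/MiUHjLxm3V0
```

For youtu.be short links the video ID lives in the path, so dropping the whole query string is safe; for full `watch?v=` URLs you would instead remove only the `si` and `pp` keys.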


Asianometry has half a dozen or so videos if you want some really deep dives on the tech and industry (with sources, since we're on HN)


Okay this is weird.

> The key advancements in Monday's disclosure involved doubling the number of tin drops to about 100,000 every second, and shaping them into plasma using two smaller laser bursts, as opposed to today's machines that use a single shaping burst.

This is covered in that video. Did they let him leak their Q1 plans?


That has been covered before in other videos[0]: this is their roadmap to higher power, so I'm also not sure what they have announced now that wasn't previously announced.

[0]: https://www.youtube.com/watch?v=MXnrzS3aGeM


From the first video I thought they had already shipped this, but it sounds like they were describing what their new model was.

This seems like a product with a very, very long sales pipeline, so I wonder if they work on pre-orders with existing customers but announce delivery milestones only as they come?


Highly recommend this video as well, he has a bunch more worth watching. https://youtu.be/rdlZ8KYVtPU?si=wgjkkNDSzuuS3lVK


One of those odd moments where a YouTube title looks like clickbait but is actually, factually correct.

+1 for this video, and the Branch education one. Well done to both teams.


As shown with that terrible speed of electricity video, Veritasium prefers "technically correct" over factually correct.

