Hacker News | pkos98's comments

Maybe the Two Pizza rule:

No team at Amazon should be larger than what two pizzas can feed (usually about 6 to 10 people).


The ‘design everything as a publicly accessible API’ directive seems to play to this as well. If all your data / services are available and must be documented then a lot of communication overhead can be eliminated.


For anyone who doesn't know what you mean, here's an archived copy of Steve Yegge's post about this directive + other musings comparing Amazon vs Google (which is how a lot of us came to find out about this, via Yegge's write-up): https://news.ycombinator.com/item?id=3102800

Copied the most relevant snippet below

---

So one day Jeff Bezos issued a mandate. He's doing that all the time, of course, and people scramble like ants being pounded with a rubber mallet whenever it happens. But on one occasion -- back around 2002 I think, plus or minus a year -- he issued a mandate that was so out there, so huge and eye-bulgingly ponderous, that it made all of his other mandates look like unsolicited peer bonuses.

His Big Mandate went something along these lines:

1) All teams will henceforth expose their data and functionality through service interfaces.

2) Teams must communicate with each other through these interfaces.

3) There will be no other form of interprocess communication allowed: no direct linking, no direct reads of another team's data store, no shared-memory model, no back-doors whatsoever. The only communication allowed is via service interface calls over the network.

4) It doesn't matter what technology they use. HTTP, Corba, Pubsub, custom protocols -- doesn't matter. Bezos doesn't care.

5) All service interfaces, without exception, must be designed from the ground up to be externalizable. That is to say, the team must plan and design to be able to expose the interface to developers in the outside world. No exceptions.

6) Anyone who doesn't do this will be fired.

7) Thank you; have a nice day!

Ha, ha! You 150-odd ex-Amazon folks here will of course realize immediately that #7 was a little joke I threw in, because Bezos most definitely does not give a shit about your day.

#6, however, was quite real, so people went to work. Bezos assigned a couple of Chief Bulldogs to oversee the effort and ensure forward progress, headed up by Uber-Chief Bear Bulldog Rick Dalzell. Rick is an ex-Army Ranger, West Point Academy graduate, ex-boxer, ex-Chief Torturer slash CIO at Wal*Mart, and is a big genial scary man who used the word "hardened interface" a lot. Rick was a walking, talking hardened interface himself, so needless to say, everyone made LOTS of forward progress and made sure Rick knew about it.

Over the next couple of years, Amazon transformed internally into a service-oriented architecture. They learned a tremendous amount while effecting this transformation. There was lots of existing documentation and lore about SOAs, but at Amazon's vast scale it was about as useful as telling Indiana Jones to look both ways before crossing the street. Amazon's dev staff made a lot of discoveries along the way. A teeny tiny sampling of these discoveries included:

- pager escalation gets way harder, because a ticket might bounce through 20 service calls before the real owner is identified. If each bounce goes through a team with a 15-minute response time, it can be hours before the right team finally finds out, unless you build a lot of scaffolding and metrics and reporting.

- every single one of your peer teams suddenly becomes a potential DOS attacker. Nobody can make any real forward progress until very serious quotas and throttling are put in place in every single service.

- monitoring and QA are the same thing. You'd never think so until you try doing a big SOA. But when your service says "oh yes, I'm fine", it may well be the case that the only thing still functioning in the server is the little component that knows how to say "I'm fine, roger roger, over and out" in a cheery droid voice. In order to tell whether the service is actually responding, you have to make individual calls. The problem continues recursively until your monitoring is doing comprehensive semantics checking of your entire range of services and data, at which point it's indistinguishable from automated QA. So they're a continuum.

- if you have hundreds of services, and your code MUST communicate with other groups' code via these services, then you won't be able to find any of them without a service-discovery mechanism. And you can't have that without a service registration mechanism, which itself is another service. So Amazon has a universal service registry where you can find out reflectively (programmatically) about every service, what its APIs are, and also whether it is currently up, and where.

- debugging problems with someone else's code gets a LOT harder, and is basically impossible unless there is a universal standard way to run every service in a debuggable sandbox.

That's just a very small sample. There are dozens, maybe hundreds of individual learnings like these that Amazon had to discover organically. There were a lot of wacky ones around externalizing services, but not as many as you might think. Organizing into services taught teams not to trust each other in most of the same ways they're not supposed to trust external developers.

This effort was still underway when I left to join Google in mid-2005, but it was pretty far advanced. From the time Bezos issued his edict through the time I left, Amazon had transformed culturally into a company that thinks about everything in a services-first fashion. It is now fundamental to how they approach all designs, including internal designs for stuff that might never see the light of day externally.

At this point they don't even do it out of fear of being fired. I mean, they're still afraid of that; it's pretty much part of daily life there, working for the Dread Pirate Bezos and all. But they do services because they've come to understand that it's the Right Thing. There are without question pros and cons to the SOA approach, and some of the cons are pretty long. But overall it's the right thing because SOA-driven design enables Platforms.

That's what Bezos was up to with his edict, of course. He didn't (and doesn't) care even a tiny bit about the well-being of the teams, nor about what technologies they use, nor in fact any detail whatsoever about how they go about their business unless they happen to be screwing up. But Bezos realized long before the vast majority of Amazonians that Amazon needs to be a platform.

You wouldn't really think that an online bookstore needs to be an extensible, programmable platform. Would you?


> You wouldn't really think that an online bookstore needs to be an extensible, programmable platform. Would you?

Well, we were making it a platform in small ways long before that edict from Bezos. But because it used to be only an online bookstore, the footprint was a lot smaller.

1. the external interface was ... HTTP

2. the pages were designed to be easily machine parsable

3. you could queue up search queries that amzn would run on its own hardware, and notify you of the results asynchronously.

Sure, this didn't look anything like the things Yegge is describing, but the idea that "it's a platform, dummies" was some new revelation is misleading.


I haven’t read this in years and it was delightful to see it posted here.


I have always been amazed at that rule because it implies developers either do not like pizza or they happen to be on a diet.


It's a better incentive for smaller teams; that way each person gets more pizza :)


As a German, I genuinely cannot comprehend this short-sightedness and ignorance:

Our current Chancellor (Merz) publicly boasts that Germans work too few hours and calls on them to work more [0] implying this would generate more tax revenue. Yet working has arguably never been less rewarding for workers: Germany currently has the 2nd highest tax wedge among all OECD nations (≈48% for a single worker, nearly 13 percentage points above the OECD average) [2][3]. This is compounded by demand-side welfare measures for low earners such as Wohngeld (housing benefit) and pension supplements like Mütterrente ("Mothers' pension"), creating a massive redistribution from working people to non-working people.

Meanwhile, the German government has spent years failing to fully prosecute the CumEx/CumCum tax fraud scandal, a scheme through which banks and investors systematically robbed the German state of an estimated €36 billion in tax revenues [4][5]. The contrast could not be more glaring: squeeze workers harder while letting financial fraudsters off easy.

I've handed over my resignation for my FAANG job and am looking for a job in other countries as I don't see myself building a future here.

- [0] Merz urges Germans to work more CGTN (Feb 2026): https://news.cgtn.com/news/2026-02-28/Merz-says-Germany-must...

- [1] EUFactCheck Merz's claim rated "Mostly False": https://eufactcheck.eu/factcheck/mostly-false-we-need-to-wor...

- [2] OECD Taxing Wages 2025 Germany: https://www.oecd.org/content/dam/oecd/en/publications/report...

- [3] Tax Foundation Tax Burden on Labor, OECD 2024: https://taxfoundation.org/data/all/global/tax-burden-on-labo...

- [4] CumEx-Files Wikipedia: https://en.wikipedia.org/wiki/CumEx-Files

- [5] Stanford GSB CumEx and CumCum Scandals: https://casi.stanford.edu/news/germanys-cumex-and-cumcum-fin...


The Mütterrente is a good idea, because it rewards having children which end up paying the pensions. This leads to sustainable population replacement. If anything, this should be expanded to fathers as well (I don't mind mothers receiving more).

What's not such a great idea is paying the Mütterrente retroactively. Pensioners can't have children retroactively to stock up the tax base, so all this does is increase the tax burden for the current generation, which discourages them even more from having children.


From the OECD report you cited:

> In Germany, the tax wedge for the average single worker decreased by 5 percentage points from 52.9% to 47.9% between 2000 and 2024. During the same period, the average tax wedge across the OECD decreased by 1.3 percentage points from 36.2% to 34.9%.

Sounds like Germany is getting better.


> I've handed over my resignation for my FAANG job and am looking for a job in other countries as I don't see myself building a future here.

You quit your job at a large US company because you do not see yourself building a future in Germany?


Not only, but also due to this. Relocation through switching teams is not possible. Compensation took a big hit due to dollar depreciation.

Worst case I'll end up being on unemployment insurance for a year, ~ 2800 EUR per month while travelling the world in my late twenties...

When property costs 1 million+ (the case in Berlin/Munich), financially it really doesn't matter whether I net 6500 EUR month working 50+ hours for FAANG or 4500 EUR working 35 hour weeks for a German corporate, even though the gross salary for the FAANG job is twice the German job.


I never understand those calculations. You can buy houses in Berlin from around 350k. Maybe not in the area you are looking at, but still. Something like 600 to 800 sqm of ground, a house around 100 sqm, quiet neighborhood, 10 min walking to the S-Bahn (i.e. my grandmother's neighbor's house that was sold a few months ago). Probably add 100k for renovation. But with 3.5k of savings a month (easily possible from 6.5k net), you'd have it paid off in ~10 years.


Close to S-Bahn but then more than 1h commute?


I understand that everything and everyone in Berlin is a 1hr commute.


Are you being serious or sarcastic?


That's what Berliners always say, isn't it?


IDK, I don't live in Berlin, that's why I asked.


Doesn't the unemployment office require you to constantly seek new jobs, prove it, and keep reporting back (to prevent exactly this)? We have that here in Switzerland. They do pay more here, but costs are massive compared to Germany, and the economy and society seem far more nimble than glacial Germany.

If that check is lacking, the German population's mentality is worse than I thought: a less efficient, more incompetent social-state-feed-lazy-me model, which is of course unsustainable. Ungood in global times, very ungood.

I have a friend quite high up in sales for BMW in Munich, and even despite his general politeness he... isn't happy with where the company and Germany overall are heading. He was a big proponent of the green deal back when everything was rosy; now he finally understands what a shoot-your-own-feet idiocy it was. The eastern wing of the EU has been screaming about this from the beginning, since it is by far the #1 issue they have with the EU, but nobody in Brussels or Berlin gave a nanofraction of a f#ck.


6500 EUR net in Berlin/Munich would equate to ~140k EUR gross. For a FAANG salary, considering that startups pay these figures for similar expertise, I would expect more. What level is that if you don't mind sharing?


Intermediate level, not Amazon. And indeed I've also observed this to be the case: startups pay such base salaries (e.g. GitLab and Neon did) for a similar level. But there aren't too many such openings.


Yes, there aren't and it's a quite competitive market too.


> When property costs 1 million+ (the case in Berlin/Munich), financially it really doesn't matter whether I net 6500 EUR month working 50+ hours for FAANG or 4500 EUR working 35 hour weeks for a German corporate

Financially in the first case you can afford a mortgage on said property (barely, with some help from parents/partner, maybe aiming for something slightly out of the very city centre), in the second case you cannot. Also, 4500 net for a 35-hr week is something you will not easily find in a German corporate: at that level, levels.fyi only lists non-German multinationals. Unless you become a contractor, or rise really high on the corporate ladder.

But I agree on the rest of your comment, and I have also left Germany because of the massive amount of money that the government feels entitled to take from the pockets of the so-called “top earners” (i.e. anybody making the equivalent of 70'000 $) while giving back barely anything in terms of services.


AI slop. The internet is dead.


Holy hell you’re right, scrolling through the post history of this “person” is crazy wtf.


Coming from Elixir, I gave Gleam a try for a couple of days over the holidays. Reasons I decided not to pursue:

- No ad-hoc polymorphism (apart from function overloading IIRC) means no standard way of defining how things work. There are not many conventions yet in place so you won’t know if your library supports eg JSON deserialization for its types

- Coupled with a lack of macros, this means you have to implement even the most basic functionality, like JSON (de)serialization, yourself - even for the stdlib's and the most popular libs' structs

- When looking into how to access the file system, I learned the stdlib does not provide fs access, as the API couldn't be shared between the JS and Erlang targets. The most popular fs package for the Erlang target didn't look high quality at all. Something so basic and important.

- This made me realise that in contrast to Elixir, which not only runs on the BEAM („Erlang“) but also has seamless Erlang interop, Gleam doesn't have access to most of the Erlang/Elixir ecosystem out of the box.

There are many things I liked, like the algebraic data types, the Result and Option types, pattern matching with destructuring. Which made me realize what I really want is Rust. My ways lead to Rust, I guess.


> Gleam doesn’t have access to most of the Erlang / Elixir ecosystem out of the box.

Gleam has access to the entire ecosystem out of the box, because all languages on the BEAM interoperate with one another. For example, here's a function inside the module for gleam_otp's static supervisor:

    @external(erlang, "supervisor", "start_link")
    fn erlang_start_link(
      module: Atom,
      args: #(ErlangStartFlags, List(ErlangChildSpec)),
    ) -> Result(Pid, Dynamic)

As another example, I chose a package[0] at random that implements bindings to the Elixir package blake2[1].

    @external(erlang, "Elixir.Blake2", "hash2b")
    pub fn hash2b(message m: BitArray, output_size output_size: Int) -> BitArray

    @external(erlang, "Elixir.Blake2", "hash2b")
    pub fn hash2b_secret(
      message m: BitArray,
      output_size output_size: Int,
      secret_key secret_key: BitArray,
    ) -> BitArray

It's ok if you don't vibe with Gleam – no ad-hoc poly and no macros are usually dealbreakers for certain types of developer – but it's wrong to say you can't lean on the wider BEAM ecosystem!

[0]: https://github.com/sisou/nimiq_gleam/blob/main/gblake2/src/g...

[1]: https://hex.pm/packages/blake2


Isn’t this proof of my point? How does having to write „@external“ annotations by hand not contradict being usable „out of the box“?

Hayleigh, when I asked on the discord about how to solve my JSON problem in order to get structured logging working, you replied that I’m the first one to ask about this.

Now reading this: > It's ok if you don't vibe with Gleam – no ad-hoc poly and no macros are usually dealbreakers for certain types of developer

That certainly makes this feel even more like gatekeeping.


I don't think Hayleigh was trying to gatekeep, just noting that some developers prefer features that Gleam intentionally omits.

As for the @external annotations, I think you're both right to a degree. Perhaps we can all agree to say: Gleam can use most libraries from Erlang/Elixir, but requires some minimal type-annotated FFI bindings to do so (otherwise it couldn't claim to be a type-safe language).


How does it contradict it? Without any modification/installation you can interop with Erlang/JavaScript. How is that not out-of-the-box usability of the Erlang/JS ecosystem? The syntax isn't as seamless as Elixir's, but we need a way to tell Gleam what types are being passed around.

Why do you feel like a gatekeeper? Your opinion is valid, it's just that the interop statement was wrong.


That's FFI bindings. I need to provide the function signature of every API, because Erlang isn't statically typed. It's okay if some library provides it (like the linked package), but I don't want to write this by hand if I can avoid it. And it's definitely not out of the box; someone has to write the bindings for it to work.

It would be different if I didn't have to write bindings and Gleam integrated automatically with foreign APIs. For Erlang that's probably not possible, but for the JavaScript ecosystem it could maybe make use of TypeScript signatures (it would be very hard though).


Yeah, it's there out of the box but it's certainly not seamless. For an Elixir dev, it is more friction than you're used to. It is the cost of static types.


This is the same as Elixir, you need to specify what Erlang function to use in that language if you want to use Erlang code. The only difference is that Gleam has a more verbose syntax for it.


In Elixir you just call the Erlang function directly. It's basically the same as calling an Elixir function, just with a different naming convention.

In Gleam, you first have to declare the function type and THEN you can call the function directly.

This is probably the lightest way you can bridge between statically and dynamically typed languages, but it's not the same as Elixir.
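To make the difference concrete, here's a side-by-side using Erlang's real crypto:hash/2 (the Gleam Atom/BitArray signature is my own illustrative binding, not from any published package):

    # Elixir: Erlang modules are just atoms, so you call the function directly
    :crypto.hash(:sha256, "hello")

    // Gleam: the same call first needs a typed @external declaration
    @external(erlang, "crypto", "hash")
    fn hash(algorithm: Atom, data: BitArray) -> BitArray

Same runtime call in both cases; Gleam just makes you state the types up front.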


Sorry, I've been unclear.

The runtime behaviour and cost of calling an Erlang function is the same in Elixir and Gleam, however the syntax is more verbose in Gleam as it asks for type information, while in Elixir this is optional.


I'm a bit torn on ad-hoc polymorphism. You can definitely do cool things with it. But, as others have pointed out, it does reduce type safety:

https://cs-syd.eu/posts/2023-08-25-ad-hoc-polymorphism-erode...


The same point holds of interfaces. And it’s not clear what the alternative is. No type system I’m aware of would force you to change all occurrences of this business logic pattern, with or without ad hoc polymorphism.

But at least ad hoc polymorphism lets you search for all instances of that business logic easily.


ML languages have a "types, modules, types-of-modules, and functors" approach to ad-hoc poly. It's a bit strange compared to what other languages do. I am wondering whether it's ever been seen outside of SML and OCaml.

For JSON deserialisation, you would declare a module-type called "JSON-deserialiser", and you would define a bunch of modules of that module-type.

The unusual thing is that a JSON-deserialiser would no longer be tied to a type (read: type, not module-type). Types in ML-like languages don't have any structure at all. I suppose you can now define many different JSON-deserialisers for the same type?


The article provides a contrived example and doesn't prove that ad-hoc polymorphism reduces type-safety. Even when `Maybe [a]` is being folded via `Foldable f` the claimed type-safety isn't reduced, it's the context of the folding that's being changed from `[a]` to `Maybe a`, and everything is type-safe. Secondly, if you really want to distinguish between the empty list and disabled allow-lists within your type-system you do define your own data type with that representation, and you don't declare it foldable, because the folding algebra doesn't make sense for any practical use-case of the disabled allow-lists. The language actually provides you with the means to reduce evaluation contexts your types can be part of.


I’ve been doing Elixir for 9 years, 5 professionally. Nobody cares about ad-hoc polymorphism. The community doesn’t use protocols except “for data”. Whatever that means. Global singleton processes everywhere. I’m really discouraged by the practices I observe but it’s the most enjoyable language for me still.


>I’ve been doing Elixir for 9 years, 5 professionally. Nobody cares about ad-hoc polymorphism.

That’s true for Elixir as practiced, but it’s the wrong conclusion for Gleam.

Elixir doesn’t care about ad-hoc polymorphism because in Elixir it’s a runtime convention, not a compile-time guarantee. Protocols don’t give you universal quantification, exhaustiveness, coherence, or refactoring safety. Missing cases become production crashes, not compiler errors. So teams sensibly avoid building architecture on top of them.

In a statically typed language, ad-hoc polymorphism is a different beast entirely. It’s one of the primary ways you encode abstraction safely. The compiler enforces that implementations exist, pushes back on missing cases, and lets you refactor without widening everything into explicit pattern matches.

That’s exactly why people who like static types do care about it.

Pointing to Elixir community norms and concluding “nobody cares” is mixing up ecosystem habits with language design. Elixir doesn’t reward those abstractions, so people don’t use them. Gleam is explicitly targeting people who want the compiler to carry more of the burden.

If Gleam is “Elixir with types,” fine, lack of ad-hoc polymorphism is consistent. If it’s “a serious statically typed language on the BEAM,” then the absence is a real limitation, not bikeshedding.

Static types aren’t about catching typos. They’re about moving failure from runtime to compile time. Ad-hoc polymorphism is one of the main tools for doing that without collapsing everything into concrete types.

That’s why the criticism exists, regardless of how Elixir codebases look today.
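As a concrete illustration of the runtime-vs-compile-time point, a minimal (hypothetical) Elixir protocol:

    defprotocol Encoder do
      def encode(value)
    end

    defimpl Encoder, for: Integer do
      def encode(n), do: Integer.to_string(n)
    end

    Encoder.encode(42)    # "42"
    Encoder.encode(3.14)  # raises Protocol.UndefinedError -- at runtime

A typeclass-style system would reject the second call at compile time; Elixir only finds out when the code runs.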


Well, for the specific example I gave (JSON serialization), you certainly do care whether Jason.Encoder is implemented for a struct.
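For reference, Jason's opt-in mechanism is @derive on the struct (the User struct here is hypothetical):

    defmodule User do
      @derive Jason.Encoder
      defstruct [:name, :email]
    end

    Jason.encode!(%User{name: "Ada", email: "ada@example.com"})
    # without the @derive line, encode! raises Protocol.UndefinedError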


Yes, I just ranted, sorry. I share your view about Gleam.


IMHO this is an education problem.


A problem which plagues 90% of people? How do we overcome it?


It's an education problem on two fronts. People inside the ecosystem need to know about it. And people too deep in the Elixir ecosystem don't know how ad-hoc polymorphism is supposed to be used in a statically typed language.

Both overcome it by admitting they don't know and need to learn.


I think this is just an extension of "Fuck you money"


I think you're very close to being right...

But I think "Fuck you money" implies, "I honestly don't have to worry about money, ever again."

Now, we all have different definitions for that, but the kind of thing I was talking about is definitely not "Fuck you money," to me.

I think if I had "Fuck you money," my best friends and close family would all have their medical debts paid off. I think my parents and in-laws would have their mortgages paid off.


That is what they call "fuck me money". As in: fuck me, I'll just pay it.

FUM is the freedom to walk away. FMM is the power to make your own terms.


It’s more than just money, it’s how you set up your life to be resilient to contingencies. For example finding a compatible life partner. For example finding happiness without lifestyle inflation and breaking free from the hedonic treadmill. Or perhaps having a good lifestyle business for some people. Or having extended family support nearby. I call these things unfuckwithability. Money is a big part of it, but may not be the biggest missing piece for many people.


Or you ask Gemini to do this for you (timestamps were removed when formatting into markdown)

Based on the podcast "Microsoft: Powering Israel’s Genocide? | Hossam Nasr," here are the main human rights issues alleged against Microsoft:

1. Complicity in Military Operations

- The podcast claims Microsoft is a key tech provider for the Israeli military, specifically using the Azure cloud platform to run combat and intelligence activities.

- It alleges Microsoft sells AI services (including OpenAI models) to military units like "Mamram," which are linked to automated targeting systems used to accelerate lethal strikes.

2. Surveillance and Infrastructure

- Microsoft is accused of hosting roughly 13.6 petabytes of data used for mass surveillance.

- The "Al-Munassiq" app, used by Palestinians to manage movement permits, reportedly runs on Azure and is described as a tool for collecting vast amounts of surveillance data.

- The company reportedly sells technology directly to illegal settlements in the West Bank.

3. Internal Labor Rights & Suppression

- The speaker alleges a double standard and discrimination against Palestinian and Arab employees.

- Microsoft is accused of "weaponizing" HR policies to fire workers (including the podcast guest) for organizing vigils or protesting the company's military contracts.

4. Historical Context

- The discussion references Microsoft's history of providing tech to ICE (Immigration and Customs Enforcement) in the US as part of a broader pattern of supporting "systems of oppression."

Source: https://www.youtube.com/watch?v=A95asBbCNZo

Prompt: “ According to this podcast: https://www.youtube.com/watch?v=A95asBbCNZo

What are the main human rights issues of Microsoft?”

Used Gemini 3 (Thinking) via WebUI


> It's like they are building tech for made up in corporate conference room use cases.

Totally felt the same during the live-translation demo, when these two casual business folks were talking about "the client will love the new strategy". Dystopian corporate gibberish.


The lack of authentic examples diminishes the impressive tech. Great design is all about function. Why is it so hard to show how this would actually be used in the real world?


I've been writing Elixir on-and-off since 2017 for personal projects and since 2024 professionally, at a big tech company.

The two experiences couldn't be more different. While I loved the great development speed for my personal projects, where I am writing more code than reading it, joining an existing project needs the opposite, reading more code than writing it. And I can only repeat what many people say, dynamic typing makes this so much more difficult. For most code changes, I am not 100% certain which code paths are affected without digging a lot through the code base. I've introduced bugs which would have been caught with static typing.

So in conclusion, I'm bullish on Gleam, but also on other static languages embracing the cooperative green-thread/actor model of concurrency, like Kotlin (with the JVM's virtual threads). (On another note, I personally also dislike Phoenix LiveView and the general tendency of focusing on ambiguous concepts like Phoenix Contexts and other Domain-Driven Design stuff.)


I worked a year or so at an Elixir shop, and this mirrors my experience. I had to navigate to call sites to understand what I was being passed and type hints were not sufficient. Dynamic typing fails larger orgs/teams/codebases.

Fun to develop and solo administer. Small teams with a well known codebase can do amazing things. I work at orgs with multiple teams and new hires who don't know the codebase yet.

For me, the sweet spot is Go.


Out of curiosity, at the tech company, did the team use typespecs and dialyzer?


The lack of type hints for function parameters is definitely a productivity killer (in my case).


There are type hints for function parameters. With some care and guards, Dialyzer can be somewhat helpful.

What actually drove me nuts was the absence of guards and of meaningful static analysis on return values. Even in my small but nontrivial personal codebase I had to debug mysterious data mismatches after every refactor. I ended up with monad-like value checking before abandoning Elixir for my compiler.


Fully agree with this based on similar experiences. IME most devs hired without previous Elixir/Phoenix experience don’t end up liking the tech stack very much, even though they become productive quite quickly and don’t struggle too much with Elixir. A lot of Elixir/Phoenix fans make the mistake of thinking that everyone is going to love it as much as they do once they get up to speed.


Isn't there some kind of optional typing in Elixir?

What you're describing are the same uncertainties I used to have writing PHP a long time ago, but since using optional types and the PHPStan checker, it kind of serves as a compiler pass that raises those issues. The benefit is that I can still be lazy and not type out a program when I prototype the problem on a first pass.


>Isn't there some kind of optional typing in Elixir?

It’s in the works and recent versions of the compiler already catch some type errors at compile time, but it’s nothing remotely close to what you get from Typescript, or from any statically typed language like Go, Rust, Java, etc.


> Isn't there some kind of optional typing in Elixir?

Sort of. Developers provide typespecs, which act as hints, and use Dialyzer to find issues before runtime.
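A minimal sketch of what that looks like (hypothetical function; Dialyzer reads the @spec, it is not enforced at runtime):

    @spec add(integer(), integer()) :: integer()
    def add(a, b), do: a + b

    # add(1, "two") still compiles and only fails at runtime;
    # Dialyzer (not the compiler) flags the spec mismatch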


This is why Gleam exists.


Also, Gleam fully supports Elixir packages.


I am now super curious which big tech company is betting on Elixir?


Nubank, Latin America's most valuable bank, relies heavily on Elixir and even acquired Plataformatec, the company where Elixir was created.

A blog post by them about this: https://building.nubank.com/tech-perspectives-behind-nubanks...


Wasn't Nubank the Clojure posterchild not long ago?

Are they moving from Clojure to Elixir, or adding it?


The referenced post is from 2020, and nubank still posts clojure content and sponsors the big clojure conference, so I’d be shocked if they were dropping clojure.

Their tech stack is probably enormous, it wouldn’t surprise me if they’re using both for different things


Also the referenced post is about them acquiring plataformatec, not about using elixir. Jose Valim (the creator of elixir) left plataformatec after the nubank acquisition in order to continue developing elixir, I've never heard of nubank using elixir, afaik they're solidly a clojure shop with no plans on changing.


It's just an internal I/O-bound project, where BEAM concurrency makes lots of sense. It grew from an engineer's side project because it was useful and working well; it's not a company-wide effort to bet on Elixir.


Having worked as Cloud Solution Architect at Microsoft Germany/Azure, let me tell you:

Nope, this gap cannot be closed by any US company alone due to the US Patriot Act, which forces any US company (including e.g. a German subsidiary) to allow access to all data for national security purposes.


Having worked at AWS: no, it's a separate partition under a separate legal entity, and the EU framework is specifically designed to counter the Patriot Act, the CLOUD Act and the like. It's gonna be similar to AWS China, and potentially more restrictive in some senses. That's leaving aside regions we're not allowed to talk about.


> Having worked at AWS

This should have been a disclaimer in your first message, when you compared AWS with UpCloud.

TBH, I would not trust AWS with countering the Patriot Act.


> This should be disclaimer at your first message when you compared AWS with UPCloud.

Fair, my bad. Still obviously misleading.

1. DB instances "starting at $144": I have a $63 one in my basket at the moment, and Aurora Serverless charges for resources actually used, so it can be cheaper depending on the workload.

2. "$82.8 /mo" for a 2 core 8GB server is actually just under 50.

3. European DC locations: 8 for both. Unsure what UpCloud means by them here[0]; they look like actual, individual DCs, but AWS has 8 European regions. Each region normally has 3 AZs, which are physically separate DCs (in proximity or not) and can each be composed of multiple DCs. Plus there are Local Zones attached to certain regions, each with at least one DC (and there are 11 of those). So the AWS number is certainly over 30 if we compare apples to apples.

The rest I don't have time to dive in, or are just opinions (certifications needed for proficiency? really?)

>TBH, I would not trust AWS with countering the Patriot Act.

AWS China wouldn't have happened if they didn't offer enough safeguards. Complying with the Patriot Act would guarantee enormous fines for AWS in the EU, so I'm sure legal and finance did their homework for AWS not to end up between a rock and a hard place.

[0] https://upcloud.com/data-centers


> AWS China wouldn't have happened if they didn't offer enough safeguards.

AWS China vs. AWS EU: data centers in China are managed by Chinese companies, whereas DCs in the EU are managed by US companies.

From a regulatory perspective, it's two different worlds. The Patriot Act can happen in the EU, not in China.

This is why GDPR does allow that EU user data is transferred to non-EU countries, but not to the USA.[0]

Furthermore, a discernible trend has emerged: owing to inadequate privacy regulations and the Trump administration's suboptimal geopolitical strategy toward the EU, the EU is actively seeking better cloud services [1].

[0] https://gdpr-info.eu/issues/third-countries/

[1] https://www.wired.com/story/trump-us-cloud-services-europe/


From the thread:

  @discardableResult
  public init(priority: TaskPriority? = nil,
      operation: sending @escaping @isolated(any) () async -> Success)
> Take just the operation argument. It's a closure that is sending, escaping, declares any isolation (I don't understand this part very well yet), it's async and it returns Success. That's a whole bunch of facts - 7 to be precise - you need to know about just one parameter of this constructor. I understand that all 7 make sense and there's nothing you can do about it within the current strict concurrency model.

