
I think the "O" in ORM doesn't really have much to do with OOP. Many ORMs could just as well be called TRMs ("type-relational mappers"). "Object" is one of those words so overloaded with different meanings that I try to avoid using it.

Event sourcing is pretty complex; I'm hesitant to use it unless I have a clear and specific reason for it. It's not mutually exclusive with an ORM either; I've seen people use an ORM to query and even build their projections, for example.



I don't really disagree with anything you say.

But I think "event sourcing is pretty complex" may partly be because programming languages and ORMs have the object-oriented mindset as the "default" usage pattern. If we had 30 years of tooling evolution and education exposure for event sourcing, I don't think it would be "complex".

(Also, I mean event sourcing as in "how does one model data in the SQL database", not whether one is doing event-driven architectures with distribution, async, event brokers etc. -- that IS adding a lot of complexity, but it's something else.)


I'm not so sure about that. Either:

1. You have a table of records; to change something, it's one "update record".

2. You have an append-only log of everything that happens, plus a table of records representing the current state after every log item is played back, for efficient querying. To change something you do "insert log + update projection_record".
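A minimal sketch of option 2, using Python's built-in sqlite3. The table names (`events`, `accounts`) and the deposit example are hypothetical, just to make the "insert log + update projection_record" pattern concrete:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- the append-only log of everything that happens
    CREATE TABLE events (
        id      INTEGER PRIMARY KEY AUTOINCREMENT,
        account TEXT NOT NULL,
        kind    TEXT NOT NULL,
        amount  INTEGER NOT NULL
    );
    -- the projection: current state for efficient querying
    CREATE TABLE accounts (
        account TEXT PRIMARY KEY,
        balance INTEGER NOT NULL
    );
""")

def deposit(account: str, amount: int) -> None:
    # "insert log + update projection_record", kept consistent
    # by doing both in one transaction
    with con:
        con.execute(
            "INSERT INTO events (account, kind, amount) VALUES (?, 'deposit', ?)",
            (account, amount))
        con.execute("""
            INSERT INTO accounts (account, balance) VALUES (?, ?)
            ON CONFLICT(account) DO UPDATE SET balance = balance + excluded.balance
        """, (account, amount))

deposit("alice", 100)
deposit("alice", 50)
print(con.execute("SELECT balance FROM accounts WHERE account = 'alice'").fetchone()[0])  # 150
print(con.execute("SELECT COUNT(*) FROM events").fetchone()[0])  # 2
```

Option 1 would be just the `accounts` table and a single `UPDATE`; the extra moving parts (the log, the transaction, keeping the projection in sync) are the complexity being discussed.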

It seems to me the second item is fundamentally more complex, no matter what you do. You can abstract some of that away with good tooling, but things like migrations will probably forever be a right pain with event sourcing.

Sometimes all of that is worth it, but often it's not. That's probably why it's not the default way to write applications.


When I said 30 years of tooling development, I didn't mean a bunch of scripts or libraries; it would include things such as databases and programming languages. Things like automating database migrations certainly can be fixed in that timespan -- and probably WILL be fixed in the coming 30 years.

The thing about "update projection_record" is that any business logic in there can be written in a fully declarative/functional style, making it easier to develop, reducing bugs, easing evolution, etc.

Sure, if you actually have to program "insert log + update projection_record" yourself, it's more fragile and complex. But then you are basically doing things the compiler+database should have been doing for you transparently, if it were the kind of event-sourced tooling I describe.

If you can just say in one part of the program "insert log"..

..and independently of that say "I need to know this to make a decision, it can be computed from the logs like this" -- and change those questions as you like, and it is the job of some combination of DSL/declarations/compiler/database to make sure that the right projections are maintained to efficiently evaluate the expression, then I don't think that is more complex.
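The "it can be computed from the logs like this" part is essentially a pure fold over the event log. A small sketch, with a hypothetical event shape and an `apply` function standing in for the declaratively-specified projection logic:

```python
from functools import reduce

# Hypothetical event log; in the tooling described above, the
# database/compiler would maintain the projection incrementally for you.
events = [
    {"account": "alice", "kind": "deposit", "amount": 100},
    {"account": "alice", "kind": "withdraw", "amount": 30},
]

def apply(state: dict, ev: dict) -> dict:
    # Pure function: (state, event) -> new state. No mutation of the
    # input, so it's easy to test, change, and replay from scratch.
    sign = 1 if ev["kind"] == "deposit" else -1
    new = dict(state)
    new[ev["account"]] = new.get(ev["account"], 0) + sign * ev["amount"]
    return new

# The projection is just a fold over the log
balances = reduce(apply, events, {})
print(balances)  # {'alice': 70}
```

Changing the question ("I need to know X instead") means swapping in a different fold function and replaying; keeping that replay efficient via maintained projections is exactly what systems like Materialize automate.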

This is the main idea in the Out of the Tar Pit paper I linked to. Also, e.g., the Materialize database has some ideas like this (it can't be used in many situations; it's just an example of a database built around this kind of idea).





