The web IS the duck typing equivalent at the network boundary! That’s why plenty of alternative service providers can and do implement, e.g., object storage APIs that work with AWS S3 client libraries, or LLM APIs that work with Claude Code. The reasons these use cases are standardized (while others remain fragmented) are economic, not technical (lock-in isn’t as profitable for these alt services as raw adoption), and so a purely technical solution like this is unlikely to address the crux of the problem.
Even purely on the technical level, this seemingly hasn't internalized the lessons of https://xkcd.com/927/
On the web being the duck typing equivalent. That's ad-hoc duck typing at the wire format level. S3-compatible APIs exist because companies reverse-engineered S3's surface and matched it. You have to replicate the paths, headers, and response shapes closely enough that the client libraries can't tell the difference, or they break. There's no spec that declares "I satisfy the S3 interface" and no way for a tool to verify compatibility without running requests.
OBI operates at the contract level instead. Two services with different wire formats can satisfy the same interface as long as their operation shapes match. The binding executors handle the wire differences. That's the duck typing analogy. Match the shape, not the implementation.
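A minimal sketch of what "match the shape, not the implementation" could mean. The field names and the compatibility rule here are made up for illustration (they are not OBI's actual schema); the assumption is that extra inputs on a candidate operation are optional:

```python
# Hypothetical sketch: two services with different wire formats can satisfy
# the same interface if their operation shapes line up. Field names and the
# rule below are illustrative, not OBI's actual spec.

def satisfies(candidate: dict, interface: dict) -> bool:
    """A candidate satisfies an interface operation if it accepts at least
    the interface's inputs and returns at least its outputs."""
    return (interface["input"].items() <= candidate["input"].items()
            and interface["output"].items() <= candidate["output"].items())

# One REST-backed and one gRPC-backed operation, same logical shape.
rest_tasks_create = {"input": {"title": "string"}, "output": {"id": "string"}}
grpc_tasks_create = {"input": {"title": "string", "priority": "int"},
                     "output": {"id": "string"}}

iface = {"input": {"title": "string"}, "output": {"id": "string"}}

print(satisfies(rest_tasks_create, iface))  # True
print(satisfies(grpc_tasks_create, iface))  # True: extra input assumed optional
```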
You're right that the drivers are partly economic. But technical standards and economics aren't isolated from each other. Terraform, Kubernetes, and OpenAPI itself are technical solutions that enabled economic behavior that wasn't viable before them. Lowering the cost of interop changes what's economically rational to pursue.
On the xkcd, the post addresses this. OBI structurally can't replace OpenAPI, gRPC, or MCP. An OBI without sources and bindings pointing to them is an unbound contract, not an actionable interface. The dependency runs one way. Those specs are inputs, not competitors.
None of the non-AWS copies of S3 implement exactly the S3 protocol. And even if they did, the next update -- coming at any arbitrary time -- would invalidate the full compatibility.
Fair correction. Wire-level copies don't even fully match, and upstream changes can invalidate whatever match exists. That's the weakness of wire-level duck typing. Contract-level compatibility doesn't fully solve the upstream changes problem (nothing does), but when you regenerate the OBI from an updated source spec, a compatibility check tells you structurally what changed. Wire-level clones find out by breaking.
The flip side is worth noting too. If you own the API and want to preserve contract compatibility, OBI lets you change your implementation (paths, payload shapes, even protocols) behind the same contract. Transforms bridge shape differences. Binding swaps change protocols. Consumers targeting the interface don't need to care.
Ad-hoc duck typing—“if it looks like a duck[…]”—is the only kind that exists! The point of the term “duck typing” is that it doesn’t require explicit declaration of the contract by implementers, it’s not synonymous with polymorphism or with interfaces in general. Haskell type classes are not duck typed, nor are Swift protocols; Go interfaces are.
OBI is a boilerplate generator for the adapter pattern for service communication; its contracts are just another set of their own paths, payload shapes, and protocols. The distinction between “protocol” and “contract” in this context is nonsense.
Pushing back on the terminology point. Duck typing is defined by how compatibility is determined, not by the absence of declaration. In Swift and Haskell, declaration is what makes a type conform. In Go, structure does, whether you add an optional assertion or not. OBI works the same way. Shapes determine compatibility. Roles and satisfies are optional declarations for explicitness, not a different mechanism.
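In Python terms, the distinction looks like this: a `typing.Protocol` is satisfied by shape alone, with no declaration anywhere on the implementing class (a sketch, not anything OBI-specific):

```python
from typing import Protocol, runtime_checkable

# Structural conformance, Go-interface style: Duck never declares that it
# implements Quacker, yet it satisfies it by shape alone.

@runtime_checkable
class Quacker(Protocol):
    def quack(self) -> str: ...

class Duck:  # no mention of Quacker anywhere
    def quack(self) -> str:
        return "quack"

class Dog:
    def bark(self) -> str:
        return "woof"

print(isinstance(Duck(), Quacker))  # True: the structure decides
print(isinstance(Dog(), Quacker))   # False
```

(Note that `runtime_checkable` only checks method presence at runtime; a static checker would also compare signatures.)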
Duck typing also requires being able to see a duck. Go has methods you can introspect. Raw REST doesn't. OpenAPI and gRPC reflection give you a structural surface, and OBI unifies those into one comparable form above the transport. What you're calling ad-hoc duck typing at the network boundary is humans reading docs and guessing whether the shapes match. That's a weaker version of the same idea, without the mechanical verification that makes duck typing useful. On "just another set of paths and protocols," OBI operation keys are transport-neutral. `tasks.create` has no route baked in. Bindings say how to reach it: REST at POST /tasks, gRPC at TaskService.Create, MCP as a tool. Same operation, multiple bindings, one identifier.
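To make the "same operation, multiple bindings, one identifier" claim concrete, here's a toy binding table. The structure is invented for illustration; OBI's actual binding schema may differ:

```python
# Hypothetical sketch of transport-neutral operation keys: one logical
# operation, several ways to reach it. The schema here is illustrative.

bindings = {
    "tasks.create": {
        "rest": {"method": "POST", "path": "/tasks"},
        "grpc": {"service": "TaskService", "rpc": "Create"},
        "mcp":  {"tool": "create_task"},
    },
}

def resolve(operation: str, transport: str) -> dict:
    """Look up how to reach one logical operation over a given transport."""
    return bindings[operation][transport]

print(resolve("tasks.create", "rest"))  # {'method': 'POST', 'path': '/tasks'}
print(resolve("tasks.create", "grpc"))  # {'service': 'TaskService', 'rpc': 'Create'}
```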
You’re still not getting it. Duck typing comes from the phrase “if it looks like a duck, and quacks like a duck, then it must be a duck”. The Wikipedia page contrasts it with nominative typing that requires a declaration, and calls out that duck typing does not need the adapter pattern. What you are calling duck typing is just “interfaces”.
Also, “tasks.create” is morally a route. E.g. gRPC has web transports and autogenerated CLIs too; even without those, there’s no particular reason why the OBI “client layer” couldn’t just be, say, gRPC under the hood, with the OBI specs just being used to codegen adapters from the source protocol to gRPC. It would then immediately have a much more widely understood and supported client layer, with better performance, while remaining as simple to implement as the adapters to this new custom OBI runtime protocol.
You're right that what OBI does is structural typing or just interfaces, not duck typing. Transforms and bindings do use the adapter pattern. The critique lands and I can admit it. The duck typing framing was rhetorically appealing but imprecise.
On the gRPC-as-client-layer point, I want to make sure I understand. You're suggesting that between the OBI client SDK and the source service, there should be a gRPC layer, with generated gRPC adapters fronting each source? Or something else?
Worth noting either way: the spec doesn't dictate implementation. It defines operation contracts, binding sources, and the executor role. How an executor actually reaches a service is left open, so gRPC-based routing is a valid strategy within the spec regardless.
Thanks for sticking with this. This is exactly the kind of discussion I was hoping for. I really do appreciate it.
My gRPC point is that it already gives you a language for describing interfaces and support for generating client types from it, and already abstracts the connection management[1]. If OBI just generated a gRPC ClientConn implementation to map the Invoke calls to eg REST paths, it’d inherit the large existing gRPC client ecosystem. That ClientConn interface can already technically capture what OBI calls binding executors. People don’t do this though because it’s not worth it.
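The shape of that argument, reduced to a toy: a single generic invoke call routed to REST under the hood. All names here (`RestChannel`, the route table) are made up; this is not the real gRPC `ClientConn` API, just the adapter pattern it describes:

```python
# Hedged sketch: a generic invoke(method, request) mapped to HTTP routes,
# the way a custom gRPC ClientConn could front a REST source. Names are
# invented for illustration.

class RestChannel:
    """Pretend 'channel': maps fully-qualified method names to HTTP routes."""
    def __init__(self, routes: dict):
        self.routes = routes

    def invoke(self, method: str, request: dict) -> dict:
        verb, path = self.routes[method]
        # A real implementation would issue the HTTP request here.
        return {"sent": (verb, path), "body": request}

chan = RestChannel({"TaskService/Create": ("POST", "/tasks")})
print(chan.invoke("TaskService/Create", {"title": "demo"}))
```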
Thanks for the clarification. That's a valid implementation strategy for client consumers that want a gRPC feel. The spec doesn't require any particular client pattern (what's on the site is guidance, not normative), so an OBI-conformant toolchain could be built that way. I went with a less prescriptive approach, but that's a preference, not a mandate; I think it's the right choice precisely because it leaves the door open to alternative client implementation patterns like the one you're proposing. I appreciate you sticking with this across the thread.
If you're open to it, I'd be interested in staying in touch as OBI evolves. I know you're not sold on the project, but critical voices willing to engage constructively for this long are rare and valuable regardless. If you ever want to poke at something or tell me where you think it's going wrong, I'm at https://x.com/clevengermatt, and there's the GitHub discussions.
The functional programming take is that “the result of foobinade-ing a and b” IS “foobinade applied to two of its arguments”. The application is not some syntactic pun or homonym that can refer to two different meanings—those are the same meaning.
Let us postulate two functions. One is named foobinade, and it takes three arguments. The other is named foobinadd, and it only takes two arguments. (Yes, I know, shoot anybody who actually names things that way.)
When someone writes

    f = foobinade a b
    g = foobinadd c d
there is no confusion to the compiler. The problem is the reader. Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result.
Whereas with explicit syntax, the parentheses say what the author thinks they're doing, and the compiler will yell at them if they get it wrong.
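In Python, where partial application is explicit via `functools.partial`, the same pair looks like this (using the thread's deliberately confusable names):

```python
from functools import partial

# The readability problem in miniature: without the signatures in front of
# you, you can't tell from the call sites that f is still a function while
# g is an actual result. Python at least makes the partial explicit.

def foobinade(a, b, c):
    return a + b + c

def foobinadd(c, d):
    return c + d

f = partial(foobinade, 1, 2)  # still a function, waiting for one argument
g = foobinadd(3, 4)           # an actual result

print(callable(f), callable(g))  # True False
print(f(10), g)                  # 13 7
```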
> Unless you have the signatures of foobinade and foobinadd memorized, you have no way to tell that f is a curried function and g is an actual result.
Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results". Or rather, you never have a result that isn't a function; `0` and `lambda: 0` (in Python syntax) are the same thing.
It does, of course, turn out that for many people this isn't a natural way of thinking about things.
> Yes, but the exact FP idea here is that this distinction is meaningless; that curried functions are "actual results".
Everyone knows that. At least everyone who would click a post titled "A case against currying." The article's author clearly knows that too.
That's not the point. The point is that this distinction is very meaningful in practice, as many functions are only meant to be used in one way. It's extremely rare that you need to (printf "%d %d" foo). The extra freedom provided by currying is useful, but it should be opt-in.
Just because two things are fundamentally equivalent, it doesn't mean it's useless to distinguish them. Mathematics is the art of giving the same name to different things; and engineering is the art of giving different names to the same thing depending on the context.
Not when a language embraces currying fully and then you find that it’s used all the fucking time.
It’s really as simple as that: a language makes the currying syntax easy, and programmers use it all the time; a language disallows currying or makes the currying syntax unwieldy, and programmers avoid it.
But arguably your intent would be much more clear with something like `map (printf "%d %d" m _) ns` or a lambda.
I don't think parent is saying that partial application is bad, far from it. But to a reader it is valuable information whether it's partial or full application.
Not really. When reading `iter (printf "%d %d" m) ns`, I am likely to read it in three steps:

- `iter`: this is a side effect on a collection
- `(printf`: ok, this is just printing; I don't care about what is printed, let's skip to the `)`
- `ns`: ok, this is the collection being printed

Notice that having a lambda or a partial-application `_` would only add noise here.
> But to a reader it is valuable information whether it's partial or full application.
This can be valuable information in some contexts, but in a functional language, functions are values. Thus a "partial application" (in terms of closure construction) might be better read as a full application, because the main type of concern in the current context is a function type.
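The "functions are values" point, in Python terms: a partial application is an ordinary value whose type happens to be a function, and a context like `map` consumes it as a complete value:

```python
from functools import partial

# A partial application is a perfectly ordinary value; in a context that
# expects a function (like map), that's the relevant fact about it.

def fmt(prefix, x):
    return f"{prefix}{x}"

labelled = partial(fmt, "item-")       # closure construction...
print(list(map(labelled, [1, 2, 3])))  # ...consumed as a full value by map
# ['item-1', 'item-2', 'item-3']
```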
Fine, it's a regular type. It's still not the type I think it is. If it's an Int -> Int when I think it's an Int, that's still a problem, no matter how much Int -> Int is an "actual result".
And the compiler immediately tells you that you are wrong: your type annotation does not unify with the compiler’s inferred type.
And if you think this is verbose, well, many traditional imperative languages like C have no type deduction, and you need to provide a type for every variable anyway.
I spent the last three years on the receiving end of mass quantities of code written by people who knew what they were writing but didn't do an adequate job of communicating it to readers who didn't already know everything.
What you say is true. And it works, if you're the author and are having trouble keeping it all straight. It doesn't work if the author didn't do it and you are the reader, though.
And that's the more common case, for two reasons. First, code is read more often than it's written. Second, when you're the author, you probably already have it in your head how many parameters foobinade takes when you call it, but when you're the reader, you have to go consult the definition to find out.
But if I was willing to do it, I could go through and annotate the variables like that, and have the compiler tell me everything I got wrong. It would be tedious, but I could do it.
Doesn’t that just imply that your tooling is inadequate? In LINQPad (and, I assume, VS, though I haven’t done it in a while), when you hover over a “var” declaration, a tooltip tells you the actual type the compiler inferred.
If 0 and a function that always returns 0 are the same thing, does that make `lambda: lambda: 0` also the same? I suppose it must do, otherwise `0` and `lambda: 0` were not truly the same.
In a non-strict language without side-effects, having a function with no arguments does not make sense. Haskell doesn't even let you do that.
You can write a function that takes a single throw-away argument (e.g. `0` vs `\() -> 0`) and, while the two have some slight differences at runtime, they're so close in practice that you almost never write functions taking a `()` argument in Haskell. (Which is very different from OCaml!)
Yes, and that becomes more intuitive when you "un-curry" the nested lambdas into a single lambda with twice the number of arguments. The point is that the value of a constant does not depend whatsoever on the state of the (rest of the) world, however much of that state piles on.
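Un-currying the nesting, in the thread's own Python syntax:

```python
# The curried and un-curried forms are interchangeable up to call syntax,
# and a constant stays constant no matter how many "worlds" wrap it.

curried = lambda a: lambda b: a + b
uncurried = lambda a, b: a + b

print(curried(2)(3))    # 5
print(uncurried(2, 3))  # 5

const = lambda: lambda: 0
print(const()())  # 0
```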
Sure—but that’s a property of the inferred types moreso than the mere application syntax. It can be hard to revisit or understand the type of JS or unannotated Python expressions, too—but unlike those cases, the unknown-to-the-reader type of the Haskell code will always be known on the compiler/LSP side.
1. The collaboration and notation app for rock bands that I’d wished existed already: https://bandwith.rocks/about
2. A “runtime scheduler for humans” that I wished existed, too (think morning routines, travel checklists, and pomodoros in the same abstraction—but also a lot of support for ad-hoc rearrangement and addition of the task queue).
I agree that the font and emoji hops aren’t great for complexity or performance, but the problem in the post was in the rendering of a tiny SVG; serving it directly would not have avoided the problem.