Inside Google (where everything is protobuf) there were a handful of tools like this. The killer feature was an automatically added RPC service for introspecting the other RPC services any given server exposed. That meant there was an `ls` command for listing the available services, the methods on those services, and the protocol buffers for the request and response, so you didn't need to manage the source protos yourself. You could just do something like `protocurl 10.1.2.3 MyService.MyMethod 'foo: 8 bar: "foo"'` and it put all the pieces together.
The introspection could be a little slow if the server was far away (it added a couple of round trips), but the ability to avoid knowing where the schemas were was invaluable. (I wonder whether some sort of caching could have made it a bit better.)
The readme in your link mentions how they are different:
> How is protocurl different from grpcurl? grpcurl only works with gRPC services with corresponding endpoints. However, classic REST HTTP endpoints with binary Protobuf payloads are only possible with protocurl.
For my purposes, gRPCurl was a good fit. Maybe for others as well.
If you have a gRPC service, you'd use `grpcurl`. This one is for REST-ish HTTP/1.1 APIs where the request/response body is protobuf, something `grpcurl` can't handle. In other words, you'd use this if your API uses traditional HTTP methods and responds with binary-encoded protobuf blobs. I would imagine this is extremely niche.
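To make that concrete, here's a minimal sketch of calling such a REST-style endpoint with a binary protobuf body. The endpoint URL and the schema (`message Req { int64 foo = 1; string bar = 2; }`) are both hypothetical, and the message is hand-encoded on the wire format just to show what the payload looks like without generated stubs:

```python
# Sketch: a REST HTTP call whose body is a binary protobuf, no gRPC involved.
# The schema is hypothetical: message Req { int64 foo = 1; string bar = 2; }
# We hand-encode it using the protobuf wire format (varints + length-delimited
# fields) purely for illustration; a real client would use generated code.
import urllib.request

def varint(n: int) -> bytes:
    """Encode a non-negative int as a protobuf varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def encode_req(foo: int, bar: str) -> bytes:
    body = b""
    # field 1, wire type 0 (varint): tag byte = (1 << 3) | 0 = 0x08
    body += bytes([0x08]) + varint(foo)
    # field 2, wire type 2 (length-delimited): tag byte = (2 << 3) | 2 = 0x12
    data = bar.encode("utf-8")
    body += bytes([0x12]) + varint(len(data)) + data
    return body

payload = encode_req(8, "foo")
print(payload)  # b'\x08\x08\x12\x03foo'

req = urllib.request.Request(
    "http://10.1.2.3/MyService/MyMethod",  # hypothetical endpoint
    data=payload,
    headers={"Content-Type": "application/x-protobuf"},
    method="POST",
)
# urllib.request.urlopen(req)  # not run here; there's no real server behind it
```

The point is just that the body going over the wire is raw wire-format bytes with a protobuf content type rather than JSON, which is why plain `grpcurl` can't talk to these endpoints.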
Nice. A lot of the problems people have with protobuf come down to deficiencies in the open-source tooling.
For example, I'm missing an easy way to put all my (and my team's) protobufs in a registry, and a tool that auto-detects which schema a given protobuf uses (or leverages the type in `Any`), so that I can avoid passing complicated path flags to all these proto-decoding tools.
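For the `Any` case specifically, the type is recoverable straight off the wire: `google.protobuf.Any` is just field 1 (`type_url`, a string) and field 2 (`value`, bytes). A rough sketch of what such an autodetect tool could do, hand-decoding with no generated code (the type URL and payload below are made up):

```python
# Sketch: extract the type_url from a serialized google.protobuf.Any by hand,
# the way a schema-autodetect tool could, without any generated code.

def read_varint(buf: bytes, i: int):
    """Decode a protobuf varint at offset i; return (value, next offset)."""
    shift = n = 0
    while True:
        b = buf[i]
        i += 1
        n |= (b & 0x7F) << shift
        if not b & 0x80:
            return n, i
        shift += 7

def any_type_url(buf: bytes) -> str:
    i = 0
    while i < len(buf):
        tag, i = read_varint(buf, i)
        field, wire = tag >> 3, tag & 7
        if wire == 2:                       # length-delimited
            ln, i = read_varint(buf, i)
            if field == 1:                  # Any.type_url
                return buf[i:i + ln].decode("utf-8")
            i += ln
        elif wire == 0:                     # varint: skip
            _, i = read_varint(buf, i)
        else:
            raise ValueError(f"unexpected wire type {wire}")
    return ""

# Build an Any by hand: field 1 (tag 0x0A) = type URL, field 2 (tag 0x12) = payload.
url = b"type.googleapis.com/my.pkg.MyMsg"   # hypothetical type
sample = bytes([0x0A, len(url)]) + url + bytes([0x12, 0x02, 0x08, 0x08])
print(any_type_url(sample))  # type.googleapis.com/my.pkg.MyMsg
```

With the type URL in hand, a tool could then look the schema up in whatever registry it has, instead of making you pass `--proto_path`-style flags by hand.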
1) Having worked for a company that did this (after learning that it was an anti-pattern at a previous company)... Putting all Protobuf files in a single "registry"/repo is definitely an anti-pattern. You should put them in the repo that implements the service. That service is responsible for maintaining API compatibility between versions (i.e. keep field numbers "stable" and deprecate/update them so as not to break clients).
(If you want a "registry," the better approach would be to have something that uses all the services as dependencies to consolidate their protos.)
2) Going along with this approach, gRPC has a reflection service; most server implementations can expose it (I have personally done it with Tonic/Rust, but I know the Go and Java bindings, and probably Ruby and others, support it). If you use something like gRPCurl against a server with reflection, the only "path flags" you have to worry about are, like... just the method names. It can't really get more terse than it is with gRPCurl and gRPC reflection, though autocomplete would be nice to have, I guess.
The basic intent of gRPC - indeed, its advantage over JSON - is to promote composable, decoupled services. Unless you're monorepoing all your services, putting all protos in a "registry"-type repo that everything depends on only makes things harder for everyone who needs to do things with those protos.
We also switched to Buf. It works really well, and it's nice to have a documented, less obscure wrapper over regular Protobuf commands too. Definitely better than the organically grown Makefiles we had before.