Hacker News

Unfortunately this caching is still per-path. For example:

    GET /v1/document/{document-id}/comments/{comment-id}
For every new document-id or comment-id, there will be a new pre-flight request.
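To illustrate with made-up IDs (a sketch of the wire traffic, not literal output): the browser keys its pre-flight cache on the full path, so each new ID pair costs an extra round trip even though the endpoint template is the same:

```http
OPTIONS /v1/document/doc1/comments/c1 HTTP/1.1    (pre-flight, cached for this exact path)
GET     /v1/document/doc1/comments/c1 HTTP/1.1

OPTIONS /v1/document/doc1/comments/c2 HTTP/1.1    (different path, so another pre-flight)
GET     /v1/document/doc1/comments/c2 HTTP/1.1
```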

Alternative hacks: Offer a variant of your API format that either

1. Moves the resource path to the request body (or to a header that is included in "Vary"). Though the rest of your stack (load balancing, observability, redaction) might not be ok with this, e.g. do your WAF rules support matching on the request body? You also will no longer get automatic path-based caching for GET requests.

2. Conforms to the rules of a CORS "simple" request [1], which won't trigger a pre-flight request. This is what we did on the Dropbox API [2]. You'll need to move the auth information from the Authorization header to a query parameter or the body, which can be dangerous wrt redaction, e.g. many tools automatically redact the Authorization header but not query parameters.

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS#simpl...

[2] https://www.dropbox.com/developers/documentation/http/docume... (see "Browser-based JavaScript and CORS pre-flight requests")



3. Don't allow cross-platform requests in the first place; have your API consumers go through a server-side proxy on the same domain instead, or host it on the same domain in the first place.


This is the only valid solution and the easiest one to implement. However, for reasons unknown to me, younger devs and various organizations simply refuse to go down this route, making up reasons why it doesn't work for them and opting for more time-consuming alternatives.


Many devs (young or not, age doesn't matter) simply have no idea how CORS works and don't understand the "same origin" policy. I've seen hundreds of hours wasted on CORS / OPTIONS request implementations that could've been saved with a reverse proxy, if only they knew what one was.


CORS is one of my favorite interview questions (front-end/React dev), as it can tell me whether the interviewee is someone who has researched the problem and implemented solutions. There is a lot of potential discussion, from how it works and why it's necessary to how it is solved in production vs. development.


CORS is something I 'fixed' once, five years ago. Hard to talk in detail about that anymore. I wish we had the time to implement it safely, but alas. We still can't produce an allowlist of permitted domains :/


> CORS is something I 'fixed' once, five years ago

I'm careful that I don't demand anyone go into depth on any particular subject. CORS is just one opportunity that seems to need a fix with every new project. It also has a variety of solutions, which is again opportunity to show what they know.

There is little point in looking for what a client doesn't know. I've got that covered.


I'll bite. I'm working on an application that uses Firebase, so the front-end is hosted on Firebase hosting (probably with some kind of CDN before that) and available through a firebase-supplied domain, and the back-end runs on Cloud Run behind a cloud-run-supplied domain. The domains are different, so CORS happens.

I'd like you to back up your claim that a server-side proxy or using the same domain is the easiest solution, and in particular, easier than the solutions suggested by cakoose.


There's one way to solve this, and it's to have the end client (browser, app) use a single domain. It means you'd expose one domain to the client and, at the web server level, perform routing (proxying) to the appropriate Firebase/Cloud Run domains.

As for whether that's easier - I've been doing it like this since forever, so I'm biased and it's easy for me. It's easy because I don't have to worry about problems related to cross-origin resource sharing and because I know how to write the necessary configs. If you don't have to walk through a minefield, you never worry about mines. That's the easy part I'm referring to. I don't even need to test whether CORS is set up properly, worry about pre-flight requests, and so on; it works, forever.

Whether it will be as easy for you, I can't tell; you may have a completely different opinion and be correct about it, but the fact remains that a proxy between two resources removes the problem. I consider a removed problem to be easy; you might not.


I would argue it's all about not throwing a bunch of buckets and rakes on the floor during the networking design phase, which your entire dev, QA, and then devops teams will proceed to walk into in every stage that follows.

If you're dealing with a bunch of separate black boxes (as our firebase poster is) then maybe you do have to wrangle CORS but if you're developing your own applications then there is no good reason to introduce these issues into your pipeline.


Somehow, I think putting a proxy in front of Firebase is not the right solution here at all. The same goes when the website is served by any other CDN. Great, we have this thing that can serve a nearly infinite number of requests, has DDoS protection, and is always close to the customer; let's put a proxy in front of it because we don't know how to set up CORS.

CORS ain't that hard.


Why put anything in front of Firebase hosting? Why not use Firebase hosting as the front?

This confirms my initial comment - people making up reasons.

The solution is always the same, it's always as easy and there's never the need to introduce something new since you can use what you already have.


> on the web server level

Does that mean that I'd have to introduce a web server in front of Firebase Hosting that I'd have to maintain and scale, not to mention that it negates all advantages of using FB hosting in the first place?

I'll stick with CORS, thank you.


You're jumping to conclusions too fast. You already have the web server, and you can already achieve no-CORS without "introducing" anything new. I'm not trying to convince you, but since you're someone who works in this field: you're drawing the wrong conclusions. Stick to what you know; sooner or later you'll realize that CORS is not as simple and as benign as it may seem.

There's a comment below mine that tells you you can achieve this kind of proxy with the actual Firebase itself. And your comment, again, proves my hypothesis - people just make reasons up with improper arguments.


I've worked with Firebase Hosting for around three years, using Firebase Functions instead of Cloud Run, but in your particular example, you can configure your Firebase Hosting to rewrite requests against your Cloud Run instances as documented in https://firebase.google.com/docs/hosting/full-config#rewrite....

This doesn't require handling an OPTIONS request for every endpoint, which I agree could be fairly simple, but Firebase Hosting configuration could very well be less invasive than this change in the rest of the codebase.


Thanks a lot, I'll have a look into this.


If you serve your static files from a CDN, it's simply not possible to do so.

It's a very common case.


Can you provide an example, just so we can be on the same page? You do an XmlHttpRequest or fetch() to a static asset and it's a non-trivial request to the CDN - I just wonder what that looks like and why it exists in the first place.


Say that you want to serve your API from the same origin as your static files (HTML, CSS, JS, etc.) but you also want said static files to be served by a CDN.

Basically, that won't work unless you make your CDN somehow proxy the requests for which no static file exists.


It's an extremely common scenario. The obvious and straightforward solution is to configure path-based cache rules exactly like you describe, to proxy requests which don't correspond to static assets. Cloudfront allows you to do this out of the box, as do other major CDN providers. If your CDN doesn't support this, consider switching - or introduce an edge proxy to facilitate it.


We use Fastly (the Hosts feature) to do exactly this. Basically it routes requests to different backends based on the URL path. If it starts with /assets, the backend is S3. If it starts with /api, the backend is our application. If it starts with /blog, the backend is Wordpress. All on different hosting platforms.

(In case it’s not clear, Fastly is also caching and serving these requests as a CDN.)


So that would indicate that you don't do end-to-end TLS on your infrastructure for the API, which means that Fastly man-in-the-middles your API.

In a lot of sensitive businesses that wouldn't be allowed.


Are you sure about that? Have you talked to a good lawyer about it?


This is not necessarily only a legal issue; it's also an information security issue.

You can't guarantee the integrity and privacy of the full request and response chain.

This goes against various security standards and ISOs.

You might be able to get away with it and trust the CDN but that's an awful lot of trust.


Unless you are terminating TLS entirely on owned hardware, you are paying a 3rd party to manage TLS for you.

A lot of people seem to think that there is a big difference between paying a lessor (e.g. Hetzner) for a server on which you terminate TLS, paying a cloud host (e.g. Amazon) to terminate TLS, and paying a CDN (e.g. Fastly) to terminate TLS. Legally there is no difference aside from the specific language of the contracts, which you can review and negotiate in advance.

The difference security-wise is entirely down to the operations of each company, which again you can review and discuss in advance. Strictly speaking, a CDN should have lower risk than a host, since it is not persisting sensitive data (if you set your cache headers correctly). And as discussed above, using one domain helps avoid cross-domain security concerns.


You're putting a lot of trust in your CDN, anyway. If your CDN gets hacked, what's stopping your frontend code from being updated to send your API requests somewhere else? Maybe they get rerouted to a proxy, then back to you...


Serving said JS on a CDN implies that the JS performs xhr/fetch, which means I have control over said CDN.

Given the fact I have control over domain(s), I'd have http://domain.tld/api for API and serve static js from http://domain.tld/static/*.js

Given the fact I have the need for CDN, it means I've got enough traffic that justifies the bill incurred


> you want to serve your API from the same origin as your static files (HTML, CSS, JS, etc.)

What is the benefit or purpose of doing this?


You don't have CORS configuration to worry about. It simplifies development and deployment.


You don't need to set up CORS to load assets on a page from a CDN's origin. Cross-origin taint only matters when trying to read asset data from a script.

I think we may be talking past each other and one of us is misunderstanding OP.


Yes, but you need it if your API isn't on the same origin as your static assets.


Only if you need to read those assets from JavaScript with a cross-domain request… right? (nb: you can always insert cross-origin assets into a web page, with e.g. a video or img tag, you just can't read the data from JS)

Say you stand up a website with an API at https://example.com. You host your static assets on the CDN, at https://example.myfastcdn.com/. So example.com's home page looks like:

    <head>
      <script src="https://example.myfastcdn.com/app.js"></script>
    </head>
    <body>
      <div id="site-root"></div><!-- app.js renders some application here -->
    </body>
The origin when you load your site is https://example.com, so the scripts hosted on the CDN can still make API requests to https://example.com.

What am I missing here?


IIRC Cloudflare has page rules, which makes this possible? In fact, with page rules, Cloudflare can proxy a subpath (like /api) to a completely different domain.


As I mentioned above, this doesn't work for all CDNs, and it also involves trusting your CDN to MitM your API.


This kind of comment falls under what I posted originally - people making up reasons why they can't go with proxy solution.


I don't think that security requirements are a "made up" restriction.

It's like saying that a house built without a lock is a made-up issue and that no lock or door is needed.


They're not, but you're blatantly refusing to read what's being written.

A public CDN should never be trusted. If you use a CDN in the first place and have strict security requirements, then you create your own private CDN. And if you can control that private CDN, you have all the ingredients to avoid CORS.

It's really that simple. No one is saying you are wrong, but you're refusing to look at the entire picture and you focus only on a subset, in which - of course - your argument works.


So your point is that there is no reason not to serve everything behind the same origin; it only requires setting up a full-fledged CDN to do so.

I'm sorry but that's simply not an acceptable constraint.


I'm sorry that we ended up discussing this, because all you did was invent situations and argue with people who never stated any of what you read into their comments.

No one is telling you not to deal with CORS your way. The fact of the matter is that you can avoid it, but you're making up reasons why you can't. The only reason you can't is because you won't. You're free to use whatever approach you like - there's no police here - just don't state that I or anyone else wrote what we didn't. It'd be the grown-up thing to do. Thanks, and best of success with your projects.


Well, sure. But, as a non-expert, CORS kind of makes sense to me in development. What would you suggest? It's an honest question.


A web server (nginx) that can proxy your requests to the other domain, so your browser sends everything to one domain, thus avoiding CORS.

Example: you have http://ui.localhost and you have http://api.localhost

UI speaking to API = CORS

But, instead of doing fetch('http://api.localhost/resource'), you do fetch('http://ui.localhost/api/resource')

In the nginx config for ui.localhost domain, you create a rule that says "everything that starts with /api, intercept it, remove /api at the start of the path and send the rest to http://api.localhost, ending up with http://api.localhost/resource"

I do frontend and backend development and I have this setup with docker-compose, the config for nginx is really trivial and widely available in many tutorials.
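For the curious, a minimal sketch of that nginx rule (domain names taken from the example above; the asset path is assumed):

```nginx
server {
    listen 80;
    server_name ui.localhost;

    # Static front-end assets
    location / {
        root /srv/ui;
    }

    # Everything under /api is forwarded to the API origin, with the
    # /api prefix stripped: /api/resource -> http://api.localhost/resource
    location /api/ {
        rewrite ^/api/(.*)$ /$1 break;
        proxy_pass http://api.localhost;
        proxy_set_header Host api.localhost;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
```

The browser only ever talks to ui.localhost, so no request is cross-origin and no pre-flight is sent.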


Scenario: in production, assuming ui.example.com is only for static resources/SSG and api.example.com is for dynamic API endpoints, we usually protect the API domain with a WAF in the CDN, which costs extra and is typically unnecessary for the UI domain. So in this case, by doing this reverse proxy, we will bypass the WAF layer or at least feed the WAF incorrect data (our server is requesting instead of the user directly). Since a WAF usually has extra (significant) costs, what would you suggest in this case?


Forward the correct data; then it makes no difference to the WAF whether it's you or the user requesting.

That's why we have various controls with proxies, such as including the original requester's IP etc.

It's irrelevant who actually asks for the data: if you pass the HTTP request info unaltered (except the path), the WAF can do its job. That's the beauty of HTTP and its stateless nature. You can scale infinitely and do various actions such as this one and get the expected result.
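As a sketch, the usual way a reverse proxy preserves that information is via forwarding headers (nginx syntax; the backend hostname is assumed, and these header names are de-facto conventions the WAF must be configured to read):

```nginx
location /api/ {
    proxy_pass http://api.example.com;
    proxy_set_header Host              $host;
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```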


Running a proxy on localhost isn't very difficult; it requires maybe 10 or 20 lines of nginx config, less with Caddy? Certainly something that can be stuffed into the README for developers.


Or better yet, into the devcontainer


Authentication with a SAML provider can make this a pain in the ass. Especially if you don't have control over the provider.


Alternatively, stick to "simple requests". That's HEAD, GET, and POST, without any custom headers or non-form content-type set. This adds some further limitations (no ReadableStream being one of them). If the backend responds with an appropriate access-control-allow-origin then the request will just succeed.
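As a rough sketch of those rules (an approximation of the Fetch spec's safelist, not a complete implementation - e.g. it ignores the value restrictions on Accept and Accept-Language), a predicate for whether a request would trigger a pre-flight might look like:

```javascript
// Methods and header values the CORS spec treats as "simple".
const SIMPLE_METHODS = new Set(["GET", "HEAD", "POST"]);
const SIMPLE_CONTENT_TYPES = new Set([
  "application/x-www-form-urlencoded",
  "multipart/form-data",
  "text/plain",
]);
// Request headers a page may set without forcing a pre-flight;
// anything else (e.g. Authorization, X-Custom-Id) triggers one.
const SIMPLE_HEADERS = new Set([
  "accept",
  "accept-language",
  "content-language",
  "content-type",
]);

function triggersPreflight(method, headers = {}) {
  if (!SIMPLE_METHODS.has(method.toUpperCase())) return true;
  for (const [name, value] of Object.entries(headers)) {
    const lower = name.toLowerCase();
    if (!SIMPLE_HEADERS.has(lower)) return true;
    if (lower === "content-type") {
      // Parameters like "; charset=utf-8" are fine; only the media type matters.
      const mediaType = value.split(";")[0].trim().toLowerCase();
      if (!SIMPLE_CONTENT_TYPES.has(mediaType)) return true;
    }
  }
  return false;
}
```

Note that an Authorization header alone is enough to force a pre-flight, which is why the Dropbox approach mentioned above moves the token out of the headers.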


> 3. Don't allow cross-platform requests in the first place; have your API consumers go through a server-side proxy on the same domain instead, or host it on the same domain in the first place.

That works for first-party JS. Doesn't work for a public API used by others.

Edit: Specifically purely client-side apps. For someone hosting a static HTML+JS app, it's annoying to have to set up and run a server-side route just to circumvent CORS.

(Maybe not so bad with something like Next.js, where it's easy to add a backend route to your primarily static website.)

And it adds an extra hop of latency to every request.


Yes, CORS is best avoided when your own back end is the "third party." (In some cases, it may be impossible though.)


I feel like this is overkill when you have something like OPTIONS which is very minimalist in its response. Unless I'm missing some obvious drawback with calling OPTIONS?


Well, you double the latency for every. single. request.


CORS does not double latency for every single request. It adds the additional overhead of a separate OPTIONS request for every single request. Unless the responses are as trivial to compute/serve as OPTIONS responses are, that will be way less than a doubling in latency.


What I mean by latency is the transport latency, even assuming 0ms OPTIONS compute time, you still need twice the round trip time.


> Offer a variant of your API format that either: 1. Moves the resource path to the request body

GraphQL

ducks


> GraphQL

And now you have two problems


I got N+1 problems but CORS ain't one


But you have a nice schema for your problem :)

GQL is a bit ugly, but works well, kind of standardized, etc.

Is there something similar for providing a batch endpoint for OpenAPI requests?


  GET /blog/1?with=comments,author&only=title,body,created_at,comments.body,author.name


> Moves the resource path to the request body

JMAP is very well suited to CORS due to this: https://www.rfc-editor.org/rfc/rfc8620.html


Yeah. For example, the Meilisearch search engine recommends submitting idempotent searches over POST and not GET due to this: https://docs.meilisearch.com/reference/api/search.html

I wish they'd standardize HTTP QUERY soon: https://datatracker.ietf.org/doc/draft-ietf-httpbis-safe-met...


> Conforms to the rules of a CORS "simple" request [1], which won't trigger a pre-flight request.

I was about to ask if OPTIONS would be sufficient, but it looks like some of the MDN URLs suggest just that.


To hijack the thread a bit, if you are still with Dropbox, could you get them to implement what you did in #2 in the official Dropbox JS SDK? Right now it still does a pre-flight request for everything.


No, I left Dropbox 5 years ago.

But it might be easy to add? https://github.com/dropbox/dropbox-sdk-js/blob/main/src/drop...

Make sure to always set the URL parameter "reject_cors_preflight=true", which will make sure you're not inadvertently triggering pre-flight requests.



