
> To terminate SSL

To make sure that your connections can be snooped on over the LAN? Why is that a positive?

> To have a security layer

They usually do more harm than good in my experience.

> To load balance

Sure, if you're at the scale where you want/need that then you're getting some benefit from that. But that's something you can add in when it makes sense.

> To have rewrite rules

> To have graceful updates

Again, I would expect an HTTP library/framework to handle that.



> To make sure that your connections can be snooped on over the LAN? Why is that a positive?

No, to keep your app from having to deal with SSL. Internal network security is an issue, but sites that need multi-server architectures can't really be passing SSL traffic through to the application servers anyway, because SSL hides stuff that's needed for the load balancers to do their jobs. Many websites need load balancers for performance, but are not important enough to bother with the threat model of an internal network compromise (whether it's on the site owner's own LAN, or a bare metal or VPS hosting vlan).

> Sure, if you're at the scale where you want/need that then you're getting some benefit from that. But that's something you can add in when it makes sense.

So why not preface your initial claims by saying you trust the web app to be secure enough to handle SSL keys, and a single instance of the app can handle all your traffic, and you don't need high availability in failure/restart cases?

That would be a much better claim. It's still unlikely to hold, because you don't control the internet. Putting your website behind Cloudflare lets you get away with somewhat less vigilance, and a website that isn't too popular or attention-getting also reduces the risk. However, Russia and China exist (those are examples only, not an exclusive list of places malicious clients connect from).


> So why not preface your initial claims by saying you trust the web app to be secure enough to handle SSL keys, and a single instance of the app can handle all your traffic, and you don't need high availability in failure/restart cases?

Yeah, I phrased things badly; I was trying to push back on the idea that you should always put your app behind a load balancer, even when it's a single instance on a single machine. Obviously there are use cases where a load balancer does add value.

(I do think ordinary webapps should be able to gracefully reload/restart without losing connections; it really isn't so hard, someone just has to make the effort to code the feature into the library/framework, and that's a one-off cost.)


> > To terminate SSL

> To make sure that your connections can be snooped on over the LAN? Why is that a positive?

Usually your "LAN" uses whole-link encryption, so that everything crossing your private infrastructure network is encrypted (be it Postgres, NFS, HTTP, etc.). If that is not the case, then you have to configure encryption at each service level, which is error prone, time consuming, and not always possible. Failing that, you can at least use internal SSL certificates for the traffic between the RP and the workers, between the workers and Postgres, and so on.

Also, you want your SSL server key to be as inaccessible from the business logic as possible; having an early termination point and isolated workers achieves that.

Also, your workers generally access private resources, which you don't want exposed on your actual termination point. It's just much better to have a public termination point (the RP) with a private interface sending requests to workers that live in a private subnet and access private resources.

> > To have a security layer

> They usually do more harm than good in my experience.

Right, but maybe you should detail your experience, as your comments don't really tell us much.

> To have rewrite rules

> To have graceful updates

> > Again I would expect an HTTP library/framework to handle that.

HTTP frameworks handle routing _for themselves_; this is not the same as rewrite rules, which are often used to glue multiple heterogeneous parts together.

HTTP frameworks don't handle all the possible rewriting and gluing for the very reason that the logic-framework level is not a good place to do it.

As for graceful updates, there's a chicken-and-egg problem to solve. You want graceful updates between multiple versions of your own code/framework. How could that work without a third party shifting requests from the old workers to the new ones, one at a time?


You terminate SSL as close to the user as possible, because that round-trip time greatly affects the user experience. What you do between your load balancer and application servers is up to you (read: it should still be encrypted), but terminating SSL ASAP is about user experience.


> You terminate SSL as close to the user as possible, because that round-trip time greatly affects the user experience. What you do between your load balancer and application servers is up to you (read: it should still be encrypted), but terminating SSL ASAP is about user experience.

That makes no sense. The latency from your load balancer to your application server should be a tiny fraction of the latency from the user to the load balancer (unless we're talking about some kind of edge deployment, but at that point it's not a load balancer but some kind of smart proxy), and the load balancer decrypting and re-encrypting almost certainly adds more latency compared to just making a straight connection from the user to the application server.


Say your application and database are in the US West and you want to serve traffic to EU or AUS, or even US East. Then you want to terminate TCP and TLS in those regions to cut down on handshake latency, slow start time, etc. Your reverse proxy can then use persistent TLS connections back to the origin so that those connection startup costs are amortized away. Something like nginx can pretty easily proxy like 10+ Gb/s of traffic and 10s of thousands of requests per second on a couple low power cores, so it's relatively cheap to do this.

Lots of application frameworks also just don't bother to have a super high performance path for static/cached assets because there's off-the-shelf software that does that already: caching reverse proxies.


It depends on your deployment and where your database, app servers, and POPs are. If your load balancer is right next to your application server, which is right next to your database, you're right. And it's fair to point out that most people have that kind of deployment. However, there are some companies, like Google, that have enough of a presence that the L7 load balancer/smart proxy/whatever you want to call it is way closer to you, Internet-geographically, than the application server or the database. For their use case and configuration, your "almost certainly" isn't what was seen empirically.


You usually re-encrypt your traffic after the gateway, either by using an internal PKI with TLS or some kind of encapsulation (IPsec, etc.).

Security and availability requirements vary, so there's much to argue about. Usually you have some kind of 3rd-party service you want to hide, or you want to control CORS, Cache-Control, etc. headers uniformly. If you are fine with 5-30 minutes of outage (or until someone notices and manually restores service), then of course you don't need to load balance. But you can imagine this not being the case at most companies.


Tell me you never built an infrastructure without telling me you never built an infrastructure

The point being that all the code on the stack is not necessarily yours


I've built infrastructure. Indeed I've built infrastructure exactly like this, precisely because maintaining encryption all the way to the application server was a security requirement (this was a system that involved credit card information). It worked well.



