Hacker News new | past | comments | ask | show | jobs | submit | VladVuki's comments | login

That's a scary loophole...


The point is that a full-scale disruption isn't necessary for an impact to be felt around the globe - tr.im is a prime example of that. It doesn't even have to be a malicious change.

It's more about the structure that has evolved - an inverted pyramid of communication compared to the traditional web. At the bottom we have services like Facebook and Twitter, with a plethora of applications built on top. This puts greater pressure on those foundational services to protect themselves, and some have done a better job than others (e.g. Facebook vs. Twitter).


But URL shorteners aren't really that much more fragile than non-shortened URLs. Sure, the scale of the problem could be larger, but URLs go away all the time. I doubt more than a small fraction of the shortened URLs that would break when a shortening service goes offline would have outlived the shortening service anyway.

And how do additional layers create more fragility, when the most fundamental layers (e.g. ISP infrastructure, DNS) are just as vulnerable to attack as the application frameworks implemented on top of them?


If a URL shortener goes down, it's likely that all of the URLs it shortened go down with it. I don't think we've had a system like that before. If bit.ly went down, half of Twitter would become meaningless.

A DNS or ISP disruption can usually be more localized. I would argue that it's much easier to disrupt a Twitter or a Facebook than it would be to take down multiple DNS servers or multiple ISPs. It would take less resources and time to create a global disruption by taking down Twitter than it would to take down an ISP.


What about the days when large-scale free hosting providers ruled the internet? Geocities?

An infrastructure disruption at a large colocation provider would probably cause just as much inconvenience to users. Attacks against root name servers can cause (and have caused) huge amounts of disruption.

I understand your point is about people building upon single points of failure, but my point is that there have always been single points of failure within the internet. I don't think conceptualising Facebook as a single point of failure is particularly accurate either, given the distributed nature of its implementation. You're more likely to get localised failures of Facebook than a complete outage, which would appear to carry exactly the same amount of risk as any other situation I've described.

As for URL shorteners, it seems apparent that there might be a business opportunity in client-side hashing with a server-side implementation for those without client support. That way you remove the point of failure, which I agree is a good thing, even if we disagree about how important that point of failure is.
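To make the client-side hashing idea concrete: if the short code is derived deterministically from the long URL itself, any client can regenerate it locally, and no single shortening service holds the only copy of the mapping. Here's a minimal sketch of that scheme in Python (the `shorten` function and the 7-character length are illustrative choices, not any existing service's API; resolving a code back to a URL would still need some shared lookup, e.g. a federated or distributed store):

```python
import hashlib

# URL-safe base-62 alphabet for the short code
ALPHABET = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def shorten(url: str, length: int = 7) -> str:
    """Derive a deterministic short code by hashing the URL.

    Because the code is a pure function of the URL, any client
    using the same scheme produces the same code -- removing the
    shortening server as a single point of failure for *creating*
    links (resolution still needs a shared mapping somewhere).
    """
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    n = int.from_bytes(digest, "big")
    code = []
    while len(code) < length:
        n, r = divmod(n, 62)
        code.append(ALPHABET[r])
    return "".join(code)

# Same URL always yields the same code, with no server round-trip.
print(shorten("https://news.ycombinator.com/"))
```

A server-side implementation for clients without support would just run the same function, so codes minted either way stay compatible.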


Geocities is a good example of a past bottleneck - we agree there.

Facebook might not be a perfect example - I think Twitter is probably better since it's more of a platform.


That could work - Yahoo is US, Microsoft is UK (and traditional Western Europe). AT&T could be Italy, Motorola is Eastern Europe. Nokia makes a lot of sense as Japan since they seem to have their own set of priorities but are kinda cooperating with Android (Google)...


It's a playful representation of one key lesson.


You scared me for a second. The Japan-equivalent is tough to figure out. Maybe HTC, Sony, Nokia...not sure yet.

