I have to say that I feel as if the problem here isn't a lack of competent programmers. It's competent testing that is lacking.
Sure, it's possible to make a lot of stupid mistakes in C, but most of these mistakes should be found in testing, before the code is deployed. You'll probably say, "We don't even have those problems in language X," and I agree, but you should run the same tests no matter the language, because you want to find as many problems as possible, even if you write your code in language X.
I've worked in "IT security" as a C programmer for about 10 years. I both agree and disagree with this article.
A competent C/C++ programmer will have far fewer problems like buffer overflows and crap like that; I don't think a buffer overflow has been found in any code I've written during my 10 years as a C programmer.
I have still written code that has security issues, though; most of them stem from poorly designed code and are not necessarily a language problem.
I'm not claiming to be superhuman here; I've had my fair share of gotchas, like off-by-one errors and issues with pointer arithmetic when refactoring code. We might actually reduce the time needed to verify that C code is safe if we change to another, "safer" language, but I'm 100% sure that you still have the issue of poorly designed code even with a "safe" language. And from my experience those problems are a lot harder to find, since you need to understand how the code base works and how it fits together to find them.
This is a rebuttal against "use a safe language and all your security problems go away entirely" but that is not generally the argument being advanced. The argument that is generally advanced is "use a safe language and some of your security problems go away entirely".
Put another way, people are arguing for airbags to become much more common, and your rebuttal is "I've gotten into some accidents, and I've gotten hurt in ways an airbag would not have helped". That's entirely possible, but irrelevant to the argument at hand (unless you also state that you never get into accidents where an airbag would help).
Edit: Stating your position as a rebuttal may have been overstating it a bit. It's entirely possible you're just attempting to add information to the argument, in which case please read my comment as attempting to do the same.
I agree, but I still believe that one issue here is the lack of understanding of what the process around software development should be.
We would get rid of some issues if we used a safer language, but the real issue is that we don't find the issues; the attacker does instead. So people are finding the issues, but why aren't the people writing the software finding them?
I believe that you should have a development team that makes sure there are no issues to be found, no matter the language you are writing your application in. That means you run the same tests, no matter the language, so in the end it doesn't matter what language you write it in. And you choose a language that fits the problem; you don't make the language fit the problem.
So I think your analogy of an airbag is wrong in some sense. The issue isn't whether we have an airbag or not; the issue is that we don't test whether we have an airbag and then go "Whoops, the airbag didn't deploy in the crash and somebody died".
We as programmers like to think of ourselves as engineers, but we don't treat the profession as engineering: we very often deploy code we know is not tested, and we might even know it is buggy. You open yourself up to a lot of damage if you do that as a bridge builder (even though it has happened).
I'm tired and this turned into a rant, but I hope that my point comes across.
EDIT: I don't mean that we should write bug-free code; I mean that we should strive for code without security issues. It can be done: I work at a place where we have written code, not only C, for 15 years or more without any remotely exploitable holes.
> We would get rid of some issues if we used a safer language, but the real issue is that we don't find the issues; the attacker does instead. So people are finding the issues, but why aren't the people writing the software finding them?
Attackers aren't finding all the issues. They are finding issues in the small subset of software they actually bother to examine. That programmers aren't finding the issues illustrates, I think, both the effort it takes to always be correct and the different skills involved. It doesn't take a good C systems or application programmer to find a lot of the common C unsafety errors in question. It takes someone who knows the C memory model and knows how such errors are commonly exploited. In some cases it takes someone applying new fuzzing techniques to expose certain edge cases more consistently than prior fuzzers did.
> That means you run the same tests, no matter the language, so in the end it doesn't matter what language you write it in.
The important point you are assuming is that the language (or current implementation of it) won't change out from under you in a way that makes that assumption invalid. See my other comment in this discussion regarding DJB, and the HN comment I link to for a good discussion about why that's so.
> And you choose a language that fits the problem, you don't make the language fit the problem.
In what case when you have two languages roughly similar in capability but one allows errors the other doesn't is the one that allows the errors the better fit?
> So I think your analogy of an airbag is wrong in some sense. The issue isn't whether we have an airbag or not, the issue is that we don't test if we have an airbag and then go "Whoops, the airbag didn't deploy in the crash and somebody died".
Then you're misunderstanding my analogy. The car isn't the program written, the car is the compiler. The route driven is the program. You may make an error on the drive, but let's let the car save us in those cases where it's obvious it can and should. Sure, making sure your coworkers check you've secured the pillows to the steering wheel before every trip works, but it should be obvious why that's sub-optimal in multiple dimensions.
> I work at a place where we have written code, not only C, for 15 years or more without any remotely exploitable holes.
I congratulate you on your diligence (sincerely, it is an accomplishment to get to the level where you feel you can say this), but that's a strong assertion in at least one possible interpretation. Perhaps you meant no remotely exploitable holes found? The interesting question that immediately arises from that clarification is whether anyone has seriously looked? Companies that care about this hire pen testers. I hope yours does as well.
I bet there's at least one bug in code you wrote 10 years ago relating to buffer overflows caused by integer overflow. You may have been checking every input against the size of your buffer, yet still have had a buffer overflow. Every integer addition when dealing with buffers is suspect.
Personally I feel as if underengineering is less of a problem than overengineering.
I've seen both in real-life scenarios, and usually you just throw the underengineered code away and start over with knowledge gained from the previous solution. And the reason you can do that is that you usually realize you have an underengineered solution fairly early.
PIA might actually log everything and send it to the FBI as a regular part of their operation; hell, they might even be funded by the FBI and you would never know.
You should not trust what people tell you over the internet.
I think you misunderstand the reason for using a VPN. Privacy is not the same as anonymity.
Let me try to explain. You use a VPN to protect your connection from man-in-the-middle (MitM) attacks, for example when you connect to a public wifi hotspot, or even when you are connected from home. It also gives you some privacy, because nobody can sniff your traffic, but it does not give you anonymity. Well, it can, but you'll not be able to verify that it does.
Sure, you hide from your ISP, but you can't verify that your VPN-provider is more trustworthy than your ISP. They might actually log everything and send it on to a third party and you'll never know. Hell, they might even be funded by the NSA...
Use Tor if you want anonymity, even though that's not 100% certain either.
I don't know anyone working in the field who believes Wireguard is likely to be less secure than StrongSwan or OpenVPN, and Wireguard is something that gets talked about a lot.
It's early days for Wireguard, to be sure, but it's one of the most promising security projects there is right now.
I work in the field, and anybody who says that a piece of software is secure before it has even had a third-party security evaluation does not know what they are talking about.
I think what you have seen is security people saying that the design of Wireguard seems to be equal to or better than other current options; that doesn't mean the implementation is there just yet.
I've spent my career doing third-party software security evaluations --- among other things, I founded the NCC Cryptography Services practice --- and I will tell you right now that the Wireguard security story is far more compelling than any third-party audit.
It's not simply the protocol design, which is superior in pretty much every conceivable way to IKE or TLS, but also the code, which is carefully written to minimize attack surface and increase reviewability.
Choosing OpenVPN or StrongSWAN over WireGuard to minimize exposure to vulnerabilities would be a dumb bet. Sometimes dumb bets pay off, but it's still dumb to make them.
Could you unpack your statement about the careful code writing, or link to an explanation? We would usually expect a formal third-party audit to substantiate such a claim, but if there is other good evidence for their code's secure implementation I'd love to see it.
First, I'm going to try not to go into this in detail right now, but HN has very weird ideas about the potency of third-party code audits, particularly for things involving cryptography. A short summary: most third-party audits of cryptographic software written in systems languages don't accomplish anything. Most crypto software you depend on has never had a full-coverage audit from third-party auditors qualified to evaluate crypto.
You can watch any talk about WireGuard to see what I mean about the way WireGuard's code is written, but the short answer is that the thing was designed from the bottom up to be simple. WireGuard's feature selection was influenced strongly by what would keep the codebase smaller and easier to review. It was also designed to simplify the object lifecycle inside the code itself. All its state is preallocated at initialization.
WireGuard's cryptography is essentially an instantiation of Trevor Perrin's Noise framework. It's modern and, again, simple. Every other VPN option is a mess of negotiation and handshaking and complicated state machines. WireGuard is like the Signal/Axolotl of VPNs, except it's much simpler and easier to reason about (cryptographically, in this case) than double ratchet messaging protocols.
It is basically the qmail of VPN software.
And it's ~4000 lines of code. It is plural orders of magnitude smaller than its competitors.
WireGuard isn't a panacea. In particular: clientside support for it isn't there yet! But it's pretty clear to me at least that WireGuard should imminently be replacing OpenVPN and IPSEC.
I agree with you. It needs formal evaluation by pros with time to dig into it with review and tool-assisted analysis. That said, a person as experienced at pentesting as tptacek saying the crypto and code looked good puts its trustworthiness above most options in my eyes. I mean, you rarely hear good things about both in such software. The quality of average development in crypto is just that bad. I also liked what I saw when I looked at it in terms of simplicity.
I only know Thomas via his output, but will say that based on it, he very much knows what he is talking about when it comes to the design and implementation of security protocols.
As a European company, one should still stay away from stuff that is patented in the US, though; most European companies want to go to market in the US sooner or later.
That's actually a pretty great argument that software patents are a good idea for the US: they prevent foreign copycat competition from soaking up the market on us.