If I understand what proponents of C++ say, C++ is supposed to be suited best for medium-level programming where abstraction is helpful but low-level constructs and speed are still helpful. It's supposed to give you many of the benefits of higher level languages while still giving you high performance. It's intended to be widely portable but still easily hook into special OS-specific facilities.
This description sounds like an excellent language for git to me. And in fact, while I don't like C++ much in general, if properly managed, I think a project like git could do well if written in C++.
It's that "if properly managed" bit that would be a nightmare from hell to manage... easier to just do it in C.
What it's intended to do is rather different from what it's actually used for in real life. Code talks; anything else is just smoke, right?
If someone can come along and show us a better way in any language, then there's something to argue about. Otherwise, the guy who has working code wins over the guy without it, always.
There's a large faction in AAA games that thinks C++ is a stupid idea. It's debatable who's right, but good points are being raised.
The problem is not so much that C++ is a worse language than C; it's that it makes it insanely easy to shoot yourself in the foot in hideously complex ways that take forever to unravel. See e.g. two-phase name lookup: http://blog.llvm.org/2009/12/dreaded-two-phase-name-lookup.h...
There are plenty of other places where the design constraints of C++ have forced it into a dark corner on the edge of the realm of madness. It _does_ buy you additional abstraction, but there is a price you pay for that.
Personally, I'm not happy with either camp. C shows its age - it's from the 70s - and C++ is just out of control. So I'll continue to use both unless there's a decent replacement. (I'm squinting at Go, Rust, and BitC - and none of them are quite what I'd like to see)
I might suggest taking a look at D. It's more or less C++ redesigned from the ground up by a C++ compiler writer, so it's much more consistent and a lot more pleasant to program in. I've had bad experiences with the third-party libraries, which were sometimes inconsistently documented and half-finished on account of the still unfortunately small community, but the standard libraries are excellent, especially everything that Andrei Alexandrescu has touched. For elaboration: http://drdobbs.com/article/print?articleId=217801225&sit...
It’s a great pity that Objective-C doesn’t get more attention in this regard. It’s C, so it features all the simplicity, elegance and existing APIs. And at the same time it offers a good, simple and flexible object model with almost no surprises. The performance can be very good, too, as proved by the Apple runtime, and you can always drop to lower-level tricks or plain C when you need it. The syntax takes some getting used to, but then turns into another advantage, since it’s very self-descriptive. Again, it’s a shame it gets so little attention outside the Apple world, because it would be a perfect match for many use cases.
The problem is that Obj-C message passing is always going to have a cost. STL implementations can inline accessors, for instance, to provide containers and algorithms with essentially zero overhead that can even be faster than raw C because the compiler has more type information.
That’s a non-issue for the vast majority of people. As an example, I have written games in Objective-C for the first iPhones. In a game running at 50 fps you have about 20 ms to get a single frame out, and still I could freely use message passing in the inner game loop without giving it a thought. See also some older measurements by Mike Ash: http://goo.gl/DBTPE.
An Obj-C message is still about 5x slower than a C++ virtual function call, and even that is too slow for inner loops of expensive algorithms. For example, I'm doing DSP at 44.1k samples per second, and I can only afford about one virtual function call for every sixteen samples, but I can put my samples in an STL vector with no access overhead versus a raw array.
It's true that most people don't need the performance you can get with C++ but if you do it's still really your only serious option.
According to Mike’s measurements linked above, a cached Objective-C message send is faster than a C++ virtual method call, so in a loop the two languages should come pretty close. You can also get a pointer to the function implementing a method and call it directly. But I think we understand each other now: yes, Objective-C is not as fast as C++ in some cases, but those cases matter only to a very small number of people.
Also, it's _really_ easy to optimize those inner loops if you need to in Objective-C. If message passing is too slow, just turn it into some C function calls and you're done.
Low-level image processing library supporting multiple pixel formats. How would you support everything from float16_t to int64_t without templates? And with templates you can selectively rewrite functions for some types in assembly.
The problem with C++ is that it is not a language built from scratch. It is just layer upon layer upon layer on top of C, written by various people for various use cases. So there is no way to "properly manage" a C++ project, unless it is maintained by one single person (or a few like-minded people).
I was going to disagree with you by saying that those layers are there for a reason; and they are, mainly backwards compatibility. But thinking about it a bit more, I have to agree that those layers are still the problem with C++. If you want to maintain old code, you have to be conversant with loads of different idioms; it is as if you are maintaining multiple languages, starting from C and going on to the latest C++.
To write new code, C++ can be very clean, and C++11 is a huge step in the right direction for this. But to maintain code, especially code written by someone else, C++ can be a little bit hard. And it can never be easier than maintaining C code, because maintaining C code is a subset of maintaining C++ code.
An alternative to C++ in this situation is to have a C core or set of C libraries, with bindings up to something like JavaScript or Lua. World of Warcraft, Emacs, and Firefox are a few popular examples of that architecture. GNOME 3 works this way too.
It's pretty important to have good automation for the C-to-high-level conversion; for example, GNOME has gobject-introspection to do this, and Firefox has XPCOM. Otherwise it becomes too tedious and bug-prone to be gluing two languages together by hand all the time.
I suppose node.js could be considered another example of this approach, by writing the super-tiny-and-fast event loop and http parser in C and then putting all the application logic in JavaScript.