
> C++ was huge back in 1998, now it's bloody massive.

I don't think any "run-time" feature has been added since, though. It's all either OS support (<thread>, etc., that you wouldn't use in-kernel anyway) or template stuff that has zero impact on runtime (and actually sometimes helps decrease code size).

https://istarc.wordpress.com/2014/07/18/stm32f4-oop-with-emb...

https://www.embedded.com/design/programming-languages-and-to...

https://hackaday.com/2015/12/18/code-craft-embedding-c-templ...

If some guys are able to run C++ on 8 KB microcontrollers, there's hardly a non-political reason it couldn't be used in-kernel.

See also IncludeOS: http://www.includeos.org/



Additions have certainly been made since 1998 (things like smart pointers are relatively new on this scale, as far as I know). Many runtimes for resource-constrained embedded systems do not support all of C++'s features; exceptions are the most common omission.

You can certainly strip things down to a subset that fits in 128K of flash and needs only 1 or 2K of RAM at runtime, but the question is not only one of computational resources used for the library itself. Additional code always means additional bugs, the semantics sometimes "hide" memory copying or dynamic allocation in ways that many C++ programmers do not understand (and the ones who do are more expensive to hire than the ones who do not), and so on. You can certainly avoid these things and use C++, but you can also avoid them by using C.

I agree that mistrust and politics definitely play the dominating role in this affair though. I have seen good, solid, well-performing C++ code. I prefer C, but largely due to a vicious circle effect -- C is the more common choice, so I wrote more C code, so I know C better, so unless I have a good reason to recommend or write C++ instead of C, I will recommend or write C instead. I do think (possibly for the same reason) that it is harder to write correct C++ code than it is to write correct C code, but people have sent things to the Moon and back using assembly language for very weird machines, so clearly there are valid trade-offs that can be made and which include using language far quirkier than C++.


I absolutely do not understand your point. Anybody doing OS development in C++ is doing so with absolutely no C++ standard library support, same as if you were using C++ to develop for your microcontroller. If C++ binaries are compact enough for Arduino or Parallax Propeller development (<32KB RAM), they are absolutely fine for kernel development.

The real answer is historical, and cultural. On the latter, Unix is a product of C (well, and BCPL) and C is a product of Unix; the two are heavily intertwined. The former is, as was mentioned, a product of the relative crappiness of early C++ compilers (and perhaps the overzealous OO gung-ho nature of its early adopters as well...).

C++ without exceptions, RTTI, etc. has a lot to offer for OS development. Working within the right constraints it can definitely make a lot of tasks easier and cleaner.

It won't happen in Linux, tho.


And let's not forget that C++ got created because Bjarne didn't want to relive his experience of going from Simula to BCPL, but it had to be compatible with AT&T's official tooling.


> I do think (possibly for the same reason) that it is harder to write correct C++ code than it is to write correct C code

I call BS on this.

So many mistakes in C simply cannot be made in C++ if you follow the well-established coding patterns and don't try to switch to C-style code. e.g.: You just cannot simply forget to free any resource, because you never have to do it with RAII. You cannot forget to check a status code and return early because you don't have to; the exception will propagate until someone catches it. You cannot forget to initialize a vector because it initializes itself. I could go on and on.

That said, there is a huge caveat to what I am saying above: I am comparing experienced programmers in each language with each other -- those who are basically experts and know what they're doing. I'm not debating whether it's easier to shoot yourself in the foot with C++ if you use it with insufficient experience (and part, though not all, of the reason is that you will probably write C-style code most of the time, and write neither proper C nor proper C++). I'm saying that an experienced programmer is much more likely to write correct code in C++ than C.


> Additional code always means additional bugs

What additional code?

> You can certainly avoid these things and use C++, but you can also avoid them by using C.

Right, but what you can't get with C is destructors and move/ownership semantics.

> I do think (possibly for the same reason) that it is harder to write correct C++ code than it is to write correct C code

The ability to write typesafe data structures with move/ownership semantics and specified interfaces while being a superset of C would lead some to say that this is not true.


> What additional code?

All the code that you need in order to support smart pointers, templates, move/copy semantics, exceptions and so on. To paraphrase someone whose opinions you should take far more seriously than mine, you can't just throw Stroustrup's "The C++ Programming Language" on top of an x86 chip and hope that the hardware learns about unique_ptr by osmosis :-). There used to be such a thing as the sad story about get_temporary_buffer ( https://plus.google.com/+KristianK%C3%B6hntopp/posts/bTQByU1... -- not affiliated in any way, just one of the first Google results).

The same goes for all the code that is needed to take C++ code and output machine language. In my whole career, I have run into bugs in a C compiler only three or four times, and one of them was in an early version of a GCC port. The last C++ codebase I was working on had at least a dozen workarounds, for at least half a dozen bugs in the compiler.


If you’re writing a kernel, you’ll almost certainly be using “-nostdlib” (or your compiler’s equivalent) so unique_ptr, etc. won’t be there. You could however write your own unique_ptr that allocates via whatever allocator you write for your kernel. See [1] for a decent overview of what using C++ in a kernel entails.

[1]: http://wiki.osdev.org/C%2B%2B


> If you’re writing a kernel, you’ll almost certainly be using “-nostdlib” (or your compiler’s equivalent) so unique_ptr, etc. won’t be there.

Huh? unique_ptr has a customizable Deleter though; you should be able to provide a custom one so it doesn't call delete.

And doesn't a kernel implement a kmalloc() or something anyway? You would just write your own operator new and have it do what your kernel needs, and the rest of the standard library would just work with it.


True, but the "rest of the standard library" is somewhat harder to port and last time I played with it (admittedly some years ago) no compilers were good at letting you pick and choose.


> All the code that you need in order to support smart pointers, templates, move/copy semantics, exceptions and so on.

Smart pointers are their own classes, and exceptions would certainly be disabled in kernel mode, sure, but for the rest? Which additional code? There's no magic behind templates and move semantics, and no run-time impact; they're purely compile-time features.


> and hope that the hardware learns about unique_ptr by osmosis

unique_ptr is just a template and does not need the standard library. Also, move/ownership semantics don't need unique_ptr.

> The last C++ codebase I was working on had at least a dozen workarounds, for at least half a dozen bugs in the compiler.

There are at least four compilers that have been extensively production-tested: ICC, GCC, MSVC and Clang. Which one had these bugs, and was it kept up to date?


I will say in this that C++ can certainly be used for small, fast kernels given L4/Fiasco was written in C++:

https://os.inf.tu-dresden.de/fiasco/prev/faq.html#0050

That was targeted at x86 with 2MB of RAM usage, though. So, I quickly looked for a microcontroller RTOS, thinking that would be a good test. I found ScmRTOS, which claims to be written in C++ with sizes "from 1KB of code and 512 bytes of RAM" on up. So, it seems feasible to use C++ for small stuff. I'll also note that C++ has some adoption in the high-assurance industry, with standards such as MISRA C++ (2008) already available. They might be using powerful CPUs, though, so I looked up the MCU stuff.

http://scmrtos.sourceforge.net/ScmRTOS

The throwaways are talking about safety features. I used to think those were a benefit of C++ over C. Following Worse is Better, that's no longer true: the ecosystem effects of C produced so many verification and validation tools that it's the C language that's safer than C++, if one uses those tools. There are piles of them for C, with hardly any in FOSS for C++, if we're talking about static/dynamic analysis, certified compilation, etc. I put $100 down that a CompCert-like compiler for C++ won't happen in 10 years. At best, you'll get something like KCC in the K Framework.

The reason this happened is C++'s unnecessary complexity. The language design is garbage from the perspective of being easy to analyze or transform by machines. That's why the compilers took so long to get ready. LISP, Modula-3, and D showed it could've been much better in terms of ease of machine analysis vs. the features it has, with some careful thought. Right now, though, the tooling advantage of C means most risky constructs can be knocked out automatically, the code can be rigorously analyzed from about every angle one could think of (or not think of), it has the best optimizing compilers if one cares little about their issues, and it otherwise supports several methods of producing verified object/machine code from source. There are also CompSci variants with built-in safety (e.g. SAFECode, SoftBound+CETS, Cyclone) and security (esp. Cambridge CHERI). duneroadrunner's SaferCPlusPlus is about the only thing like that I know of that's actively maintained and pushed for C++. The result of pros applying tools on a budget to solve low-level problems in C or C++ will always give a win on defect rate to the former, just because they had more verification methods to use.

And don't forget that, as with the Ivory language, we can always develop in a high-level, safer language such as Haskell, with even more tooling benefits, and extract to a safety-critical subset of C, then hit the extracted code with C's tooling if we want. We can do that in a language with a REPL to get productivity benefits. So we can have productivity, C as the target language, and tons of automated verification. We can't have that with C++, or not as much even if we had money for commercial tools.

So, these days, that's my argument against C++ for kernels, browsers, and so on. You're just setting yourself up to have more bugs that are harder to find, since you lose the verification ecosystem benefits of C++'s alternatives. This will just continue, since most research in verification tools is done for managed languages such as Java or C#, with what's left mostly going to the C language.


I'm really sorry you're getting downvoted, because there is a lot of useful data in your comment. And I think we definitely see eye to eye on this:

> The throwaways are talking about safety features. I used to think those were a benefit of C++ over C. Following Worse is Better, that's no longer true: the ecosystem effects of C produced so many verification and validation tools that it's the C language that's safer than C++, if one uses those tools. There are piles of them for C, with hardly any in FOSS for C++, if we're talking about static/dynamic analysis, certified compilation, etc. I put $100 down that a CompCert-like compiler for C++ won't happen in 10 years. At best, you'll get something like KCC in the K Framework.

Lots of people think additional safety features result in safer code. They likely do most of the time, but when you need to swear to investors, to the public and to the FDA that your machine will not kill anyone, what you want to have is results from 5 verification tools with excellent track records, not "my language does not allow for the kind of programming errors that C allows". Neither does Ada, and yet they crashed a rocket with it, with a bug related to type conversion in a language renowned for its typing system (not Ada's fault, of course, and especially not its typing system's fault -- just making a point about safety features vs. safety guarantees).

A more complex language, with more features, is inherently harder to verify. The tools lag behind more and are more expensive. And, upfront, it seems to me that it is much harder to reason about the code. C++ does not have just safety features, it has a lot of features, of all kind.


> I do think (possibly for the same reason) that it is harder to write correct C++ code than it is to write correct C code

I'd disagree. Modern C++ has much better safety features than C ever has.


The "safety" is not the issue, it's the compulsion of using many layers of (somewhat leaky) abstractions that make debugging and otherwise reasoning about behaviour difficult.


I could not agree with this more.

gstreamer, gtk, etc, are really easy to work with and browse the source.

This is why I love golang as well.

Side note, it is funny how much gstreamer and glib try to add C++-ish features to C.


I was going to say +1 with golang, and you said it. So just emphasizing that many people prefer code clarity over cool things that make code unreadable (templates...)

Interestingly, I just found Linus' stance on Golang :) https://www.realworldtech.com/forum/?curpostid=104302&thread...


From that post... But introducing a new language? It's hard. Give it a couple of decades, and see where it is then.


> 0 impact on runtime (and actually sometimes helps decreasing code size)

Beware, performance and code size do not always go hand in hand!

https://channel9.msdn.com/Events/Build/2013/4-329



