I’m not sure the number of hardware architectures matters that much, as opposed to the number of OSes or compilers. If you have to target N different C++ compilers written independently by different vendors, then sure, each compiler has a heck of a lot of room for bugs in the frontend or standard library. But with a given compiler (say, GCC or Clang) that has N architecture backends, there shouldn’t be that much added risk of architecture-dependent bugs with C++, compared to pure C.
After all, most of the “extra stuff” that C++ adds on to C has no inherent hardware dependence and exists only on the frontend. By the time the code makes its way to the backend, templates have been monomorphized (i.e. copied and pasted for each set of parameters); methods have been lowered to functions with added ‘this’ parameters; hidden calls to destructors and conversions have been made explicit; and so on. For any given C++ program, you could write a C program that compiles to near-identical IR, so if the backend can reliably compile C code, it should be able to handle C++.
True, that hypothetical C program might not look much like a typical human-written program. In order to achieve zero-cost abstractions, C++ code tends to rely a lot more on inlining and other backend optimizations than pure C does, even though those optimizations apply to both languages. So if the backend isn’t reliable, C++ code may compile to more ‘weird’ IR that ends up getting miscompiled, and the extra layers of indirection can make it harder to figure out what went wrong. But compiler backends these days are a lot more reliable than they used to be. And buggy backends can certainly cause problems for C code as well.