Hacker News

After going through several love-hate cycles with C++ over my career, I must say I have a kind of admiration for what its authors are trying to achieve.

It took me a while to understand why every iteration of C++ brings so many side-effect horrors (often Turing-complete, or close to it). The reason is that C++ is more than a language: it is part of a genuinely philosophical quest about what language design can be.

It is about trying to bridge the language of the machines and the language of human abstractions as closely as possible. In itself it does not necessarily lead to the best language possible, but it explores an interesting limit.

One can use higher-level languages like Haskell or Prolog, or even slightly lower-level ones like Ruby or Python, and write nice abstractions. By doing so, however, the programmer typically loses track of what the machine implementation will be.

C++ strives to be a language where you can still feel what the machine will actually implement while you code high-level abstractions.

That is their goal, that is their quest. Many side effects pop up along the way, but their effort is commendable.

That so many people still use it is impressive but, I think, irrelevant to the priorities they typically set.




> The reason is that C++ is more than a language: it is part of a genuinely philosophical quest about what language design can be.

> It is about trying to bridge the language of the machines and the language of human abstractions as closely as possible. In itself it does not necessarily lead to the best language possible, but it explores an interesting limit.

> C++ strives to be a language where you can still feel what the machine will actually implement while you code high-level abstractions.

This has been false ever since the earliest days of ANSI C. The C and C++ standards define an abstract machine that is quite far from any machine that exists today. Type-based aliasing rules, to name one important example, have almost no counterpart in real hardware.

It's quite enlightening to read the description of LLVM IR [1] and observe how far it is from anything a machine does. In fact, LLVM IR is quite a bit lower level than C is, as memory is untyped in LLVM IR without metadata: this is not at all the case in C.

In reality, C++ is an attempt to build a high-level language on top of the particular abstract virtual machine specification that happened to be the accidental byproduct of a consensus process among hardware/compiler vendors in 1989. It turns out that this has been a very helpful endeavor for a lot of people, but I don't think we should claim that it's anything more than that. There's nothing "philosophically" interesting about the C89 virtual machine.

[1]: https://llvm.org/docs/LangRef.html


I don't see how type-based aliasing negates anything of what I am saying. Types are an abstraction, but C++ allows you to take an object's pointer, apply sizeof() to it, cast pointers, and do pointer arithmetic.

You can go low level with most high-level languages; what sets C++ apart is that using low-level instructions generally has zero overhead. ptr2 = static_cast<uint8_t*>(ptr) + 5; maps pretty much to the assembly code you would expect.


> Types are an abstraction, but C++ allows you to take an object's pointer, apply sizeof() to it, cast pointers, and do pointer arithmetic.

And the semantics of those operations are dictated not by what the underlying machine does but by what the C++ abstract machine does, which is very different. In the abstract machine, memory is still typed; on real hardware it is just bytes.

> You can go low level with most high-level languages; what sets C++ apart is that using low-level instructions generally has zero overhead.

That's true for the JVM, .NET, even JS...

> ptr2 = static_cast<uint8_t*>(ptr)+5; pretty much maps to the assembly code you would expect.

No, it doesn't, not after optimizations and undefined behavior. It's perfectly acceptable for the compiler to turn that into a no-op if, for example, "ptr" was a null pointer.


OTOH it has become more true over time, because C and C++ based hardware and software platforms became dominant and processors are now designed to just run C(++) as well as possible.

CPUs no longer commonly implement integer overflow checking, granular hardware-based memory-safety primitives, efficient user-level exception handling, etc.


I guess two other big examples are sequence points and UB.


Even though it's not the real machine, it's still a lot closer to it than the higher-level languages the parent commenter mentioned, as they said.


Once you have the concept of user-defined aggregate types that are meaningful to the semantics of your VM (which C and C++ definitely do), then you're pretty far away from the machine. The difference between the C VM and the .NET VM is not nearly as wide as the difference between the native instruction set and the C VM.


That's the point: C++ lets you go pretty high on the abstraction side, but it gives you all the tools to map how the high-level abstractions are implemented on the low-level side.


While that is true, and I am a big C++ fan, given it was my path after Turbo Pascal, it is exactly because I am a fan of Wirth and Xerox PARC languages that I also know there are better paths.

Examples include the work being done on .NET Native, C#'s roadmap up to version 8 picking up Midori's work, D, Swift, Java 10, and so on.

C++ got this position because the others stopped caring about those issues.

Now that they have finally started caring about them, let's see what the future holds.


Oh, like I said, I have a love-hate thing with C++. Nowadays I mostly do C# and Python, with a bit of Java. I only rely on C++ when I have to do brutal per-pixel computer vision algorithms.

In most cases you can rely on libraries to do the heavy lifting (and then I stick to Python or C#), but sometimes you have to take the big C++ hammer and try not to hit your fingers with it.


I see, that is similar to me.

I spend most of my time nowadays between Java and .NET languages, with some JavaScript when doing Web stuff.

Diving into C++ means I am doing OS integration work, like Android's NDK, COM, UWP APIs exposed only to C++, CLR and JVM native APIs.

So I guess I kind of share the same love-hate thing as you.


I watched a talk once from CppCon where the presenter was showing different bits of code and asking the audience if it was a 0-cost abstraction or not. This was at CppCon and there wasn't a consensus.

Sure it's possible to do high level things while knowing what the machine will end up doing but if you have to be a guru to be able to do so... then can you really say that?

I'd say "C++ strives to be a language where expert C++ compiler writers can feel what the machine will actually implement while you code high-level abstractions (provided you're using the compiler you wrote yourself)"


There is little value in knowing if a given line of code translates into machine instruction X or Y or into none at all. Simply write readable code and trust the compiler until you have a reason to do otherwise. In aggregate your code will be fast (not the fastest).

Once you measure a performance issue you can focus in on one part or algorithm and map abstractions onto hardware, which can't be done in many languages. Because hardware is different this process is often different. Even between two members of the x86 family there are different ways things can be optimized and the compiler puts a bunch of reasonable defaults in. Once you know the sane defaults are not good enough you can trade effort for efficiency as much as you want and keep getting gains for a long time.

Trying to follow the machine code of each C++ build is like trying to understand the JVM bytecode of each Java build. I don't know why so many care, because you only need to do this on occasion. I suspect the culture of performance causes people giving talks to focus on this, and then people consuming talks place disproportionate value on it.

This is clearly happening with constexpr and template metaprogramming now. Few people really need them, but there is a huge focus on them because a few smart people needed them to solve their problems, and those people are also prolific speakers, bloggers, and authors. Where are all the talks on best practices around smart pointers, thread-synchronization tools, class composition, and other common issues that affect correctness, design, or other aspects of creating software? I want to see more on hazard pointers and other pooled smart pointers, which are a real replacement for general-purpose garbage collection, but the community prefers to talk about compile-time HTML parsing (not that these are mutually exclusive, but there is only so much attention).



