
"As clean and direct as any other modern language ..."

LOL

Honestly I needed to read the C++98 code to understand the modern one. Yes, C++ has advanced but the inherited waste is obvious.

Do other modern languages still operate with pointers?

http://words.steveklabnik.com/pointers-in-rust-a-guide




In your blind love for Rust you seem to forget that it in fact operates with pointers - everywhere.

The thing is that Rust either tracks the lifetime of a naked pointer (similar to std::unique_ptr), or you wrap pointers in objects which facilitate reference counting to track the lifetime (similar to std::shared_ptr).

In the end both kinds operate on generic, unsafe raw pointers - safe only thanks to ownership tracking and "boxing".
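
Roughly, in Rust terms (a minimal sketch; Box and Rc are the standard library types, everything else here is just illustrative):

    use std::rc::Rc;

    fn main() {
        // Box<T>: single owner, freed when it goes out of scope --
        // the role std::unique_ptr plays in C++.
        let unique = Box::new(42);
        println!("boxed value: {}", unique);

        // Rc<T>: shared ownership via reference counting --
        // the role std::shared_ptr plays in C++.
        let shared = Rc::new(String::from("hello"));
        let another = Rc::clone(&shared);
        println!("refcount: {}", Rc::strong_count(&another)); // prints 2

        // Both are plain pointers underneath; the safety comes from the
        // ownership rules the compiler enforces around them.
    }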

The only difference here is that in Rust safety is thankfully the default and you opt out of it, while in C++ you unfortunately have to opt in.

But yeah... I'm eager to hear your explanation of how Rust doesn't use those stupid pointers. Maybe you can create a new computer architecture too? I mean, since x86 uses pointers and stuff. Maybe we should use garbage collected languages for that, huh?


In the end everything is "unsafe" assembler. "It's all unsafe if we penetrate the abstractions!" is basically "Everybody's naked under their clothes!" Yes, true, but profoundly irrelevant.

Safety is added by the higher layers. If Rust has pointers that are ownership tracked, then it is incorrect to think of them as raw pointers; they are safer than that. Safety is attained by creating compilers/runtimes/interpreters that can not be convinced to execute certain patterns of assembler code, or perhaps require rather explicit labeling.
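
To make that concrete (a small sketch, assuming Rust as the example of such a compiler):

    fn main() {
        // This doesn't compile -- the compiler cannot be convinced to
        // hand back a reference that outlives the local it points at:
        //
        //     fn dangling<'a>() -> &'a i32 {
        //         let x = 5;
        //         &x   // error: cannot return reference to local variable `x`
        //     }

        // The "explicit labeling" is the unsafe block: the dangerous
        // pattern has to be spelled out before the compiler accepts it.
        let x = 5;
        let p = &x as *const i32;
        let y = unsafe { *p }; // dereferencing a raw pointer requires unsafe
        println!("{}", y);
    }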


> In your blind love for Rust you seem to forget that it in fact operates with pointers - everywhere.

I don't use Rust at all :-) I favor Nim, Haskell and Lisp. I am just pointing to Rust as a better alternative to C++14.

> I'm eager to hear your explanation of how Rust doesn't use those stupid pointers.

Only in embedded systems, where direct hardware access, memory and performance are issues, do pointers make sense. In all other cases pointers are bad programming style.

http://words.steveklabnik.com/pointers-in-rust-a-guide


I don't use Rust at all :-) I favor Nim, Haskell and Lisp. I am just pointing to Rust as a better alternative to C++14.

This is some quality trolling. How can you recommend something that you don't use?


I used Rust for a while. _Now_ I don't use it because I discovered other languages that work better for my current applications.


I think what makes C++ so challenging for me isn't C++98, C++11, C++14 or whatever comes next. The challenge for me is that I need to know the superset of all the features (and past 'features') at the same time, doubly so to interoperate with existing code or libraries.

The more that gets layered on without taking anything away, the more difficult it is to reason about your code, let alone anyone else's. And without taking anything away, all the cool new features can't defend you at all; they're just more rope.

The challenge for me is just how much context C++ forces me to hold in my head at any given time. And thanks to operator overloading, I can't trust anything I see -- when you can change the meaning of the comma operator, any piece of code can do literally anything other than what it looks like.

Ultimately, I stopped developing in C++ because there's just too much cognitive load for me. In a professional setting you're not coding for yourself, you're coding for you 6 months from now, and that guy has no idea what kind of tricks you were playing back in the day.


> And thanks to operator overloading, I can't trust anything I see -- when you can change the meaning of the comma operator, any piece of code can do literally anything other than what it looks like.

I always laugh at this complaint specifically against C++.

Except for Go and Java, all other mainstream languages allow for either operator overloading or symbolic names for functions/methods.

Even JavaScript ES7 might get them, http://51elliot.blogspot.de/2015/01/fluent-talk-summary-bren...


There's a difference between overloading `+`, `-`, `<<` (which must obviously run custom code when applied to custom types), and overloading `,`, copying, assignment, moving, and tons of other "already works on everything" operations.

Both can cause problems when used inappropriately, but the latter is just chaos.
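
For what it's worth, this is roughly where Rust draws that line (a sketch; the Vec2 type is made up for illustration):

    use std::ops::Add;

    #[derive(Clone, Copy, Debug)]
    struct Vec2 { x: f64, y: f64 }

    // Overloading `+` for a custom type -- the "must obviously run
    // custom code" case -- is done through a trait.
    impl Add for Vec2 {
        type Output = Vec2;
        fn add(self, rhs: Vec2) -> Vec2 {
            Vec2 { x: self.x + rhs.x, y: self.y + rhs.y }
        }
    }

    fn main() {
        let a = Vec2 { x: 1.0, y: 2.0 };
        let b = Vec2 { x: 3.0, y: 4.0 };
        println!("{:?}", a + b);
        // There is no trait for `,` or `=`, so the "already works on
        // everything" operations can never be redefined to run user code.
    }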


How would you handle deep vs. shallow copying and assignment for classes without overriding those operators?


My answer to that is "you shouldn't want to deep copy on assignment". It's a performance trap and makes the code harder to understand. Without pervasive copy/assignment/move ctors you would be surprised to see how little you actually want to explicitly invoke them (use cases may vary, of course).

Sadly this is the path that C++ has taken.


Shallow copy on assignment is dangerous because it leads toward double frees and therefore chaos in your heap.

Ironically, the original topic (upthread quite a ways) was pointer safety...


Yes, backwards compatibility with C makes this a hard problem, but this is exactly what affine typing fixes: assignment is a shallow copy, but the old copy becomes inaccessible.
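
Concretely (a minimal sketch of that behaviour, using Rust's move semantics as the affine-ish example):

    fn main() {
        let a = String::from("owned heap data");

        // Assignment is a shallow (bitwise) copy of the String's
        // pointer, length and capacity...
        let b = a;

        // ...but the old binding is now statically inaccessible, so the
        // shallow copy can never turn into a double free.
        // println!("{}", a); // error: borrow of moved value: `a`
        println!("{}", b);
    }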


I don't think it's "backward compatibility with C". It's that C++ deliberately lets you go down to raw pointers. This lets you write things like memory allocators in C++. Yes, that makes it a hard problem to do only shallow copy, which gets us back to overloading assignment operators.

It sounds like you want C++ to be something other than it is. That's fine, for you. Use something else. But C++ is the way it is for a reason. A lot of people find that the parts you don't like make it a more useful tool for the things we actually do.


> I don't think it's "backward compatibility with C". It's that C++ deliberately lets you go down to raw pointers.

Being able to be copy-paste compatible with C is "backward compatibility with C".


Yes, C++ is that (almost). But that attribute of C++ is not the source of the problems that we're talking about here.

I mean, yes, in a sense it is, in that C had shallow copy of structs (IIRC), but... let's review, shall we?

pjmlp whined about operator overloading, about how it made code difficult to understand. Gankro specifically singled out copy and assignment overloading as being unnecessary and confusing. jawilson2 raised the question of deep vs. shallow copying (implying that you need to overload copy and assignment in order to easily do deep copy). Gankro replied that you shouldn't want to do deep copy on assignment. I pointed out the problem with his/her view, namely, possible memory corruption. Gankro replied, blaming this on backward compatibility with C. I explained that it's not backward compatibility per se that creates the problem, it's the ability to have pointers. My point was that C++ was deliberately going to have pointers - it wasn't just because of backward compatibility. You argue that C++ is in fact backward compatible. In the context of the conversation, so what? We're not questioning whether C++ is C-compatible. It is, but that's not the point.

The point is, even if C++ were deliberately not C-compatible, if it let you play with pointers (and given the intent of C++, it would) it would still have the issue of shallow vs. deep copy, and overloading assignment and copy would still be the solution.


You have completely ignored the actually interesting point of my argument, and focused on pointless details. In particular, my statement on affine types. Or, in C++ terms, move semantics by default, instead of copy semantics by default.

Raw pointers don't deep copy (what would that even mean?) so clearly you're only talking about structs that impose special semantics on raw pointers. If those structs were affine (assignment was a move which marked the old copy as unusable), then there would be no need to overload assignment for that facet of safety. They would just be in a different place, which would be fine.

The only really special case is having a pointer into yourself (really into yourself, not just into some fixed heap location you own -- basically, caring about your own location in memory). This is the only thing, as far as I can tell, that C++'s approach gains over affinity. Personally I consider it a bit of a nightmarish thing to do without GC (it's a bit nasty even with it TBH), but I know all too well that people will demand anything and everything; especially once they're used to it. Although this wouldn't completely preclude this pattern, it just means you can't wrap it up in a pseudo-safe way -- particularly if you want to pass it to arbitrary client code.

However this would be a massive break from C, which is copy semantics all the way down. I suppose C++ could have further given the `class` keyword meaning by making all classes affine. I dunno if people would have tolerated that kind of difference at the time. Sort of a soft breaking change, yaknow? Then again, maybe overloading `=` is already a breaking change in that regard.

shrug

language design is hard


True, I was not addressing the main point of what you said, and was quibbling about your wording on a side point.

So before C++ assignment and copy operator overloading, you'd have a C struct, and assignment was a bitwise copy. If the original is destroyed after that, you're safe. If it's not, and the struct contains a pointer to allocated memory, you're going to have trouble unless you're very careful about who owns the memory.

But even then, you had situations where you wanted to truly make a (deep) copy. That took a call to a function that knew the structure, and which parts to deep copy. So the need to do that was still there.

So if we move to affine types or move semantics or whatever, the need for deep copy doesn't go away. If you don't have an overloaded assignment operator or copy operator, you're going to have a function of a different name that does the exact same operations. So the need for both operations doesn't go away. But the move/affine approach does make the double-free problem go away (as does the current C++ overloaded copy approach).

> language design is hard

Yeah. Somebody wants every possible action to be doable, and different people want different actions to be easy or the default. Nobody's going to be completely happy.


I would personally argue that a deep copy necessitating an explicit function call is a good thing, though. It makes potentially expensive operations obvious, and improves your ability to reason about program behaviour.


> pjmlp whined about operator overloading, about how it made code difficult to understand

Learn to read, shall you?

I am 100% in favour of operator overloading.


I never said you weren't. I said that your statement about backward compatibility, while true, was completely irrelevant to the conversation.

[Edit: I take it back. I did say you whined about operator overloading, when in fact you were quoting arcticbull.]


Ok


... however, Rust's semantics specifically prevent this double free.


Which, if we were talking about Rust here, might be relevant. But in this thread of the conversation, we're talking about C++ operator overloading and why you need it for deep copying on assignment. In the immediate context, how Rust does it is not relevant.


The discussion is about strategies to avoid the double-free problem in a hypothetical "C++ with changes", and Rust's is one example. Seems fine and useful to give examples. (That's what Gankro's comment is too: Rust's strategy is affine types.)


Could be I missed something, but I looked all the way back to the root of this thread, and I didn't see (in any of the direct ancestors) anywhere where the topic was a hypothetical "C++ with changes".


In Rust, the = operator always represents a shallow copy (for types with move semantics, it will also statically invalidate the source) and there's no way to override this or otherwise permit user-defined code to run. Deep copying is done via a standard method, `.clone()`, which makes it obvious that user-defined code is running.
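
A small sketch of that in practice (the Config type is just an example):

    #[derive(Clone, Debug)]
    struct Config {
        name: String,
        values: Vec<i32>,
    }

    fn main() {
        let original = Config { name: "default".into(), values: vec![1, 2, 3] };

        // `=` is always a shallow copy; for a non-Copy type like this it
        // is also a move, so `original` is statically invalidated.
        let moved = original;

        // A deep copy is always a visible method call.
        let copied = moved.clone();

        println!("{:?} {:?}", moved, copied);
    }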


The languages that allow for symbolic names also apply them to built-in types.


That's all true. But there's a dilemma we can never completely avoid. The more expressive power a language gives you, the more productive you can be when you really know the code in and out. The same expressive power will make it more difficult for others or for your future self to understand what the code does.

It's always going to be a trade-off. The more focused you are on one piece of code, the more it makes sense to use an expressive language. The more you switch between projects, the greater the developer turnover, the dumber the code needs to be in order to stay close to maximum productivity.

If we really want to have an impact on productivity, we need to change how teams are put together and stay together.


What do you mean by having to "read the C++98 code to understand the modern one"? Isn't this how "here is the new way of doing it" examples work? As long as you don't know it, of course you will need to read whatever version you do know. The point is that once you do learn it, you can use it.

And "inherited waste is obvious"? If you mean to say that it carries a lot of historic luggage, and you feel encumbered by it, then geez, the document in question was almost tailored for you.

C, D, Go also have pointers available. And, to be honest, thinking that pointers are "obsolete" suggests lack of understanding.

You know what, all your points seem to be against C++. A typical language evangelist.


> And, to be honest, thinking that pointers are "obsolete" suggests lack of understanding.

Who lacks understanding? Rust proves that pointers are not actually necessary anymore.

> You know what, all your points seem to be against C++. A typical language evangelist.

What is wrong with criticizing C++? I was a professional C++ developer for a long time, but now I am really glad that I don't need to maintain any C++ anymore. In the beginning C++ was a real joy to program in, but now it has grown into something I don't like anymore.

There are other really modern languages that are much cleaner and almost as performant as C++. Rust and Nim, for instance. In Nim I am much more productive than in C++.


Rust has pointers all over the place - one of the core features is memory safety with pointers.


Pointers aren't the problem, null pointers are. Nullity (or the guaranteed lack thereof) should be baked into your type, and you should be forced to handle it as necessary when accessing a resource.
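
Rust's Option is one example of that (a sketch; find_user is a hypothetical function):

    // Absence is a distinct type, and the compiler forces you to handle
    // it before you can touch the value.
    fn find_user(id: u32) -> Option<String> {
        if id == 1 { Some("alice".to_string()) } else { None }
    }

    fn main() {
        match find_user(42) {
            Some(name) => println!("found {}", name),
            None => println!("no such user"), // omitting this arm is a compile error
        }
    }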

That said, C++ is moving in that direction, too. For me the difficulty is that to maintain backwards compatibility the compiler can't force you to go with the language, and you can defeat the protections simply by not using them or by casting them away. It's the accrued years of cruft and never telling people to stop doing things the way they have been that encumber C++.

IMO I'd favor a clean break: C++next should just drop support for all the old, unsafe ways of doing things. If you want to keep doing things the old way, stick to C++17. Moving forward there should only be one way to do things -- the safe way.


> Pointers aren't the problem, null pointers are.

There is also aliasing, nonobvious shared state, and ambiguous object ownership.

Getting rid of pointers is a no-go if you're interested in writing systems software. You need pointers just like you need goto and inline assembler. You do want to make doing unsafe things possible, but it needs to be contained and more obnoxious than doing the right thing.

Rust has one approach to this. Not many other modern languages actually want to let you point to arbitrary memory and start flipping bits, which is exactly what you need to be able to do to write a driver.
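
Roughly what that containment looks like in Rust (a sketch; a local variable stands in for a memory-mapped register so the example actually runs):

    fn main() {
        // Stand-in for a device register; in a real driver this address
        // would come from the hardware documentation.
        let mut fake_register: u32 = 0;
        let reg = &mut fake_register as *mut u32;

        // Flipping bits through a raw pointer is possible, but it has to
        // be wrapped in `unsafe` -- contained, and more obnoxious than
        // doing the safe thing.
        unsafe {
            reg.write_volatile(reg.read_volatile() | 0b1);
        }

        println!("register = {:#b}", fake_register);
    }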


If C++ dropped C compatibility, I'm not sure there'd be any reason to use it. You might as well just switch to Rust at that point.


Hi. Please note that that blog post has a very prominent "Rust versions 0.8 - 0.9" at the top. Almost nothing in that post is accurate at this point, and even the stuff that is accurate uses different terms now.

Also, if you don't use Rust and do use Nim, please use Nim in these examples in the future rather than Rust. Otherwise you make inaccurate comparisons, like you do here.


You can just as well write large pieces of C++ without using pointers or heap allocations at all. In fact this is considered 'good style'. As soon as you have stuff living on the heap, Rust also needs (smart) pointers to track these heap objects, and the compiler is clever enough to guarantee compile-time safety (although I guess this is the main reason why compilation in Rust is so slow), while in C++ these checks are done at run-time (of course compile time checks would be better).


Tracking ownership doesn't make the Rust compiler slow. Compiling Rust is already faster in general than compiling C++ (where the lack of a module system and code explosion from templates is absolutely killer), with the caveat that the Rust compiler hasn't yet implemented incremental recompilation and so subsequent compilations can be slower than subsequent compilations in C++ if your C++ build system is reusing artifacts intelligently. Incremental compilation in Rust is slated for after the middle-end overhaul which is currently ongoing.


Hmm ok, this has probably increased a lot since I last dabbled with Rust. This was about a year ago, and the speed reminded me of a C++ static analyzer, which kinda made sense to me :)

Even in C++ land there are massive differences in compiler speed. Clang on Linux/OSX is easily 10x faster than the Visual Studio compiler without advanced tweaks (however, for some reason, Clang on Windows is also very slow).


Yes, when I talk about C++ compiler speed I generally mean Clang, since it's a widely-available high-quality implementation with compiler speed as an explicit priority. In the context of Rust it's doubly relevant since they both use LLVM. Compiler speed wasn't an explicit priority for the Rust developers in the run-up to 1.0, but these days it's essentially top priority (hence the aforementioned middle end overhaul, though it has more benefits than just incremental recompilation), and the speed of the compiler has already more than doubled since 1.0.


> Honestly I needed to read the C++98 code to understand the modern one.

Was that your first time reading C++14? If so, are you surprised that there was some learning curve?

> Do other modern languages still operate with pointers?

As much as I'm rooting for Rust to succeed, C++ will be around for quite a while longer so it might be a good idea to make it a bit friendlier.


The problem with modern C++ is that people stick to their programming styles. Many will use some of the new good features but they will also use the old ones (pointers!) because it's so easy to use them. Rust however forces people to accept new programming styles because the legacy ones (pointers!) are not appreciated.


Pointers are great for saving resources, e.g. in embedded systems. Don't underestimate them.


In embedded systems pointers make some sense, but not in usual applications. This is how Rust can work:

http://mainisusuallyafunction.blogspot.de/2015/01/151-byte-s...


C++ is the monkey's paw of programming languages. Sure, it'll grant your wishes, but you'll end up paying some terrible price for it in the end.


How is this different from every other language? At least C++ is being maintained gracefully, so you can update your code with newer ways of doing the same thing and it still works, versus a language that is dead, where you're stuck either leaving it or reimplementing the program completely in something else. Or you're using a language that is new and version 0.6 completely breaks 0.5, so you can't ever upgrade without rewriting parts of it. And yeah, you should upgrade, because 0.5 has some horrible security vulnerability.



