What will C++17 be? (docs.google.com)
122 points by g1236627 on April 26, 2015 | 82 comments




Ranges are the killer feature for me. They'll actively improve my everyday code.


You mean the ranges that already exist?


Assuming that the ranges that already exist are the ones that get incorporated into the C++17 standard, yes.
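
For the curious, a minimal sketch of the kind of code they enable, using Eric Niebler's range-v3 library that the proposal grew out of (the exact namespaces and names in the eventual standard may differ):

    // Assumes range-v3 (https://github.com/ericniebler/range-v3) is on the include path.
    #include <range/v3/all.hpp>
    #include <iostream>
    #include <vector>

    int main() {
        std::vector<int> v = {1, 2, 3, 4, 5, 6};

        // Lazily filter and transform; no explicit begin/end iterator pairs,
        // no intermediate containers.
        auto evens_squared = v
            | ranges::view::filter([](int i) { return i % 2 == 0; })
            | ranges::view::transform([](int i) { return i * i; });

        for (int i : evens_squared)
            std::cout << i << ' ';   // prints: 4 16 36
    }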


Pattern matching might be similar to this: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n344...

Source code (implemented with a lot of help from C macros): https://github.com/snaewe/typeswitch
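
For anyone wondering what this is for, here's a rough sketch of the dynamic_cast chain (or hand-written visitor) that a language-level type switch is meant to replace; Shape, Circle and Square are made-up types purely for illustration:

    #include <iostream>

    struct Shape  { virtual ~Shape() = default; };
    struct Circle : Shape { double radius = 1.0; };
    struct Square : Shape { double side   = 2.0; };

    // Today: a chain of dynamic_casts. A type switch / pattern match would let
    // you express this dispatch in a single construct, checked by the compiler.
    void describe(const Shape& s) {
        if (auto c = dynamic_cast<const Circle*>(&s))
            std::cout << "circle, r = " << c->radius << '\n';
        else if (auto q = dynamic_cast<const Square*>(&s))
            std::cout << "square, side = " << q->side << '\n';
        else
            std::cout << "some other shape\n";
    }

    int main() {
        Circle c; Square q;
        describe(c);
        describe(q);
    }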


LtU posts about concepts being rejected for c++0x :

http://lambda-the-ultimate.org/node/3518 (voted off) http://lambda-the-ultimate.org/node/4450 (suggestions for future work)


There were two fundamentally different proposals for concepts:

Indiana— concepts are records of signatures; checking is done "by signature"; an arbitrary mapping (adapter) can be defined. Notable authors: Doug Gregor, Jeremy Siek, Jaakko Jarvi, a lot of others I'm insulting by forgetting.

Texas— concepts are predicates of "actions" (usage of signatures; expressions). Notable authors: Bjarne Stroustrup, Gaby Dos Reis; later, Andy Sutton.

The crux of the issue is that Doug Gregor actually implemented the Indiana proposal (twice?) and Gaby never implemented the Texas proposal. By 2009, the Indiana proposal was well on track to being accepted; then ... it wasn't.

Fast forward 5 years, and now we've got a re-imagining of the Texas proposal, with no serious dissent, as all the Indiana folks moved on (out of exasperation, frustration; age; interest, whatever).
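
To make the distinction concrete, here's a hedged sketch. The usage-pattern (Texas) style, roughly as it resurfaced in Concepts Lite and the later C++20 syntax, checks that certain expressions are valid; the Indiana design instead declared required signatures by name (the commented-out part is from memory, not exact draft syntax):

    // Texas / Concepts Lite style: a concept is a predicate over usage patterns.
    template<typename I>
    concept Incrementable = requires(I i) {
        ++i;     // what is checked is that these expressions are well-formed,
        i++;     // not that specific named signatures exist
    };

    // The Indiana (C++0x) design declared the required signatures instead, roughly:
    //
    //   concept Incrementable<typename I> {
    //       I& operator++(I&);
    //       I  operator++(I&, int);
    //   }
    //
    // plus explicit concept_maps to adapt types that met the semantics but not
    // the exact signatures.

    template<Incrementable I>
    void advance_by(I& i, int n) {
        while (n-- > 0) ++i;
    }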


To be fair, Stroustrup has made an effort to explain his reason for encouraging the Committee to drop the Indiana proposal ( http://www.drdobbs.com/cpp/the-c0x-remove-concepts-decision/... ).

He's also mentioned that the Indiana proposal led to increased compile times (as in at least 100% slower) and that the Committee came up with a ridiculous number of concepts for the standard library, which suggested they were looking at things wrong. For instance, there's little value in having CanCompareForEquality, CanCompareForInequality, HasLessThan, HasGreaterThan, HasLessThanOrEqual, and HasGreaterThanOrEqual be separate concepts; they should be grouped into, say, HasTotalOrdering, EqualityComparable, and HasPartialOrdering.

The current STL gets this wrong: it wants a concept of HasPartialOrdering, but it actually requires HasLessThan (and fakes equality comparison by assuming that if a is not less than b and b is not less than a, then a and b must be equal). I don't fault Stepanov for this mistake; it's not obvious, and it's relatively easy to tell people "just implement operator< for your types and we'll be able to sort them when needed" instead of "implement the relational operators that make sense; some implementations may use operator< to sort, while others may use operator>, others operators < and <=, and yet others operators <, !=, and ==, etc."
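
To put the grouping argument in code (hypothetical concept names, written in requires-expression syntax rather than the C++0x draft syntax):

    #include <concepts>   // std::convertible_to

    // The fine-grained, one-concept-per-operator style being criticized:
    template<typename T> concept HasLessThan =
        requires(T a, T b) { { a < b } -> std::convertible_to<bool>; };
    template<typename T> concept HasGreaterThan =
        requires(T a, T b) { { a > b } -> std::convertible_to<bool>; };
    // ...and so on for <=, >=, ==, != ...

    // Coarser concepts that actually carry meaning:
    template<typename T>
    concept EqualityComparable = requires(T a, T b) {
        { a == b } -> std::convertible_to<bool>;
        { a != b } -> std::convertible_to<bool>;
    };

    template<typename T>
    concept TotallyOrdered = EqualityComparable<T> && requires(T a, T b) {
        { a <  b } -> std::convertible_to<bool>;
        { a >  b } -> std::convertible_to<bool>;
        { a <= b } -> std::convertible_to<bool>;
        { a >= b } -> std::convertible_to<bool>;
    };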



Why is uniform call syntax a good addition? What is the benefit?

I only ever remember hearing of one programming language supporting it, and I can't even remember what it is. Maybe it was Nim?


It can simplify templates by allowing you to treat free functions and class methods the same way.
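
Roughly, the idea in the unified call syntax papers is that f(x, args...) and x.f(args...) would name the same set of candidates. A hedged sketch with made-up types (Buffer, Packet, payload_bytes):

    #include <cstddef>
    #include <vector>

    struct Buffer {
        std::vector<char> data;
        std::size_t size() const { return data.size(); }             // member function
    };

    struct Packet { char payload[64]; };
    std::size_t size(const Packet& p) { return sizeof(p.payload); }  // free function

    // Today generic code has to pick one spelling (or add trait/ADL dispatch):
    template<typename T>
    std::size_t payload_bytes(const T& t) {
        return size(t);   // finds the free function, so this works for Packet but not Buffer
    }
    // Under uniform call syntax, size(t) (or t.size()) could resolve to either the
    // member or the free function, so one template would cover both kinds of type.

    int main() {
        Packet p{};
        return static_cast<int>(payload_bytes(p));   // 64
    }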


I see. Thanks for the reply.


Side note:

C++ should do something to address binary interoperability and ABI issues.

This would be hard, as some of this lies outside the scope of language designers and specifications. The ball largely rests with compiler and platform developers.

But there are things that could be done.

A common pattern to make C++ libraries usable from languages like Java, Swift, Ruby, Python, etc. without encountering DLL hell issues or language impedance mismatch is to "downgrade" the C++ API into a plain C API. This is done by providing plain C wrappers.
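
A minimal sketch of that wrapper pattern (widget and its functions are hypothetical names): the C++ type hides behind an opaque handle, and only extern "C" functions cross the boundary, so foreign languages can bind it through their plain-C FFI.

    /* widget.h -- the C-facing header other languages bind against */
    #ifdef __cplusplus
    extern "C" {
    #endif

    typedef struct widget widget;              /* opaque handle */

    widget* widget_create(void);
    void    widget_add(widget* w, int value);
    int     widget_count(const widget* w);
    void    widget_destroy(widget* w);

    #ifdef __cplusplus
    }
    #endif

    // widget.cpp -- the C++ implementation behind the handle
    #include <vector>

    struct widget {                            // the real C++ type never crosses the ABI
        std::vector<int> items;
    };

    extern "C" widget* widget_create(void)           { return new widget{}; }
    extern "C" void    widget_add(widget* w, int v)  { w->items.push_back(v); }
    extern "C" int     widget_count(const widget* w) { return static_cast<int>(w->items.size()); }
    extern "C" void    widget_destroy(widget* w)     { delete w; }

    // A production wrapper would also catch exceptions at this boundary and turn
    // them into error codes, since C++ exceptions must not propagate into C callers.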

Perhaps something could be done to either ease this process or make it unnecessary.

Another possibility would be to lean on platforms to address the issues around this. Sometimes a language with as much center of gravity as C++ can do that.


The last slide is worth reading (and the habits it lists are worth avoiding), and most of it applies to much more than programming-language committees.


I like that feature list a lot. It's going to be very hard to get those features ready, implemented, and stable by the end of 2017, though. Even assuming no issues with politics or the rest of the standardization process.

Concepts alone is looking like a very complex feature, and I was under the impression that modules were even less ready. But both are very much needed.

I do know that the committee is planning on putting more experimental library features in the experimental namespace, which should provide a nice middle ground between "not ready for standardization" and "standardized and set in stone".

I believe all of the library work for concepts will be released in such a namespace in C++17.


Correct me if I'm wrong, but isn't the whole infrastructure for concepts already there using TMP? The c++ std library even uses some of them already (for example, using a set requires a type with operator<; in other words, something similar to Haskell's Ord typeclass).

To me it feels as if c++17 is just making the compiler more aware of them, which as a result should produce better error messages (and make the code that expresses them more pleasant to write).
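
As a sketch, here's roughly what checking "a type with operator<" looks like with today's TMP emulation (C++11; min_of is a hypothetical helper), which is the kind of code concepts would make both shorter and better-diagnosed:

    #include <type_traits>
    #include <utility>

    // Hand-rolled detection of "T has operator<" via SFINAE (the void_t idiom).
    template<typename T, typename = void>
    struct is_less_than_comparable : std::false_type {};

    template<typename T>
    struct is_less_than_comparable<
        T,
        decltype(void(std::declval<const T&>() < std::declval<const T&>()))>
        : std::true_type {};

    template<typename T>
    const T& min_of(const T& a, const T& b) {
        static_assert(is_less_than_comparable<T>::value,
                      "min_of requires a type with operator<");
        return b < a ? b : a;
    }
    // With language-level concepts this collapses to something like
    //   template<typename T> requires LessThanComparable<T> const T& min_of(...);
    // and the compiler can point at the violated requirement directly.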


Concepts are significantly simpler to implement & check. I did a non-trivial amount of coding both with TMP-concepts and with ConceptGCC. ConceptGCC produced better error messages earlier, was easier to debug symbolically, and produced better text, all with less code that was easier to read and understand.


I wish 'better compiler messages' could be a language feature.

I understand that that's pretty much outside of the scope of the language designers, but I've been playing with rust lately and it's so refreshing to actually get help from the compiler, rather than just looking for the line where it failed and trying to figure out what's wrong on my own.


That's part of the reason people want 'concepts'.


"Bad Committee habits to avoid" are perfectly pointed out.


Bloated.


and when would c++ 17 come?


Somewhere in 2017.


c++11 came in 2011. c++14 came in 2014. Seeing a pattern yet?


When we'll have compiler support - that's what really matters, especially for those who use MSVC... It is 2015 and we still don't have full C++11 support.


The pattern is generally that Clang and GCC will have partial support when the standard is released, and full support a couple of years later. Intel will lag behind by a year or two after that. Microsoft will be behind by another year or two.

So, let's say that C++17 does come out in 2017. It will probably be the case that some of the features will be ready to use in GCC and Clang right away, and around 2019 or so they should have full or nearly full support. 2021 for Intel. 2023 for Microsoft.

So that gives you a range of answers. Depending on which features and which compiler, you could start using some of them immediately (or even before 2017, many of the features are implemented experimentally in advance), but if you want full support across the range of commonly used compilers, you're probably going to be waiting until the early to mid 2020s.


FWIW, Clang and GCC support all C++11 features.


Though GCC only gained full C++11 support as of 5.1, which was released this week.


I think you may be thinking of C++14, GCC has had full C++11 support since 4.8.1.


Prior to 5.1 gcc had full C++11 language support, but libstdc++ did not have full C++11 library support.


That's a fair point.


http://cpprocks.com/c1114-compiler-and-library-shootout/ disagrees:

"First, let’s look at the C++11 language features. Clang 3.3 and above, and GCC 4.8 and later have complete support, so there was no point including them in the table."

Also note that, on the C++14 front, clang supported all language features and most library features. And that was a year ago.


Yes, but I was talking about MSVC - the Microsoft Visual C++ compiler.


That's kind of like saying "You haven't read Shakespeare until you read him in the original Klingon."

Waiting for Microsoft to make a compiler seems like it ought to be orthogonal to moving the language forward, especially when non-Microsoft tool chains are already doing better with that particular language.

Microsoft has enough on their plate bringing C# and .Net to Linux and Unix, and I'd much rather they get that right than compete with GCC and CLANG. (That and Docker for HyperV and Windows Server.)


I'm obliged to make my c++ code work on a variety of platforms, and on windows this means visual studio.


But Visual studio doesn't imply Microsoft's C++. There's clang-cl (http://clang.llvm.org/docs/UsersManual.html#clang-cl), which aims to be a drop-in replacement for Microsoft's compiler (haven't used it, so I don't know its quality)

Also, Visual Studio 2015 will ship with clang (http://blogs.msdn.com/b/vcblog/archive/2014/11/12/visual-stu...). Yes, that's for Android (and, in the future, iOS) only, but it would not surprise me if it is a sign of things to come: either Microsoft starts following developments faster, or people will move to clang for development. And I doubt the current Microsoft would be bothered if their customers moved to clang, as long as they kept using Microsoft technologies (even if that's limited to running on Azure).


>That's kind of like saying "You haven't read Shakespeare until you read him in the original Klingon."

Not really. It's one of the top handful of compilers and none of them are "original" wrt the language standard.


http://blogs.msdn.com/b/vcblog/archive/2014/11/17/c-11-14-17...

This is the latest I could find; maybe it's time for an update, if so.


I'll be publishing an updated feature table soon.


C++11 was originally known as C++0x, so C++17 may yet become C++19.


Yes, and C++17 is a big one, which may well slip, while C++14 was a small one, mostly aimed at cleaning up C++11.


c++17 is currently known as c++1z.

You can use Clang in C++1z mode with the -std=c++1z option.


Let's not forget that c++11's codename was c++0x ;)


It's actually C++0B! :)


Rust, maybe?… Ok, ok, relax, I'm just sayin'.


How about making the language more accessible to newcomers by providing a sane default way of doing package management and builds. Go is a great example of both things done reasonably well out of the box.


From where I'm standing, C++ package management is already quite nicely solved by what we call package managers in the linux world. You know, portage, aptitude, etc.


They are hacks that paper over the lack of good package management. Plus they are too much extra work, and OS-specific. No build system integrates with them as far as I know.

I think what people want is something like go, where you add a single line to your code and it magically downloads and compiles the referenced library. C++ doesn't have anything like that (and I doubt it ever will to be honest).


> I think what people want is something like go, where you add a single line to your code and it magically downloads and compiles the referenced library.

Not like Go. It needs to solve versioning.


There are a few problems with this:

* It only works on Linux, not Windows or Mac OS X.

* You need additional work to make it work across more distributions.

* Linux package file formats are not distribution-neutral; there's no "package.json" that is understood by every package manager.

I think C++ needs an OS neutral package management system.


That's all well and good up to a point - but large production systems need repeatable builds that do not depend on the developer's OS version or patch level. Eventually you have to vendor every dependency.


This is what is called API and ABI stability. Most well-known C and C++ libraries have been API and ABI stable for years.

In practice, this hasn't been a problem for my C++ projects for years, except on OS X where there have been too many changes in a couple of years (gcc -> gcc-llvm -> clang, libstdc++ -> libc++).


Linux is used by 1% of all computer users on the planet.

Trying to program C++ on Windows is a horrible experience that I won't wish on my worst enemy.

Linux is nice, but it doesn't have After Effects, support for the latest ultrabooks, or Photoshop (sorry, GIMP is a joke).

Linux also constantly broke whenever I tried using it: sound, video?

Also gaming.

Anyway, the browser environment is cool because it forced everyone to stick to a single standard.

(Also, corporate uses Windows, sadly. Working with tools created by major corporations in my industry means Windows.)


But Linux is used by a vastly larger percentage of C++ programmers.

As an amateur, I've found kdenlive and audacity provide decent video and audio editing on Linux. Professionals use Mac.

For a while now, installing Linux from scratch has typically involved less fiddling with drivers than installing Windows from scratch on an empty laptop.

If you want a laptop with Linux support out of the box, you can pay for that and it will just work (e.g. http://www.dell.com/us/business/p/xps-13-linux/pd or http://www.dell.com/us/business/p/laptops.aspx?c=us&cs=04&l=...)

Gaming is a fair point, but even that might be changing with Valve's efforts.

I use Windows, Mac, and Linux; they all have their advantages, but for serious software development Linux is ahead of OS X and both are way ahead of Windows.


> Typically installing Linux from scratch has involved less fiddling with drivers than installing windows from scratch on an empty laptop for a while now.

Windows 8 has worked out of the box on every configuration I've tried, with the caveat that it installs out-of-date drivers (e.g. for high-end graphics cards). My experience with Linux (Ubuntu 15.04) is that the little-used peripherals I have don't work at all (I have a Bluetooth USB adapter and a USB capture card, both of which are plug and play on Windows Vista onwards, but for which I've yet to find working Linux drivers).


Wow. I haven't seen that troll for a few years, maybe not since the early 2000s.

Here's the thing - if you can't make linux work for you at this point in time then you really have no business being in technology, at all.


Linux desktop use may hover at a few percent. But once you look beyond the desktop and to embedded and server use the percentage is far far higher. And personally, as a Linux user since ~1995, I'd say that Linux can work quite well for all the things you mention ;-)


Maybe, but did you know that Linux is used by the large majority of computers on the planet?


You can use something like vagrant for your dev environment.


I tried using VMware; it comes with its own set of problems. It constantly gets stuck.

I will give vagrant a try though.


Perhaps you're doing it wrong? Your cross-platform OS skills don't sound very strong. Maybe you should practice some more with the different ways various OSes work.


I am so sorry for you, and all your excuses...


It's funny how strong ideological stances are among programmers.

Any comment making fun of Windows always gets upvoted, even if it's just anecdotal. My experience is also anecdotal and needs to be taken as just a data point. Linux users seem unable to stomach any constructive criticism.

I tried using Linux for a long time - the problem is that life got in the way. I need to produce lab reports and do mathematics, and Linux is a hassle for any non-programmer. Maybe when I get a decent laptop like the Dell XPS that can run Linux I will give it another try.

For now, though, I have to stick to a single ultrabook for various reasons, and Linux is not friendly to newcomers.

Just my experience.


Your downvotes are (I guess) not for criticising Linux but for doing so in a flame-baity way.


I don't understand why you can't make lab reports and do mathematics on Linux. Serious scientific papers are written using LaTeX, not Microsoft Word. And what math software doesn't run on Linux?


> Serious scientific papers are written using LaTeX, not Microsoft Word

You'd be surprised. I know labs that publish multiple high-impact papers pretty much every year, plus some more lower-impact ones, and they all use Word. Also PowerPoint for posters. Probably not the best tools for the job, but I see that often in research.


Ugh. Language-based package managers are one of the biggest misfeatures of the millennium. They're all well and good* until they have to interact with the world outside their language universe.

* I'm being very generous here.


There is an important distinction to make between package managers and project managers.

In Ruby, for example, Bundler is what I would call a project manager. It installs packages to a place only it knows (or it should know) about, and Bundler or tools that integrate with Bundler wrangle the Gem loader path to control what modules are available to load.

This way the dependencies of any number of projects live in complete isolation from one another, including several different versions of the same dependency, which is often impossible to achieve using a system package manager like apt or pacman. The end result is similar to something like virtualenv, but with (in my opinion) a friendlier user experience.

These project managers also often include project scaffolding tools and subcommands to run tests, generate documentation, etc. Examples in other languages include Leiningen or Boot for Clojure, Maven for Java, and Cargo[1] for Rust. When used properly and when the creators of a project manager understand its role, they can work in concert with a system-level package manager. Cargo, for example, interoperates beautifully with system packages with great support for linking with system libraries[2].

[1]: http://crates.io/

[2]: http://doc.crates.io/build-script.html#case-study:-linking-t...


Hm... Can you elaborate? I have nothing but good things to say about PIP (the only package manager I've used extensively).


My problem with pip: it requires a compiler like gcc to build and install most major packages, which is not a tool you want to actually have installed on the target servers.

You can do additional work to pre-build wheels or packages, but then you're stepping outside of the python and pip world and into maintaining your own repos and build servers and...


I would second your question, except pip is a really bad example. There are many problems with it, pretty well known inside the Python community and discussed many times, so I won't repeat what can be found on the internet quite easily. In fact, package managers in Python are a pretty ugly story in general, every new generation of them being a long and painful attempt to fix the problems of the previous one while unfortunately introducing new ones.

npm would be a much better example, though not perfect either. Actually, I would say all the package managers we have are quite imperfect, but it's more of a problem with the existing implementations than with package managers in general.

I mean, yeah, it would be nice if we all just started using guix or something, and wouldn't need language-specific package managers anymore, but it's not happening yet, right? And we have to live somehow until it does. And if I need a language-specific package manager to install a library whose new version with crucial fixes landed on github yesterday, while the version apt-get suggests is two years old — well, I'd better be using the language-specific package manager. Still better than compiling it manually and collecting dependencies from all over the world over the next few hours.


The world outside would be, e.g., system libraries. Python package X depends on libxml-foo or libmagick-bar-6.6.7.


Either I've never used Python libraries that would need external libraries, or PIP installs those as well. I definitely know that it can compile code as well, e.g. for installing Numpy/Pandas.


It does not. To keep to the scientific python examples you mentioned: It is a pain to install `matplotlib` with pip if you do not have the C libraries for image compression. (Still, I like pip+virtualenv quite a bit, but they are definitely insufficient when C modules are involved).


It's frustrating having the proliferation of different package managers.

Each one can work well in isolation; but when you then need to integrate with your platform package manager or another language's package manager, you get a combinatorial explosion of possible interactions.

One example that I've been particularly frustrated by recently is Python/Emacs integration. I have Emacs and Python installed via my system package manager (apt). Then there are a bunch of Emacs modes that I need to install via the Emacs package manager (elpy and its dependencies). Those in turn have dependencies on Python packages (rope/flake8). And finally I have a bunch of different python packages that I work on that are all linked to from a virtualenv so that they'll all resolve properly in all of those tools.

If I upgrade one part of that system (apt, emacs packages, python packages), it frequently breaks other parts, because there aren't any proper dependencies between them. I then have to spend a while fiddling with the whole setup to get it back up and running again.

Basically, the more package managers you have, the less value they have. The whole point of package managers is to manage a whole set of packages together, so you don't have to manually go through and resolve all of the dependencies yourself, manually figure out which version of this package goes with what version of that package. But every time there is some dependency between two different package managers, that breaks down.

Another part of the frustration is that they are all basically doing the same thing, but with slightly different implementations. In the end, a package manager's purpose is to let you figure out which packages you need, by resolving dependencies while honoring version constraints, fetch those packages from an archive somewhere (and verify that they are intact and haven't been tampered with), unpack them and install the files in the right places, and run some glue code to set up indexes, documentation, and so on appropriately. The only part of that task which really needs to differ between package managers is that glue code, plus the policies for inclusion in the centralized archive; everything else is basically solving the same problem in a whole bunch of slightly different ways (different ways of representing and comparing version numbers, different ways of verifying package integrity, different ways of doing local mirrors, etc), and so you get a whole bunch of incompatible and differently buggy implementations of the same kinds of things, or some functionality just missing from certain package managers.

What I would really like to see is a single unified package manager core, that handles all of the basic functionality that they all set out to solve, with the ability to utilize multiple different repositories (so that different projects could have their own policies for inclusion and sets of packages that are designed to work together), and appropriate places to hook in all of the language (or distro) specific glue that you need. I've been mulling over trying to write such a package manager, but of course when doing so you need to be careful or you will run into this problem: https://xkcd.com/927/


I agree with you in theory, but in practice the existing package managers are simply not suitable for the new languages (too fragmented, walled gardens, not powerful enough), so language designers really don't have any other solution except making their own manager. Ideally, though, a language's package manager would integrate nicely with the system's package manager and would know how to automatically install other packages.


Oh, I know why it keeps on happening, but it leads to a frustrating and fragmented experience. That's why it would be nice to have a common core package manager, that handles all of the tasks that any package manager will need to handle (metadata, versions, dependencies, dependency resolution, conflicts, features, fetching packages, verifying packages, managing an archive of packages, etc), plus the ability to hook in at appropriate places to provide the distro or language specific hooks necessary, and the ability to have cross-archive dependencies (including union dependencies like "either these three Debian packages, these two Fedora packages, or this Homebrew package"). That would then let you focus on solving the common problems in one place, and leave each language or distro to only have to focus on its own particular glue and policies.


Try installing numpy and scipy.


You might like to try Hunter: https://github.com/ruslo/hunter

Cross-platform C++ package management using CMake only.


"Every program will eventually include a sketchy rewrite of apt-get." - me.

Not just languages - WordPress, MediaWiki ...

If you're very lucky it won't clash horribly with apt and yum. So you probably won't be lucky.


There is a package manager for C++ called Biicode: https://www.biicode.com/


That would be the modules proposal, and a standardized ABI.



