Why I don't spend time with Modern C++ anymore (linkedin.com)
276 points by nkurz on May 18, 2016 | 255 comments



In my experience, the opposite of what the author claims is true: modern C++ leads to code that's easier to understand, performs better and is easier to maintain.

As an example, replacing boost::bind with lambdas allowed the compiler to inline functor calls and avoided virtual function calls in a large code base I've been working with, improving performance.

Move semantics also boosted performance. Designing APIs with lambdas in mind allowed us to get rid of tons of callback interfaces, reducing boilerplate and code duplication.

I also found compilation times to be unaffected by using modern C++ features. The main problem is the preprocessor including hundreds of thousands of lines for a single compilation unit. This has been a problem in C and C++ forever and will only be resolved with C++ modules in C++2x (hopefully).

I encourage the author to try pasting some of his code into https://gcc.godbolt.org/ and to look at the generated assembly. Following the C++ core guidelines (http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines) is also a good way to avoid shooting yourself in the foot (which is surprisingly easy with C++, unfortunately).


Fully agree with what you're saying. Just a nitpick about bind (which I'm sure you're aware of, just for the sake of others).

The return type of {boost,std}::bind is unspecified (it's basically just a "callable"). This means that bind doesn't have to do type erasure. On the other hand, {boost,std}::function has to do type erasure, which can boil down to a virtual function call. But that's orthogonal to where the callable came from.

Another thing to keep in mind is that if you're writing a function like `copy_if` which takes a callback, but doesn't have to store it for later use, it's much better to take the callback as a template type rather than going through the type-erasing {boost,std}::function. Doing the latter makes the compiler's job a lot harder.
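
A rough sketch of the difference (function names made up), in case it isn't obvious why the template version is friendlier to the optimizer:

  #include <functional>
  #include <vector>

  // Type-erased version: the compiler only sees an opaque std::function,
  // so the call usually goes through an indirection it can't inline.
  int count_if_erased(const std::vector<int>& v,
                      const std::function<bool(int)>& pred)
  {
      int n = 0;
      for (int x : v)
          if (pred(x)) ++n;
      return n;
  }

  // Templated version: the callable's concrete type is known, so a lambda
  // argument is typically inlined away completely.
  template <typename Pred>
  int count_if_templated(const std::vector<int>& v, Pred pred)
  {
      int n = 0;
      for (int x : v)
          if (pred(x)) ++n;
      return n;
  }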


It's not type erasure per se, but it's somewhat functionally equivalent. When you pass a function pointer (or pointer to member function) to bind(), it has to store that pointer in a member variable and perform an indirect call in operator(). In theory this is something that a compiler should be able to optimize away, but in practice they very rarely actually do, so using bind() as the argument for an STL algorithm typically does not result in the whole thing getting inlined out of existence. Lambdas, OTOH, are super easy to inline out of existence and compilers actually do so most of the time.


Note that I'm not advocating bind over lambda. I'm saying that one shouldn't expect a boost in performance when switching from std::bind to lambda. Switching from std::function to a templatized "Callable" could give a boost in performance, but like I said, that's orthogonal to how the callable was created.

There really is quite a difference between what std::bind returns and a type-erasing class like std::function. For the latter, the type erasure may result in a virtual call that the optimizer has a hard time seeing through.

In practice, compilers are pretty good at optimizing bind. Here are some examples where lambda and bind generate identical code: https://godbolt.org/g/dQCSeG


I think the author was mostly referring to meta programming / templates. You have to admit, they can get pretty obscure sometimes.

I agree, lambdas simplify callbacks/async programming immensely.


Right, but there's nothing "modern" about templates. Those have been a disaster for compile times forever.

C++98 with no Boost is an awful language that nobody should want to go back to. It's one where you could legitimately prefer C99, despite its lack of RAII and other conveniences. Actual modern C++ is a great language to write, though the compile times haven't really improved.


Given the position that system quality is directly proportional to frequency of testing, in what ways can a from-the-ground-up C++ project mitigate compile times and stay reactive to change?


> In my experience, the opposite of what the author claims is true: modern C++ leads to code that's easier to understand, performs better and is easier to maintain.

Can you comment on the memory safety of modern C++? I am wondering if I should learn Rust or modern C++.


std::unique_ptr, move constructors, etc and you're pretty safe, but it's still not in the same league as Rust.

In my opinion you should probably learn Rust unless you want to get a job writing C++ (e.g. game development).


I never quite understood why unique_ptr and move semantics are supposed to improve memory safety over new and delete. They reduce leaks, sure, since the compiler inserts free for you at a hopefully-useful place. But you still effectively have to decide when to free, and there is no protection against dangling iterators, references, or pointers. From a security point of view, use after free is far worse than leaking, since UAF can lead to remote code execution.

In fact, move semantics create a new type of hazard that wasn't in earlier versions of C++: dereference of a unique ptr after moving it.


They do help somewhat by making ownership more explicit, i.e. they are hints that the programmer can use to do the kind of analysis that rust would do automatically.

Dereferencing a moved-from unique_ptr is UB, but many compilers do abort in their hardened modes.

edit: spelling


> I never quite understood why unique_ptr and move semantics...

It's not those types in particular; it's that C++11 allows you to move your ownership model into the type system. And then, and this might sound familiar, writing code cognizant and explicit in its ownership semantics leads to safer, cleaner code.


> It's not those types in particular; it's that C++11 allows you to move your ownership model into the type system.

But it doesn't ensure that you use it correctly. Most use-after-free isn't the result of not understanding the ownership semantics; it's the result of understanding it but forgetting to handle some edge case or another.


True, but if you use smart pointers for that purpose, you cannot at the same time use references to protect against null pointers.


  class fail
  {
      foo * p;
      void init()
      {
         p->some_init();
         if(p->some_error())
         {
             delete p;
             throw some_exception();
         }
      }
  public:
      fail()
      {
          p = new foo;
          init();
      }
      ~fail()
      {
          delete p;
      }
      void reinit()
      {
          init();
      }
  };
vs.

  class ok
  {
       unique_ptr<foo> p;
       void init()
       {
           p->some_init();
           if(p->some_error())
           {
               throw some_exception();
           }
       }
  public:
       ok()
       {
            p = make_unique<foo>();
            init();
       }
       void reinit()
       {
            init();
       }
  };


I think what pcwalton might be talking about is this:

  std::unique_ptr<Person> owner(new Person("name"));
  auto stolen = std::move(owner);
  std::cout << owner->name << std::endl;

The Rust compiler will detect the use of the moved-from value and refuse to compile. A C++ compiler will happily compile it, but the result is an unhelpful run-time error.


That fail class has major problems that are probably not obvious to those unfamiliar with C++.

It breaks the rule of three [1]. It cannot be safely copied, and you haven't disabled the copy-constructor or assignment operator, which makes it super-dangerous.

[1] https://en.wikipedia.org/wiki/Rule_of_three_%28C%2B%2B_progr...


What fraction of security-sensitive UAF bugs in the real world have had to do with exception safety?


It isn't limited to exception safety. The problem previously had been that there was no clear answer to the question, when should "delete" (and "~foo()") be called for a pointer? If you do it before some other code expected you to then you have UAF. If you do it in more than one place then you may have double free (and two calls to ~foo()).

Now the answer is that unique_ptr will do it when the pointer itself goes out of scope.


> Now the answer is that unique_ptr will do it when the pointer itself goes out of scope.

And my claim is that this doesn't effectively reduce UAF. It doesn't eliminate dangling references, etc.


It seems like you're making the perfect the enemy of the good.

Look at my example above. The problem isn't just exception safety, the first class is five kinds of catastrophe waiting to happen.

If the user calls reinit() then init() may delete the pointer (because it was written expecting to be called during construction). So now the object exists but the pointer is invalid and any further use of the object will be UAF. Even if the caller understands the exception to mean that the object should not be used anymore, the ~fail() destructor is going to call the ~foo() destructor again and double free anyway.

On top of that, the default copy and move constructors and assignment operators for a naked pointer just copy the pointer, which produces UAF as soon as the first copy to be destroyed frees the pointer the others still hold.

None of that happens with the unique_ptr version. No explicit call to delete is required so we lose that opportunity to accidentally call it before the surrounding object ceases to exist. The default move constructor actually works and the default copy and assignment operators are deleted, so attempting to copy without an explicit deep copy implementation becomes a compile error instead of runtime UAF. And it's less code too.

It can't actually stop you from having dangling references because if it did then it wouldn't be able to compile existing code anymore. But how is it not an improvement over the status quo?


> effectively have to decide when to free

No. You just pass around the unique_ptr. Don't actually pass a reference to the object. When a function is done using the unique_ptr, it gets returned to its parent object.

You'll never have a use-after-free if you use unique_ptr correctly. As others have noted: make_unique() and pass it around with move semantics.

The worst that can happen is that you dereference NULL (use a unique_ptr after it has been sink'd to another function).
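
Roughly the style I mean (a sketch with made-up names): the function takes ownership by value, uses the object, then hands it back, so there are no raw pointers or references floating around.

  #include <memory>

  struct Widget { void frob() {} };

  std::unique_ptr<Widget> use_widget(std::unique_ptr<Widget> w)
  {
      w->frob();
      return w;                       // ownership moves back to the caller
  }

  int main()
  {
      auto w = std::make_unique<Widget>();
      w = use_widget(std::move(w));   // explicit hand-off at every step
  }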


> You'll never have a use-after-free if you use unique_ptr correctly. As others have noted: make_unique() and pass it around with move semantics.

This is far too limiting to handle anywhere near every real world use case. For example, you can't really call any non-&& methods on the referent following the discipline you propose, because "this" is a raw pointer!


Yeah, so use shared_ptr and weak_ptr in those cases. Shared_ptr is basically reference-counted garbage collection, broadly comparable to what C# and Python give you (except that reference cycles never get collected). So that should be "good enough" for the generic case.

unique_ptr is supposed to be used in a very limited fashion as I described. It's more efficient than shared_ptr, but much, much more limited.
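
A tiny sketch of how the two combine (hypothetical Node type): shared_ptr for the owning links, weak_ptr for back links so the reference count can actually reach zero.

  #include <memory>

  struct Node {
      std::shared_ptr<Node> next;   // owning link
      std::weak_ptr<Node>   prev;   // non-owning back link, breaks the cycle
  };

  int main()
  {
      auto a = std::make_shared<Node>();
      auto b = std::make_shared<Node>();
      a->next = b;
      b->prev = a;                  // no reference cycle, both nodes get freed

      if (auto p = b->prev.lock())  // check the referent is still alive
          p->next.reset();
  }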


Just to be clear, you're suggesting to use shared_ptr for every object you want to call methods that don't take "this" by move on?

(Which is not enough by any means to ensure memory safety, of course…)


Obviously, full memory safety is not ensured if you ever move over to raw pointers.

But if you're copying shared_ptrs around, the reference counting almost always ensures that the memory is valid. The idea is to rarely use references or raw-pointers unless you know you have to.


I'm not questioning how shared_ptr works. I'm claiming that using it as extensively as you describe is totally impractical, and no C++ code works this way.


I've found the UWP Windows 10 API to be using shared_ptrs and stuff more often. A lot of things are done 'correctly'.

It's mostly when interacting with legacy code (i.e. MFC) that issues come up.


C++ programs written using UWP make heavy use of references, like all C++ programs, and are highly vulnerable to use after free.


> I never quite understood why unique_ptr and move semantics are supposed to improve memory safety over new and delete. [...]

Though it isn't the type of safety you're thinking of, if you use make_unique/make_shared you'll also be exception-safe... which is a very nice benefit.
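
For anyone who hasn't seen it, the classic example (pre-C++17 evaluation-order rules; sink() and may_throw() are made up):

  #include <memory>
  #include <stdexcept>

  void sink(std::unique_ptr<int>, int) {}
  int may_throw() { throw std::runtime_error("boom"); }

  void caller()
  {
      // Pre-C++17 the compiler may evaluate `new int(42)`, then may_throw(),
      // then the unique_ptr constructor; a throw in the middle leaks.
      sink(std::unique_ptr<int>(new int(42)), may_throw());

      // make_unique ties allocation and ownership together, so a throw
      // from may_throw() can no longer leak the allocation.
      sink(std::make_unique<int>(42), may_throw());
  }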


Rust dev baiting in a C++ thread...


Is the general feeling that Rust is largely going to be the future for programming using a statically typed language?


Learning Rust actually helped me understand what's considered good C++ memory management design nowadays.


Totally depends on what you want to use your programming skills for. If you are just writing programs for yourself, or even starting a company and writing the software from scratch with a handful of people, you might as well learn Rust. If you want to get hired with many jobs to choose from, need to interface with/use lots of third-party code, or generally need to maintain legacy software, C++ is a sure bet.


> As an example, replacing boost::bind with lambdas allowed the compiler to inline functor calls and avoided virtual function calls in a large code base I've been working with, improving performance.

And not just runtime performance; IME replacing bind() with lambdas typically improves build-time performance significantly. C++03 with C++11 features emulated via library trickery has terrible compile times compared to having the same features built in to the compiler.


Question - is it possible to just learn the new modern C++? How does one do that with a language that has been around for over 30 years? Can you just learn C++11? Don't you come across all kinds of code bases and texts that span many different versions that you have to be able to reason about? I mean certainly this is true of Java as well but I would like to hear other's views.


"The main problem is the preprocessor including hundreds of thousands of lines for a single compilation unit."

Some code bases have no real way around that, but many are served well enough with precompiled headers.


HFT is a pretty limited and extreme application case.

From what I understand - everything is not enough for HFT - network cards, kernel drivers, cables, etc.

You have milliseconds (edit: nanoseconds !) to receive, process and push your orders before someone else does it and gets the prize.

It's an arms race between technologists for the purpose of making a small number of people rich.

I doubt that these requirements apply to other application fields where C++ is used - and it's used almost everywhere, with great success I might add.

In my view C++ is actually a couple of languages mixed into one.

The hard part is knowing which part of the language to use for which part of the problem.

The "modern" C++ solves a lot of the nuisances of the "old" C++, but you can do without these features just fine. I apply them carefully to my code and so far it's been a pleasant experience. Even if I don't use all of the new features, it's nice to know that I can (and I will some day!).

So I don't really buy this rant..


HFT is one of those uses C++ is supposed to be particularly good for. Many of Stroustrup's justifications for C++ take the form "You get <set of features> without compromising performance."

The style outlined in your penultimate paragraph - an approach I also follow - suggests that there is some alignment between you and the author.


> So I don't really buy this rant..

He explained that himself in the comments below his post: C++ is his tool of choice but he "just wanted to stir the pot".

Everything in his post is contrary to my personal experience with C++.


Consultants wanna consult.


Exactly... If the building blocks are simplified enough, they lose their jobs.


Not milliseconds. Nanoseconds. Competitive tick-to-trade times are on the order of 1000ns or less.


That really depends on the market in question. There are places where you can still make fat stacks with over 100us tick-to-trade time.


How do you tune the machine for that type of latency? User-space drivers? Could you elaborate? Not having to go beyond L1 cache doesn't mean much if it takes a few milliseconds to get that trade out of the network card and onto the wire, right?


FPGAs


HFT = Insider trading + Algorithm trading


No, it's just fast algorithmic trading, aka latency arbitrage. It's still quite risky and you don't have any special information anyone else doesn't have.


I think I see what the commenter meant. You have to physically be on the inside in the sense that you need a direct, short connection to the network that costs ridiculous money, along with HW and SW that cost ridiculous money. A prestigious position most stock traders couldn't compete with if they wanted to. Plus, you get to preempt all of them from this position without them even knowing it.


Insider trading is a term with a specific meaning, and that is not it.


This argument always feels like "It's not a Pyramid Scheme! It's a Triangle Opportunity!"

People use the term not because it is an exact fit, but because it is the closest match we have and the intention is largely the same.

HFT being a lot about using computers to gain tiny information advantages (racing the speed of light between exchanges for example) to make perfectly safe arbitrages millions of times a day and effectively skim off of the top of the market.


> effectively skim off of the top of the market

The problem with this point of view is that it implies they are just taking, and not providing. HFT provides a more accurate market price by providing liquidity at small price differences. Whether what they provide justifies their cost is another question though.


It's algorithms guessing about something in nanoseconds with little context. Feels like that would reduce accurate pricing. Not a finance guy, though, so my intuition could be wrong.


To make money, they have to be pricing accurately. If the spread on something was $0.50, meaning that there are people on record as willing to buy at $0.50 lower than others are on record as willing to sell, the real price is somewhere within that range. Presumably, the real value of the stock is somewhere in the middle, but we don't know, because there aren't any transactions happening right now, and the real value of something is the value at which people are willing to exchange it. HFT firms provide liquidity, in a lot more transactions, and at smaller spreads than regular people are willing to offer (because HFT firms can automate away much of the work). This results in a more accurate price (smaller spread), and you being more likely to sell at that price (even if it's to an HFT). At least, that's the theory.

I'm not a finance guy, but I've followed it here for quite a while, and this is my general impression of it. I used to be fairly anti-HFT until I learned more about it. Now I'm somewhat ambivalent, except for the topic of rogue trading algorithms, which seem fairly dangerous, but the genie's out of the bottle.


One of the most useful HFT tools is to reduce liquidity. Doing this over an hour would cost ridiculous amounts of money, but cornering a specific exchange over 0.01 seconds is cheap and potentially profitable. Remember, if an HFT extracts money, that means the actual seller's and actual buyer's prices never meet, which means the market is not doing accurate price discovery; instead it provides two prices separated by fractions of a second.


Hmm, in which case I guess it would be more accurate to say that HFT's provide some limited ability to control liquidity, whether that's increasing it or decreasing it.

Assuming decreased liquidity is bad (I don't know if there are situations where it's seen as beneficial to the market), the question is how do we disincentivize the use of the ability to decrease liquidity that HFTs have, or do we just accept it as a consequence of their participation, and accept that it happens from regular brokers as well?


What? None of that made any sense. The only way they can reduce liquidity is to stop trading, or buy it all up, neither of which is bad.

> Remember, if a HFT extracts money that means the actual seller and actual buyer's price never meets which means the market is not doing accurate price discovery instead providing two prices separated by fractions of a second.

Just no, that is not at all correct. HFT increases price accuracy, it doesn't reduce it.


Liquidity means the price is stable not that trades will go through.* Stable prices prevent HFT traders from making money.

Price accuracy is somewhat debatable. Many HFT traders may toss lots of trades around at a price, but that does not mean you can buy or sell large numbers of shares at that price, as they can easily just be trading a relatively small number of shares back and forth.

*Dramatic price swings are often tacked onto this. But, that's also relative to number of shares traded. If selling 1,000 shares at ~20.00 each involves any price shift it's hard to call that stable.


> Liquidity means the price is stable not that trades will go through.* Stable prices prevent HFT traders from making money.

No, liquidity means there's limit orders sitting on the market. That is quite literally all liquidity is.

> Stable prices prevent HFT traders from making money.

No they don't, stable prices are good for market makers as it reduces the risk of flipping the spread. You can sit there all day long without prices moving a lick and make bank buying at the bid and selling at the ask, getting paid the spread for providing liquidity; this is what HFTs want to do. Directional movement, aka volatility, increases the risk for market makers, aka HFTs, as it forces them to predict a direction and makes their trades more risky. Yes, some strategies work better with volatility, but market making is the least risky when price doesn't move at all, because you're not making your money from the volatility but from pocketing the spread over and over.


> market makers ... without prices moving a lick

If orders are going through at different prices, that's volatility, even if it's just the bid-ask spread. Reducing the bid-ask spread is how HFT traders are supposed to reduce volatility in the first place. Trading at 10.10, 10.00, 10.10 is price motion even if the trades were buy 10.10, sell 10.00, buy 10.00, because a party needs to sit on the other side of each transaction, so all trades are both buy and sell transactions.

That said, you can model only one type of transaction and talk about say sell volatility.


If the ask doesn't move, and the bid doesn't move, orders will happen at both prices due to market orders from both bears and bulls; if you'd like to call that "price moving" you're free to, but I won't. If you were staring at a chart you'd see nothing happening at all, because charts don't chart orders, they chart the bid and ask or some middle between them. Regardless, you get my point: the bid and ask do not have to move at all for a market maker to make money; they would in fact prefer the safety of no directional movement.


> Feels like that would reduce accurate pricing.

Nope, exactly the opposite: it creates better pricing. HFT is a bidding war to see who can create the best prices and still make enough profit to survive, and this reduces trading costs for everyone else by reducing the spread.


Isn't this exactly the opposite effect? You place a bid for what is currently showing on the market, and the HFT firms use their speed advantage to go and buy up most of the stock at that price and relist it for a bit more. So the price is never what you actually think it is, it is what you see plus whatever the HFT firm decides to add.

HFT firms can't provide liquidity--they don't hold positions! At the end of the day their books are all 0.


This only works because the HFT firm is confident it can immediately sell at that higher price. That is, they have more, or quicker, information than the market is aware of. This results in a more accurate price, because if the stock had been priced at that originally, the market would have shown it, and the HFT would have had nothing to do.

You can try to classify that as the HFT taking money the seller would have made if they had noticed and repriced quickly enough, or taking money from the buyer if they had noticed and bought at the original price, but I think both arguments could easily be extrapolated to the rest of the stock market and regular users (this plays out quickly, on fairly public info, but it's the same process that goes into someone deciding what price to buy at and what to sell at, just sped up quite a bit).


That's not how markets work. If an HFT firm saw your order resting, it means that nobody wanted to trade with you at that price in the first place.

If you cross the market then there is nothing the HFT firm can do. When it becomes aware of your order it will be too late. Of course the order you want to trade with might have disappeared while your order is in flight, but that would not be causally related to you sending the order to the market.


You're forgetting that "the market" isn't one monolithic thing anymore, it's a number of independent exchanges. Your order goes out to all of them, and the HFT firm sees it on the nearest market then races (and beats!) your offer to all of the other markets. That's why they care about nanoseconds so much, they're racing the speed of light. That's why you get a tiny fraction of your bid filled and then everything else suddenly dries up and is relisted at a higher price (aka Arbitrage).


That's grossly oversimplified and wrong.

> Your order goes out to all of them, and the HFT firm sees it on the nearest market then races (and beats!) your offer to all of the other markets.

No, they have no idea or way to know if your order was big enough to even go to other markets; if they rush out ahead of you and you don't show up, they lose money. What they do is not risk free.

> That's why they care about nanoseconds so much, they're racing the speed of light.

Incorrect. They care about latency because all traders have always cared about being first to get their orders in, and this would matter whether HFT existed or not. They're not racing the speed of light, they're racing other traders, and that would happen no matter the speed of the trading. It's not the speed that matters, it's being first that matters. Even if you outlawed HFT, being first would still matter, except it'd be human traders seeing who could push buttons faster.


If you were to buy the housing stock of an entire neighbourhood, would you expect to pay the same price for the first and last house you buy? And no, markets are not transactional; you can't usually buy the whole neighbourhood in a single deal.

What's likely happening in your example is a large trade in one exchange triggering adverse selection protection across all markets from multiple market makers.


Exactly my intent. Well-worded. I'm not quite sure how to describe it in a way the average person would understand. So, insider trading is one option to approximate it. Pyramid Scheme seems like another one in the rip-off aspect but doesn't fit the specifics as well. The traits to categorize are it's rigged, parasitic on others, requires enormous investment, and requires physical proximity that's fairly exclusive.


And all of those descriptions are wrong. They are not trading on inside information, it is not in any way a pyramid scheme or a rip-off, nor is it rigged or parasitic. If you think HFT is bad, you don't understand what it is. HFT improves market prices providing buyers and sellers with better prices than they had before HFT by narrowing the spread you have to pay to find liquidity. HFT is good and benefits retail traders.


> The traits to categorize are it's rigged, parasitic on others

I'm not sure that's a good way to frame them. If you frame them as parasitic, then the argument shifts towards how we can stop them, as they don't provide any value. HFT firms do provide value in liquidity and more accurate market prices. Whether the cost they impose for these is worthwhile is an open question, but then the argument is about how to tweak incentives and rewards to best utilize this resource, not how to prevent it from working at all. The framing matters.


> HFT being a lot about using computers to gain tiny information advantages (racing the speed of light between exchanges for example) to make perfectly safe arbitrages millions of times a day and effectively skim off of the top of the market.

That's entirely wrong. There's nothing safe about it, and they aren't skimming, they're trading exactly like every other trader is, making a guess and risking cash to see if they're right.


Insider trading is a term with a specific meaning, and that is not it; true, but nor is the specific meaning of insider trading what most people think it is.

"Trading on inside information" is a term for what many people attempt to do that is illegal.

"Insider trading", on the other hand, is when insiders trade, which is totally legal, it's just regulated in certain circumstances when the size of the the trade is large or the person's role is at the executive level, an upcoming trade needs to be announced (made public) in advance as if itself it is information.


Getting information ahead of the rest of the crowd sort of fits under this specific meaning.


No it doesn't, insider trading means using non-public information, not getting information faster, which is and has always been a thing long before HFT ever existed. Information takes time to impact the market and always has, and there have always been and always will be traders fighting traders to know that information first.


Arbitrage has been around since the very beginnings of modern banking and provides a very important function to allow markets to accurately adjust to proper price signals.


There are two separate rants here that aren't delineated well.

1) C++ is too complicated, and therefore hard to reason about and slow to compile.

We're going to argue about this forever, but you'll have to agree that the spec is very large and warty compared to other languages, and that C++ tends to take far longer to compile (this was already a problem a decade ago, it's not specific to "modern" C++).

2) The future of software development will include more of what I'm going to call ""non-isotropic"" software; rather than assuming a flat memory model and a single in-order execution unit, and exerting great effort to pretend that that's still the case, programmers will have to develop effectively on GPUs and reconfigurable hardware. Presumably this speculation is based on the Intel-Altera acquisition.

You can sort of do hardware programming in C (SystemC) but C++ is really not a good fit for hardware. Personally I'd like to see a cambrian explosion of HDLs, but the time is not yet right for that.

It sounds like the author favours the "C with classes" programming style, maybe including smart pointers, and is probably not keen on lambdaization of everything.


Can't really argue about 1.

About 2, in-order single-instruction execution hasn't been an assumption for a very long time; C and C++ optimizers (and programmers) have been able to take advantage of these CPU features for a while. There are language extensions (Cilk++, OpenMP) to take advantage of extra cores for fine-grained parallelism.

Regarding GPUs, arguably C and C++ have the most mature and transparent offloading support all around (OpenACC, again OpenMP, whatever MS offloading extensions are called) and the most popular GPU programming language (CUDA) is a C++ dialect.

Regarding the flat memory model, for large scale programming the only sane model is a flat, cache coherent one; those architectures that don't provide that, either evolve to provide it or die (cf. CELL) supplanted by those that do (yes, that doesn't mean that all memory is the same, but that is true with your standard CPU anyway).

I don't have an opinion on FPGAs. I expect that, if they ever go mainstream, initially people will just assemble predefined blocks via high-level languages, but who knows what the future has in store.


Parallelism: yes, although OpenMP isn't nearly as accessible as e.g. Swift async closures. C++ only got proper language-native threading in C++11.

for large scale programming the only sane model is a flat, cache coherent one

Do google view their datacenters as a single flat cache-coherent memory space? No, they built mapreduce instead. That's the point of view I'm coming from: distributed systems engineering working downwards. Rather than a single large program operating on a single memory space, a set of fragments whose programmers are aware that there is latency when communicating between nodes. DRAM is just another "node" that you have to send messages to and wait for a response.


"Parallelism: yes, although OpenMP isn't nearly as accessible as e.g. Swift async closures."

I'm not familiar with them, do you have a pointer? Cilk does have powerful semantics and a very lightweight syntax.

"C++ only got proper language-native threading in C++11."

Sure, but OpenMP and Cilk are significantly older.

"Do google view their datacenters as a single flat cache-coherent memory space?"

No, but I'm pretty sure they wish they could. Many HPC clusters do present a single memory image across thousands of machines.

"they built mapreduce instead."

Mapreduce (and its extensions) is not a general programming model though.


Exactly. We had older stuff that could handle this. There was a ton of innovation in MPP's and clusters that could be applied to today's problems with ASIC's or FPGA's with interesting results. I intend to do just that at some point.

Far as nanosecond comms, they might want to consider using the old Active Messages or Fast Messages schemes. Their latency was tiny even on Ethernet. SGI also put FPGA's on NUMA interconnect back in the day. What Intel is doing is an increment on that rather than revolutionary or anything. One could use NUMAscale's chips to connect these things together.

There's also academic tools that could be polished for producing Verilog/VHDL automatically from higher-level descriptions. High-level synthesis it's called. Works well enough for simple constructs like he says he uses. Don't have to go straight to RTL level haha.


> I'm pretty sure they wish they could [view their datacenters as a single flat cache-coherent memory space]. Many HPC clusters do present a single memory image across thousands of machines.

No, not really. At some point, when you're dealing with petabytes of RAM and millions of cores, the laws of physics kick in: your RAM is spread across a large physical area no matter how clever you are. If you want a flat memory space you have to guarantee an access to any memory address in less than X cycles otherwise you have a NUMA architecture[1]

While it is true that HPC clusters present a single memory image per cluster node (where one node = 8-32 processors (maybe 64)), the other nodes' memory has to be accessed with message passing or other mechanisms.

You need a different programming model, MapReduce is too specific, that's why Google is trying things like their "DataFlow" platform.

[1]https://en.wikipedia.org/wiki/Non-uniform_memory_access#NUMA...


"If you want a flat memory space you have to guarantee an access to any memory address in less than X cycles otherwise you have a NUMA architecture[1]"

There is nothing wrong with NUMA (well, ccNUMA, but today that's a given). Even a simple modern two socket server is a NUMA machine.

Anyways, as I've commented elsewhere, I'm not arguing that shared memory is practical today on a large HPC cluster.


> Anyways, as I've commented elsewhere, I'm not arguing that shared memory is practical today on a large HPC cluster.

I think the point that was being made was that it'll never be practical purely for physical reasons. Any physical separation means that light takes a certain amount of time to travel and no known law of physics will let you circumvent that... A distance of a foot will always incur a latency of ~1ns (at best), so our models must account for latency. (At some point -- it's not obvious that we've reached the end of how compact a computer can be, but there is a limit where you just end up with a tiny black hole instead of a computer.)


I don't get it, our models have been accounting for latency for the last 30 years at least. We routinely use three levels of caches and highly out-of-order memory accesses to try to make latency manageable.

Now it is possible that our best coherency protocols simply aren't effective at high latencies, but that doesn't mean we can't come up with something workable in the future. Is there any no-go theorem in the field?


All the HPC I've done is explicitly message-passing. There's certainly no abstraction layer that allows me to treat it as a single memory space.


Distributed Shared Memory [1] is a thing, although I guess that today's ultra large clusters make it impractical.

[1] https://en.wikipedia.org/wiki/Distributed_shared_memory


With OpenMP your parallel loops look like normal loops (they have a #pragma to mark them as parallelizable). With asynchronous closures, your parallel code looks like callbacks. Obviously, which you prefer is personal preference, but I've never heard anybody say OpenMP wasn't accessible.
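
For anyone who hasn't used it, a minimal sketch of what that looks like (compile with -fopenmp or your compiler's equivalent):

  #include <vector>

  // The loop body is unchanged; the pragma marks the loop as parallelizable.
  void scale(std::vector<double>& v, double k)
  {
      #pragma omp parallel for
      for (long i = 0; i < static_cast<long>(v.size()); ++i)
          v[i] *= k;
  }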


> Regarding the flat memory model, for large scale programming the only sane model is a flat, cache coherent one; those architectures that don't provide that, either evolve to provide it or die (cf. CELL) supplanted by those that do (yes, that doesn't mean that all memory is the same, but that is true with your standard CPU anyway).

CUDA on Nvidia GPUs gives a non-coherent last level cache. Most programs distributed over multiple nodes have no shared address space abstraction, but instead do explicit message passing. I agree that coherent caches make programming easier to think about, but I don't think I would go as far to say they are the "only sane model" for "large scale programming" given that MPI and CUDA both are popular.


Sure, my point is that the general trend is for the hardware to get smarter and hide the ugliness of the system to the programmer. Every time it has been attempted to push the complexity to the programming language and compiler, it has ended in tragedy.


Computers don't really have the flat memory model today. They're networked cores with coherent distributed memory wrapped in a simple API that has very high performance variability.

Any kind of performant multithreaded work today has to treat their motherboard as such unless they want significant performance hits.


As for 2) totally agree, GPU and FPGA programming has become "the new assembly language".

It used to be you could drop from C/C++ to assembler and gain massive performance boosts. These days with C intrinsics there's no need for pure assembly code. But dropping to GPU or FPGA code is a total must now, if you need any significant juice from your system.
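
For example, a tiny SSE sketch of the kind of thing that used to be hand-written assembly; the compiler takes care of register allocation and scheduling:

  #include <immintrin.h>

  // Add four floats at a time using SSE intrinsics.
  void add4(const float* a, const float* b, float* out)
  {
      __m128 va = _mm_loadu_ps(a);
      __m128 vb = _mm_loadu_ps(b);
      _mm_storeu_ps(out, _mm_add_ps(va, vb));
  }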


Compiler intrinsics as Assembly replacement go back all the way to the 60s.


But good codegen from intrinsics goes back only a dozen years or so (ymmv)


Precisely. Also the instruction sets have been designed for compilers vs minimizing gate counts, making it easier for compilers to schedule optimally - a lot less weird and bizarre shit they have to deal with.


Well, we also have to thank C for the setback in optimizing compilers.

Fran Allen. In Coders at Work (pp. 501-502):

--- Begin Quote ---

-Seibel-: When do you think was the last time that you programmed?

-Allen-: Oh, it was quite a while ago. I kind of stopped when C came out. That was a big blow. We were making so much good progress on optimizations and transformations. We were getting rid of just one nice problem after another. When C came out, at one of the SIGPLAN compiler conferences, there was a debate between Steve Johnson from Bell Labs, who was supporting C, and one of our people, Bill Harrison, who was working on a project that I had at that time supporting automatic optimization.

The nubbin of the debate was Steve's defense of not having to build optimizers anymore because the programmer would take care of it. That it was really a programmer's issue. The motivation for the design of C was three problems they couldn't solve in the high-level languages: One of them was interrupt handling. Another was scheduling resources, taking over the machine and scheduling a process that was in the queue. And a third one was allocating memory. And you couldn't do that from a high-level language. So that was the excuse for C.

-Seibel-: Do you think C is a reasonable language if they had restricted its use to operating-system kernels?

-Allen-: Oh, yeah. That would have been fine. And, in fact, you need to have something like that, something where experts can really fine-tune without big bottlenecks because those are key problems to solve.

By 1960, we had a long list of amazing languages: Lisp, APL, Fortran, COBOL, Algol 60. These are higher-level than C. We have seriously regressed, since C developed. C has destroyed our ability to advance the state of the art in automatic optimization, automatic parallelization, automatic mapping of a high-level language to the machine. This is one of the reasons compilers are... basically not taught much anymore in colleges and universities.

--- End Quote ---


That's some serious sour grapes by Allen. Her assertion that Fortran and COBOL are higher level than C is ... difficult to support given the reliance of both languages on GOTO.

The assertion that compilers weren't taught any more is just silly.


Here's an experienced C programmer and fan telling you a list of ways Fortran is higher-level and superior to C for numeric programming:

http://www.ibiblio.org/pub/languages/fortran/ch1-2.html


Most everything here is subjective, inaccurate, or outdated by C99, save Fortran's multi-dimensional array handling which is legitimately superior to C despite partial reconciliation by VLAs.


Well, darn, there goes that. I'll have to re-examine it with a C99 reference to assess its accuracy.


I don't think it's quite that bad, but "for numeric programming" is a very important caveat here. As is "define higher-level".


I think higher level would be an efficient, English-like representation that's closer to algorithm pseudo-code than to managing machine details.


I don't remember seeing many GO TO's in post-F77 FORTRAN. When did this exchange happen?


Higher level does not necessarily mean more modern language features and paradigm - it only means language's computation model is farther removed from the actual hardware.

Allen was specifically discussing auto-optimizations (what we nowadays would just call 'compiler optimizations') and essentially argued that low level languages, in the quest of allowing fine-grained manual optimization, prevent many types of advanced auto-optimizations.

Specifically speaking, it is well known that FORTRAN still often beats C in numerical calculations just by virtue of not allowing pointer aliasing (especially pointers pointing to arbitrary positions in the middle of an array which is being looped over).
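
A small sketch of the aliasing problem in C/C++ terms (__restrict is a common compiler extension, not standard C++):

  // The compiler must assume `out` may alias `a` or `b`, which limits
  // vectorization and reordering of this loop...
  void add(const double* a, const double* b, double* out, int n)
  {
      for (int i = 0; i < n; ++i)
          out[i] = a[i] + b[i];
  }

  // ...unless the programmer promises there is no aliasing. Fortran gets
  // this guarantee for free, because its dummy arguments may not alias.
  void add_noalias(const double* __restrict a, const double* __restrict b,
                   double* __restrict out, int n)
  {
      for (int i = 0; i < n; ++i)
          out[i] = a[i] + b[i];
  }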


Fortran is certainly higher level than C, given lack of aliasing and no decay of arrays into pointers.


You should bookmark this for future discussions bringing up Fortran. Great write-up.

http://www.ibiblio.org/pub/languages/fortran/ch1-2.html


Ah, you're siding with the Lisp machine vs today's hardware arch. I think if the Lisp hardware architecture has significant advantages we'll see that emerging in "soft CPUs" on FPGAs, particularly as the Xeon/FPGA gear gathers steam.


We've already seen the advantages, although the total cost-benefit is unknown. For one, several CPUs for Scheme/LISP in the past had hardware-accelerated garbage collection and/or a bunch of cores. Automatic memory management and multicore are now mainstream due to perceived benefits; in LISP machines this worked all the way down to the CPU.

Also, I've seen some of these benefits of Genera in modern stacks but I still don't have all of these capabilities:

http://www.symbolics-dks.com/Genera-why-1.htm

Any jump out at you as particularly awesome for a developer OS?


Not only Lisp Machines, that was how many memory safe systems programming languages were done in the mainframe days.

Intel MPX, CHERI are just two modern approaches of reusing those ideas to tame C's memory issues.

Going back to FPGAs, I think function composition is very similar to digital circuit design, so FP concepts could be a very nice way to do GPGPU programming instead of the actual mainstream approaches. But it would require the GPU to be more FPGAs like.


Might be.

ESPOL, NEWP and Algol-68RS are some examples of Algol-based systems programming languages where the hardware was fully exposed as intrinsics and no Assembly was required.


> 1) C++ is too complicated, and therefore hard to reason about and slow to compile.

Don't we tend to reason predominantly about the written-down code rather than about the language itself? Once the developer has delineated the semantic circle he will be using, a big part of the specification is left out. Granted, the question is open when we are contemplating a blank ... err, code editor, but does that occur all that frequently?

> Presumably this speculation is based on the Intel-Altera acquisition.

Well, it has been a while since companies started selling high-performance network cards equipped with gigabytes of RAM and an FPGA. I'd be curious to know what people do with them. The level of prices seems to indicate the target would be financial institutions, but how about the developers -- where does one find people proficient in finance, math and Verilog/VHDL all at once? And at what price?


SystemC is actually C++. It's built on a bunch of #defines for classes and templates.


This article is not very general. Much of what it tries to convince us of is not going to matter for most developers, and it has the cost of suggesting modern features are not good for any developers. For example:

>It is not rare to see Modern C++ applications taking 10 minutes to compile. With traditional C++, this number is counted in low seconds for a simple change.

This is simply a bogus statement with respect to what at least 90% of c++ developers do on a daily basis.

I have benchmarked unique_ptr, auto, brace initialization, lambdas, range-based-for and other modern idioms and found them all to be at least as fast, and often faster, than their older counterparts. Now, if I were to instead go off and write template-heavy code using new features, that would be different. But in reality, the vast majority of c++ developers -- I'd wager at least 95% -- are not writing variadic templates on a daily basis (nor should they be).

The memory safety and the many other benefits of unique_ptr [0] make it one of many modern tools that are a no-brainer to use in nearly all contexts. No, not nearly all contexts, allow me to rephrase: all contexts. It just is, and if you compare its use to manual new/delete code, the benefits are solid and it's just as fast.

The author further claims that modern C++ is less maintainable and more complex. The absolute opposite is true in nearly all cases. Using unique_ptr again as an example, it leads to less code, less complex code, more clear code, and better maintainability and code readability. Uniform brace initialization is another example that prevents many common older problems in the language.

FYI the author keeps talking about high frequency trading as an example of why modern c++ is a bad choice. Well, I worked at a HFT firm for a long time until last year, the firm places millions of trades per day and is among the most successful in the markets it trades. And what did we use? Only modern features. Lambdas, auto, unique_ptr, range-fors, even std::async -- everywhere in our code. This author is either naive or political.

I think the title of this article is highly misleading, and the contents are not relevant. Overall, this article is just bad advice for most of us.

[0] https://news.ycombinator.com/item?id=11699954


> FYI the author keeps talking about high frequency trading as an example of why modern c++ is a bad choice. Well, I worked at a HFT firm for a long time until last year, the firm places millions of trades per day and is among the most successful in the markets it trades. And what did we use? Only modern features. Lambdas, auto, unique_ptr, range-fors, even std::async -- everywhere in our code. This author is either naive or political.

Just an anecdote, but I also work at a successful HFT firm, and modern C++ is almost everywhere, without any performance hits and with much cleaner code. Often it's easier to write fast code with it.


"Well, I worked at a HFT firm for a long time until last year, the firm places millions of trades per day and is among the most successful in the markets it trades. And what did we use? Only modern features. Lambdas, auto, unique_ptr, range-fors, even std::async -- everywhere in our code. "

FWIW, I have been in the industry only 3 years so far, but I also see modern C++ and boost everywhere. I'm aware of the obvious selection bias, but then again it also applies to the author: if a firm has to call an external consultant to make sense of their codebase, it probably wasn't very good to start with.


I got the feeling that the author's beef with Modern C++ is the same as for any C++ that is template heavy. And to me it is not a new or unique experience (i.e. not limited to Modern C++) to have builds that take 45 minutes to build everything. It can happen in older code and in newer code.


Yes, templates make the compiler work harder (sometimes at least), but badly modularized code with "big ball of mud" dependency structure also makes the compiler work a lot. You don't need fancy features to do that.


"Big ball of mud" dependency structures are common because the most pragmatic quick-fix for so many compile errors it to just whack in a new #include.

C++ is worse than most languages in this way, because private members are defined in public headers. This forces you to add #includes to the headers. Thus the #include graph is much denser than the true dependencies in your program.

There's tricks for getting around this, but they aren't always practical; and they go against the grain of the "every class has its own .cpp and .h file" rule, which lots of people want to insist on.


>There's tricks

You mean PIMPL?


Yes, I think so, though I was never truly clear on what PIMPL was supposed to mean.

I am referring to writing a base class in the header and then doing the actual implementation in a derived class visible only inside one .cpp file. There are some other tricks you can do too, such as using pointers-to-incomplete types.

When I do any of these tricks, I find that someone else later comes along and makes a change that introduces the extra #includes after all.
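
A compressed sketch of that trick (made-up names, both "files" shown inline):

  // widget.h -- clients see only the abstract interface, so private members
  // (and the headers they would drag in) never appear in the header.
  #include <memory>

  class Widget {
  public:
      virtual ~Widget() = default;
      virtual void draw() = 0;
  };

  std::unique_ptr<Widget> make_widget();

  // widget.cpp -- the concrete type and its dependencies live only here.
  class WidgetImpl : public Widget {
      void draw() override { /* real work */ }
  };

  std::unique_ptr<Widget> make_widget()
  {
      return std::make_unique<WidgetImpl>();
  }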


PIMPL means "pointer to implementation" and the reason you and your colleagues are having difficulty with it is because you don't understand it, and therefore should not be using it, or, investing the time to understand it before trying to use it.

>every class has its own .cpp and .h file

Wrong. There are so many high-quality and widely-used libraries that are "header only".

>When I do any of these tricks, I find that someone else later comes along and makes a change that introduces the extra #includes after all.

It is possible to write sloppy code in any language, and also to receive low-quality contributions from anyone else to your code. This is not unique to C++. That's why it is important that everyone in a team be on the same page about how code is organized.

>I am referring to writing a base class in the header and then doing the actual implementation in a derived class visible only inside one .cpp file.

This is not the PIMPL idiom, though it is halfway right.


The PIMPL pattern is great for reducing compilation time and creating stable user-facing APIs, but it has a runtime cost. The "implementation" part is usually heap allocated, and calling its methods has the extra overhead of crossing the pointer-to-implementation barrier.
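
For reference, a minimal PIMPL sketch (hypothetical Parser class) showing both the compile-time firewall and the costs mentioned above:

  // parser.h -- clients only see a forward declaration and a pointer, so
  // changes to Impl never force them to recompile.
  #include <memory>

  class Parser {
  public:
      Parser();
      ~Parser();                      // defined where Impl is complete
      void parse();
  private:
      struct Impl;
      std::unique_ptr<Impl> impl_;    // the heap allocation mentioned above
  };

  // parser.cpp -- private state and heavy #includes stay in here.
  struct Parser::Impl {
      void parse() { /* real work */ }
  };

  Parser::Parser() : impl_(std::make_unique<Impl>()) {}
  Parser::~Parser() = default;
  void Parser::parse() { impl_->parse(); }   // one extra indirection per call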


With the use of link-time optimization, most of the overhead of using the pimpl idiom is removed. Both the outer wrapper functions and the implementation functions can be inlined, making the function calls effectively like those of a normal class implemented inside a header.


The heap allocation for the implementation won't be removed by LTO.


I'm inclined to agree that for objects that are going to be created and destroyed very often, especially in tight loops, PIMPL is not an ideal idiom. I wouldn't use it to generate fleeting effects in graphics, for example.

For objects that are used in setting up parts of a system and that will persist for long periods of time, maybe even the lifetime of the app, PIMPL is particularly well-suited for such classes. Classes that might manage networking, or encompass an entire part of app's running architecture, etc.


The complexity and overhead of PIMPL is justified in one scenario, and one scenario only - when you are writing a shared library and need to maintain a stable ABI. Otherwise it is complete overkill.


In my experience both have been true. That's why you get 10+ minute link times in some projects, so even if you use Incredibuild to compile, it still takes ages to link.


(re 10 minute compile times)

> This is simply a bogus statement

I work on a C++ project and believe me, 10 minute compile times would be great :)


The bogus part is 10 minutes vs a few seconds just because you change to the latest standard.


have you tried ccache? the 5x to 10x improvement isn't a bogus statement in my experience


ccache eliminates spurious rebuilds, but doesn't make compilation of things which actually need to be recompiled any faster. It's basically just a workaround for the fact that Make only uses file mtimes and not the contents to decide what needs to be built.


In theory, agreed. In practice I see a lot of re-compilation in large projects.


ccache doesn't help as soon as you change a popular header file. distcc helps quite a bit, but there was a project I worked on for years where by the end the compile time was 15 minutes after you'd distributed it across 20 machines.


“A Variadic Template A Day Keeps The Job Security In Play”


You should see the code that variadic templates replace.


Say more! I'd like to know what you mean.


Before variadic templates you couldn't define a function template taking an arbitrary number of arguments; what you could do is define N overloads up to a finite large N. This quickly becomes tedious and hard to maintain, so you either write an external generator script, or 'creatively' use the preprocessor. These solutions were also not much easier to maintain and of course would kill compilation times.

The worst part is that often these were forwarding functions, that is, they didn't do anything directly with their parameters, but simply forwarded them to some other function (this happens surprisingly often in highly generic code). To do forwarding correctly, you have to handle const and non-const reference arguments properly, which means that N overloads are not enough: you need 2^k overloads for each arity k, on the order of 2^(N+1) in total. As you can imagine, this was impractical for N > 3.

Variadic templates plus perfect forwarding via forwarding references make easy and practical what was impractical before.
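
A minimal sketch (made-up Widget type) of what replaces that whole overload set:

  #include <utility>

  struct Widget {
      Widget(int, const char*) {}
  };

  // One variadic forwarding template covers every arity; Args&& are
  // forwarding references, so const/non-const lvalues and rvalues all pass
  // through with their value category preserved.
  template <typename T, typename... Args>
  T make(Args&&... args)
  {
      return T(std::forward<Args>(args)...);
  }

  int main()
  {
      int id = 7;
      Widget w = make<Widget>(id, "name");   // forwards an lvalue and a literal
  }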

edit: spelling


As has always been the case, effective use of modern C++ requires knowing which subset of the language to use and which to avoid.

I agree with the author's criticisms of many C++ features. At the same time, I think that a proper simple, modern subset of C++ exists that is much more productive and safer than C, without sacrificing performance. You can also optimize progressively, for example start with using std::string and std::vector and then replace the stock implementations if they aren't performant on your target architecture. I would not, however, recommend using C++ for GPU kernel code - a mix of C++ for CPU code and C for GPU kernel code works best. It is not ideal, but it's the best toolset available for serious industrial development.

FPGAs are exciting, but they've also been the "next big thing" in general purpose computing forever. Obviously it makes sense to use FPGAs for certain HFT and embedded applications, but that's not the same as general purpose computing, which is what C/C++ is for. Not to mention, FPGA compile times can take hours or even days, which makes most C++ template overhead pale in comparison. I would also say that for IOT, I'm not sure why it is obvious that "$10 FPGAs" should dominate. Why not a $0.50 microcontroller? Or the $5 Raspberry Pi Zero board? Both of which are eminently programmable in C and even C++. Embedded devices have been around long before "IOT" became a buzzword, and we can see that microcontrollers, FPGAs, SOCs, and custom ASICs all have a role to play depending on the application.


"Not to mention, FPGA compile times can take hours or even days, which pales in comparison to most C++ template overhead."

It's really funny to see him bitching about iterations with templates and then suggesting an FPGA and synthesis tools. I'm glad it amused someone else, too. :)


If he is complaining about C++ being bad and suggesting Verilog on FPGAs as an alternative, boy do I have some bad news for him.

HDLs (yes including Systemverilog) have 10x worse design than the worst software languages. This is why there are entire companies out there that make high level synthesis tools or high level HDL specification languages (like Bluespec).

And I haven't even said anything about the quality of FPGA tool chains.


This, a thousand times. If you want minimum latency, FPGAs are of course a possible solution, but selling it as a "better C++" is just laughable. When you have an IP core which fits your problem, then by all means use it (and hope it works as advertised), but otherwise: the less you need to do in an FPGA, the better.


> If you cannot figure out in one minute what a C++ file is doing, assume the code is incorrect.

This statement at first resonated with me, and then I thought about it: this doesn't reduce the complexity of the overall application or service, it just means that one file is simple. You could have 10,000 short files instead of 1 long one; is that any simpler?


Yes. If each file makes sense in isolation then the whole will as well. Just splitting code into lots of files won't necessarily produce files that you can figure out in 1 minute though (you have to define the boundaries between files such that they make sense).


I disagree. A complicated function may be made of a bunch of statements where each statement makes sense easily. The entire function may still be complicated. The same argument can be extended for files and projects. Even if each file is simple, if the code in those files interact with each other in a complicated manner, the project becomes complicated. This can happen despite having neat boundaries between files. Nothing stops a new programmer from writing new simple files that interact with the existing files in a complicated manner. Simplicity of source code in individual files or functions is just one of the factors behind a simple project. Simplicity of design has to go hand in hand with it.

On the other hand, a couple of files may be very complicated but the entire project could still be simple if those complicated files hide the complexity behind neatly exposed functions, and the remainder of the project does not make use of those functions in a complicated manner.


> A complicated function may be made of a bunch of statements where each statement makes sense easily. The entire function may still be complicated.

Statements, yes, but I avoid them where possible - the complexity comes from their interactions, which are unmanaged, implicit and arbitrary. If you make each function an expression made up of expressions and functions, then I think it becomes true that if each expression makes sense easily then the whole will also make sense easily.


As far as the complexity of programs is concerned, there is a similarity between statements at one level of abstraction and functions at a higher level. I have seen many cases where small functions have been assembled into complicated programs. These programs often have a proliferation of 'helper' classes and functions, where you have to trace through long series of calls to get to where the work is done. They often seem to come from a poor design that has been repeatedly patched instead of fixed, or from programmers who write functions because they think they will be part of the solution, but who don't back out and replace them when they find a complication they had not anticipated.

Using small functions is a necessary, but not sufficient, condition for making understandable code.


I think what you're describing is a case where you can't understand what those helpers do, and therefore can't understand what the function that calls them does. I maintain that if each individual function makes sense then the whole will too.


This holds if the small functions are built around a coherent top-down design, respecting each other's invariants. Once the project is too large to fit in one's head, it is no longer sufficient for each function to be 'correct' in a local sense.


Sensible, understandable functions can be assembled into complicated, incomprehensible programs in exactly the same way that the sensible, understandable operators of a programming language can.


In my opinion this only shifts the problem from writing clean code to handling hundreds or thousands of small and simple source files. This will make it much more complicated to handle a large project because everything is scattered to such an extent that a developer spends more time searching through the include list of files than understanding what the code is actually doing. Implementing complex functionality is going to be a nightmare and, in the end, everything is going to be merged into a single translation unit anyway. But this is just my personal opinion.


> In my opinion this only shifts the problem from writing clean code to handling hundreds or thousands of small and simple source files. This will make it much more complicated to handle a large project because everything is scattered to such an extent that a developer spends more time searching through the include list of files than understanding what the code is actually doing.

I've worked on a number of very large codebases and that simply isn't my experience. If code is easy to understand it tends to make good use of the domain language and therefore also be easy to search.

> Implementing complex functionality is going to be a nightmare

The opposite, in my experience. The only maintainable way to implement complex functionality is to break it into small pieces.

> everything is going to be merged in a single translation unit anyway.

That's the compiler's business. I don't care one way or the other about its implementation details.


> That's the compiler's business. I don't care one way or the other about its implementation details.

Actually, you do- for at least several reasons.

1. If the runtime or compiler were to have problems with interdependencies.

2. If the compiled code that will actually be executed (or the application or service itself, spread across cores, processors, VMs, and geographies at runtime) runs slower because of how the compiler implemented it, that might make it more expensive, too slow for your needs, or uncompetitive.

3. There may be a security flaw in the compiler, e.g. https://www.cvedetails.com/vulnerability-list/vendor_id-72/p...

4. The compiler may have a bug or problem prohibiting you from finishing your code in a timely manner, e.g. https://gcc.gnu.org/onlinedocs/gcc-4.0.4/gcc/Cross_002dCompi... or http://www.securitycurrent.com/en/writers/paul-robertson/mot...

5. The compiler may lack other required functionality or features.


lmm's statement was clearly intended to be taken in the context of the statement he was replying to. While your points are valid in general, the fact that the functions will generally be composed into a single translation unit is not an argument against the benefits of making them small.


I apologize. I assumed he was generalizing.


Felt like this after doing Lisp/Python coming from Java. In the end it's about being sensible about keeping the size of a system down, no matter what the 'unit' is.


Depends how well the project is structured. Imagine that you're writing a function that adds some values to a hashmap. Would you rather have the logic, the hashing, and the data-structure details all in that function? Getting a shorter function and reduced complexity in that function is great, even if it doesn't affect the complexity of the whole project.

If the modules are well-designed, you can ignore how the hashmap works and the details of hashing itself. You'll get at least 4 extra files, but yes, it's very likely worth it.
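
As a rough sketch of that separation using the standard containers (the function name is illustrative, not from the comment):

    #include <string>
    #include <unordered_map>

    // The function only expresses the logic; hashing and bucket layout stay
    // hidden inside std::unordered_map.
    void record_score(std::unordered_map<std::string, int>& scores,
                      const std::string& player, int points) {
        scores[player] += points;
    }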


I know why I don't like C++ anymore: it's just no fun. It's slow to compile, the errors are like 6 lines long, full of template and class hierarchy noise that makes it hard to understand what exactly happened, and then of course there's the common coding shortcut of declaring everything auto. (What type is this list? I don't know, it's auto all the way down.) Then there's the whole thing about making constructors, but leaving the bodies empty because everything should be in initializer lists now, and now there's wrapped pointers for some reason.

I hated writing modern C++. It was just so depressing and frustrating.


My personal rules for using auto. Only use it iff:

1. the actual type is clearly visible on the right hand side:

       auto f = make_widget();        // it's a widget
       auto i = 123;                  // it's an int
       auto x = vec3(1.0, 0.0, 0.0);  // it's a vector

2. the actual type doesn't really matter so much or is complicated to type out:

       auto it = vec.begin();  // it's an iterator
       auto it = ...;          // some template expression

3. the actual type is not known, i.e. lambdas


I'm not a huge fan of the first rule. It's valid as stated, but the examples are questionable. I've seen all those in real code, and they're kind of annoying.

    auto f = make_widget();
    // is f of type widget or widget*?

    auto i = 123;
    // ok, I guess, but...
    // int i = 123; is shorter and more clear.

    auto x = vec3(1.0, 0.0, 0.0);
    // this can be shorter and simpler.
    // vec3 x(1.0, 0.0, 0.0);


These are good rules – as in liquor advertisements, it’s advisable to always accompany a declaration that uses `auto` with a rejoinder in the comments to Please Enjoy Responsibly


In a world where most new code is JavaScript, just use auto everywhere. It will be OK.


> Then there's the whole thing about making constructors, but leaving the bodies empty because everything should be in initializer lists now, and now there's wrapped pointers for some reason.

I fail to see how this line is supposed to be a "criticism" of modern C++.


It is funny that people on HN are so critical of `auto` when many people here are also coding in Ruby, Python, Swift and Javascript. Is the intersection between the two groups that small?


I still like it a lot.

However, I have come to realize that I'd rather use it as an infrastructure language.

Just when I need to write portable code across mobile OSes without dependencies on third-party SDKs, interact with LLVM, or give a helping hand to my actual daily programming languages.


Swift solves the "auto" issue pretty nicely - with the help of the IDE:

let x = someFunction()

Alt-click on x and it shows the type.

Otherwise, as you say, the code becomes very confusing and I abstain from auto except in cases where the type is clearly obvious.


Visual Studio shows the type of auto on hover.


… also one of the forthcoming Clang projects is a static-analysis tool (à la `clang-format`) for replacing any `auto` decls with the actual typename, so you can write lazy code without appearing unscrupulous at code reviews


I had to write C++ yesterday and hated every second of it, from the clunky header files, to the errors that make no sense, to not being able to make a cyclic dependency between classes, i.e.

      class A { B b; };
      class B { A a; };
Long story short - I had to create a wrapper around a Poco::Runnable, so you can use the wrapper as a Poco::Runnable (don't ask why, it's TEH LAW) but without extending it.


C++ noob here, but: Given that the members in your example are not pointers but actual substructures within your data structure, wouldn't that result in an infinite data structure? Therefore it seems quite logical to me that it's not allowed.

Disallowing cyclical dependencies via pointer would make no sense though.


Right. C++ objects are values, not references to values like in almost all other languages. So C++ needs to know the sizes of everything to construct them. If you tried to write out the mathematical series describing the ultimate size you would need to allocate for A or B, you would end up with a value approaching infinity.


This is probably my favorite thing about C++. Values are so much easier to reason about than references.


It's more of a simplification. Although, I did try references and they didn't compile either. In fact, if I remember correctly the error was "forward declaration forbidden".

After spending about 10-12 years away from C++, I was pretty much back at novice level. I remember some things, but most details and day-to-day specifics are gone. It did serve as a fresh reminder of why I hate C++. Or more precisely, why I hate C++ compilers.


You can only do that when using pointers or references:

    class B;
    class A {
      B* b;
    };
    class B {
      A* a;
    };
because the compiler can (obviously) not tell how large the instances of the other class are when you have a field of that type. Remember that the class layout has to be fixed at compile time and both instance sizes depend on one another. That's not solvable.

C# does the same, actually:

    struct A {
      B b;
    }
    struct B {
      A a;
    }
will cause the following error:

    test.cs(2,5): error CS0523: Struct member 'A.b' of type 'B' causes a cycle in the struct layout
    test.cs(6,5): error CS0523: Struct member 'B.a' of type 'A' causes a cycle in the struct layout
with classes and references it works, of course:

    class A {
      B b;
    }
    class B {
      A a;
    }


This is also what I attempted, and as I remember it didn't work either, though to be honest I was really worn out fighting the obscure compiler errors by that point, I might have missed a sigil somewhere.

I spent a good solid 30min trying to understand why defining and not defining my constructor was causing issues, only to realize that the constructor error was actually a previous error, somewhere above the constructor.

Rust, by comparison, also notes where and how you created the infinitely recursive type.


Your compile error is likely because you used references instead of pointers. A reference as a class member is really hard to get right because it can be initialized only once (in the constructor) and can never be reseated, so the containing class cannot be copy- or move-assigned by default.
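
A minimal sketch of the pitfall (the type names are made up):

    struct Node;  // an incomplete type is fine for a reference or pointer member

    struct Holder {
        Node& parent;                            // must be bound in the constructor
        explicit Holder(Node& p) : parent(p) {}
        // The reference can never be reseated, so the implicitly-declared
        // copy assignment operator of Holder is deleted.
    };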


Not sure what you're trying to do, but:

    class A; class B; // 'forward declaration', if you want to google
    class A { std::unique_ptr<B> b; };
    class B { std::unique_ptr<A> a; };
...generally classes instantiating each other is a sign that your design needs rearranging, but it has its uses when writing graph types.


Because it doesn't make sense. These types would have infinite size.


You should be able to do that with either references or pointers between classes with a forward declaration. In essence:

  class B;
  class A { B* b; };
  class B { A* a; };


shrugs I still like to code in C++. Beats having to use JS ;)


Well that's faint praise indeed. ;)


I've been spending a few days going through talks from last year's CppCon[1]. They're all good, but they've all left me with a strong desire for a strongly opinionated, there-is-no-past-only-the-future introduction to C++14 (and incidentally, C11).

I've added a few of those talks I've watched and liked to a small playlist (with a couple of other talks on programming, unrelated - the relevant talks are prefixed with "cppcon 2015"):

https://www.youtube.com/playlist?list=PLHvt7sld3hmvdDnwHkdU2...

The full cppcon playlist is here: https://www.youtube.com/playlist?list=PLHTh1InhhwT75gykhs7pq...

Of special relevance is the rather excellent talk by Kate Gregory on "Stop Teaching C" (when you're supposed to do an intro course to C++):

"CppCon 2015: Kate Gregory “Stop Teaching C"": https://www.youtube.com/watch?v=YnWhqhNdYyk

(But again, I want an open wiki/live tutorial, along the lines of "How I start", but longer, which goes through all this stuff. We really should make one, both for C++14 and for C11).

CppCon 2015: Herb Sutter "Writing Good C++14... By Default": https://www.youtube.com/watch?v=hEx5DNLWGgA Is also rather uplifting, and:

CppCon 2015: Sean Parent "Better Code: Data Structures": https://www.youtube.com/watch?v=sWgDk-o-6ZE

makes a pretty good case for simple, modern C++ being clear, easy and efficient.

Finally,

CppCon 2015: Phil Nash “Test Driven C++ with Catch”: https://www.youtube.com/watch?v=gdzP3pAC6UI

apart from being a pretty straightforward intro to a simple testing framework, appears to reveal some secrets about combining template magic with sane compile times (hint: don't recompile more (template) code on every iteration than you need to).

There's a couple of good ones that touch on meta-programming too (Look for the talk on Spirit X3 and Brigand), and there's a nice lightning talk on clang-tidy (magically go from explicit loops to modern foreach, yay!).


This article is coming from a frustrated developer and lacks any scientific evidence. The frustration (understandably) comes from the overwhelmingly complex new features and patterns that even compilers can barely understand.

C++11 onward revamped the language to make up for the lack of progress in the previous 10 years. The majority of C++ developers aren't keeping up with the new features because they are busy with their daily jobs, and they feel they are falling behind and that the language they thought they knew has changed underneath them.

C++03 already had a steep learning curve, but with C++11 and later that curve is orders of magnitude steeper.

On the upside, you can use C++11 without understanding most of the details and it will do the right thing most of the time. And I think that's the bet the language is making.


Ok. Try a different language :)?

A single language needed to solve all problems is a fallacy.

I don't see FPGA programming ousting C++, but I expect higher-level languages with strong parallel semantics to gain "market share". You can always call a dedicated process written in optimized C for the hottest components. Compose the rest in Go, Elixir, or any high-level language (Lisp).

Architectures will naturally gravitate to higher level languages that support cleaner composition. The tools and interfaces will push towards higher abstraction without impacting build or run time. Maybe this process is related to Kevin Kelly's inevitable. I'm an optimist here.


>A single language needed to solve all problems is a fallacy.

Has that been proven and do we have pointers to any peer reviewed papers on that?

Because otherwise it's just an old wives' tale.

I don't see any logical impossibilities for one language to solve all problems (meaning, to work well for at least 4 domains: OS and drivers a la C, apps/games a la C++, network programming a la Go, Java etc, and scripting a la Python).

It's just cultural, monetary, design and community issues with most current languages.

And even if we want to have different profiles for each domain, ideally I'd do it with a 2-layered language implementation like this:

1) Base library: network, threads/fibers, UI, database etc of Python/Java SDK proportions

2) A "close-to-the-metal" layer without a (big or any) runtime (can use RAII, ARC, etc).

3) A "scripting" layer that is a GC'ed an easier to use superset of the (2)'s syntax. Ideally all (2) code should also be valid (3) code.

Both use the same base library (written in the "metal" layer or C). (3) can be embedded into (2) as a scripting engine, and (3) can call all (2) APIs trivially (e.g. no need for declarations like for using C from Go, Python etc -- just import and call).

Programmers can use (2) or (3), mix them, share code between the two. And what's best, the core APIs are all the same -- which is most difficult part to master in a language.

(Think like Java and Groovy, but with the parent language closer to the metal and the latter closer to C).


  > I don't see any logical impossibilities for one 
  > language to solve all problems
There are no logical impossibilities in constructing a vehicle that can serve as a passenger vehicle, dump truck, submarine, and airplane, but tensions in design will very likely result in a compromise that is more complicated, more expensive, and less capable than a dedicated solution. Not only that, but your vehicle will be just as inadequate as every other once the landscape changes and someone now needs a space shuttle.

All-in-one compromise solutions only excel when a market is both small (so that niche solutions that serve only a percentage of the market don't pass the absolute mindshare threshold for viability) and uncompetitive (so that there's no competitive advantage to ditching generality in favor of efficiency in a specific space). As long as the software market keeps growing and remains competitive, specialization and fragmentation will only increase (in the long run, anyway; we'll still be subject to the same bust-and-boom cycles, so it will still be possible for fragmentation to decrease in the short-term).


> a passenger vehicle, dump truck, submarine, and airplane

The problem with using metaphors to make your argument is we generally have to argue about whether the metaphor is even appropriate enough that the conclusions apply to the original topic... It's simpler just to argue the topic.

Within a huge class of problems, I don't need to get a new computer to solve each new thing that comes up. That's a very general tool. Why do I need different programming languages?

> All-in-one compromise solutions only excel [...]

Who asked for a compromise? I could make a short list of all the features I want in a language, and while there isn't one single language that currently has all those features, I doubt you could make a proof that creating such a language is impossible or would involve some horrible trade-off. Your list might be different than mine, but that's not the point.


  > The problem with using metaphors
Then ignore the metaphor and focus on the longer paragraph that succeeds it. :P

  > I don't need to get a new computer to solve each new 
  > thing that comes up
Except that, in practice, you do. I have a smartphone in my pocket, a laptop in my bag, a desktop in my office, and two personal servers in the cloud. Just because two computers are both effectively Turing machines does not automatically invalidate the importance of form factors, power draw, integrated peripherals, physical location, and other practical differences. This also ignores the existence of domains that actually demand dedicated hardware, like supercomputing. We are never going to live in a world where microcontrollers are just as capable at running weather simulations as the TOP500, because the economics don't pan out. So no, you're right, it is not logically impossible to construct a language that is capable of performing all imaginable tasks, though that's not something that I've ever disputed. Rather than being logically impossible, it's merely economically infeasible. :P


> Then ignore the metaphor and focus on the longer paragraph that succeeds it. :P

Your second paragraph had a bunch of economic pseudo-theory about what sells... Maybe that explains why we don't have a good general purpose programming language, but it said little about whether there could be one.


Economics is a social science, pseudo-theory is the bulk of it. :P However, I welcome you to prove me wrong by creating the language to end all others.


> :P However, I welcome you to prove me wrong by creating the language to end all others.

Yes yes, I'll be sure to let you know. In the meantime, I hope you'll keep up the great work maintaining the status quo and contributing to a language which avoids problems that experienced programmers don't really have.

:P


You just described Modula-3 basically.

https://en.wikipedia.org/wiki/Modula-3

As in Pascal or Oberon, the language was simple and close enough to the metal for fast, production code. As a language in the Wirth tradition, it compiled lightning fast. It had support for manual or GC'd memory management depending on your use-case (e.g. performance or OS stuff). It had a subset of C++ features for programming in the large. It was a clean, consistent design for a language rather than a pile of other languages' features added to C. A simple language, GC, and fast compiles meant it could be used for scripting, although it was not as high-level.

It was used in SPIN OS, CVSup, and some businesses. Also had first stdlib with formal verification of some properties. If Oracle wins on API's, then I'm reviving Modula-3 immediately as a Java or C# alternative given its history is open with many features tracing back to ETH Zurich.


Languages can incorporate as many features as they like, but where two features conflict, they have to privilege one over the other. Nothing stops a language from having both statically- and dynamically-typed elements, but the library it ships with is going to privilege one over the other, and it's going to drag the rest of the code in the language in that direction. Nothing stops a language from having both immutable and mutable elements, but the language is going to have to privilege one over the other. (If all the library code uses mutable values, then you will have a harder time using immutable values with it; if all the library code requires immutable values then you'll have to copy a lot of stuff if you're using mutable values.) And so on and so on for a large number of features.

In the end you can not help but have a language that is either a "systems" language, a "scripting" language, or something uselessly in-between. You can't help but have an "imperative" language or a "functional" language or something uncomfortably in-between. And in a lot of ways it's more the libraries driving this than the language itself; a language can be a kitchen-sink language but when it comes time to write the function that splits a string on the given character, you're gonna have to choose whether the strings are immutable and how arrays get allocated and how they get passed by that specific function no matter how flexible the underlying language is. Multiply by several hundred for even a simple standard library. (And there will be a standard library; even if you don't bless one by the language designers the community will develop a "standard" loadout and make the decisions for you even if you didn't.)

You can imagine this sort of language being a good idea because you can hide in the conceptual ambiguities of the vague idea, but when you try to manifest it concretely, you can't help but make a long series of decisions that will create a language that is better at some things than others, or, if you choose sufficiently poorly, not terribly good at anything.

You can always do the "multiple languages on the same base system" approach; it's not just Java that works that way with its virtual machine, it's the way the real machine works too. You've got the base machine code and a whole bunch of things that map back to it already.


> but the library it ships with is going to privilege one over the other

This really just seems to me a matter of no one having done it right yet...

Following your example (which I like), I can have both mutable and immutable types in my language. Your point is that the library will favor one or the other, but I think you admit both libraries are possible. Why can't I have twice the library?

Yeah, it might be more work to have a library twice as big (and again, no one seems to have done it yet), but it's not more work than having two languages with two libraries.

All that to say, you're summarizing what's been done, but you haven't proven that something better couldn't be done.


It's just cultural, monetary, design and community issues

"What difficulties does this proposal face?" "All of them"


Just not the one that would fail it terminally: it being some kind of logical impossibility.


>> A single language needed to solve all problems is a fallacy.

> Has that been proven and do we have pointers to any peer reviewed papers on that?

I agree with you and can't stand it when people say things like "all languages have their strengths and weaknesses" or "you just need to choose the right tool for the job".

I can't see any reason a clean and simple language can't have high level constructs when you want them and low level performance when you need it. It's just that no one has done it well enough yet.


If you drill down far enough it's all binary or byte code. But on the way back up, there are variations of implementation in more than a single language.

Inception-like self-describing languages come close but are never 100% self-describing (this always blows my mind). Then there are languages implemented on top of C, and the many XVM languages (Elixir built on the Erlang virtual machine, JVM langs, etc.)

Any particular language is almost always composed of multiple languages, so trying to craft one to solve all problems is an interesting problem.


Mesa/Cedar, Ada, Oberon(-2) and Modula-3 come to mind as such languages.

Specially the interactive environments of Mesa/Cedar and Oberon OSes.


Eventually they'll stop describing Modula-3 in their requests and actually use it instead. ;)


I've come to the conclusion that you should "use C++ when you absolutely have to and C when you can." There just aren't many areas where C++ is absolutely required when plain old simple C can be used. (Not to mention using higher-level languages where possible).


Such rants appear once every few months on HN; this one is one of the least convincing. Many problems he mentions are not "Modern C++" problems but problems C++ has had from the beginning, and some of them already have reasonable solutions, for example ccache + distcc for speeding up compilation.

The real problem with C++ is the standard committee, the design by committee approach for such a complex language is failing. If C++ is taken over by a company, it will be a much better language.


“There are only two kinds of languages: the ones people complain about and the ones nobody uses.” - Stroustrup.

C++30 might end up being what D is today.


Or Rust, or Go.

C++ is like Java, it will pull in features of other languages years after they have been proven to be valuable and useful, not that there is anything wrong with that. There is something to be said for a slow moving target that you can kinda rely on to work well. If anything I think C++ went off the rails when they started innovating in the language space and tried to introduce all sorts of novel features that other languages hadn't prototyped. I think of all the nonsense around templates that they introduced and how generally the consensus is not to use any advanced template features.


This sounds like it's written from the point of view of implementing something inhouse. I fail to see how FPGA programming will be relevant if one wants to distribute software for consumers (or am I technologically clueless...).


According to the writer's LinkedIn profile, he comes from the world of High Frequency Trading. That's an area where performance is so important, that it might make sense to design your own hardware. And you certainly don't want to distribute it to anyone...

I do agree with the writer that with each release, C++ has become more and more complicated, seriously hurting the maintainability of C++ code.


I also agree that c++ is getting more and more complex over time. However if you build things from scratch and you cherry pick your language features, C++ can be quite pleasant.


That's the problem - C++ isn't one language, it's at least three generations of languages living together in the same compiler, like an extended family crammed into a tiny apartment.

No one has moved out since 1983, and new additions just keep on coming.


We c++ programmers are very welcoming. We have yet to meet a paradigm that we didn't like.

On a more serious note, any old system acquires cruft with time and you don't always have the luxury of throwing away pieces or even restarting from scratch.


It may be pleasant for you, but not so pleasant for the next developer who needs to maintain your code. Perhaps he likes picking different cherries than yours...


So it seems like he wants people to learn FPGA based programming languages (Verilog, VHDL) because he will need to hire (more of?) these people soon.


Sounds like he is suffering from bias of the problem domain he is working on right now.


Which, to be fair, is also an apt description of defenders of C and C++ (I consider myself in that category). We are all susceptible to the biases of niches we have worked in and what we have seen working well in those niches.

C and C++ can be made to shine in ways that, I get the strong impression, many people without the same experiences will never appreciate - and perhaps that's not even a bad thing; maybe that just is.


But that problem domain is dominated by the need for speed -- which is exactly where C and C++ are supposed to have a competitive advantage.


I find the beginning and end of the article quite contradictory. Basically that C++ is too complicated; and oh by the way we should start programming FPGAs, which are much harder to get right.

I like modern C++, because I think it simplifies a lot of things (RAII for the win here). Templates let you engage in duck typing, but with (if you are careful) very performant results.
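
For instance, a minimal sketch of what template "duck typing" means here (the shape type and function are hypothetical):

    // Accepts any type with an .area() member; the requirement is checked at
    // compile time, and the calls can usually be inlined.
    template <class Shape>
    double total_area(const Shape& a, const Shape& b) {
        return a.area() + b.area();
    }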


While some pretty good points were made in this post, I can't help but feel the OP is a bit biased. A bit too narrow, so to speak.

I feel totally the opposite about the new modern C++. I guess the thing is that how, where and when you use it will define your opinion/experience.


> Today the "Modern Technologist" has to rely on a new set of languages: Verilog, VHDL

That was a complete surprise ending! :)

I like surprise endings, and he makes a lot of good points, whether or not I agree with them. But, I totally wasn't expecting "I'm done with C++ because: hardware." I was expecting because web or because awesome new high performance functional scripting language <X>.

A lot of what he's talking about there will still run compiled software though... FPGA programming and C++ aren't exactly mutually exclusive, right?


FPGA programming and C++ aren't exactly mutually exclusive, right?

I would say yes: you can't really run C++ on an FPGA. There are all sorts of tools which promise this (SystemC etc), but it requires you to stick to a careful subset of language constructs. You don't have a heap, for example.


"FPGA programming and C++ aren't exactly mutually exclusive, right?"

Currently they are exclusive. You can theoretically 'compile' some subset of C++ to an FPGA, but you very likely do not want to.

Of course C++ makes it a decent language to talk with a custom circuit in FPGA, but that's orthogonal.

edit: clarify


20 year C++ programmer here. I work on multithreaded server code. Stopped using modern C++ features 5 years ago. I'd compare my use of C++ to be roughly equivalent to the use of C++ in the NodeJS project or the V8 project. I'm not a user of Boost.

I have to agree with the author of the article. It takes longer to train developers to write idiosyncratic modern C++ code, and compilation times explode. Compiler support for bleeding-edge C++ features is spotty at best. It's harder to reason about the correctness of modern C++ code.


15 years here.

I'm mostly in MS ecosystem, so I don't have issues with C++ compiler support. But I totally agree with the rest of your comment.


One of the biggest users (some would say abusers) of template metaprogramming I know works on HFT software. He trades extremely long compile times for performance at runtime and finds that C++ allows him to do this and maintain a decent architecture (through what amounts to compile-time polymorphism as well as RAII).

For him, it's actually the older features of C++ that have no use. He doesn't use deep class inheritance and never touches virtual functions, for example.
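
Very roughly, and purely as an illustration (all names are invented, not from the comment), the trade-off being described looks something like this:

    // Runtime polymorphism: one virtual call per event, opaque to the inliner.
    struct TickHandler { virtual void on_tick(double price) = 0; };

    // Compile-time polymorphism: the handler type is a template parameter,
    // so the call can be inlined; the price is one instantiation per handler
    // type and correspondingly longer compiles.
    template <class Handler>
    void run_feed(Handler& h, const double* prices, int n) {
        for (int i = 0; i < n; ++i)
            h.on_tick(prices[i]);
    }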


I never programmed HFT software, but I agree with the criticism of the modern C++.

It's a pity the author hasn't defined what exactly "modern" means. I saw some comments comparing Boost with C++14. I think Boost is also modern. Even Alexandrescu's Loki is modern, even though the book was published in 2001.

I think that modern stuff was introduced into C++ because in the late 90s and early 2000s there was an expectation that C++ would remain dominant for some time. There was a desire to bring higher-level features to the language, to make it easier to learn and safer to use — even at the cost of performance.

People didn't expect C++ would lose that market so fast: very few people now use C++ for web apps or rich GUIs. However, due to inertia and backward compatibility, the features remain in the language.

Personally, I’m happy with C++.

C++ is excellent for systems programming, and also for anything CPU bound. For those you barely need the modern features, and fortunately they're completely optional: if you don't like them, don't use them.

But if you do need higher-level language for less performance-critical parts of the project, I find it better to use another higher-level language and integrate it with that C++ library. Depending on the platform, such higher-level language could be C#, Lua, Python, or anything else that works for you.


> After 1970 with the introduction of Martin-Löf's Intuitionistic Type Theory, an inbreed of abstract math and computer science, a period of intense research on new type languages as Agda and Epigram started. This ended up forming the basic support layer for functional programming paradigms. All these theories are taught at college level and hailed as "the next new thing", with vast resources dedicated to them.

This seems pretty dubious. Dependently typed languages and other projects embracing advanced type theory are still the realm of niche enthusiasts. While some of the more academic colleges might teach them in one or two courses, the vast majority of the education a CS student receives will be in traditional imperative languages. If "vast resources" have been devoted to Agda and Epigram, then I'm not sure what kind of language should be used to describe the resources devoted to C, C++, Java, etc. Also, as the author mentions, Intuitionistic Type Theory has been around since the 70's, in fact from the same year that C was introduced. It certainly hasn't been taking the CS world by storm since its inception, as he seems to claim.

Beyond that, the author's argument seems to be a bit incoherent. He critiques the readability of Modern C++, but C++ is notoriously hard to understand, including or especially prior to the development of C++11. It's never going to be an easy language to read except to seasoned developers. If anything, modern C++11 seems to provide abstractions that increase readability and safety. He critiques the performance of modern C++, but then he ends up recommending that people ditch C++ entirely and learn VHDL/verilog instead. Not even vanilla C++ is fast enough for him, then why criticize modern C++ on the grounds of performance?


I recently had to switch a project to -std=c++11 because a header I include now pulls in C++11 headers. This change alone made compilation at least twice if not three times as slow. The new safety and convenience features are nice, but compile times seem to be out of focus and keep getting worse every year. I don't know how I feel about g++ 6.1 defaulting to -std=gnu++14.


How do you know that it is c++11 switch that dramatically increased your compile time, instead of the header file and the headers it includes? Two conditions are changed, and you feel confident that one of them is responsible for the outcome. Why?


Because it's a C++11 header that doesn't exist in C++98. And I cannot compile that version without switching to C++11 mode.

Edit: there appears to be some confusion here. I don't include the C++11 header myself. I include a public API header of another project which now happens to include a C++11 header in their public include file. Unless I want to stay on the old version of that API and accept the risks, I don't have a choice whether I compile my consumer module in C++11 mode or not. And it's a major fault of the C++ language that there are no modules and therefore this include file overhead and mess. They appear to be working towards a module system, thankfully.


The right answer should have been "because that was the bench-marked time result for the same code-base compiled in both modes" (before including the new header).


> Because it's a C++11 header that doesn't exist in C++98.

That is supposed to be a fault of C++11?


I don't understand your argument. An upstream project I use put a C++11 header in their public API's header. Now, I also have to build in C++11 language mode. If you're saying it's not the language standard's fault that CXX is slower, sure, but how or why should I differentiate between ISO C++ and g++/clang++ with libstd++/libc++?


I think what the GP was saying is: Ignore the new feature you're using from that upstream project for a minute. Did you try compiling your project with -std=c++11 before doing any other change? That is the only way you'd clearly see the difference of switching to C++11 in your project. Maybe it won't add any compilation time, it could even be faster...

And then you add the new feature and your compile time goes crazy, that's unfortunate indeed, but that's the price of not being penalized at run time with the [cool new C++11 feature the upstream project is using], you have to endure a longer compile time. But you can't blame C++11 for it, without it that feature wouldn't even exist!


This makes more sense, but why is it not C++11's fault if compiler writers have a hard time keeping the compile time overhead in reasonable bounds? Compiler writers actually do not enable some optimization passes or cap some passes at a certain search level because they know the algorithmic complexity would be unacceptable for most users, although the performance benefit is clear.

The planned C++ module system will most likely solve a large set of the pain points.


Just started to relearn C++ and Qt for cross-platform GUI programs. C++ is not easy, but its performance is still unbeatable, and in certain use cases, e.g. games, video-related performance-critical apps, or GPU/OpenCL work, C++ still seems to be the sole candidate.


I have a few problems with this article:

> structure leads to complex code that eventually brings down the most desired characteristic of a source code: easiness of understanding.

If done well, the structure of things like variadic templates makes libraries easier to use and makes coding faster (granted, code bloat can be an issue with N different function signatures).

>C++ today is like Fortran: it reached its limits

Not quite. Fortran died because, well, object-oriented programming came out and lots of people liked it. And C was always more popular regardless, so C-like C++ was the obvious next choice. There is a lot of cruft in any new library, so some things aren't as performant as if you wrote them in, say, assembly, which is what the author seems to suggest. Yes, if I built bare-metal iostream-like functionality it would be more performant (ha, used the word :) ). People know iostream isn't that performant. Could it be better? Perhaps. Is it safe? Yes! If you want perf, use the C interface directly. Is that safe to use? Probably not for the general careless user.

>To handle the type of speed that is being delivered in droves by the technology companies, C++ cannot be used anymore because it is inherently serial, even in massively multithreaded systems like GPUs.

Well, yes, but so is just about every language. People are trained to write sequentially (left to right, top to bottom), with many exceptions... but nonetheless, sequentially. There are very few languages that do multithreading natively. There are lots of additions/libraries for C++ that enable very nice ways to express parallelism, both within the standard (std::thread) and outside it (raftlib.io, hpx (https://github.com/STEllAR-GROUP/hpx), kokkos (https://github.com/kokkos), etc.). There are lots, and some are quite easy to use. C++ is inherently serial, but there is no better way to write. It is fairly easy to pull out "parallel" pieces of code to execute, and even easier if the programmer gets quick feedback (like the icc loop profiler, etc.) on things like ambiguous references and loop bounds that can be fixed quickly.
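
As a rough illustration (the function and data layout are made up), pulling one such parallel piece out with std::thread might look like:

    #include <cstddef>
    #include <thread>
    #include <vector>

    // Split a loop across two threads; the halves touch disjoint ranges, so
    // the only synchronization needed is the final join.
    void double_all(std::vector<int>& v) {
        const std::size_t half = v.size() / 2;
        auto work = [&v](std::size_t lo, std::size_t hi) {
            for (std::size_t i = lo; i < hi; ++i) v[i] *= 2;
        };
        std::thread t(work, std::size_t{0}, half);  // first half on a new thread
        work(half, v.size());                       // second half on this thread
        t.join();
    }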

Interesting read, but don't agree at all.


I agree with the author; I still long for the not-overly-complicated C++ of the 00s, in which I could write a super-fast 3D rendering engine without much bloat. I find it very appalling that C++ went from a poster child of imperative programming to implementing monads in its libraries (mind you, monads are used to "simulate" imperative programming in functional programming). Something went wrong there...


I was once a C++ programmer but migrated first to Java, which I thought was better designed and more convenient, and then to Python when I wanted less verbosity while having greater freedom to choose between a procedural style and OO.

C++ may still be an ideal choice in some problem spaces, but I think their number and size have shrunk as more and better alternatives have appeared and eaten away at the C++ share.


How are Verilog and VHDL a "new set of languages"? That set has been around 30 years, almost as long as C with classes.


The problem with modern C++ is that it wants to be everything. Now this behemoth is collapsing under its own weight.

People who are not forced to use C++ should consider other languages which are way cleaner and even more performant. Code written in Ada and Nim for instance is much easier to maintain.


Wonder if they tried IncrediBuild to reduce their compile time? They are right that C++ - while faster than ever before - takes much longer to compile than many other languages.


Things like Incredibuild can only help so much. You still need to have a decent project structure, and enough computers to effectively cut down compile time.

Though (as mentioned in another comment) if your project is a big ball of interdependencies, then the link time will dominate.

I've seen compile times with Incredibuild that are less than a minute, but the link takes 5-10 minutes. Or, better yet, it crashes due to the PDB size.


Am I the only one being redirected to a linkedin sign up screen?


I'm redirected too. I'm not signing up with Linked-In, agreeing to their policies, letting them spam me, etc just so I can read this article.


Being an expert FPGA programmer is easy, the problem is that small things take a really, really long time.


> "that is where the unicorns are born: by people that can see on both sides of the fence"


Functional language programs have to run as interpreted. If compiled they will be too bloated.


Anybody got an idea to which video series of Chandler Carruth he is referring to?


Here's a few:

- "Tuning C++: Benchmarks, and CPUs, and Compilers! Oh My!" https://www.youtube.com/watch?v=nXaxk27zwlk

- "Understanding Compiler Optimization" https://www.youtube.com/watch?v=FnGCDLhaxKU

- "Optimizing the Emergent Structures of C++" https://www.youtube.com/watch?v=eR34r7HOU14

- "Efficiency with Algorithms" https://www.youtube.com/watch?v=fHNmRkzxHWs

The first one is really good if you've never benchmarked anything in a Linux environment. Or if you otherwise want to learn how to investigate how C++ gets turned into machine code.


Actually, the author wants Go.


His argument about simplicity resonates with me. Sure, you can learn variadic templates and all that fancy stuff, but in practice, when you are working on production software in any company where more than one person uses the code base, it just pays in heaps to write the simplest, easiest-to-understand code; meaning all that nice fancy stuff is almost never used.


Kernel is my new home;


Me too :)


It just sounds like someone who couldn't handle C++ whining and making a bunch of blanket statements without really having any proper understanding.

I agree that some of the features such as lambdas can lead to hard-to-track bugs (lifetime issues) and difficult-to-follow code when abused. When used nicely, though, they can lead to simple, elegant and straightforward code (anyone who tried to use the STL algorithms before lambdas knows what a pita it was most of the time).
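
For example, a small before/after sketch (the predicate and function names are made up):

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // Before lambdas: a named functor defined away from the call site,
    // just to hand one comparison to count_if.
    struct IsNegative {
        bool operator()(int x) const { return x < 0; }
    };

    std::ptrdiff_t negatives_old(const std::vector<int>& v) {
        return std::count_if(v.begin(), v.end(), IsNegative());
    }

    // With a lambda the predicate sits at the point of use and inlines easily.
    std::ptrdiff_t negatives_new(const std::vector<int>& v) {
        return std::count_if(v.begin(), v.end(),
                             [](int x) { return x < 0; });
    }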

Bottom line, if your code base is a mess don't blame the tool. Blame the programmers.


Bottom line, if your code base is a mess don't blame the tool. Blame the programmers.

The problem is the attitude towards these new features that naturally leads to programmers abusing them, building overabstracted complex bloated behemoths to accomplish the simplest of tasks.

I'd say the vast majority of programmers, for some reason, seem to have an appetite for complexity --- they tend to feel that overly complicating things somehow makes their code better, especially if they can use some new shiny fancy features in the process. I don't think like that so I don't know exactly why that is the case, but perhaps it has to do with the feeling of accomplishment from having written something "big", solving simple problems with complex solutions. Instead, I'm the opposite --- I like solving complex problems with simple solutions, which means code that is usually very straightforward and only occasionally makes use of some more advanced features of the language, when it helps simplify the solution.


A common assumption among C++ programmers seems to be that if an ounce of prevention is worth a pound of cure, the value of a ton of prevention must scale up similarly. But a ton of anything is too much weight for most projects to handle.


>It just sounds like someone who couldn't handle C++ whining

As if you have so much more experience and skill in the language than he does? He's not some newbie trying C++ for the first time...

>Bottom line, if your code base is a mess don't blame the tool. Blame the programmers.

If the tool is a programming language, then its syntax, modularization and other features absolutely affect the code being a mess or not.

Tools are not passive; they influence how we use them and what we can do with them.


>>As if you have so more experience and skills than him in the language? He's not some newbie trying C++ for the first time...

Not much experience. Just 15 years writing C++.

>>If the tool is a programming language, then its syntax, modularization and other features absolutely affect the code being a mess or not.

In any language there's an idiomatic way of doing things. There are designs to which the language lends itself nicely and there are designs to which it doesn't. There is badly written code in any language. It's up to the programmer to choose designs and paradigms that work well with the language.



