The thorough seeding of the list with defunct examples (BeOS, Mac OS < X) gives off a weird vibe.
There's also the standard halo effect listing. Any sufficiently large company is using a language / methodology / philosophy / technology you've heard about. And they're being namechecked on that language / methodology / philosophy / technology's website.
The classic is Microsoft.
Did you know Microsoft uses Scrum? They do!
Did you know Microsoft uses ISO 9295-424:54 (Humungo Software Dev Standard Written by the SEI, the DoD and a committee of 200 researchers from different Fortune 50 companies)? They do!
(Hint: Microsoft doesn't have a single unified method of development, it's largely up to each section to decide how they do the work)
I remember another newsletter article where one of the devs described techniques for getting C++-like power while remaining within C (heavy use of function pointers in structs, that kind of thing).
The BeOS userland API had base classes with reserved functions that were there solely to reserve space in the object's vtable, because of the fragile base class problem:
I think it's simply an old list... if you look at modern OS X stack traces, you often see C++ names in various frameworks. Random ones I found in `Console.app` (sadly this is a rather new & stable system):
I think I even saw a core part of NSAutoreleasePool in there once. I have never read the public Apple source tarballs, but from just seeing stack traces every now and then, it seems as if Apple is very happy with using both Objective-C and C++ (the compiler makes this easy of course, e.g. C blocks have support for C++ construction/destruction etc.).
I'd be interested to see which subset of C++ each company picked. Google's coding guidelines are always cited, but I can't find many others. MySQL's seem lacking in details. C++ is such a broad spectrum, I bet some of these companies only used simple classes and virtual functions, whereas some used boost.spirit and functors.
At Bloomberg, we just published our lowest-level C++ standard library (and plan to add more layers on top).[1] Everything we do is based upon this library and all the higher level layers which are not OSS yet. (i.e., No external 3rd party libs in any of these lowest layers) We CC-BY-SA'd our code standards as well if you're interested.[2]
We went with gyp because we wanted to natively support MSVC and Xcode projects as well as standard GNU make environments. Our production build system actually has simple text metadata files that live alongside the code, and it generates all the required files as well. We felt it was easier to provide a gyp-based system instead of also open-sourcing the entire Perl-based build system, which was tied to our custom data format and not generally useful to anyone else. We generate the gyp input files from the metadata on our end, and then anyone using the open-source project can use gyp to do whatever they need.
If we get to the point where OS distros want to package the libs, we may opt to generate autotools files, but that was too much work to be a roadblock from open-sourcing the lib.
I hope nobody uses boost.spirit. It's nearly impossible to debug, increases compile time dramatically, and increases binary size dramatically as well. It's a nice idea showing what can be done in C++ but it's really nothing you should practically use. I think Boost is simply going overboard with stuff like this.
Another interesting set of C++ coding guidelines is the JSF coding standard. Stroustrup was actually involved in creating it. It's mostly focused on embedded and critical systems. But unlike what most people expect, it highly recommends things like templates.
Actually boost.spirit is an excellent library. The fact that it can both parse and generate made it a great basis for the Io debugger I wrote that wraps gdb; since gdb uses a text-based interface, I had to both parse and generate in order to spoof gdb responses for Io script debugging. Spirit allows me to do that quite nicely.
Like anything, boost.spirit has a learning curve. You could just as easily say no one should use emacs because it is hard to learn. In fact I personally don't use emacs because I found it too hard to get going with, but I don't from there extrapolate that no one else should use it.
The compile time and binary size problems of boost spirit can be addressed by factoring your grammar into smaller subgrammars in separate compilation units. The subgrammars can still be combined the same as rules can be.
The obscure error messages can be improved by using typeful programming so that you get an error like "no conversion from MyRuleADatatype to MyRuleBDatatype".
Yes these things are still problematic, but how do we get from "X is hard" to "no one should use X"? See the Clojure author's talk on Simple vs Easy: he lambasts programmer culture for being obsessed with 'easy' to the detriment of creating useful stuff. He asks, what if the Foo Fighters said guitars are hard to play, let's play kazoos because they are easy. Would you listen to the Kazoo Fighters, he asks.
For simple lexing/parsing it likely is overkill; that said, for more complex things it can be nice. I have a lexer for Erlang written with Boost.Spirit on my GitHub. I found debugging no worse than the general Cthulhu-horror that is C++ template errors, something you get used to; looking at the root of the spew is generally your best bet. Also, they have built-in debug nodes you can trivially add to your grammar that will spew every step of the lexing/parsing process, so you can easily see where things are going off the rails if your grammar is wrong. Compile times are definitely increased in my experience; I don't recall the binary bloat, though it has been a while since I played around with it, and a good compiler should be able to fairly aggressively trim it down. Will it be as optimal as a hand-written system (assuming you know what you are doing)? Unlikely, but it will be much quicker to get up and running if you know Boost.Spirit.
> And everyone knew the huge amount of software in C++. Why the list?
Perhaps the submitter just likes watching HN'ers fall all over themselves to reflexively point out all the reasons why C++ is, like, the worst language ever? (Someone even mentioned coding Windows in ML-derived languages, I can only imagine Raymond Chen's reaction to that)
MySQL, though written in C++, reads more like C with classes. Even then, a large portion of the code is procedural instead of OO. They are slowly moving towards an OO style, but it will take a while before a reasonable percentage of the code base has been re-written.
C++ isn't just about subsets, it can easily be about building your own language, even without templates. I've seen enough C++ in "really big projects" to know that the amount of complexity it spans and the extreme differences in every aspect of usage is every bit as significant as comparing completely different languages.
> Don't go crazy with Templates, in fact don't even use them (except for what's in STL already)
What's wrong with templates? This is the most powerful feature of C++, allowing for generic programming, polymorphism and metaprogramming. The latter is of course clunky considering C++ syntax, but overall templates are great. And the STL is the best thing that happened to the C-like family of languages.
The problem is that, as expected in a statically typed language, you can't do real generic programming, even with templates, or even by adding another inheritance level (which makes some things easier).
Because type 'fixing' (that is, knowing which type you need) happens at compile time, you can't do, for example, duck typing, "even with" dynamic_cast. Yes, dynamic_cast works at run time, but you're limited in what kinds of objects can be present at the point of the cast: only those in your inheritance hierarchy.
You end up having to specify the type, you don't have a 'Generic' type. And you may have to deal with dynamic_cast
And read again, I'm not against STL, they 'get' templates.
> You end up having to specify the type, you don't have a 'Generic' type. And you may have to deal with dynamic_cast
Do you really need one? Because then you really are dealing with run time polymorphism, whereas templates are all about compile time.
I recommend checking D programming language out. It has very powerful templates with a nicer syntax than C++ and a lot of other things aiding metaprogramming. Moreover with Ranges you get an implementation of ideas laid out in STL but with a better syntax and matching the power of what is in proper functional languages.
struct data {
    void *data;
    size_t length;
};
union value {
    char *zstr;
    int intgr;
    float flt;
    struct data *data;
};
enum type {
    STRING,
    INTEGER,
    FLOATING,
    DATA,
};
Firefox and Chrome are both actually written substantially in JavaScript. It's possible that they could get away with JavaScript + C rather than JavaScript + C++ (conjecture here). In this world, more stuff is implemented in JavaScript, perhaps.
There is the saying that any sufficiently large program contains a buggy and ill-specified version of Common Lisp... I would rephrase that as any sufficiently large program needing to contain its own extension language and programming language. JavaScript is filling that role for the browser, and I expect there to be a language in the future that does it better.
There won't be anything that replaces C (or C++). Go can't do it. But there will be something that works better with C and C++ for big performance-intensive applications. Lua is close. The reason I would prefer C is that C++ has a very hostile interface for another programming language to interface with.
I wish Python were there, but its C API isn't really up to the task, especially with regard to concurrency (which e.g. Firefox and Chrome have a lot of!).
Something ML-derived perhaps. Strong typing, mostly functional, but practical.
I once wrote a large part of a CSS+HTML renderer in OCaml (think: the very core of something like Firefox), and the ability to match over data structures and use trees naturally was a huge advantage.
ML was devised as a language to write compilers in, so LLVM is covered.
For an operating system you really want a lot of different languages because it's a huge undertaking that covers many domains. MirageOS is an OS written in OCaml, so that's a proof that it's possible at least.
I was under the impression Rust was being developed by Mozilla to eventually be their language of choice for implementing software like Firefox precisely because C/C++ leads to too many (security) bugs among other things. At least one of the reasons, anyway..
A: Mozilla intends to use Rust as a platform for prototyping experimental browser architectures. Specifically, the hope is to develop a browser that is more amenable to parallelization than existing ones, while also being less prone to common C++ coding errors. The name of that project is Servo.
Q: Are you going to use this to suddenly rewrite the browser and change everything? Is the Mozilla Corporation trying to force the community to use a new language?
A: No. This is a research project. The point is to explore ideas. There is no plan to incorporate any Rust-based technology into Firefox.
I was being careful and said software _like_ Firefox :) From discussions I had with people about Rust it seemed obvious that if it reached maturity then it could be considered for new projects starting from scratch.
Windows: for lower-level work, C, like Linux and Mac OS X. For higher-level development, C++, or just go with C#.
Firefox/Chrome: C++ looks like a good choice, still, there's a lot of JS behind them as well if I'm not mistaken. Now, Go may be a good choice
LLVM has interesting requirements, it needs to be both fast and needs a lot of 'flexibility'.
I would probably try making the next LLVM version in PyPy (which uses LLVM itself), so it could maybe bootstrap itself using CPython first, hence bringing us one step closer to Skynet.
But in general Go/C#/PyPy may be steps in the right direction (Java is too afraid of new things like the anonymous functions being added to C#, so it's behind).
> For higher level development, C++ or just go with C#
The facts:
C# was created in 2000.
After 12 years Microsoft still does not sell a single app written in C#. Not one.
C# is used for PowerShell (free) and MSN (free, the backend is mixed C# and Java).
This is not likely to change anytime soon: pretty much all Microsoft job openings require C++. Also for new projects MS recommends JavaScript or C++.
Python is not better off either: Google does not allow Python code in anything that can be seen by the end user. (YouTube is Python, but it was bought, not built.)
I'm not saying C++ is perfect, just that for some use cases it is pretty good. For some other use cases obviously it is not the best choice.
I have heard these things about C#. Too bad, because I think this is one thing Microsoft nailed. Sure, it has its warts, some parts of the library have to be improved, etc, still.
Also for new projects MS recommends JavaScript or C++.
This was one of the clashes between Microsoft divisions, the split between the new Metro (JS) side and the rest; can't remember which side Sinofsky was on in this.
Of course, to work at MS you have to know C++
" Google does not allow Python code in anything that can be seen by end user" really?! Didn't know that, maybe it has to do with their infrastructure and the way it scales. Can't say I blame them (still, GAE runs python)
How about Lisp? I know, I know, it's a religion and all that, but let's put it this way: you can get high-performance Lisp code, there are Lisp compilers that allow you to do things at the machine level (SBCL, for example, has support for pointers and inserting assembly language instructions), and OSes have been written in Lisp in the past.
Why this over C++? Well, first of all, basically anything is better than C++ at this point, but as for why Lisp in particular, there are a few reasons. First, I won't deny my own biases -- I know Lisp, I use it in my work, and I know that the tools to do the things I mentioned above already exist (perhaps an ML fan can answer whether or not such things are available in SML or OCaml). Second, Lisp is never really "out of date," in the sense that you do not need to rewrite your compiler to introduce a new language feature (consider what was required to add lexical closures to C++, and compare to something like CLOS, which can be implemented in macros for Lisps that do not have object-oriented constructs). Finally, there is the expressive power of Lisp; an experienced Lisp programmer will run laps around an experienced C++ programmer, simply because it is easier to describe a complicated program in Lisp than in C++ (this argument also applies to ML, Haskell, Python, and dozens of other languages; again, I will not deny a personal bias towards Lisp).
I was one of those who was vehemently anti-C++. Then I read the source code for LLVM (or, large chunks of it...it's a rather large code base). Now I am of the opinion that beautiful things can be created in C++ as well, it just takes considerably more talent than were you to use any other language.
Actually, C++ is bad by definition. The C++ standard has some serious deficiencies that make it harder to write reliable code (and forget maintenance). Part of the problem is that the C++ standard is written with compiler writers in mind, rather than with C++ users in mind, and so some things (like not unwinding the stack until an exception is caught, thus avoiding the double exception fault) are not done, and new problems are simply layered on (like causing the default behavior for exceptions propagating out of destructors to be program termination).
At this point, C++ should be placed in the same category as COBOL: it is a language you learn because you need to maintain some old code that nobody has time to rewrite or replace.
"the C++ standard is written with compiler writers in mind, rather than with C++ users in mind" Just looking at the enormous lengths compilers have to go through to correctly parse C++ should tell you this isn't true. C++ does have some rather glaring deficiencies, but the idea that they're there to make compiler writers lives easier is outrageous to say the least.
Sure, it is hard to write a C++ compiler. Yet neither the C++98 standard nor TR1 fixed the template right-angle-bracket problem (`>>` at the end of a nested template being lexed as a shift operator), despite it being a major source of headaches among C++ programmers, because compiler writers complained about the complexity it would have added to the grammar. It was not until the majority of commonly-used C++ compilers began to emit a suggestion about the problem that the committee finally fixed it in C++11, and even then, the committee had to convince compiler writers that it was a good idea. That same pattern of behavior can be seen with the double exception faults: compiler writers were unwilling to put in the effort needed to catch an exception before the stack is unwound (without making compiled code slower), and so instead the standard was updated to further cement the idea that destructors should never throw exceptions.
No, it is not purely for compiler writers, but compiler writers and library implementors are a powerful group on the standards committee.
What you're describing sounds more like "C++ strikes a balance between features and implementation difficulty" not "C++ is written for compiler writers." And even that's charitable because it's not like we haven't gotten hard to implement features either. Templates (especially SFINAE), lambdas, and even concurrency and generalized attributes are all places where implementation difficulty was trumped by users. I do agree that compiler/library writers can slow down good features, but they have an important position that needs to be heard and weighed with every other voice too.
Most of Amazon.com's systems are written in Perl Mason. Very few systems use C++; I would say not a single major system at Amazon is written in C++. It definitely should not be included in this list.
We actually do have major systems written in C++ - but nearly all of them are legacy, and most have been deprecated already, while the rest is on the deprecation list. It's mostly Java now. Perl/mason is only used on the front-end.
Oh, and there's at least one critical service written in Erlang =D
Disclaimer: I left Amazon in 2008. I've tangentially stayed in contact with a few people there in my old groups.
I worked in Supply Chain specifically on the FC side of software (OK...specifically on COFS for those of you who remember) from 2004 till 2007 before moving over to retail. And I can say that all FC software was C and C++. There were small pockets of Java when I left, but I'd be really surprised if ALL of FC software has been ported to Java in the last 4 years. You're talking about 10+ years of software. Software that controls hardware that works as is. Tons of "bugs" and "features" to port.
And I'd call the software that runs the warehouses a major system.
My team tried to replace a C++ daemon with a Java one and our favourite phrase became "Oh yeah. Forgot about that business rule".
Most of the customer-facing stuff may be written in Mason, but more and more back-end services are written in Java. And there are a lot of back end services.
There was a time when Amazon was mostly C++, but it wasn't very recent.
This list proves nothing. C++ is often used for performance reasons or even as a target language. Facebook's HipHop compiler converts PHP into C++. I would argue that it means that Facebook develops in PHP rather than C++.
Similarly Adobe's most recent product Lightroom was about 63% written in Lua [1] which they attribute to its rapid development.
What would be interesting is a table which shows what percentage of developers or projects predominantly develop in C++. I am guessing it would be a much smaller number.
C++ is coming back in a big way because it saves money. It is the king of performance per watt, per cycle, per transistor which directly translates to performance per dollar. It cannot be beaten.
That's very interesting. I didn't know that about Lua and Lightroom. It's interesting for me as I'm asked a lot by photographer friends about large performance problems with Lightroom. Particularly between versions 3 and 4. Huge number of issues on the boards out there about it.
I'm not saying there's a causal relationship. Still, it might be an avenue to investigate for me.
It's interesting for me as I'm asked a lot by photographer friends about large performance problems with Lightroom.
Wait until you try Aperture; performance-wise, Lightroom is heaven. I moved to Lightroom for this very reason. Adobe also supports new cameras' RAW formats earlier than Apple.
I never really peeked into the application bundle of Lightroom, but at least some of the plugins that I used were written in Lua.
Nothing against Lightroom though. It's one of my favourite programs from Adobe, together with InDesign. And I've heard that about Aperture, and recently FCP.
Yes, but the info is very old. I wonder if Adobe is still using it and would provide some sort of update. There's been no Lua IDE from them, as they half promised, nor anything I have seen them contribute back to Lua. I was wondering if they were using LuaJIT as well.
Honest question: Is C++ really worth the complexity over something like C? I'd love to hear people's technology evaluation thought process when choosing C++. To me, C seems far less complicated, while still providing similar levels of performance. I tend to enjoy languages that are small and get out of my way, but if C++ lets me be that much more productive than C, I'd like to make a serious effort to spend more time with C++. What do I gain from C++ that I can't get from C? Is it worth the tradeoff?
The two things I miss most when writing C instead of C++ are RAII and exceptions. Both relocate and consolidate code unrelated to the intention of a function (resource management and error handling, respectively), making it easier to write, read, maintain and debug.
I also find function templates very useful while prototyping: Rather than having to select concrete data types up front, I write a function "as if" I have objects that can perform the operations I need. Now I have a concrete set of requirements (the standards committee calls this a "concept") for my classes. And if I've done it right, whatever abstract idea I've expressed in the function template need never be expressed again.
With some practice, it's remarkable how much C++ allows you to remove redundancy and repetition from your code. All without paying much, if any, of an abstraction tax.
C++ doesn't have to be complicated. Its toolbox is so compoundingly convoluted, however, that you can create overly complicated contraptions that can explode right in front of you.
There was one fundamental decision made back in the day about the copy constructor, where it takes a const reference as its single argument, that logically removed the C pointer problem. It's still possible to reintroduce it, but it helped prove correctness. In other words, you can't make something unless the compiler knows that it's made something; using an address, the compiler has to take your word for it. You can think more contractually, as in, say, Eiffel, where if garbage gets in, it's not the programmer's doing, by definition. At its best it cures some C gotchas. At its worst it can be contrived and misapplied.
Thanks for the honest answer. I have no desire to start a flame war. Having an open discussion about costs/tradeoffs between the two seems helpful. Either way it seems worth my time to learn more about C++, so I can make these assessments for myself. However, it's great to hear other people who've "Been there, done that - here's what I found" from inside the trenches.
With C you are close to the hardware and you don't need as large of a mental model to understand what is going on. It really depends on what kind of problem you are attacking and what level of abstraction you need.
C++11 introduces new problems. Programmers are expected to know how a lexical closure should capture variables from the enclosing scope -- should it be by reference, or by value? Programmers are also expected to know that when they capture by reference, sometimes they are really supposed to capture by value but with smart pointers (and of course, they are expected to know which kind of smart pointer they need). Instead of fixing the double exception fault, C++11 just punts on the issue and says that no exceptions are ever allowed to propagate out of a destructor (which is another way of saying, "destructors must never signal errors, even for things that are correctable, like running out of disk space").
That Microsoft would recommend JavaScript or C++ is unsurprising, considering how much business those languages have generated for them and how many programmers who support the Windows ecosystem are only comfortable in one of those two languages.
Yes, they fixed many things - and that's good.
The alternative is to create a new programming language every year, just to discover it breaks down somewhere else (ask Twitter about Ruby).
Memory management is needed for C, but not for C++ or Objective-C.
I worked years on large C++ systems (>300 developers) without using "new" or "delete".
Whenever I develop in Scala or other GC language, I still miss the deterministic destructors. For memory it is OK to be freed anytime, but there are other resources (GPU, file, TCP port) that you still have to manage manually and it needs to be freed right away. It's trivial to write in C++, but I never saw a failsafe C# implementation.
C++ "smart" pointers do not free you from thinking about memory management; the fact that you are expected to choose between shared_ptr and weak_ptr should be enough to tell you that. Unlike languages with a garbage collector, C++ requires programmers to spend their time trying to figure out if they will create a cyclic data structure, and then also to figure out where that cycle should be broken. That is moderately better than having to figure out when it is OK to free memory and when it is necessary to free memory, but only moderately. Let's put it this way: if you were writing a doubly-linked list, where would you put the shared_ptr, and where would you put the weak_ptr (or would you just give up, use "dumb" pointers, and have destructors free memory)?
In addition to what is talked about below, using value semantics (with mutable or immutable objects) can be helpful. Lots of types can be given value semantics and end up more efficient (most people think copying == less efficient, always) due to the lack of pointers and the locality destroying/cache unfriendliness they introduce. In my experience if you see new/delete in C++ code you should be immediately questioning who owns that object, how it is shared and how/when it is destroyed. I can't even count the number of times I have seen one object new up something and then share a raw pointer with N other objects, who then either have to coordinate who calls delete or, the creator itself does so, and you may have use-after-free issues. unique_ptr and shared_ptr, or their moral equivalents, help avoid most of these issues. As a bonus shared_ptr enables weak_ptr, which can also be very handy for non-owning users and breaking cycles in object graphs.
If you use smart pointers in your code (std::unique_ptr, std::shared_ptr), you will rarely have to call delete. A unique_ptr for example owns an object through a pointer; the pointer is automatically deleted when the unique_ptr goes out of scope. shared_ptr retains shared ownership through reference counting; the pointer is deleted when the reference count reaches zero.
You don't need to call delete because the unique_ptr destructor will do it for you.
You would only need to use shared_ptr when ownership is shared among multiple objects. A shared_ptr holds a reference counter that is incremented every time it is copied, and decremented every time it is destroyed or re-assigned. When the reference counter reaches zero, the pointer is automatically deleted.
Pre C++11 you would use a scoped_ptr in that scenario. And for the record, the conditionals in that destructor are redundant, deleting a null pointer does nothing.
Depends on the situation. I'm asking about pointers specifically because I have a situation in which the lifetime of the property is different than the lifetime of my object.
C++ encourages hiding the data inside an object, in which case the lifetime of the data will be the same.
Sometimes the wrapped data has to be exposed directly, e.g. the graphics pipeline needs direct access for performance reasons. Even in that case it is useful to have a class. For example, the VTK image reader class will handle a lot of bit-depth-related differences transparently. It makes it possible to store different image types with different bit depths in a single container, and most of the access is still through the interface (e.g. GetWidth).
My first decent sized program is to parse a strange file format that contains multiple tables into a DOM-like representation, and then output that DOM to CSV.
What I have so far, is a SAX-like event-driven parser that works, and I have a CSV writer that works, and I have hooked them together, and that works. I managed to avoid using pointers for that chunk of code.
For the DOM building portion, I need to allocate table and row objects while I'm getting events, and since I don't know ahead of time how many of each I'm going to have, I more-or-less must have a pointer to "the current" table, and then when that table is complete or I run out of document I can add it to the document object. So the lifespans are not identical anymore: the builder is going to have to allocate tables as it builds and probably rows as well.
At the moment I'm using pointers, and I just make a new table or row when appropriate and then add it back to the container. But this is a little messy because the containers are, you know, vector<table> and vector<row> and they don't need to think in terms of pointers for that reason. So I wind up passing currentTable and currentRow to the add methods, which probably makes copies on the stack.
I am sure that somebody with more C++ experience would see the right way to do this. My experience is largely with conventional OO languages (i.e. memory managed) and functional programming languages. So I'm a little lost on how to structure this.
Yes, C++ has been and will stay as the language of choice for many major free and commercial apps. It's not very surprising as there exists quite a lot of professionals with skills in C++ and the performance of applications written in C++ is generally very good.
I'm doing mostly embedded work on ARM chips at the moment, which is a world away from the rooms of SPARC V9s I've also worked on. It's a tighter space and you can't use too much of C++'s bigger bits. But multiplexing an SPI bus with C++, for example, was vastly more efficient coding- and performance-wise, because you don't need to remember whether some function or other set a mutex and released it. A specific custom lock in the base class improved code review and mutability.
Correction: the performance of good applications written in C++ is generally very good. Unfortunately, the language itself is not particularly conducive to writing a lot of software, fast, while maintaining the satisfactory quality level. Especially in today's world.
The interesting thing about the list to me is how many of these applications are written in C++ and one or more other languages.
There is a certain monotheistic outlook among most developers, and I fear this list does nothing to help it. Everyone knows the merits and uses of C++, surely that's not in doubt, but to suggest that it's the only solution is frankly silly.
An example plucked from the list that intrigued me is Sophis, the risk system. The reason is that more and more of these systems are being built in Java or .NET; in fact Sophis itself is using more .NET internally with every new revision. The reason being that performance issues are normally far more related to the algorithm than the language, and having a mixture of functional bits thrown in helps the dev make a leaner algorithm. Mixing such paradigms is just easier in .NET than C++.
To my knowledge most of the games/game engines are C++.
I'm not a hard-core programmer, but I'm frequently interacting with that breed, and almost all projects that require high performance are implemented in C++.
He missed off Windows 8; if I'm not mistaken it's written primarily in C++. Also, the "native" language for programming against the Windows Runtime SDK is C++, although it can easily be consumed by others like .NET and JS.
After researching a bit on the net I chose C++ Primer 5/e (not C++ Primer Plus). The author uses some of the new features of C++11 in the examples (like constexpr, auto, list initializers etc.) so you get used to a modern C++ style, but for every new feature it tells you what it replaces, so you can still maintain "old" code. The book is subdivided into 4 parts: the first part is about the language, the second part about the STL, the third part about advanced OOP, and the fourth part about advanced stuff.
"I felt a great disturbance in the Force, as if millions of voices cried out in terror and were suddenly silenced. I fear something terrible has happened."