One of the most pleasant parts of programming in C# is that an 'int' is an 'int'. If some function takes an integer input, that's exactly what it takes.
In C/C++, the base type system is so weak and squishy that everything redefines every type. You can no longer pass an 'int', but instead pass a "FOO_INT" or a "BARLONG" where FOO and BAR are trivial libraries that shouldn't need to redefine the concept of an integer.
Like C#, Rust has well-defined basic types, which eliminates a crazy amount of boilerplate and redefining the basics.
C++ itself has well-defined (if verbosely named) types (e.g. uint32_t); the problem starts as soon as you interface with any large library or platform API with a long history: Win32 and Qt come to mind. It’s 2020 and Windows.h still has macros for 16-bit FAR pointers. I’m disappointed Microsoft hasn’t cleaned up Win32 and removed all of the unnecessary macros (they can start with T()!)
C# and Java might seem to have escaped that problem - but because `int` was defined back in the 32-bit days, programs can’t use `int` (System.Int32) when the intent is, presumably, “the best int-type for the platform” (i.e. C++’s fast-int types) or for the context (.NET’s arrays are indexed by Int32 instead of size_t, so you can’t have a Byte[] array larger than 2GB without some ugly hacks).
(I know this is moot for function locals as those will be word-aligned and so should behave the same as a native/fast int, but this isn’t guaranteed, especially when performing operations on non-local ints, such as in object instance fields).
>I’m disappointed Microsoft hasn’t cleaned-up Win32 and removed all of the unnecessary macros
Considering how seriously they take backward compatibility, the only way to do that would be to design a completely separate API, like they did with UWP. I'm 99.999% certain these macros are still being used somewhere out there. And who usually takes the blame when some badly written application stops working or compiling properly? Microsoft. (And I don't even like Microsoft.)
What I'm proposing isn't really a new API - but you're right about it having to be separate. It avoids the work of having to design a new API (and then implement it!): what I'm proposing would keep the exact same Win32 binary API, but just clean up all of the Win32 header files, remove as many #define macros and typedefs as possible, and redefine the headers for Win32's DLLs/LIBs using raw/primitive C types wherever possible.
There's just no need for "LPCWSTR" to exist anymore, for example. And I don't see anything wrong with calling the "real" function names (with the "W" suffix) instead of every call being a macro over A or W functions (which is silly as most of the A functions now cause errors when called).
This would only be of value for new applications written in C and C++ (which can directly consume Win32's header files) where the author wouldn't need to worry about missing macros. It would certainly make Win32 more self-describing again and reduce our dependence on the documentation.
Which is exactly why UWP ended up being an adoption failure, to the dismay of those of us who were quite welcoming of its design goals - and I still believe that UWP is what .NET v1.0 should have been all along.
Now we have Project Reunion as official confirmation of what has been slowly happening since Build 2018, as Microsoft pivoted to bringing UWP ideas into Win32.
Breaking backwards compatibility is a very high price to pay, as many of its proponents end up discovering the hard way.
> Breaking backwards compatibility is a very high price to pay, as many of its proponents end up discovering the hard way.
I don't believe breaking back-compat was ever the problem: there were (and are) two main problems with UWP (and its predecessors[1]) going back to Windows 8:
* UWP apps were/are unnecessarily and very artificially restricted in what they could do: not just the sandboxing, but also app-store restrictions copied almost directly from Apple's own store.
* And because the then-new XAML-based "Jupiter" UI for UWP did not (and still doesn't, imo) ship with a control library suitable for high-information-density, mouse-first UIs - and because XAML is still fundamentally unchanged since its original mid-2000s design with WPF in .NET Framework 3.0 - the XAML system is now far less capable (overall) than HTML+CSS in Electron (the horror). Microsoft had a choice: maintain progress on XAML, or let Electron overrun it for desktop application UIs. Instead they've decided to keep XAML alive - but for what gain? There simply isn't any decent exit-strategy for Microsoft now: they've re-committed themselves to a dead-end UI system that needs significant re-work just to stay competitive with Electron, while simultaneously using Electron for new headline first-party applications like Teams, Skype, Visual Studio Code, and more.
Microsoft has completely wasted the past ~10 years of progress they could have made on Windows and the desktop user-experience, letting Apple stay competitive with macOS while still funneling billions into iOS and iPadOS - further weakening the Windows value-proposition.
[1] Metro Apps, Modern Apps, Microsoft Store Apps, Windows Store Apps...
Skype uses React Native, and given that the React Native for Windows team bashes Electron in every talk they give (with their 300x-overhead bar charts), expect that when React Native for macOS and Linux - which MS is also contributing to - get mature enough, all Electron in use will eventually get replaced with React Native.
I admire their efforts in backwards compatibility, but I never saw the point of extreme source compatibility. If I don’t want to recompile, then I don’t need to worry that some names were changed. If I do rebuild my app, then I’m happy to spend the time fixing the errors, or build against an old library or language version.
As someone who does a lot of work in Java (which lacks typedef), I feel the opposite. I don't like "stringly typed" APIs where everything is a String or int or whatnot - it's only slightly better than Object; you're basically giving up on typechecking.
With generics or typedefs (or a willingness to create lots of classes), you can be certain you never pass an Id<Foo> someplace that expects an Id<Bar>.