Hacker News
C# 9 top-level programs and target-typed expressions (redhat.com)
136 points by benaadams on March 30, 2021 | hide | past | favorite | 184 comments



I mean, top-level programs are nice, I guess? IMO it's more of a means of attracting non-.NET developers into the fold. I work in a .NET shop and we probably won't use this in our production code.

The benefit of target-typed expressions IMO is that it changes field and property initializer syntax from:

public List<Foo> Foos = new List<Foo>();

to simply:

public List<Foo> Foos = new();

It may seem subtle, but if you write C# you see this pattern constantly.

Honestly, these are the features in C#9 I'm least excited about. I'm much more excited about record types, init-only setters, not to mention the improved pattern-matching and type inference (no more casting nulls inside of ternary operators!).

Immutability in C# has classically been pretty cumbersome. I work on a payroll platform where we've managed to work around a lot of that, but we're excited to introduce record types and init-only setters in particular to help with some of the maintenance pain-points surrounding the elimination of side-effects from things like pay calculations.
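For readers who haven't tried them yet, here's a rough sketch of records and init-only setters (PayCheck and its members are made-up names for illustration):

```csharp
using System;

var check = new PayCheck(1000m, 200m) { Memo = "March" };

// Records get non-destructive mutation via `with`: copy everything,
// change only the listed members. The original is untouched.
var corrected = check with { Tax = 250m };

Console.WriteLine(check.Tax);      // 200
Console.WriteLine(corrected.Tax);  // 250

// A positional record: value-based equality and immutable public properties.
public record PayCheck(decimal Gross, decimal Tax)
{
    // Init-only: assignable in an object initializer, read-only afterwards.
    public string Memo { get; init; }
}
```

Two PayCheck instances with the same values also compare equal, which is handy when testing calculations that must be free of side-effects.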


Top-level is mostly going to be nice when teaching new programmers. You can just get on with hello world and not have to type all this boilerplate and explain why you have to. But yeah, for "real" programs it is probably irrelevant. But who knows, maybe we will find a use for it.

I am mostly excited about the sort-of option type (nullable reference types) getting rid of nulls.
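For concreteness, here's the hello-world boilerplate that top-level statements remove (a minimal sketch):

```csharp
// Before C# 9, even hello world needed a namespace, a class, and Main:
//
//     using System;
//     namespace HelloWorld
//     {
//         class Program
//         {
//             static void Main(string[] args)
//             {
//                 Console.WriteLine("Hello World!");
//             }
//         }
//     }
//
// With C# 9 top-level statements, the entire file can be just:
using System;

string greeting = "Hello World!";
Console.WriteLine(greeting);
```

The compiler still generates a Program class and Main method behind the scenes; only the source you type gets shorter.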


> top level is mostly going to be nice when teaching new programmers. You can just get on with hello world

Yeah, it seems like kind of a weird thing? “Here’s marginally less boilerplate, once.” If someone is going to learn to use the language, they may as well learn how it actually works instead of hiding stuff that they’ll soon see anyway...


You should see how many new engineers are confused in CS101 when confronted with all of Java's boilerplate just to write a simple "hello world" program. It's a real issue.


So people with no programming experience are confused with Java. That doesn't sound like an issue at all.

I don't think the solution should be to bend Java backwards so that it's easy for people with no programming experience to work with.


It will benefit code examples and such. Say you want to post a code snippet on Stack Overflow: it is much simpler to provide minimal working code.

It has always been a clear benefit of, say, Python that you can provide a two-line code snippet which can actually run.

It is also a step in the continuing process of removing boilerplate. C# started out like Java, but has over time removed boilerplate where possible. "public static void Main()" is also just boilerplate.

I agree it won't matter for larger real-world programs.


.NET was designed from the get-go for creating large programs. There is of course some overhead/boilerplate to this. So you could not do a Write("Hello world"); without wrapping it in a method. In a class. In a namespace. In an assembly.

After all, why optimise the language for "Hello world"? How important is that case? Language designers rightly decided that supporting large programs was more important.

In the code that your team has been working on for years, you'll be glad of all of those "overhead" things, that help you manage complexity and decompose the program.

But yeah, show it to a c# newbie who is used to e.g. Python where "Hello world" is literally a one-liner and they might think "what a crapsack language, there's like 6 lines of overhead per 1 line of code, argh this is as bad as Java" - I've seen pretty much this reaction. Is it justified? From my POV, not at all. But I have seen that happen.

Top-level statements (i.e. compiler-generated boilerplate instead of Visual Studio-generated boilerplate) do seem like an "on-ramp" feature that would be discarded as the program grows past a few lines, though.


> But yeah, show it to a c# newbie who is used to e.g. Python where "Hello world" is literally a one-liner and they might think "what a crapsack language, there's like 6 lines of overhead per 1 line of code, argh this is as bad as Java" - I've seen pretty much this reaction. Is it justified? From my POV, not at all. But I have seen that happen.

I don't understand why we keep optimising for this audience in the first place.

These are people that lack basic understanding, but also have no patience whatsoever, or any intentions to RTFM. Why are we bending the ecosystem for this audience?


> I don't understand why we keep optimising for this audience in the first place.

I wouldn't say "optimise for them". I would say "get them in the door", get them used to how things work in C#, and then drop the training wheels.

I take the point, it seems a marginal case though. IDK.


Why though? I can’t see any advantage to wrapping the startup code in a Main method from a maintainability standpoint.


Consistency, mostly. It's all methods on classes everywhere else, why make it different here?

Even with "top level statements" the startup code is wrapped in a "main" method, it's just compiler-generated in this case.

The CLR insists that code statements are always in methods on classes, I believe. In cases where it appears otherwise (e.g. a lambda) the compiler generates a wrapper class.


Because it’s just noise that doesn’t mean anything to me, so why keep it? Same as type inference, expression-bodied members, and other LOC savers.


So you're going to go with a top-level entry point even with a substantial program? OK, I can see that happening.

I guess the point is that your substantial program will also have plenty of classes and methods elsewhere, so the idea that "you can do a one-liner without having to first learn about declaring classes and methods and namespaces" isn't relevant to you. You save a few LOC exactly once, in your top-level entry point. OK.


Yeah, true. I don’t find the “for beginners” thing too compelling but I do like the idea of removing boilerplate.


I think that MS is at this point fairly concerned with the "long game" for .NET, i.e. get the next generation onboard now, reap benefits for decades to come. Or equivalently they noticed that .NET wasn't "cool with the kids" and set out to fix that.


This isn't a REPL but the benefits are almost identical

Good for beginners, good for removing the overhead of "trying just one quick thing real quick"


If I want a REPL there's always the interactive window in VS.


> But who knows, maybe we will find a use for it

Outside of a project that deliberately has multiple entry points so that you can compile different executables from the same code base, or code where Main is, directly or indirectly, recursive, there's no additional utility from an explicit Main(), so I don't see why implicit wouldn't win for the normal case.


Top-level programs seem like a nice feature for shell scripting, assuming that the C# compiler has a "compile and run" feature (without littering intermediate files of course).


I’m a big fan of readonly structs in C#. They make immutability less cumbersome. The init-only feature will make readonly structs a lot easier to work with.
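A minimal sketch of what that combination looks like (Money is a hypothetical type):

```csharp
using System;

// Object-initializer syntax, but the struct stays fully immutable:
var price = new Money { Amount = 9.99m, Currency = "USD" };
// price.Amount = 10m;   // compile error: init-only property
Console.WriteLine(price); // 9.99 USD

// `readonly struct` makes the compiler reject any mutating member;
// `init` setters may only run during object initialization.
public readonly struct Money
{
    public decimal Amount { get; init; }
    public string Currency { get; init; }
    public override string ToString() => $"{Amount} {Currency}";
}
```

The commented-out assignment is rejected at compile time, so immutability is enforced by the compiler rather than by convention.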


That’s not that compelling on its own given that you could already use var to avoid repeating the declaration.


You can't use 'var' for fields and properties though.


Fair enough.


To me, the benefit of "Target-typed new expressions" is more with bringing consistency to class-scope member declarations and method-scope member declarations, so these two can look the same, and a programmer only has to read one way of declaring new members in C#:

  public class MyClass {
      private readonly MyDependency _myDependency = new();

      public void MyMethodThatDoesWork() {
          MyVariable variable = new();
      }
  }
As opposed to the (now old, IMO) var syntax/Java-like syntax mixture:

  public class MyClass {
      private readonly MyDependency _myDependency = new MyDependency();

      public void MyMethodThatDoesWork() {
          var variable = new MyVariable();
      }
  }
Edited for HN formatting.


I can't see var going anywhere; many times you aren't directly creating a new object, but getting a generated object from something (like LINQ) which tends to have more complicated type signatures.


Agreed that `var` isn't going anywhere, but this feature's best use case is with complex generics/enumerables and things like anonymous tuple types. For example:

  // old:
  var x = new List<KeyValuePair<string, int>>
  {
    new KeyValuePair<string, int>("a", 1),
    new KeyValuePair<string, int>("b", 2)
  };

  // new
  var x = new List<KeyValuePair<string, int>>
  {
    new("a", 1),
    new("b", 2)
  };


Another option if you're still on .NET framework is to use a collection initializer extension method. I add a comment when I do this and place the extension method in a separate namespace.

    // ListExtensions.cs
    internal static class ListExtensions
    {
        public static void Add(this List<KeyValuePair<string, int>> list, string key, int value)
        {
            list.Add(new KeyValuePair<string, int>(key, value));
        }
    }
    
    // new w/extension method
    var x = new List<KeyValuePair<string, int>>
    {
        {"a", 1},
        {"b", 2},
    };


Wow, when I (only a C# dabbler) read about this feature in the article I thought it sounded incredibly silly, but this really sells it. Was there really no better way to create tuples before?


You could create tuples with `Tuple.Create("a", 1)` since at least .NET 4, if not 3.

Since C# 7 you can also create tuples using just `("a", 1)`. But tuples are not KeyValuePairs. So the new "new" syntax will be very helpful in a lot of cases.


I really don't understand why they haven't added implicit conversion operators between the modern ValueTuple and the legacy value-types like KeyValuePair, Tuple, and Pair, grumble. It's a non-breaking change!


Adding that is potentially a source-breaking change. Admittedly an easy enough one to address, and they are more willing to make such changes in CoreFx than they historically were, so it is not impossible that they could decide to do it.

For an example of this being a breaking change, let's say that B is some other type implicitly convertible to A.

The following two method overloads exists:

    void Method(A a, ValueTuple<int,int> vt);
    void Method(B b, Tuple<int, int> t);
Right now `Method(SomeB, (4,5))` compiles fine, but if your proposed conversions were added, it would suddenly yield `CS0121: The call is ambiguous between the following methods or properties:...`.


Tuple is not a value type. And this means that they won't add an implicit conversion, because right now a major focus of .NET and C# is also performance, and this conversion would mean an implicit heap allocation. They'd rather avoid adding features of this sort, as I understand it.


C# (as of 7.0) has pretty nice tuple syntax that supports all things you expect (deconstruction, etc):

https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

But the situation here is that the type KeyValuePair is older than that and isn't a tuple.
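A quick sketch of the C# 7 tuple features mentioned above:

```csharp
using System;
using System.Collections.Generic;

// Value tuples with named elements:
(string key, int value) pair = ("a", 1);
Console.WriteLine(pair.key);    // a

// Deconstruction into locals:
var (k, v) = ("b", 2);
Console.WriteLine($"{k}={v}");  // b=2

// But a ValueTuple is not a KeyValuePair; bridging the two still
// takes an explicit construction:
var kvp = new KeyValuePair<string, int>(pair.key, pair.value);
```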


> Was there really no better way to create tuples before?

Here are some other options for code like the grandparent post. I tend to either live with the boilerplate or use a local function.

1. Use a collection initializer extension method (https://news.ycombinator.com/item?id=26641997).

2. Define a local helper function.

    KeyValuePair<string, int> KeyValue(string key, int value)
    {
        return new KeyValuePair<string, int>(key, value);
    }

    // new w/local function
    var x = new List<KeyValuePair<string, int>>
    {
        KeyValue("a", 1),
        KeyValue("b", 2)
    };
3. Use an array of C# 7 tuples and convert them in a loop or query.

4. Use generics to infer the argument types à la Tuple.Create. This technique can be helpful for using anonymous types with collections.

    //KeyValuePair.cs
    internal static class KeyValuePair
    {
        public static KeyValuePair<TKey, TValue> Create<TKey, TValue>(TKey key, TValue value)
        {
            return new KeyValuePair<TKey, TValue>(key, value);
        }
    }


    // new w/helper class
    var x = new List<KeyValuePair<string, int>>
    {
        KeyValuePair.Create("a", 1),
        KeyValuePair.Create("b", 2)
    };


Oh true, there are still use cases where it's valuable. `out var` also comes to mind.


I hate hate hate var.

Sure, in your own IDE, and reading the code you just wrote, no problem. But when doing code reviews and jumping all over, I want to see the type right there. Nothing more frustrating than reviewing a PR with var's all over. This isn't even up for debate anymore... No more var!

It is frustrating to still see var as the recommended way by Microsoft... You can even put in a rule to format the document and add explicit types as well on save. So complicated type signatures for linq aren't a problem!


This is completely the opposite of my own experience. Using “var” makes the code easier to write AND easier to read. The code becomes decluttered from all the mess of types everywhere and you can read what it is actually doing.

Anyone who names their variables properly should have no problem telling what kind of types are being worked with anyway. And in the rare case you absolutely need specifics, they’re just a tooltip away.

I always find these arguments against “var” somewhat absurd because those same programmers work with fields and properties and methods from other classes all the time, and none of those requires you to write the type.

For an example, if you were to access “someExternalThing.bigRedTrain”, the type of “bigRedTrain” would be written nowhere in the current class. I fail to see how using “var bigRedTrain” is any different.


Can't disagree more. var improves readability so much that I can't imagine reviewing code anymore with unnecessary type declarations everywhere.


Every time you write something like Foo.Bar, the type of the value returned by Foo is implicit - i.e. it's exactly like var. Do you never dot-chain properties and method calls? If you do, why should it be any different when some intermediate step in the chain gets a name?


  MyFluffyGenericType<WowThisIsFun<MeToo>, MoreStuff> x = myVendorsStupidContract.SomeInsaneType;
vs

  var x = myVendorsStupidContract.SomeInsaneType;


> Nothing more frustrating than reviewing a PR with var's all over.

ooh you gotta try reviewing your PRs within visual studio [1] (and be sure to use VS internal diff tool as well) - var is no longer annoying with intellisense

[1] https://github.com/github/VisualStudio/blob/master/docs/usin...


There's a reasonable convention of specifying the full type for a variable that is returned from a method where it's not very clear what the method returns without looking up the method signature. This usually means that the method name is not descriptive enough, but there are cases where you can't really fix that. Other than that, I think var definitely is a huge benefit for readability and overall code design.


Even then, you only see the type signature when you are initializing the variable, not every time you use it.

If you really want to see type signatures everywhere, you should use something like Hungarian notation, which embeds type information in the name. And you should disallow method chaining and having methods or properties as parts of expressions.

But I would recommend just using an IDE instead.


It’s definitely up for debate and since “use var everywhere” is the default ReSharper setting I suspect your preference is the minority position.


This kind of thing is really quite common in languages though. You have to understand that this is quite subjective.


https://ericlippert.com/2009/01/26/why-no-var-on-fields/

Eric Lippert blogged about that like 12 years ago (when var first came out). Interestingly, there's now a C# 9 epilogue at the bottom of the post.


Who would have believed a decade ago that there would be a RedHat developer blog entry on C# today?

If someone had told you that then you would have thought the world would have collapsed in the meantime.


Exactly my first thought. I remember my first steps with C#; it felt very much like an "MS version of Java", and I didn't expect a) how much I would like it and b) how successfully it would spread to other areas.


I don't know for sure, but I'd have to guess the Mono project made it on Linux blogs quite often since its inception approx 20 years ago.


I think that C# is underrated. Although TypeScript is my default language choice for most programming, C# / .NET is my choice for cases where:

- high performance is important

- interop with native libraries is required (because C#'s DllImport attribute makes that super simple)

Another benefit is that C# is syntactically similar to TypeScript (they were both designed by Anders Hejlsberg after all), so switching between the two languages feels easy. Java and Go are both similar to C# in terms of performance, but interfacing with native code isn't as simple with those languages, and they're also not as similar to TypeScript as C# is (especially Go, which is quite different).


C# is my go-to for Windows, for sure. Is the native interop good on Linux as well? I've never even considered anything like that; I would probably gravitate to C++ since that's what I'm familiar with, but would seriously consider Go or Java in non-Windows situations.


C# on Linux, as of the past 2 or 3 years, has been consistently improving, with great tooling and good interop; it is open source (MIT) and has the ability to package "native" versions of your app that contain the runtime at negligible space costs for distribution or deployment.

Give .NET 5 on Linux a try, you'll be surprised!

For me, the work this team has been doing is the example of the positive "change" Microsoft has brought to the open source space. A stellar language with great infrastructure.


Yes, native interop works the same across all platforms (Linux, macOS, Windows, Android, iOS). You can just place a dynamic library with C linkage (.so, .dylib, or .dll file) into your app's folder, and then you can call the native functions as external functions in C#. For example, if your library exports the following C function:

    void printMessage(char *message) {
        printf("Message from C#: %s", message);
    }
You can call it from C# like this:

    using System.Runtime.InteropServices;

    public static class MyLibrary {
        [DllImport("MyLibraryName")]
        public static extern void printMessage(string message);
    }

    // Usage:
    MyLibrary.printMessage("Hello from C#!");
C# takes care of marshalling data types, and an instance of a native object can be passed as an IntPtr. I'm currently developing web services for a new WebRTC-related product using .NET 5 on Ubuntu, and it's working well. One of the services utilizes a native C++ library and another utilizes a native Go-based library using the same approach. Java and Go are both good options, too.


Thanks, I'll give it a try. I've done a lot of interop in Windows so it'll be good to leverage to Linux.


It's a little bit harder when your Linux library is not inside LD_LIBRARY_PATH, or the Windows one is not on the PATH, but there are tricks to override the loading behavior. It's just a little bit tricky, since you need a loading context which can load other C# DLLs without having a reference to them.

It basically works like this: an AssemblyLoadContext loads the managed assembly (like YourPackage.NativeApi, which contains the DllImport), and that AssemblyLoadContext resolves, via LoadUnmanagedDll, all the paths for the DllImports. You also need a shared library for the calling package and the DllImport package, so that you can do (Type)Activator.CreateInstance(LoadContext.LoadFromAssemblyName(YourDllImportAssemblyName)).


The native interop is the same. That is, if your DLL and .so have the same function signatures, you can use the same interop code unmodified between Linux and Windows. This is really convenient.

(But watch out for bitness)
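For example, a quick runtime check of the process bitness (IntPtr.Size is in the BCL; the rest is just a sketch):

```csharp
using System;

// IntPtr.Size is 4 in a 32-bit process and 8 in a 64-bit process; a mismatch
// between your process and the native library is the classic bitness bug.
Console.WriteLine($"Pointer size: {IntPtr.Size * 8}-bit process");
```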


I’ve actually been using dotnet 5 on Linux recently. It’s been really successful, and to your question: I put performance-critical bits in a .so which I can use via P/Invoke. It speaks to the power of C, really, because there have been, amazingly, no issues with ABI or struct layout or anything.


Also want to add with Jetbrains Rider, you can get a consistent IDE experience across multiple platforms as well. As much as I like vscode, Rider is just fantastic.


How do you think they built the Linux-version of the .NET standard library? ;)


Lots of wine?


Azure has the majority of its VMs on Linux now. C# is almost certainly more popular on Linux than Windows for anything developed in the last couple years. We are in the process of migrating all our services over to kubernetes, it's quite nice.


Interop on Linux works well.


I agree it is an excellent language from an ergonomics point of view.

One problem I've found for proprietary programs is that C# stores basically your whole source code in the executable. I don't just mean it's easy to disassemble... it really retains the source code, or at least a lot of metadata. Even the original local variable names are there! I wonder if anyone has any ideas to work around this?


The term you're looking for is obfuscation. This seems like a good roundup: https://github.com/NotPrab/.NET-Obfuscator


A bit of a tangent but is it possible yet to compile C# to native code easily (like calling a normal compiler on the command-line)? Last time I checked I needed to jump through a bunch of hoops through Visual Studio, and create some XML or other nonsense. I don't get why they've made it so hard compared to just compiling managed code.


It is super easy with dotnet core. The new CLI is pretty simple, you use it as a package manager and compiler. Visual Studio also uses the CLI to run and debug dotnet core projects. https://docs.microsoft.com/en-us/dotnet/core/tools/


Is it possible to create a single, stand-alone, self-contained .exe? (I'm fine with only building for a single platform - Mac/Win/Linux)

Needing to run my programs (which show up as .dll's) using dotnet just feels weird (and it doesn't match my intuition for 'how programs are run' in Windows cmd/etc).

I'd be fine with an .exe that's not self contained but at least I run it like a 'normal' exe. :)


Yes that is possible, you can produce an .exe which contains your application's assemblies but relies on the appropriate .NET runtime being present on the user's computer. You can also build a .exe which has the .NET runtime bundled, but it can be quite big. I made a simple program that wrote "Hello World" to the console [1] and it was between 46-78MB: https://blog.mclemon.org/no-this-is-what-peak-hello-world-lo...


Yeah, it's not really a "real" native binary.

You can get it down to 8kb by jettisoning all comforts: https://medium.com/@MStrehovsky/building-a-self-contained-ga... but that's not exactly practical.


Wow! Yeah indeed there's a few impractical steps required to go all the way to 8kb, but trimming down to a couple of MB is definitely a huge improvement. I somehow never looked into CoreRT before, I'm gonna need to do so. Thanks for sharing this!


You have to set up "dotnet publish" for the project that makes the executable.

It then makes a stub loader "your program.exe" which looks and behaves normally. You can then make it "self contained", which bundles the assemblies for you and transparently unpacks them.

You can have different publish profiles for the different targets.


.NET 5.0 can now create a true single file that doesn't need any unpacking on first run (except on Windows, where the JIT and GC DLLs will live or be unpacked separately; but that should be resolved in .NET 6.0).


Yes, you can use ILMerge to merge all other DLLS into the main exe/dll: https://stackoverflow.com/questions/8077570/how-to-merge-mul...


> Is it possible to create a single, stand-alone, self-contained .exe?

Yes.


It's possible, but along with the .exe you'll get a couple of .dlls. The exe size will be 60+ MB.


But I want it to produce a single EXE and that's it... like the C# compiler has always been capable of doing. Sorry I neglected to mention that part (kind of assumed it was a given).

Also, the whole thing is still a stateful operation revolving around a .csproj file, which (again) is XML I have to create (sure, it's just one command) and manage/deal with every time. Even to compile a tiny C# file from my editor I now need to make a project for it. With a normal C# or C++ compiler I can just invoke csc.exe or cl.exe and produce the executable in one command from my editor... no intermediate state or project file management required. That's kind of what I was getting at with it being "simple". I didn't just mean the syntax, I also meant the structure of the solution.


You can still just invoke csc.exe, no project file required - I'm not sure what you are getting at? But it's probably going to be easier to just dotnet new console and get the project file for free. They tried migrating away from csproj but there is just too much ecosystem around it. The XML sounds bad, but in reality they added file globbing and other things to make it so you rarely even have to change it. That and to be honest it just works and is quite readable even though it's the "evil" XML.


csc.exe doesn't compile to native?


No, it's always compiled to MSIL. You've always had to use a separate tool (such as NGen or dotnet publish) to do AOT compilation. But there are tradeoffs and AOT doesn't always mean faster performance.

But again, if you use the dotnet CLI you can just dotnet publish to create an AOT version of your exe. Just remember it still has the runtime and GC; you are just reducing the work of the JIT. Also, AOT does not guarantee you will never JIT, and the binary still contains the MSIL. It's just a performance optimization where pre-compiled code will be used when possible.


Yes, I'm already aware of all this? I'm confused why you're explaining all this background—I'm well-aware of all this and my previous comments were already in response to these.


I think the short answer to your question is "no".

For single-file-no-project simplicity your best bet is dotnet-script: https://github.com/filipw/dotnet-script

For native code your best bet is NativeAOT: https://github.com/dotnet/runtimelab/tree/feature/NativeAOT

I'm not aware of anything that combines the two.

As to why: it's not so much that they've made it hard as they haven't made it easy. The reason they haven't made it easy is that it's a fringe use case. The main benefits of AOT are faster startup time, smaller storage requirements, and compliance with the iOS interpreted language ban. Most people who worry about those things don't mind having a project file.

They _have_ made some changes that may partly address your concerns. Visual Studio is no longer required to build C#; you can do it purely from the command line. There is also a new project file format. It's still XML, but much simpler. The HelloWorld example[0] from NativeAOT is a good example of how simple it can get. And the command line tools include an easy way of creating basic project files[1], so you don't have to memorize what little boilerplate remains.

[0] https://github.com/dotnet/runtimelab/tree/feature/NativeAOT/...

[1] https://docs.microsoft.com/en-us/dotnet/core/tools/dotnet-ne...


The C# compiler has always been included with the .NET Framework, it has never required Visual Studio:

https://stackoverflow.com/questions/861384/is-it-possible-to...


Gotcha, thanks!


On Windows using .Net Core makes this fairly trivial.

You basically install the .Net Core SDK and then run these commands to create your first executable:

  dotnet new
  dotnet build
  dotnet run
And I suspect it is the same on the other platforms

Here is a simple console example with more detail: https://www.zeusedit.com/phpBB3/viewtopic.php?f=5&t=8135


They were created by the same person: https://en.m.wikipedia.org/wiki/Anders_Hejlsberg

All the hate C# gets is because of Ballmer and Gates, not for technical reasons.


Are you sure? My personal reluctance to C# has been that it's not handy for the stuff I do that needs to run on Windows, Mac, and Linux, which is everything.


Anything in particular? .NET 5.0 (and therefore C#) runs beautifully on Windows, Mac and Linux (x64 or Arm)


.net core has handled that well in the last ~3-5 years. Yeah, not a great rep before that but the new iteration/rewrite has been impressive.


I don't think it really started to settle down until .Net Core 3.0 which is just 18 months ago. Five years ago it was version 1.0 and a bit of a dog on all platforms.


There's always a bit of disdain toward anything coming from MS. I think that's a fair assessment.


Relative to what? Assembly?


C/C++. Java. Python. JavaScript.


It should take less than 5 minutes to get it up and running. https://docs.microsoft.com/en-us/dotnet/core/install/macos

It took me about 60 seconds on a new EC2 ubuntu instance


What's your go-to boilerplate for a new TypeScript project?


The three most popular kinds of TypeScript projects I create are:

- Node.js services built with Serverless framework targeting AWS Lambda

- Create React App front-ends

- Gatsby front-ends

For the first, I have a single shared webpack config that I reuse for all my serverless services. For the front-end projects, I just followed the few steps to add TypeScript support to CRA and Gatsby.


Once you suss out working with local shared dependencies, Serverless framework with typescript is pretty awesome. If I were starting a new project today I'd have to take a good hard look at this for my API back-end.

- Lambda Authorizers for token auth (or use the built-in stuff if you can get away with it)

- Lambda w/ API Gateway for individual endpoints

- Aurora for relational dbms

It's really a nicely put together ecosystem.


The article states that the benefit of target-typed new-expressions is that "the type declarations are nicely aligned" compared to the use of "var".

I think a better justification is that fields and properties do not support type inference, so this can avoid some redundant type expressions for initializers, just like "var" can avoid redundant type expressions for local variables.

Of course it would be more elegant if fields/properties supported type inference like "var", but that is a can of worms since you can have recursive dependencies.


I’m not working with C# anymore but I’m really liking the features they keep adding.


Unfortunately the new tricks can't be used when the codebase is still stuck on .net framework :(


There are many paths out of hell. Check out the Windows Compatibility Pack for all things Active Directory, WCF & System.Drawing:

https://docs.microsoft.com/en-us/dotnet/core/porting/windows...

WinForms is probably a no-go on migration, but you should definitely check out Blazor if you want to develop any new business apps. We have constructed some incredibly complex business dashboards using this framework and can't imagine how some of this would even be possible if we had to move back to a front-end client framework and invent a bunch of JSON APIs.

Also, the new Blazor Desktop stuff is extremely exciting:

https://medium.com/young-coder/blazor-desktop-the-electron-f...

> The first difference is that the WebWindow container doesn’t use WebAssembly at all. Yes, you can run more or less the same Blazor application in WebWindow as you would in a web page. But when you use a web page, it’s executed by a lightweight .NET runtime that’s powered by WebAssembly. When you use it in WebWindow, it will use the cross-platform .NET runtime directly.

There is a lot of really amazing stuff coming down the roadmap, so I can easily advocate for enduring some degree of pain to get on this wagon.


> There are many paths out of hell.

Unless you are maintaining a Webforms project :(

Everything new is core or now .NET5, but I see no way to migrate our main website without essentially stopping all other development work for a long time.


I'm pretty sure WinForms is being migrated. I've built dotnet 5 WinForms apps, anyway. At least on Windows - I wouldn't bet on a Linux Gtk/Qt port any time soon.

WPF is probably dead though.


WPF is quite alive. I had a pretty big audio processing project using WPF a while back, and when I migrated the codebase from .NET Framework to .NET Core 3.1 (and now .NET 5), there were surprisingly noticeable responsiveness improvements. I think they also have some more improvements planned for .NET 6 as well.

Self plug: A while back, I split off my own WPF MVVM framework from that larger project. It's designed for building simple touch-style interfaces, like you would see on a phone/tablet: https://tinyurl.com/upbeatui


Microsoft wouldn't let WPF die. Anyone writing enterprise desktop applications is using it heavily.


> Anyone writing enterprise desktop applications is using it heavily.

I don't think there is much of that going on anymore.

In reality Microsoft did let WPF languish for years, it always felt unfinished. I was very surprised when they migrated it to .Net core.


WPF is not dead, it got open-sourced and was ported to .NET Core 3.1 onwards.


We use C# 9 in our WebForms monolith and it works fine. Just add `<LangVersion>latest</LangVersion>` to your .csproj file. (This assumes you're using recent Visual Studio version and use a recent version of MSBuild as well)
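For reference, that property sits in a `<PropertyGroup>` of the project file (minimal sketch; `LangVersion` is an ordinary MSBuild property, so old-style .csproj files accept it too):

```xml
<PropertyGroup>
  <LangVersion>latest</LangVersion>
</PropertyGroup>
```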


There are a few language features which require a specific type to be defined, but you can define those types yourself.

For example, to use record syntax, you need to define this attribute:

    namespace System.Runtime.CompilerServices { public class IsExternalInit : Attribute { } }
And supporting the new indexing syntax requires you to define System.Index and System.Range types, which are a bit more involved, but still pretty trivial:

https://docs.microsoft.com/en-us/dotnet/api/system.index?vie... https://docs.microsoft.com/en-us/dotnet/api/system.range?vie...

But pretty much all the other new C# language features I've tried have just worked straight out of the box with .NET 4.7.2.
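Putting those together, a minimal sketch of a record compiling against .NET Framework (the compiler only requires that a type with this name exists; the definition below mirrors the real BCL one, an internal static class, though the attribute version quoted above works too):

```csharp
// Marker type required by the C# 9 compiler for init-only setters;
// normally provided by the .NET 5 BCL, polyfilled here for .NET Framework.
namespace System.Runtime.CompilerServices
{
    internal static class IsExternalInit { }
}

// With the polyfill in place, records and init-only setters compile:
public record Person(string FirstName, string LastName)
{
    public int? Age { get; init; }
}
```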


Not by default, no. These two specific tricks should work on .NET Framework (some of the other tricks in C# after 7 or 8 require functionality that .NET Framework doesn't have, though). You can ask VS to use the more recent C# compiler with the LangVersion tag in the .csproj: https://docs.microsoft.com/en-us/dotnet/csharp/language-refe...

Obviously, it's not suggested, nor is it entirely well supported, but it is at least allowed.

Though if you are looking to use top-level programs, you are probably working in a greenfield anyway, which should just be .NET 5+. It's not really a feature that makes a lot of sense for existing codebases; it is more of a teaching tool / "quick and dirty command-line app/script" tool.


There are ways, but definitely can be hard to get unstuck from framework. This came across my newsfeed this am: https://www.telerik.com/blogs/jump-starting-migration-dotnet... and also, the other day, dotnet rocks podcast had a guy on talking about an upgrade tool. I think it was this one: https://visualrecode.com/ He was also discussing the "Strangler Pattern": https://akfpartners.com/growth-blog/strangler-pattern-dos-an...


Allegedly there's an upgrade path from .NET 4.8 to .NET 5.


If you have built anything on Web Forms, then you really have no way forward other than rewriting all of that. That alone will keep many an enterprise locked on .NET Framework for quite some time.


You can give it a shot with this project. https://github.com/FritzAndFriends/BlazorWebFormsComponents

It emulates WebForms component with Blazor. Although I would just start from scratch.


> Although I would just start from scratch.

Which is fine if you are a hobbyist developing for fun.


In our case not web forms but WCF


You may want to keep an eye on CoreWCF: https://github.com/CoreWCF/CoreWCF

Though as a heavy WCF user in the past, I'd suggest your best bet is simply to throw out all your binding configs, and build your own REST API or similar backend (gRPC support in .NET 5+ is something often also recommended if you want something more RPC-like and need/want a more binary-serializer like approach, though in 2021 I'd just use JSON and something REST-ish myself). The nice thing about the Interface-driven "contract" approach should be that implementing your own is just a matter of implementing all your contract interfaces and injecting the physical implementations yourself.

I realize that can be easier said than done, as things accidentally got coupled to very specific styles of bindings over the years, and not everyone followed best practices and used the Async contracts, so you have to tear out a bunch of synchronous faking code and wire back in Task<T>/ValueTask<T>. But generally, overall, the process was: implement the interfaces and remove the "magic" in the process. I often found you end up with something better anyway, because it is simpler and less prone to "magic" failures.


Could you expand a little on that? How exactly does an REST API backed desktop/forms application look like in 2021, and how is the decoupling accomplished?


A lot of the decoupling is naturally accomplished by just not being backward compatible with previous WCF bindings and throwing out the whole mess.

The easiest place to start with is your data contracts. You may not have many depending on which side of the RPC/not-RPC boundary you were on, but it may be as simple as creating plain classes that implement those data contracts and then double checking that the Newtonsoft.Json (classic) or System.Text.Json (newer, sleeker) JSON serialization looks nice, deserializes just fine. Generally this work is pretty straightforward and sometimes you can just eliminate the data contract interfaces when you are done (you won't likely need them anywhere else and most of their WCF-specific annotations may just be clutter at that point) and replace them with references to the "POCO" data classes.

Some of your service contracts you might also want to explore moving more of the arguments from lists of arguments into simple data classes that serialize nicely to JSON.

Then at a high level it's a matter of writing client side class implementations of your service contracts and server-side "proxies" that then call your existing server-side physical implementation of your service contract.

At a high level from the client side:

- Open a connection to the recipient

- Send the JSON serialized data to an endpoint

- Wait for the response

- Deserialize the response and return it

That's basically all that the "bindings" are doing for you in a magic nutshell. You'll just be writing it directly "by hand". In the case of a "REST-like" it would be using something like System.Net.Http.HttpClient on the client side and the HTTP method and your URLs would likely reflect your "operation contracts" (the names of the individual methods on your service contract): `Task<IMyItemDataContract> IMyServiceContract.GetItemDetails(int id)` to `http GET /api/item/details/{id}` for example type things. In general it's a lot of straightforward simple boiler plate to fill out for every interface method ("operation contract").
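As a rough sketch of that boilerplate (all contract/type names are hypothetical; `GetFromJsonAsync` comes from the `System.Net.Http.Json` package):

```csharp
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

public class ItemDetails
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hand-written replacement for the client-side WCF binding "magic".
public class MyServiceClient // : IMyServiceContract
{
    private readonly HttpClient _http;
    public MyServiceClient(HttpClient http) => _http = http;

    // IMyServiceContract.GetItemDetails(int id)  ->  GET /api/item/details/{id}
    public Task<ItemDetails> GetItemDetails(int id) =>
        _http.GetFromJsonAsync<ItemDetails>($"/api/item/details/{id}");
}
```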

At a high level for the server side:

- Receive a connection

- Deserialize the data

- Call the appropriate service contract method

- Serialize and return the results

On the server side, for the simple "REST-like approach" you might want to build a simple ASP.NET (Core) host wrapper. These days you don't necessarily need the full weight of something like ASP.NET MVC and go do all the above steps with just what ASP.NET these days calls "Endpoint Routing".
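A rough .NET 5-era sketch of one such hand-wired endpoint (the route and payload are made up; `WriteAsJsonAsync` is the built-in `Microsoft.AspNetCore.Http` JSON extension):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseEndpoints(endpoints =>
        {
            // One route per old "operation contract", wired by hand:
            endpoints.MapGet("/api/item/details/{id}", async context =>
            {
                var id = int.Parse((string)context.Request.RouteValues["id"]);
                // ...call the existing service-contract implementation here...
                await context.Response.WriteAsJsonAsync(new { Id = id, Name = "example" });
            });
        });
    }
}
```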

Obviously complications come from figuring out your network topology shifts. Maybe you can't shift your "server" as easily to a real ASP.NET application hosted on a "real" HTTP server somewhere like IIS or Apache or whatever. In some of these cases you can find hints in your existing bindings. (Are you using MSMQ bindings? Use MSMQ or a modern direct replacement like Azure Storage Queues or redis instead of "REST-like" HttpClient and Server, for instance.) There are also ugly "duct tape" solutions such as ASP.NET's "Kestrel" web server can easily be hosted in other .NET processes, if it absolutely must remain an "ordinary" Windows Service or something crazy like that.

Once you prove that your "hand-written" clients and service endpoints work, you can generally just throw out most of WCF's binding gunk in your config files and all the attributes on your interfaces. A lot of your code shouldn't change that much in this whole process if it was already just calling the interfaces, with again the big caveat being if you need to add Task<T> now, as opposed to having already migrated to it years ago or even using the older, uglier Begin/End async pattern (which you often can easily replace with Task<T>).

Another option to possibly explore, depending on the nature of your ServiceContracts, how they are expected to feel ("real time?"), is to use SignalR rather than HTTP. (The basics are about the same: make sure everything serializes/deserializes JSON well and change your "operation contracts" to SignalR calls and event responses.) SignalR is also generally hosted on a webserver for negotiating match-making, but actual communications happen in "peer-to-peer" websockets generally after negotiations complete. Azure SignalR has some powerfully scalable hosted solutions, depending on your environment's tolerance for cloud-based tools.


Thanks!


There is a tool out there that converts WCF to gRPC.

https://visualrecode.com/

The guy who wrote it talking about it: https://www.dotnetrocks.com/?show=1728


I second the other commenter suggesting you should try to rip off the WCF bandaid and go to a RESTful API as soon as you can. It sounds scary but after the one-time pain of migration we've had way fewer "magic, black box" errors that require insane investigations and hacks.


Allegedly there's also a route to the top of Mt Everest.

For some of us the "upgrade path" is impassable.


Not really


The upgrade path is massive rewrites, something most business are wary of. For good reasons.

MS really made a grave mistake by coupling the evolution of the C# language to the .NET version. This means all projects on the .NET Framework are now also stuck with an obsolete language version for no good reason.

It is becoming a lot less fun to be a C# developer because you will more often have to work with obsolete tools.


> MS really made a grave mistake by coupling the evolution of the C# language to the .NET version.

For the most part that isn't true. You can use later C# compilers with older frameworks. Some features do, unavoidably, require framework support.

But that being said, the changes in C# aren't often that radical that I feel particularly constrained by older versions.


You can (and I do), but it is explicitly not supported. If it breaks, you get to keep both halves.


At work we've been slowly pulling code out of our WebForms monolith. That's still on .NET Framework, and will be forever until we're done killing it, but we can use recent VS and MSBuild with it and we use C# 9 as well. Business code generally works copy and paste in .NET Core, so there's a really good path to migrate to newer stuff, especially with .Net Standard 2.0 being a thing.

gRPC is really fast and we have the WebForms monolith call into .NET Core services now so we can improve the back-end without dealing with so much ASPX crap.


Your programming language either dies young, or it lives long enough to become the very legacy crud it sought to replace.


> MS really made a grave mistake by coupling the evolution of the C# language to the .NET version.

It’s not directly coupled, but each version of C# has a minimum .NET version it can support, due to the requirements of the features in the language itself (like LINQ in the past).

And MS has been very up front about .NET framework being legacy and not getting any future upgrades for years now.

Keeping every new C# feature compatible with every obsolete .NET version obviously has a big technical cost. MS has clearly decided it would rather spend its effort to improve C# for those keeping up to date.

> It is becoming a a lot less fun to be a C# developer because you more often will have to work with obsolete tools.

On the contrary. The language and platform is better to work with than ever and I can do everything I need, on Linux, with just Emacs and the CLI.

I couldn’t be happier!


> Keeping every new C# feature compatible with every obsolete .NET version obviously has a big technical cost

Strawman there. The problem is not "every obsolete .NET version"; the problem is that .NET in reality has two incompatible platforms at this point, but newer C# versions are only supported on one of them.

Going from framework to core will require (risky, expensive) rewrites, which means lots of real-world code will never be migrated. That is just reality. Startups who rewrite their whole tech stack every six months is not the norm outside of Silicon Valley.

So for the foreseeable future, .net will have these two platforms in parallel. And unless you are a hobbyist, you probably don't get to decide which one to use.

Microsoft have forgotten how important developer enthusiasm is for the success of a platform. Nobody likes to work on legacy code with obsolete tools with no end in sight.


Exactly this. My current project is on full .NET Framework 4.8. It is on the latest possible version of ASP.NET MVC, Entity Framework and all the typical 3rd-party libraries (of which 90% are supported on .NET 5). In the last two years we created several POCs for migration to .NET 5. Our current best estimate is 7 years with 80% of our development team. No sane manager will ever approve that crazy effort with almost no benefits for end users. And we are talking here about a most typical CRUD application without much advanced engineering. Our onboarding takes at most 2 days and people can start being very productive even in the first couple of days. I can only imagine the nightmares with more complex systems.


> The problem is not "every obsolete .NET version", the problem is .net in reality have two incompatible platforms at this point, but newer C# versions are only supported on one of them.

But that’s fairly consistent right? An old platform not getting updates and a new platform getting updates?

Why would you expect the SDK of the officially deprecated platform to get updates with the latest compilers and technologies, but at the same time in every other way remain stagnant?

Microsoft has been very clear that what you have will keep working, but you won’t get any new toys.

If you want those, you have to put in the effort to upgrade your projects, and in worst case rewrite the portions you can’t.

> So for the foreseeable future, .net will have these two platforms in parallel.

Yeah. The old one not getting updated, used by those who won’t put in the effort to update their tools.

And the new one where all the nice things are happening, which really should act as a nice carrot, not a source of bitterness.

> Microsoft have forgotten how important developer enthusiasm is for the success of a platform.

99% of the old ASP.NET stack was terrible and unpleasant to work with.

The new stuff is nice (enthusiastic!) to work with exactly because they decided to not carry all that stuff forwards.

The new APIs was a nice fresh start for the current kind of apps one writes today.


> But that’s fairly consistent right? An old platform not getting updates and a new platform getting updates?

I'm not sure what you mean with "consistent", but the problem is of course creating two incompatible platforms in the first place. All previous releases of .net have been largely backwards compatible (except for rare edge cases and security issues), so it remained a single platform where you could get the latest improvement just by upgrading the framework.

Of course when you have created two incompatible platforms, it would be double the work to keep both updated, so they have to abandon one. Which is why they shouldn't have done it in the first place. They could have upgraded and deprecated components in a modular manner without creating a whole separate incompatible platform.


> the problem is of course creating two incompatible platforms in the first place

Interest in .NET was dwindling because ... it was Windows-only and its APIs and frameworks were a bad fit for the cloud. It was literally heading for a nose-dive.

So if Microsoft wanted to address that and make .NET truly cross-platform, it had two options: port everything Windows-specific to all other platforms, or just strip it from the core .NET APIs.

If they wanted to make it more cloud-friendly, again there were two options: make the already complex APIs even more complex by trying to shoehorn cloud use-cases into a BCL largely designed for a Windows AD-network kind of security model, or just start over with a clean slate and APIs designed for the age we're currently writing software for.

While yes, the latter options create a new incompatible platform, that's really the only realistic path. And if you're going for the latter option on the first question, you might as well go for it on the second too.

I don't think you will find anyone who believes Microsoft would be able to execute on .NET Core as it did, if it had tried to bring all the old legacy along for the ride.

> All previous releases of .net have been largely backwards compatible

Yes. Which has been a mixed blessing. Look at what a mess classic ASP.NET (which ASP.NET MVC is built upon) became with respect to page-event life-cycles and whatnot. Are you not happy to see all those terrible things truly gone, for good?

> Which is why they shouldn't have done it in the first place. They could have upgraded and deprecated components in a modular manner without creating a whole separate incompatible platform.

But that's what they did, really.

They obsoleted classic ASP.NET and WebForms. They obsoleted WCF servers. And they obsoleted the Windows-centric security-model. Anything else?

Everything else is still there. The language, the rest of the BCL, the packages you know and love.

And you were given years after they said "These technologies are not getting any new updates" to migrate to the new ones (while still on classic .NET Framework), which was kindly published as multi-targeted Nuget-packages, to give you a gradual migration path.

Sure. It wasn't perfect, but what is?

If anyone is to blame here, it is how people have kept hanging on to classic .NET for years when the writing has been on the wall, hoping that maybe, if they wait for another .NET Core release... all those terrible technologies everyone is glad to let go of will be ported to .NET Core too.

You know what? It didn't happen, because .NET Core was a community project, and the community didn't want it, or at least certainly not enough to warrant the effort.

And not only that! Thanks to being freed from the Windows it was born into, it has become more vibrant and popular than ever before.

It's no longer a technology threatened with extinction. As someone using and invested in .NET yourself, surely you must appreciate that?


They could have done all those things but still retained backwards compatibility. Consider how WPF was introduced as a complete replacement for WinForms. This didn't mean WinForms stopped working! You can run both under the same .net version and you can use most recent C# version (and all other libraries) with both.

It is great that MS delivered a ground-up rewrite of ASP.NET. There is no reason this should cause the old ASP.NET framework to stop working. They could exist in parallel just like WPF and WinForms - one for new code, one for legacy code. Old libraries should just keep working by default.


>It is becoming a a lot less fun to be a C# developer because you more often will have to work with obsolete tools.

This


Can anyone comment on cross-platform GUI for .net? Last time I seriously investigated it, Gtk# seemed to be the primary option. Some quick googling found avalonia, but no clue if that's a reasonable GUI to use. I'm guessing MS isn't planning on porting WPF to linux (though I suppose linking with wine might work)...


It looks like MAUI [1] is the path forward

[1] https://devblogs.microsoft.com/dotnet/introducing-net-multi-...


Comment streams there indicate that it relies on Xamarin.Forms, which is GTK2 and does not support .NET Core.


+1 for top-level programs

-1 for target-typed expressions, it makes code unreadable, i'd rather see var used as field members


You can't have 'var' for field members since field initializers can reference other fields, which could lead to recursive type inference. Locals don't have this problem since they can only be used after they're declared and initialized.
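An illustration of the problem (hypothetical, deliberately non-compiling code shown in comments):

```csharp
class A
{
    // static var X = B.Y;   // X's inferred type would depend on Y's...
}

class B
{
    // static var Y = A.X;   // ...and Y's on X's: there is no unique solution.
}
```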


> Top-level programs allow you to write the main method of your application without having to add a class with a static Main method

Riveting stuff happening in .NET land! Sarcasm aside, this seems like such a small, oddly-specific, one-off feature that it's like... why bother? Any C# devs want to enlighten me here?


Why not bother?

This mindset is one of the core features of .NET-land - Microsoft just goes ahead and implements anything useful. It's why we have a gigantic standard library ("framework") filled with every function under the sun, some of them multiple times. We didn't have to wait until C++20 to get string.Contains. Sometimes the features Microsoft implements are more useful than others, but most of them are a little useful to somebody.


It's not without its downsides though, the language and ecosystem is quite big now. I don't envy people starting out and having to learn it all from scratch.


Really it's much better than it used to be. The people I don't envy are those like my past self who had to contend with .NET 2.0 web forms, the AjaxControlToolkit (and really UpdatePanel in general), bloated heaps of ViewState... the list goes on, and I haven't even gotten past ASP.NET.

Most of the improvements that have been made in the last 10 years (aside from maybe async/await, but that pattern is pretty ubiquitous at this point) have served to reduce the amount of idiosyncrasies, not increase them.

In general, web programming has increased manifold in complexity across the board, regardless of your server-side language. Aside from that, I find writing C# to be very succinct and logical these days compared to the past, even taking into account my own learning.



I see the benefit mostly in documentation and code examples.


Can I just create a single .cs file somewhere, and execute it by typing its name? No Visual Studio solution/project shenanigans? That’s my dream. Top-level programs to eliminate boilerplate is a nice related step.


It's fairly trivial to write your own for any language, I have a c version where you simply start the file with "#!/usr/bin/env crun" the guts of the script is:

  gcc -x c -ggdb -Wall -Wextra -Wno-main -Wno-unused-parameter -o $out $CFLAGS -include $header - $LDFLAGS <<-CRUN_EOF 
    #line 2 "$1"
    $(tail +2 $1)
  CRUN_EOF
Obviously gcc would be replaced with csc, you wouldn't need CFLAGS and you might need a temp file or something.


Not quite at a single .cs file, but there is the third-party dotnet-script [1] installed as a global tool to run .csx files. That's been around longer that C# language support for top level statements.

[1] https://github.com/filipw/dotnet-script


There are third party projects that support this: https://github.com/oleg-shilo/cs-script


What do you mean by "execute it"? It's source; you need to compile it and get an executable out of it. This is not Python, an interpreter. But if you want an interpreter for C#, nobody's stopping you from writing one.

Me, on the other hand, I want Python to become a compiled language. And to those who are going to reply with "PyPy/Cython/Nuitka", I have a question: do show me a .dll from those!


To give an example, in Swift, I can write a hello.swift with:

    #!/usr/bin/env swift

    print("Hello")
And after a chmod +x, it works. Granted, anyone who I distribute this to will need a swift compiler installed, but that’s to be expected (and similar for Python.)

Golang and other “compiled” languages have similar ways of making this work too.


Oh, I get it. You mean install some sort of interpreter that is called by the OS for that language. Sorry, I live in the real world, where customers don't allow you to install a gazillion files just to run your application; they expect a proper installer that is under strict supervision by their admin team, and you only deliver executables.


I don’t think the use case for this is anything you’re shipping to “customers”... more for just personal code you can use as scripts/playgrounds.

To use the swift example, often I want to double check whether something in the language works as I expect, so I write a quick test.swift and run it. It’s a useful thing even if you don’t personally want it.


VS comes with an interactive C# window that basically serves this purpose. Aside from that, you can always write a unit test and set a breakpoint to inspect whether something works as you think it does. In my opinion both these options are superior to running a file for that use case (though running a C# file would certainly have other uses that tests/REPL aren't sufficient for).


I would love to replace all my powershell build scripts with .cs files. imo there is lots of room for C# Scripts.


Wow, a full-screen cookie pop-up with a “X” dismiss button in the upper right corner that just causes the full-screen pop-up to re-display. Can it get more obnoxious?

This is the second one on the HN front page today [1]. HN tries to avoid paywalled articles, maybe we should also discourage articles that start out by deliberately hiding the content.

1: https://news.ycombinator.com/item?id=26639722


I really love c# - one of my favrorite languages, but can any one explain the benefit of this new syntax?

  Person p1 = new();
  Person p2 = new("Tom");
  Person p3 = new() { FirstName = "Tom" };


In my opinion it's more useful when declaring class members, e.g.:

  public class SignPost {
    public List<string> Messages { get; set;} = new();
    private List<string> Errors = new();
  }
In both of the above usages var is not allowed. Oftentimes instead of a simple list you have something like an IReadOnlyCollection<T> or an equally abominable custom type that is just a pain to write out twice every time you add it as a dependency somewhere.


You save typing by not having to repeat the class name.


But you could use var instead of Person on the left, so you're only saving 3 characters, which you're likely not typing out anyway in most editors.


Var is not supported for fields or properties though.


I'm very cautious of anything from RedHat regarding .NET - some of the worst C# I've ever seen comes from some of the devs working at RedHat in the form of commits to public repositories.


Scala 3 kills C# by a large margin, imho. C# is not a good pursuit unless you are stuck with .Net codebase.


Scala is a great language with terrible (in comparison) tooling. I'd use it (or F#) over C#, if IDE support was not as slow as it is, and if it was comparable to ReSharper/IDEA Java feature-wise for refactoring.


How so? .NET is 2-4x faster than Scala in benchmarks (https://www.techempower.com/benchmarks/#section=data-r20&hw=...). Do you just prefer the syntax and language features?


I like the expressiveness; it's easy to improve performance in the compiler later.


Famous last words...


F# is probably a more relevant comparison - it is kind of the Scala of .net. But in any case C# is much more widely used than F#.


I have been programming with C# since 2000, I think. But I do not like where the language is going.

It is becoming way too loose in a sense, and it seems that it wants more than it should.

If you want to do functional programming, pick a fp language.

If you want to do dynamically typed programming, pick a dynamically typed language.

Sure, you can mix some things in, but C# is becoming way too scattered, IMO. And it's not pretty, I think.


I think I'm going to get diabetes from all the syntax sugar...

On the more substantive changes, it feels a little like the language designers have been looting F#.

Combined with how fast Dotnet Core is moving and how many breaking changes there are there, it does feel like things are getting very fragmented.

Stasis isn't good either; just witness the decade or so of stagnation after Java 1.6 as the Sun -> Oracle transition happened. But you can go too far in the other direction.


I don’t think they’re just randomly throwing stuff at the wall. These are all things they’ve been talking about for years.


The C# language design process is very much in the open, so anybody can go and take a look at the rationale etc.

https://github.com/dotnet/csharplang


I think it suffers from design by a self-selected committee now. Very few of the recent changes have a long-term vision behind them; they seem like quick patches.

Specifically, I just tried disabling 7.0+ features in an open source project I run, and we only actively used pattern matching for "is MyType inst", ref structs (hi Rust!, that one is really fundamental), and natively sized integers (which are still poorly supported by runtime).

Other sugar changes have uses across the codebase that you can count on the fingers of one hand.

We also use unmanaged function pointers, but the feature appears ugly in many ways. For our purposes, I'd prefer a slightly more flexible DllImport (a hook to give it the DLL path at runtime) over function pointers.
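For what it's worth, .NET Core 3.0+ does expose such a hook: `NativeLibrary.SetDllImportResolver` lets you pick the library path at runtime. A sketch (the library name and path here are made up):

```csharp
using System;
using System.Runtime.InteropServices;

static class NativeLoader
{
    public static void Install() =>
        // The resolver is invoked when a DllImport in this assembly is first
        // resolved; returning IntPtr.Zero falls back to the default rules.
        NativeLibrary.SetDllImportResolver(typeof(NativeLoader).Assembly,
            (libraryName, assembly, searchPath) =>
                libraryName == "mylib"
                    ? NativeLibrary.Load("/opt/libs/libmylib.so") // runtime-chosen path
                    : IntPtr.Zero);
}
```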


For contrast, I'm looking at the old C#7 and C#8 release notes--I can say our team frequently uses:

C# 7

  - value tuples
  - local functions
  - throw expressions
  - default literals
  - out variables
C# 8

  - switch expressions
  - using declarations
  - static local functions
  - asynchronous streams
  - null-coalescing assignment
  - enhanced string interpolation, e.g. via the @$ prefix
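A quick runnable sketch of a few of those (values are illustrative):

```csharp
using System;
using System.Collections.Generic;

// switch expression with type patterns (C# 8):
static string Describe(object o) => o switch
{
    int i when i < 0 => "negative int",
    int _            => "non-negative int",
    string s         => $"string of length {s.Length}",
    _                => "something else",
};

// value tuple deconstruction (C# 7) and null-coalescing assignment (C# 8):
var (min, max) = (3, 7);
List<int> items = null;
items ??= new List<int>();   // assigns only because items is currently null
items.Add(max - min);

Console.WriteLine(Describe(-5));  // negative int
Console.WriteLine(items[0]);      // 4
```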


You don't consider the more advanced pattern matching stuff to be "long-term vision"?

And it's not surprising that an existing large C# codebase wouldn't use many of those features. But for new projects, they are very nice.


Yeah, it seems clear to me that the long-term vision is importing a lot of functional features to make using functional patterns/immutable data easier. Which I think is an excellent choice because that stuff is super useful.
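C# 9 records are a good example: immutability plus non-destructive mutation in a couple of lines (toy types):

```csharp
using System;

var p1 = new Point(1, 2);
var p2 = p1 with { Y = 5 };   // non-destructive mutation: copy, changing one property

Console.WriteLine(p1);                     // Point { X = 1, Y = 2 } -- p1 untouched
Console.WriteLine(p2);                     // Point { X = 1, Y = 5 }
Console.WriteLine(p1 == new Point(1, 2));  // True: records get value equality

// Positional record: init-only properties and with-support are generated.
public record Point(int X, int Y);
```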


People have been saying this all the way back to when Linq came out, and who could imagine C# without Linq today?


When LINQ first came out, I personally remember creating some long rants at the bar with my drinking buddies that we didn't need no stinking FP in our OOP.

I now find myself writing half my data processing in LINQ and find myself thankful that I can chain together a few methods on a collection with possible transformation in a line or two rather than having to write 2 or 3 nested loops with all sorts of conditionals.
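A small example of that shape (made-up data):

```csharp
using System;
using System.Linq;

var orders = new[]
{
    new { Customer = "A", Total = 120m },
    new { Customer = "B", Total = 40m },
    new { Customer = "A", Total = 60m },
};

// One chained pipeline instead of nested loops and accumulating dictionaries:
var bigSpenders = orders
    .GroupBy(o => o.Customer)
    .Where(g => g.Sum(o => o.Total) > 100m)
    .Select(g => g.Key)
    .ToList();

Console.WriteLine(string.Join(", ", bigSpenders));  // A
```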

I'm taking the new things that come up in C# as they come, and even the ones I don't care for, I generally end up using to some extent.

Go with the flow, or pick another language, would be my suggestion.


I think for a lot of devs having a multiparadigm language is the only way to explore new things. For the stuff I am working on it’s not really feasible to add new languages but you can add FP concepts slowly because C# supports it. That would also explain the popularity of C++ because it allows gradual transition.


If you have the time, I would be curious what your thoughts are on this talk by Bill Wagner. Gives a lot of insight into where C# is headed: https://www.youtube.com/watch?v=aUbXGs7YTGo


Personally, I enjoy C# becoming a multi-paradigms language like Common Lisp. Multiple inheritance and a good macro system would help close the gap. I don't need safety guards in my language, I'm looking for the right options when I need them.


With default methods in interfaces, we already have multiple inheritance of behavior, which is really the interesting part (data can always be aggregated and wrapped).
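A sketch of that with C# 8 default interface methods (names made up):

```csharp
using System;

interface ILogger
{
    // Default body: inherited as behavior by implementers.
    void Log(string message) => Console.WriteLine($"[log] {message}");
}

interface IGreeter
{
    string Greet(string name) => $"Hello, {name}!";
}

// One class picking up default behavior from two interfaces:
class Service : ILogger, IGreeter { }
```

Note that default members are only reachable through the interface type, e.g. `((IGreeter)new Service()).Greet("world")`.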



