Hacker News
Understanding the .NET ecosystem: The evolution of .NET into .NET 7 (andrewlock.net)
246 points by alexzeitler on March 21, 2023 | 347 comments



Been using .NET for years now for backend web development after having taken a break from C#. It is such an improvement over the old .NET Framework. When I started building my first backend with it, I was surprised how much was included and "just worked". Need to add authentication? A few lines. OAuth? Also built in. Response caching? Yes. ORM? EF Core is pretty good. Need to use env variables to override your JSON config? You are going to have to build a... just kidding, that works with one more line too.

Coming from NodeJS, the amount of stuff that could be added with a single line from an official package was great. No more worrying about hundreds of unvetted dependencies.
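
For illustration, the "few lines" claim above might look like this in ASP.NET Core's minimal hosting model (a sketch only; the cookie scheme, middleware order, and endpoint are assumptions, not a complete auth setup):

```csharp
var builder = WebApplication.CreateBuilder(args);

// Authentication and response caching are in the box; no extra packages.
builder.Services.AddAuthentication("Cookies").AddCookie("Cookies");
builder.Services.AddResponseCaching();

var app = builder.Build();

app.UseAuthentication();
app.UseResponseCaching();

app.MapGet("/", () => "Hello");

app.Run();
```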


It really is like finding enlightenment after having to figure out which third party package is best for every little thing in Node. Visual Studio is a pretty powerful IDE as well.


JetBrains Rider is great for .NET too, although I haven't tried backend development with it.


I'm a web developer that has used VS Code for years, but these past few months have been using Rider for developing a Unity project. One thing that really stands out to me about Rider is the "Refactor" (Ctrl+Shift+R) feature; extremely useful.

It allows you to just build things rapidly without worrying much about patterns/naming because once your vague ideas solidify and you realize where you went wrong, it's really easy to just Ctrl+Shift+R and make any codebase-wide adjustments instantly.

It's also great about picking up sub-optimal implementations or patterns with helpful warnings, and you just Ctrl+. to have it auto-fix for you. Using these Rider features combined with Github Copilot, I was able to pretty easily learn intermediate level C# because it's like having a mentor working along side you.


Except for the botched C# style rules. Rider still insists on Java style guidelines for .NET projects. And sometimes it spawns a never-ending background process that the only way to get rid of is to restart the app. Otherwise, it's okay.

edit: been using Rider on Linux for a few years now


I've been using Rider style rules for two years now exclusively in a VS-based team and, aside from some linebreak rules I need to tweak here and there, I've seen no issues.

But I use an .editorconfig-tuned StyleCop and Roslynator on all projects, so maybe that overrides some defaults.

Mind sharing which Java style rules you're seeing?


Rider highlights Pascal-cased class methods/attributes and suggests using camelCase, for example. I haven't changed any style settings; this is the Rider default. Maybe it's a Linux thing.


That's just a few .editorconfig lines away from being consistent for the whole team.


> Rider still insists on Java style guidelines for .net projects.

No, it uses ReSharper as the backend.


It's awesome. What I enjoy most is that all the different tools feel the same, and you can use the same kind of IDE for web development, JavaScript, and SQL too. Rider also has WebStorm and DataGrip bundled as plugins, so you can do everything in one IDE. I prefer to use the standalone tools for that, though; they just have fewer features than Rider/IntelliJ, but I would consider that a plus. Fewer menu items, fewer tool windows.


The DataGrip integration is so cool whenever I need to write raw SQL queries inside Rider.


I migrated a while ago from VS + ReSharper to Rider. Mainly backend work. It’s been great.


I’m using it for backend work day in and day out and it’s a joy.


Can you install it [edit: the .Net 7/8 platform that sounds interesting, not the IDE] for free on a Linux server and get the same benefits? (This isn't advocacy, it's a literal question about something I don't know.)


Yes:

https://learn.microsoft.com/en-us/dotnet/core/install/linux

Also VS Code, an open source IDE with first class support for .NET languages:

https://code.visualstudio.com/docs/setup/linux


Emacs also has a pretty good C# language server integration. I personally use the Spacemacs "dotnet" layer.


Yes, everything is completely free and open source.


Everything but the debugger sadly


I know! How crap is that? As a result, no debugging on a Raspberry Pi.

It turned me right off C# outside of work. There are plenty of other languages that are better suited for hobby development.


Monetizing software is so incredibly indirect. Obviously companies don't make any money on free frameworks and tooling. These days, using C#/.NET maybe/kinda/sorta increases the chances that you might deploy on Azure or use Azure services. That tiny (or not so tiny) uptick easily funds all C# / .NET ecosystem development.

I'm not sure why they do not open source the debugger. I suppose it is obvious to some extent that some companies (JetBrains) are able to charge for high quality tooling. Microsoft makes a trickle of money from Visual Studio professional.

Though the objective function and decision variables are somewhat opaque, this is clearly an optimization problem.


The why is Visual Studio. The VPs running that division are a bunch of Muppets who are constantly trying their best to destroy the OSS .NET project's reputation.

The cynic in me suspects all the drama was directly related to Scott Hanselman suddenly losing interest in blogging.


I prefer Rider as a C# IDE whether it's Windows or Linux, although the last time I was doing this there were still a handful of things you needed VS for.


Yes, since VS2022 became a 64-bit app it is much more pleasant to use.


It better be at a 30GB install size.


Install size depends on the features you pick when you install. Web / service / library development is small, desktop / mobile / C++ gets bigger as the toolchains + sdks + emulators are big. If your VS install is 30 GB and that's a problem, run the installer, click modify, and uncheck stuff you're not using. A lot of devs check off everything "just in case" but since you can add other features when you need them, it's better to start with what you know you'll use.


A 2TB NVMe drive is less than $200. If VS is saving significant dev time, people would install it even if the install were much larger.


I'm not sure it's 10x as nice as competitors, which is the context here. If it were just about space who cares, but when your competitor does the exact same at a much smaller size and ultimately better price too, it's a little confusing what is going on with VS.


The competitor doesn't ship a full OS SDK for all kinds of development.

Go see how many GB the Apple or Android development stacks require.


    $ pacman -Qi emacs | grep Size 
    Installed Size  : 111,46 MiB


I guess if everything you do is Emacs Lisp, targeting the Emacs OS, then OK.


It needs to be 10x nicer to make up for a 10x disk space footprint? What if it were only 1GB and the competitors were only 100Mb? Would it still need to be 10x nicer?


I think people question why VS is so freaking huge when its contemporaries are not.

Most of that disk size is supposedly in the toolchain (requiring 2-60GB of disk space alone[1]). Why is this so big? Modern toolchains from other vendors are not this large.

[1] https://learn.microsoft.com/en-us/visualstudio/releases/2022...


I have spotted someone that has never done Apple, Google, UNIX or game console native development.

It turns out native code for all kinds of OS workloads takes some space.


Also, transparent compression is a thing in case you need to squeeze out more space. And I don't mean the shoddy NTFS active compression: since Windows 10, NTFS has supported much more efficient compression algorithms accessible via compact.exe[0], albeit passively rather than actively, so there is decay once files get modified.

[0] https://learn.microsoft.com/en-us/windows-server/administrat...


Come on, my whole OS fits 3 times into that, let alone every single other programming language’s build tools with debug symbols, everything combined.

So what exactly does VS save you? Hell, I can’t even imagine what’s inside that.


IDK if it's related but I have like a 50% install success rate. Seems to be complicated.


Mine's generally around 10GB as I omit all the mobile dev stack and images.


> ORM? EF Core is pretty good.

We're moving to .NET, and I was surprised by how poor the built-in DB stuff is. It's like either assembly or Python, with nothing in the middle.

That said, I've also been impressed by how nice it is to get stuff going. I used C# back in the .NET 1.1 days and yeah, massive difference in ergonomics.


EF Core was pretty rough when it first came out, but they have been adding a ton, especially in .NET 7 and the upcoming 8: https://learn.microsoft.com/en-us/ef/core/what-is-new/ef-cor... Most of the pain points for me are solved now. Like someone else mentioned, Dapper can fill the gaps.


>It's like either assembly or Python, but nothing in the middle.

Dapper seems to be in the middle and it is pretty popular


Yeah, but from what I saw it doesn't help much with master-detail setups? Like, inserting or updating an order with order lines etc. We rely heavily on those.


Use transactions => on failure, roll back. No need to manually correct failures.

Important concepts: AsNoTracking() and setting the EntityState correctly (mark as deleted, updated, ...)

Additionally, sometimes it's easier to update things granularly, instead of all at once.

I suppose the annoyance comes from updating in bulk on a POST and trying to map everything?


Maybe also check PetaPoco. But at this point you're getting closer and closer to code-first EF Core anyway. :)

https://github.com/CollaboratingPlatypus/PetaPoco


I haven't actually tried either Dapper or PetaPoco, only perused their documentation. But I was sold on LinqToDb after seeing how it supported CTEs, and seeing how close our code to generate updates [1] and joins [2] ended up looking to the actual intended SQL.

  [1]: https://linq2db.github.io/#update

  [2]: https://linq2db.github.io/articles/sql/Join-Operators.html


I use the ORM in ServiceStack, OrmLite, and it seems to handle those one-to-many references well.

https://docs.servicestack.net/ormlite/


It can. Basically you separate the query response into the parent and child parts of the row using `SplitOn`, and it can materialize them.
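
A minimal sketch of what that looks like with Dapper's multi-mapping (the Order/OrderLine schema here is hypothetical; `splitOn` names the column where the child object's columns begin in each row):

```csharp
using System.Collections.Generic;
using System.Data;
using Dapper;

public record OrderLine(int Id, int OrderId, string Product);

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; } = "";
    public List<OrderLine> Lines { get; } = new();
}

public static class OrderQueries
{
    public static IEnumerable<Order> LoadOrders(IDbConnection conn)
    {
        var byId = new Dictionary<int, Order>();
        conn.Query<Order, OrderLine, Order>(
            @"SELECT o.Id, o.Customer, l.Id, l.OrderId, l.Product
              FROM Orders o JOIN OrderLines l ON l.OrderId = o.Id",
            (order, line) =>
            {
                // Collapse the repeated parent rows into one Order each.
                if (!byId.TryGetValue(order.Id, out var existing))
                    byId[order.Id] = existing = order;
                existing.Lines.Add(line);
                return existing;
            },
            splitOn: "Id");
        return byId.Values;
    }
}
```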


It is pure SQL, so it should, I think?


>We're moving to .Net, and I was surprised by how poor the built-in DB stuff is

Right now EF Core is probably the best ORM that has ever existed. What exactly is missing?

Although for performance you would probably reach for something like Dapper but that is not an ORM.


I really like EF Core, but I feel it's only recently started to hit parity with EF 6 in a lot of areas (though it has moved well past it in others), and it's missing some odd stuff that's table stakes these days. Stuff I'd like to see:

* Support for "FOR UPDATE" and "SKIP LOCKED"; it's easy enough to tag the queries and modify the SQL in an interceptor, though. Yuck.

* Transaction attributes à la Spring; though I can make do without them, it's nice being able to specify what sort of transaction a method wants to be party to.

* Better support for "bulk updates". SQL Alchemy, as rough an onboarding experience as it is, has really good support for executing db-side update logic.

That said, I largely agree with you, and nobody should overlook LINQPad. Anyone interested in babysitting the generated SQL should be using it; too bad it's not on Linux :| :( ;(

The optimizations they cover in the docs should all be done by default IMHO: optimizing models, pooling db contexts, etc. I also open and close a db connection at app start, which further reduced the first-request latency after putting a process in rotation.


> Right now EF Core is probably the best ORM that has ever existed. What exactly is missing?

Wish I could agree, but they would have to fix the very slow time-to-first-query when using big models (500+ tables in our case). Compiled models are not a solution for us since our model changes a lot and the compilation is just as slow. It's disappointing because it used to work fine under the ancient Linq2SQL library.


Can't you generate the compiled model as part of the CI/CD?

Also, you might find a benefit from splitting your context into multiple contexts. I am considering this option for one of my code bases.


This is what I plan to do, and I might use something like Husky to ensure models are built before code is committed. It IS sort of a PITA, and I wonder if they could introduce a system on top of content hashes or the like to verify that optimized models match the source in dev, etc.


I can, but the DX would still be miserable: starting a local instance would still incur the cost of building the model, and a dev needs to do that a lot of times during a day.

I can't split the model without massive refactoring, and even then, some tables are common across all modules and would need to be duplicated. Your advice is unfortunately the standard answer in my case; I guess EF Core is just not for me. Really disappointing, but "c'est la vie".

edit: ha, you probably mean commit it to source control so other devs can also use it? I guess it's a compromise; it would still slow down our DbContext refresh command a lot.


Are your models code first or reverse engineered?

Are you relying on model conventions or spelling out everything in modelBuilder calls?


Well, that was my point: either you're writing a lot of code yourself ("assembly"), or you use EF ("Python").

We're not used to something like EF, perhaps it would work for us. But debugging generated queries due to performance issues is something we'd like to avoid. For now the decision was made to not use EF.


You can easily echo them to the console or debug window.

To be honest, you should keep all ORM queries fairly simple if you can. Where clauses are fine. For inserts, updates, and deletes, ORMs save so much code, and so much pain when you add or remove properties.

But if a query is more than a few includes or joins you should be handcrafting it with FromSQL() or loading it piecemeal using Load().

And don't even think about using it to make complicated reports, that is not a good idea. Make a stored procedure or view.

And that's especially true if you are using anything other than SQL Server. I've seen abysmal performance myself on MySQL/Maria with moderately complex EF queries. I've not really looked since EF 6, but it used to love making nested selects instead of JOINs, which were fine in SQL Server but terrible performance-wise in MySQL. Postgres I've never used with EF in anger, so I can't comment.

You can use EF with Hot chocolate to make a GraphQL endpoint really easily, but I'd imagine that's an easy way to saddle yourself with serious performance problems unless you limit the levels it can go. I'd be interested to hear if anyone's using it and how they find it?


What I personally dislike about it is, for the easy stuff, I'm not sure it really saves that much code over a micro-ORM like Dapper, and for the hard stuff, well, everyone's a Linq wizard already so they're tempted to use it, especially if not in the habit of writing much SQL. With today's tools you can even get Intellisense on inlined SQL queries.

Also the lazy-loading and the in-memory provider for tests are both kind of misfeatures.


EF Core generates pretty good queries for common cases. I'm not entirely sure about really complex analytical queries, but you might want to drop down to SQL for those anyway.

There is one gigantic footgun in EF Core, that is the decision between single query and split query. If you choose single query in the wrong situation you can end up with truly pathological queries. I might blame EF Core here a bit for a dangerous default, but to be honest the other choice would be dangerous in a different way, so there is no obvious good default choice here. This is a part that you need to understand to use this ORM, and fortunately it generates warnings now and kind of forces you to choose the strategy.

The one other aspect that helps to generate good queries with EF Core is to use "Select()" for any case where you want to request fewer columns than available in your tables. I find it quite natural to write queries this way in any case.
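
As a sketch of both points (the `BlogContext`, entities, and property names here are hypothetical), a projecting query with an explicit split-query choice might look like:

```csharp
// Select() projects only the columns you need; AsSplitQuery() avoids the
// row explosion a single JOIN can produce when loading collections.
var summaries = await db.Blogs
    .Where(b => b.IsActive)
    .Select(b => new
    {
        b.Name,
        RecentTitles = b.Posts
            .OrderByDescending(p => p.PublishedOn)
            .Take(5)
            .Select(p => p.Title)
            .ToList()
    })
    .AsSplitQuery()   // or AsSingleQuery(); measure both for your data
    .ToListAsync();
```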


I think you might find Linq2Db[0] to be the right fit for you then, which brands itself on being typesafe SQL in C#.

[0]: https://github.com/linq2db/linq2db


You can use EF Core also with plain SQL queries, if you don’t like/trust the query builder. You can also disable change tracking completely, if you prefer to only INSERT/UPDATE directly with SQL statements. You get a lot of awesome features, but nobody forces you to use them.


Well that sounds a lot better than what my coworkers told me. Will definitely check out EF Core.


Yeah, EF's Interpolated Execute methods are a massive step up over raw ADO.NET ("assembly"), even if you don't use most of the other modeling tools and context tracking.

The Interpolated family of methods takes nice, clean string interpolation like $"Select * from Table where Id = {Id}" and makes sure it becomes a properly parameterized SQL query (i.e., avoiding things like SQL injection attacks).

It's a killer feature and I have some idea why it lives in the EF side of the house rather than being generally applied across all of ADO.NET, but it should still probably be a more reusable library of its own beyond just EF.
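
A short sketch of those APIs (the `Orders` DbSet and column names are hypothetical; the interpolation hole becomes a DbParameter, not spliced text):

```csharp
int id = 42;

// Query mapped to an entity type:
var order = await db.Orders
    .FromSqlInterpolated($"SELECT * FROM Orders WHERE Id = {id}")
    .SingleAsync();

// Non-query command; {id} is sent as a parameter, preventing injection:
await db.Database.ExecuteSqlInterpolatedAsync(
    $"UPDATE Orders SET Archived = 1 WHERE Id = {id}");
```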


EF handles migration and you can re-use the DbConnection and execute plain SQL.

If you don't want to debug difficult queries, then extend it to use Dapper and use the best of both worlds.


You don’t have to use migrations, it’s an optional feature. It works equally well to just generate entities from an existing database, that is migrated/set-up in any other way (as long as the schema is not crazily complex).


Not my project / not affiliated, but Evolve is wonderful for handling db migrations.


Haven't heard of Evolve before, but used DbUp in the past with success too: https://dbup.readthedocs.io/en/latest/


Can someone familiar with both EF Core and the Java ecosystem’s Hibernate/etc describe how are they different?


For performance, AsNoTracking() and not calling SaveChanges() on every loop iteration already do wonders.

I usually call SaveChanges() when i % 20 == 0.
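
A sketch of that batching pattern (the context and `Measurement` entity are hypothetical):

```csharp
int i = 0;
foreach (var row in incomingRows)
{
    db.Measurements.Add(new Measurement { Value = row.Value });
    if (++i % 20 == 0)
        db.SaveChanges();   // flush a batch of 20 tracked inserts
}
db.SaveChanges();           // flush the remainder
```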


We'll need tracking (if I understand correctly, we need previous values). When calling SaveChanges often, how do you handle rollback in case something fails?


You can use AsNoTracking() and an SQL query for bulk changes (and minor changes), e.g. updating one column.

It works faster than with tracking. The SQL script runs in your transaction, so rollback works as usual.

The i % 20 == 0 condition with SaveChanges() is used with tracking, yes.


Either wrap everything in a transaction or just call SaveChangesAsync only once. The exact behaviour depends on the database; under the hood this uses database transactions.
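
A sketch of the explicit-transaction option (the context and entities are hypothetical): several SaveChanges calls grouped into one database transaction, so a failure rolls everything back together.

```csharp
await using var tx = await db.Database.BeginTransactionAsync();

db.Orders.Add(newOrder);
await db.SaveChangesAsync();

db.AuditEntries.Add(new AuditEntry { Action = "order-created" });
await db.SaveChangesAsync();

// If an exception escapes before this line, disposing tx rolls back.
await tx.CommitAsync();
```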


I'd kill for anything close to EF for Node. Selecting complex structures from a database in any of the existing JS ORMs is just painful.


I've enjoyed the ORM in adonisjs: https://docs.adonisjs.com/guides/models/relationships#preloa...

Having used it on an actual product and dealing with some of the pain points, it's my go-to since the typescript version (v5) came out.

The ORM uses Knex.js internally, which is very simple to drop into if you just want a query builder. Having Knex be accessible also makes it simple to just write your query in plain sql as well, or as the Lucid ORM has available, just fragments of your query (say the join statement) as raw sql: https://docs.adonisjs.com/reference/database/query-builder#w...

Along with debugging, printing out the sql, and support via the Adonisjs REPL "Ace", it makes for a very nice experience.


These are more akin to Dapper which is a thin layer on top of SQL. Currently I use Prisma for JS, it's a bit better, but if you haven't used EF then you probably don't know what you're missing.


I've used EF quite a bit in the past. While there may be some features missing, Lucid is an ActiveRecord implementation, which I would figure falls into the "ORM" category.

Which features would you say are the key ones that make Lucid seem more like a query mapper (Dapper, Knex) than an ORM (ActiveRecord, EF)?

Specifically, in my Adonis projects, I'm mostly working with the Model objects through the ORM methods, and only dropping to Knex/SQL when necessary (complex CTEs, etc.). Since it's such a Model-centric way of development, it naturally seems like an Object-Relational Mapping to me.


Context tracking - selecting multiple entities, updating and pushing them back to the db.

Selecting complex DTOs, this isn't query building; a lot of magic turns it into SQL.

    TopPaidMayors = Cities
        .Where(c => c.State.Governor.Party == "dem")
        .Select(c => new
        {
            c.Name,
            HighestPaid = c.Mayors
                .OrderByDescending(m => m.Salary)
                .Take(10)
        })
        .OrderBy(x => x.HighestPaid.First().Salary)


Well, you put a fairly normal select query there, which can be accomplished in Lucid as well. Additionally, change tracking exists since it's an ActiveRecord implementation.

It's not all magic, either. Looking into the internals of EF, ActiveRecord, Hibernate, or other ORMs reveals patterns that, once familiar, help you reason about the behavior of complex queries. I only state this to push back against the commonly found wisdom that "big frameworks are magic", which tends to scare learning developers away from trying to understand them.

There are intersections and disjunctions of feature sets between the various ORMs, with some features for EF still only available via extensions (or nonexistent). I don't think this makes the Lucid ORM any less of an ORM.

Again, I like EF Core. I simply think that as far as node-based ORMs go, that Lucid is the one I've had the best experience with, so wanted to highlight it.


Query builders are a thin wrapper on raw sql. Real ORMs that understand the relations between objects are not.

There are a ton of joins and sub-selects in that query. 8 very succinct lines. Can you do anything close in Lucid? I don't think so.


Well, I've put sources and links to multiple pages about the ActiveRecord implementation. Lucid's ORM "query building" (it's the same in EF; LINQ has "Integrated Query" in the name) does track entities and sub-entities, which is how you can query things, update them, and later call `.save()` to persist them back to the database.

However, you seem set on making this a combative conversation rather than a collaborative discussion, so I'll end my participation here.


I had hoped you'd put some actual code in either of your last two replies.


MikroORM is the best I had come across while avoiding Prisma.


Prisma is a bit higher level, Mikro is more a query builder. EF though is way beyond both.


What would you like built-in db stuff to be like?


Well, I mean, DataTable and friends can handle master-detail, for example, but you have to do a lot of plumbing to set it all up, especially with autoincrements involved. I was kinda expecting it to be less work.

Ideally I'd like to supply some selects, fill up some DataTables with master-detail data, manipulate it and commit changes.

But yeah, maybe I gotta check out the latest EF stuff and see if I can't convince the others...


DataTables are a construct from .NET Framework 1.1. You owe it to yourself (and would be doing your employer a massive favour) to check out EF Core.


Well that'd explain why they feel clunky :D

Will def look at it more carefully.


EF Core: 1. load order and related line items 2. edit anything 3. call SaveChanges()

Definitely check it out if you haven't recently!
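
A sketch of those three steps (the DbContext, entity names, and properties are hypothetical):

```csharp
var order = await db.Orders
    .Include(o => o.Lines)                  // 1. load order + line items
    .SingleAsync(o => o.Id == orderId);

order.Lines[0].Quantity += 1;               // 2. edit anything; the context
order.Lines.Add(new OrderLine { Product = "Widget" });  //    tracks changes

await db.SaveChangesAsync();                // 3. EF emits the UPDATEs/INSERTs
```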


Just do yourself a favor and don't use DataTables directly. If you want to be as close to the database as possible, check out Dapper. But EF Core should cover nearly all of Dapper's functionality and much more.


I am just old-fashioned: as many stored procedures as possible. Why waste network traffic?


The new ASP.NET took a lot of good concepts from the Node ecosystem and feels really modern. It has a lot of batteries included. I think .NET is an awesome platform for building backends. I wouldn't use it for frontend, though; Razor and Blazor never really convinced me.


Blazor Server is perfect for internal applications and dashboards, though. It's just so easy to use, especially if you plug in any of the community-made component libraries like MudBlazor. I had pure backend devs actually happily making frontends for once. It's essentially Phoenix's LiveView, but in C#, and it uses the already-proven SignalR.


For me, Blazor is a mess. Its component system is even worse than Angular's: a complete OOP mess. Yes, it's easy to use, but some things don't work and it takes ages to find out why.

I prefer next.js for the frontends.


I have been having a ridiculous time trying to find out just how to override an appsettings.json variable (a DB connection string) with an environment variable. Could not find any good answer. What is the right way?


Configuration is applied in layers. If you’re using the default setup you get the following layers applied in this order:

1. appsettings.json

2. appsettings.{env}.json

3. user secrets (only in Development environment)

4. environment variables

5. command line args

You can fully customize the setup if you desire; there are packages to support things like external secret stores. If you Google "ASP.NET Core configuration" there's an MS page that goes into great detail on all of this.

Anyway, your env vars must match the name as you'd structure it in a JSON object, but with the periods replaced with double underscores. So ConnectionStrings.MyConnection becomes CONNECTIONSTRINGS__MYCONNECTION, FeatureFlags.Product.EnableNewIdFormat becomes FEATUREFLAGS__PRODUCT__ENABLENEWIDFORMAT, etc.
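
A small sketch of that mapping (requires the Microsoft.Extensions.Configuration and .EnvironmentVariables packages; the variable name and value are just examples):

```csharp
using System;
using Microsoft.Extensions.Configuration;

Environment.SetEnvironmentVariable(
    "CONNECTIONSTRINGS__MYCONNECTION", "Server=prod;Database=app");

var config = new ConfigurationBuilder()
    .AddEnvironmentVariables()
    .Build();

// "__" in the variable name becomes the ":" section separator,
// and lookups are case-insensitive:
Console.WriteLine(config["ConnectionStrings:MyConnection"]);
// prints "Server=prod;Database=app"
```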


Awesome overview, stuff like this is often hard for newcomers to find/figure out. I'm sure it's in the docs somewhere but people often miss it.

One nitpick: environment vars don't have to be capitalized. You can do ConnectionStrings__MyConnection so the casing matches what you see in appsettings.json.

And a word of warning: make sure to understand how the configuration overrides work. If you have appsettings.json define an array of auth servers with 5 elements, then in appsettings.Production.json (or env variables) define an array of auth servers with only 1, auth servers 2-5 from the default appsettings.json will still be there! (something similar to this may or may not have caused a scare in the past)
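
That pitfall can be reproduced with in-memory providers standing in for the two appsettings files (a sketch; needs the Microsoft.Extensions.Configuration and .Binder packages, and the "AuthServers" key is made up):

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Extensions.Configuration;

var config = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>   // "appsettings.json"
    {
        ["AuthServers:0"] = "a", ["AuthServers:1"] = "b",
        ["AuthServers:2"] = "c", ["AuthServers:3"] = "d",
        ["AuthServers:4"] = "e",
    })
    .AddInMemoryCollection(new Dictionary<string, string?>   // "appsettings.Production.json"
    {
        ["AuthServers:0"] = "prod-only",   // overrides index 0 only
    })
    .Build();

// Indexes 1-4 from the lower layer survive: arrays merge by key, not replace.
var servers = config.GetSection("AuthServers").Get<string[]>();
Console.WriteLine(string.Join(",", servers!));
// prints "prod-only,b,c,d,e"
```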


Yep, it really just works with key/value pairs, so config overrides happen to individual keys. The config system itself doesn't really have a concept of nested objects or arrays. The config provider that reads the JSON file takes the path, like JobSettings[0].JobName, and transforms it into a key like JobSettings:0:JobName (IIRC).

I tend to avoid using arrays in config because of the unexpected behavior, and the risk of overriding something you didn't mean to. Anywhere you'd use an array you can usually use an object and bind it to a Dictionary<string, Whatever>, then ignore the keys.


Word of caution: Azure Functions implements app settings differently.


Thank you - I had read that portion of the docs but for whatever reason that part of it just didn’t click. For some reason it made me think I could only use some special subset of env vars that were prefixed with ASPNET_CORE or similar


You can have multiple appsettings files, and they are additive. E.g., you can have an appsettings.production.json which only contains an override for the connection string in the base appsettings.json.


I believe GP is asking about using runtime environment variables passed in through the shell or injected when starting a Docker container, i.e. the 12factor.net approach.


I think it's just:

builder.Configuration.AddEnvironmentVariables();

Then you can use an env variable like "Foo__Bar=X" to override Foo.Bar from your appsettings json.


If using the default builder, environment variables are included automatically as a configuration provider.


Probably not the right way, but if all else fails just prefix with

  System.Environment.GetEnvironmentVariable("NAME") ?? ...


> I was surprised how much was included and "just worked".

A simple HTTP server? Maybe I'm missing something, but when I needed one I couldn't find it.

I believe the closest it has is System.Net.HttpListener, which is a very different thing from your typical Go net/http.Server or Python http.server.HTTPServer.

I believe at some point they switched from HTTP.sys to Kestrel, so at least it doesn't need admin privileges anymore. But this whole thing is so tied to ASP.NET that it's pretty hard to figure out how to create the simplest HTTP server without anything else forced upon you (no services, no "web applications", no router, just a plain and simple bare HTTP protocol handler). So my impression is that maybe .NET can make certain complex things easy, but it has some issues with keeping simple things simple.


It sounds like you want minimal APIs: https://learn.microsoft.com/en-us/aspnet/core/tutorials/min-...

If for some reason you don’t want the framework to handle things like routing or request/response deserialization/serialization then you can go bare bones and implement everything via custom middleware: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/m...


… how often are you hand-parsing HTTP requests and hand-crafting HTTP responses one character at a time?

Productive devs want the request/response wrapper objects and routing constructs to handler methods to get work done and can still drop down into fine-grained request/response crafting as and when required.


> how often are you hand-parsing HTTP requests

That's exactly what I don't do, and what I want to see available in a standard library.

But I quite frequently implement custom request handling before any routing happens (if there's even any routing). That's super easy to do in Go, Python, or Rust, but when I needed something comparable in C#, I couldn't find similar composable, independent pieces that I could join together in a way I see fit.


As a sibling commenter mentioned, there’s a minimalist functional web server available out of the box, that can later have other more complex components added on:

https://learn.microsoft.com/en-us/aspnet/core/tutorials/min-...

It’s .NET 101 stuff.


I'm sorry, I think I really must be missing something and/or explained myself poorly, but I don't see how this is comparable... Doesn't `WebApplication.CreateBuilder(args).Build()` create a whole web-application-type thing? In my understanding it's comparable to `gin.Default()` or `flask.Flask(__name__)`, rather than the lower-level, basic `http.Server{Addr: addr}` or `http.server.HTTPServer(address)` (which still doesn't require any manual HTTP protocol parsing).

And this stuff is ASP.NET Core, not a bare .NET [Core], isn't it? What I'm talking about is something comparable to just Kestrel, except that I failed to find any documentation on using it "raw" without the whole ASP.NET thing (maybe I misunderstood what it is and it's tightly coupled with the whole framework?).


WebApplicationBuilder is the mechanism for configuring the Kestrel web server. Kestrel is the foundation of ASP.NET Core; there's no separating the two. But Kestrel is a totally modular system. If you configure your request pipeline with a single custom middleware, then that is literally the only thing the server is running. If you use only minimal APIs, then the middleware that handles their routing is the only thing running. ASP.NET Core has dozens of bells and whistles, but none of them affect your app if they aren't part of your request pipeline.


WebApplicationBuilder itself is mostly the base .NET Generic Host [1] (which is "bare" .NET and is used as a common host for dependency injection plus common utilities such as configuration and logging) plus activating the Kestrel hooks for HTTP middleware. Kestrel even at its most "raw" is always slightly higher-level and closer to Flask or Express (JS) as a middleware-focused HTTP execution engine.

(This partly reflects the classic HTTP.SYS role in IIS/Windows as well, because Windows' HTTP.SYS is surprisingly high level for a "raw" kernel component for hosting web servers. From my understanding, most of "Kestrel" under the hood is just a cross-platform semi-recreation of the HTTP.SYS abstraction on top of things like (but maybe not exactly) libuv/io_uring. So yes, everything is "naturally" higher level in .NET than Python's lowest level, just because it assumes a higher-level "OS server" base.)

Also, yes, the boundary between "Kestrel" and ASP.NET is really hard to define at this point. Almost all of ASP.NET is just "Express-style" middleware (though many of these middleware patterns in ASP.NET, I believe, predate Express) that is cumulatively stacked as you add more high-level ASP.NET features, and at this point all of them are just about optional depending on what you are looking to do.

Even many alternatives to ASP.NET at this point are built on top of the core basics like WebApplicationBuilder, they just diverge at which sets of middleware stack on top of that.

As others point out the recently expanded "Minimal APIs" experience is most tuned for "Flask-like" out-of-the-box behavior: https://learn.microsoft.com/en-us/aspnet/core/fundamentals/m...

That's as low level as it gets in .NET, but not so much because of "strong coupling" but because "everything is middleware" in .NET.

[1] https://learn.microsoft.com/en-us/dotnet/core/extensions/gen...
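For context, the "Flask-like" Minimal APIs experience mentioned above looks roughly like this sketch (assuming .NET 6+; the routes are made up). The `MapGet` routing is itself just middleware added to the same pipeline everything else uses:

```csharp
var app = WebApplication.Create(args);

// Route handlers as plain lambdas, Flask/Express style.
app.MapGet("/", () => "Hello World!");
app.MapGet("/users/{id:int}", (int id) => Results.Ok(new { id }));

app.Run();
```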


Thank you very much for your comment!

I don't have any issues with the IHost and builder patterns. I actually like those - although I've only used the very basics, so I don't really know about the intricacies and possible drawbacks.

Thanks for clearing up my misunderstanding about the coupling. I really thought Kestrel was something different; I hadn't expected it to be this high level. It being a replacement for HTTP.sys totally makes sense, of course.

I've found and read https://learn.microsoft.com/en-us/aspnet/core/fundamentals/m... and it started to make more sense now.


There's an ancient low-level API, which I just remembered, that you can explore; it still remains around for backwards compatibility but isn't recommended for new code: Kestrel was (via a long scenic route) forked from System.Net.HttpListener [1], which is the closest to a strict bare-bones HTTP.SYS wrapper that has existed in .NET.

There's a long issues thread on whether HttpListener should be more strongly marked as deprecated [2], to avoid people accidentally using it despite today's recommendation to use Kestrel/the "most core" parts of ASP.NET. One fun part of the thread is an example repo of the absolute most "bare-bones" and "raw" Kestrel bootup possible [3], including a "TODO: implement TLS handshake here" bit.

[1] https://learn.microsoft.com/en-us/dotnet/api/system.net.http...

[2] https://github.com/dotnet/platform-compat/issues/88

[3] https://github.com/davidfowl/BasicKestrel/tree/master/BasicK...
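For the curious, the HttpListener API from [1] looks roughly like this sketch (it blocks waiting for a single request, and again, it isn't recommended for new code):

```csharp
using System.Net;
using System.Text;

var listener = new HttpListener();
listener.Prefixes.Add("http://localhost:8080/");
listener.Start();

// Blocks until one request arrives; about as "raw" as .NET HTTP hosting gets.
var ctx = listener.GetContext();
var body = Encoding.UTF8.GetBytes("hello");
ctx.Response.OutputStream.Write(body, 0, body.Length);
ctx.Response.Close();
listener.Stop();
```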


>In my understanding it's something comparable to `gin.Default()` or `flask.Flask(__name__)`, rather than lower level basic `http.Server{Addr: addr}` or `http.server.HTTPServer(address)` (which still doesn't require any manual HTTP protocol parsing).

It's not. Keep reading.


You just add middleware before you register any controllers (or, leave out all that stuff entirely)


This works in practice, but comparing to Python, it's like pulling in Django when all I need is in the standard library (although some people do and their projects are successful, that's for sure). I just don't want to bring a whole industrial grade CNC machine to do something a handsaw would be perfect for.


It's nothing like pulling in Django.


Mind sharing a couple of examples? For me it's more natural to do custom header stuff on the webserver's side.

Disclaimer: looking from sysadmin's POV.


You can just use Kestrel without anything else.


I'm definitely a fan of the "batteries included" approach, but I am ambivalent on EF because I feel like it is still a bit too magic and gets abused. Though it's not like it's a big deal to use whatever else you prefer instead (I am a big fan of the inline SQL with Dapper approach).


I stuck to a DB-first model + LINQ + SaveChanges() and largely managed to keep out the magic quite successfully for a .NET 6 web project last year. Records are fantastic when composing queries. I didn't touch inheritance or any fancy mapping strategies -- one table, one class.

The only bit of framework-specific / hidden-magic debugging I really had to do was realizing I needed AsSplitQuery() when creating objects composed of independent datasets, and AsNoTracking() for a decent perf bump. Single group-bys are fine, but when you start nesting them it gets really hairy really quickly -- they usually ended up becoming views.

Otherwise change-tracking worked wonderfully for batch-updating (but everything I did was short-lived) and outside of group-bys, the LINQ -> SQL mapping (and vice-versa) was extremely predictable, in both EF Core generating the SQL I expected, and creating the correct LINQ from the SQL I knew I wanted.

9/10 would use again; inline sql is for nerds
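A minimal sketch of what that narrow subset looks like in practice (assuming EF Core 6+; `db`, the `Orders`/`Lines` navigation, and `cutoff` are all hypothetical):

```csharp
using Microsoft.EntityFrameworkCore;

// Read path: no change tracking for read-only data, split queries to avoid
// a cartesian-explosion join when including a collection.
var orders = await db.Orders
    .AsNoTracking()
    .AsSplitQuery()
    .Include(o => o.Lines)
    .Where(o => o.CreatedAt >= cutoff)
    .ToListAsync();

// Write path: change tracking turns a simple loop into one batched update.
var pending = await db.Orders.Where(o => o.Status == "Pending").ToListAsync();
foreach (var o in pending) o.Status = "Expired";
await db.SaveChangesAsync();
```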


Yeah, if you stick to a very narrow subset of what it can do then you will have no problems. Hopefully everyone else on your project is on the same page about what that subset is.


I mean, if your alternative is no ORM and inline SQL, then the “narrow subset” I’ve chosen is precisely competitive, highly effective, and IMO strictly an improvement. If your alternative is a different ORM, then my opinion is moot, but at least I’ve never seen an ORM be worth the headache (which is why we avoided using the full feature set in the first place).

I don’t know why one would have an issue with not using the features they’re not looking for anyways. Keeping everyone aligned on patterns/usage is half the point of code review, and it was managed there without much trouble (you can’t really do the truly magical incantations without quite a bit of setup)


I’m a fan of Dapper for doing the tedious mapping and result set handling stuff but still sticking to SQL. I don’t really find writing LINQ instead of simple SQL queries is much of a time saver.
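For context, the Dapper approach keeps the SQL as-is and only automates the mapping — a sketch, assuming an open `IDbConnection` named `conn` and a hypothetical `Product` class:

```csharp
using Dapper;

// Parameterized SQL in, typed objects out -- Dapper maps result columns to
// Product properties by name; no query translation layer involved.
var cheap = await conn.QueryAsync<Product>(
    "SELECT id, name, price FROM products WHERE price < @MaxPrice",
    new { MaxPrice = 100m });
```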


I'm not a fan of the LINQ syntax itself so much as the fact that I'm no longer dealing with arbitrary strings being smashed together that happen to form a valid SQL statement, plus all the benefits that inevitably come with not having stringly-typed logic (like being able to refactor properly, type safety, proper autocomplete, no typos at runtime, etc.). It maps closely enough to SQL itself that the negatives of having to use an "intermediate" language/API aren't significant, and the magic can be made negligible, so it's largely pure gains.


The tools are good enough that my IDE can detect SQL strings and offer suggestions, plus once the database starts getting used by multiple applications even type safety won’t really make it safe to go renaming existing columns.


What is DB-First model ?


There was a specific feature in the .NET Framework version of EF called “database first” that would analyze your existing database and generate an XML file describing it, which EF would use to generate models on the fly. It was pretty horrible and thankfully isn’t supported by modern EF Core. However, the term “database first” stuck, and it’s really come to refer to any scenario where you manage your database using a tool other than Entity Framework migrations. Could be a dedicated migration tool, a SQL Server database project, whatever. Then you either create EF models to match the tables or use a tool to generate them.
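For illustration, the modern version of that workflow usually means scaffolding models from an existing database with the EF Core CLI — a sketch with a placeholder connection string and database name:

```shell
# Requires the dotnet-ef tool and the SqlServer provider package.
# Generates entity classes and a DbContext from the existing schema.
dotnet ef dbcontext scaffold \
  "Server=.;Database=Shop;Trusted_Connection=True;" \
  Microsoft.EntityFrameworkCore.SqlServer \
  --output-dir Models
```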


Coming from 10 years of spring and NodeJS development, last year of .NET has been incredible. I share that "batteries included" experience. So much useful functionality is just packed right in. And there's a lot of backward compatibility and support.

So much better than spring and NodeJS.


nestjs provides a comparable experience for nodejs as well


When we update .NET libs years later, we still don’t have to change any code in our own software. With anything Node.js, even after months and sometimes weeks, updates of any kind mean everything breaks. Not sure how anyone builds things with it that don’t need updates outside security fixes. Smaller companies might have software running for years or decades that only needs security updates and not new features. A lot of our stuff is over 10 years old and needs only security updates; none of that is Node (or in the npm ecosystem, for that matter), as we have simply shot ourselves in the foot with that too many times already.


It's been a few years (2018?) since I used NestJS but back then our experience with it was far from stellar. It lacked documentation beyond the basic "getting started" examples (just checked, it doesn't seem like they've improved much on that front), it had quite a few footguns and as soon as we strayed off the beaten path, things tended to become painful, especially on the GraphQL side of things and general 'plumbing' like interceptors, schemas, and data validation.

Internal error handling was sometimes abysmal too, a misconfiguration of certain dependencies in `AppModule` could leave the application in a broken state on startup where it wouldn't bind to its port and no error messages were printed to console. On a few occasions I had to spend an hour or more reading and understanding NestJS source code to resolve those issues, which could have been avoided if they had better internal validation and error logging in place.

That's not to say it was all terrible, some aspects of it were genuinely good, but the overall experience and many hours of needless pain it caused left a really bad taste in my mouth. Back then, at least, it felt like a Lego set where the pieces didn't all quite fit together.

Depressingly enough, it seemed like NestJS was the best that Node.js world had to offer which made me quit the ecosystem altogether.


.NET has been doing a lot of things right. My startup's codebase is nearly all .NET 7: landing page, web app, Windows service, API. The main non-.NET code is vanilla JS in the web app.

I've been keeping a close eye on Next.js, which is very well done, but I love how versatile .NET is. With one language, I can write all of the above, and my dependencies are minimal thanks to .NET's rich standard library - a refreshing change from the NodeJS apps I've written and maintained.

With native AOT, .NET can even be compiled to native binaries. Recently, I wrote a Windows password filter DLL in C#, which would have been unthinkable some years ago.

It's a really enjoyable stack to work with. Kudos to Microsoft for what they've been doing with it.
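For reference, the native AOT publish described above is roughly one flag (assuming .NET 7+ and an AOT-compatible project; the runtime identifier is just an example):

```shell
# Produces a self-contained native executable instead of IL + JIT.
dotnet publish -c Release -r win-x64 -p:PublishAot=true
```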


Nextjs (and its contemporaries) work really well with .NET. Toss in your API calls into Next, write a beautiful front end, and now you've got separate front end and back end that works phenomenally together and can be tested separately using their strengths.

I'm more shocked by you sticking with Javascript. Surely with your adoration of C# you would have moved to its closely related sibling Typescript. Although I guess there is that slight initial hump to move to TS from a purely vanilla project.


> Nextjs (and its contemporaries) work really well with .NET. Toss in your API calls into Next, write a beautiful front end, and now you've got separate front end and back end that works phenomenally together and can be tested separately using their strengths.

Is it somehow different than using any other language for your backend? Seems to me you're describing any frontend/backend split, nothing specific to either Nextjs or .NET here.

> I'm more shocked by you sticking with Javascript. Surely with your adoration of C# you would have moved to its closely related sibling Typescript. Although I guess there is that slight initial hump to move to TS from a purely vanilla project.

Some people prefer to stick with vanilla JS because that's what the browser ends up running anyways. Personally, TypeScript tends to get more in the way than be helpful for certain types of projects; for others, TypeScript helps a lot, but that tends to be when the codebase involves a lot of contributors of varying skill levels rather than a small circle of contributors with relatively high knowledge of programming and JavaScript in particular.


Feel free to replace Next with another framework if that's more to your liking. The same benefits apply.

I'm not so sure experts in JS would avoid TS just because they're experts in JS. The opposite, even: being experts, they know how many footguns JS has, and TS just comes with far too many benefits for any project that is to be maintained for longer than a week.


Frontends often have a much shorter lifetime than backends. A good example are banks, they change the online banking frontend approximately every 10 years, but a lot of backend systems are running since the 80s/90s without complete rewrites.

It makes a lot of sense to decouple them, and just re-write the frontend to whatever technology is the best for its time. It’s also not uncommon to have multiple frontends, for example a web application and also a native android/iOS app.


How do the costs of running the .NET 7 codebase compare with running something open? I like .NET a lot, have worked with it a lot in the past. But when I tried my own startup, I picked node instead and don't really regret it. I'm getting back into .NET these days though at a new job again, and .NET 7 will be interesting to dive into. Still not sure I'd want to be that tightly-coupled to Microsoft subscriptions though. I'd tried getting a couple .NET projects off the ground back in the .NET 5 days or whenever .NET Core first became a thing. It was surprisingly expensive to run things in Azure, and to use tools too. So that played a role in me just picking node for my startup even though I'd have preferred to stick with C# and .NET if I could, so I could become an expert in those tools.

But yeah, it's definitely a huge plus to have access to .NET's massive standard library. The documentation is amazing, too. If the costs to run in Azure and to be tightly coupled with Microsoft that way are reasonable for a lean/small startup or self-funded startup, then I may give it a shot again.


.NET IS open and not really tied to any Microsoft Subscription. I really hate when people keep parroting that without understanding. Rider is nowadays a better IDE for C# and it's not owned by Microsoft. You can host it wherever because it runs wherever.


Nah, Rider is only better for cross-platform web development, and even then it falls short for Blazor.


In my experience, Rider is a much better IDE for development, especially on large codebases. In our few-million-LOC codebase, VS lags/hangs a lot, navigation is slow, and search (like searching for a log string in the codebase) is awfully slow. Refactorings are not useful without Resharper, etc.

VS has a lot of useful stuff around profiling, performance analysis etc, but as a code editor it's pretty bad.


VS lags/hangs a lot with Resharper installed. As someone who doesn't like/trust Resharper, I have a very different experience of VS performance.

(Also, Roslyn built-in refactorings have gotten so good, I increasingly feel like I should just develop more in VS Code because the language server is the same.)


It's just like any other containerized Linux app. The only thing worth noting: a base ASP.NET Core API will idle at around 200 MB of memory.


> Still not sure I'd want to be that tightly-coupled to Microsoft subscriptions though.

No need for MSDN subscriptions these days, although they do come with some perks.


> How do the costs of running the .NET 7 codebase compare with running something open?

.NET has been open source since .NET Core 1 in 2016. The source code is on GitHub: https://github.com/dotnet/


What do you mean open? It is open


Building my startup on .Net as well. Been a huge fan for over a decade but haven't written professional C# in over 8 years. It's been a dream to write OSS b2b .Net based IT tooling for years now(C# or my language crush F#); and here I am :)

Using ASP.NET 7 for the server component with React and Vite as the FE tooling. It's been a HUGE learning curve the first couple of months, particularly around DI, the options pattern, ASP.NET auth and Identity, etc., but everything is falling into place now. Also, SignalR is much more low-level than it pretends to be lol.

It would have been easier to go straight NodeJS as I'm quite proficient in writing even framework level code in it(down to the sockets), but I believe .Net is the better option for this long term.


Fortunately NextJS and .Net isn’t strictly an either/or proposition. I’ve moved most of my new front-end development to NextJS at this point and use C# for most everything behind it.


A few years ago, the authentication/authorization story was quite tough to configure in this setup. Did things change recently?


It really just depends on what you're trying to do I guess. The main auth boundary for my projects is typically the server gateway/api, and that's very straightforward to set up, whether you want simple auth or role-based or to inspect JWT claims or whatever.


My big problem with .NET is that a lot of line-of-business apps were written using ASP.NET Web Forms, but there is no upgrade path other than “rewrite most of it”.

It feels like the pain everyone went through upgrading from Python 2 to Python 3.

It also doesn’t help that .NET Framework has its support cycle tied to the OS, and hence is 10+ years. This means that businesses can be lazy and just leave these old apps to fester and still be technically “supported”.

As web standards evolve, I’m seeing these web apps slowly break.

It’s actually a big problem and Microsoft doesn’t seem to care much. It’s only in .NET 7 that they’ve finally introduced some incremental migration features, but they’re buggy and incomplete.

As a random example of the issues: .NET Core 1.0 broke DataContract deserialisation because of a missing thing in the BCL. A decade later it's finally present, but they still haven't fixed the service client generator tool!

Every time I’ve tried to migrate an app, it’s one breaking issue after another with 3-year old GitHub issues that have no responses from Microsoft.

Talking about the awesome future of .NET is great and all, but you've got to give people a path to get there...


The writing was on the wall a long, long time ago!

These companies had 15 years to do it. And it was fairly easy to get webforms + MVC running side-by-side on the same site so you could gradually migrate.

Too late for that now though, you can't run webforms + MVC Core side-by-side.

You could put a load balancer in front of the old app and start moving end points to a new code base.

It's akin to moaning that MS haven't got an upgrade path from IE7. Or your Adobe Air app has stopped working.

On the plus side, doesn't 4.7 still have a huge support window because it's tied to one of the windows server versions?


> The writing was on the wall a long, long time ago! [...] These companies had 15 years to do it.

I pointed this out in 2013 only to be literally shouted down by my manager. I did so again in 2017 under a different manager on the same team with the suggestion that we investigate using SPAs and RESTful frameworks only to be told that Razor Pages was the way forward which, while an improvement, didn't do any wonders for our ability to attract/build technical talent or create rich user experiences.

As a sibling comment points out (in a sentiment that I've echoed on HN before):

> ASP.NET Web Forms are a complete trainwreck and an abuse of HTTP and other basic web development standards (e.g. by using javascript: URLs and POSTing forms for every single interaction with the page). It is broken by design.

You can probably see why developers that chose this framework in the first place aren't interested in upgrading... learning actual web standards and state management isn't something they wanted to do in the first place.


> You can probably see why developers that chose this framework in the first place aren't interested in upgrading... learning actual web standards and state management isn't something they wanted to do in the first place.

This is why I'm personally extremely skeptical of both Razor Pages and all of Blazor (Client, Server, Unified, whatever). All of that feels a lot like "Web Forms Again, this time with more C#" to me. Parts of Blazor especially may as well be ASP Classic `runat="server"` and look just like it to me. It kind of feels like a lot of ASP developers have already forgotten the hard problems of ASP Classic and ASP.NET Web Forms and have been doomed to recreate them cyclically.


Classic management f**-up. Some products/companies just die, because of bad management that doesn’t react to change. Get a new job ;)


> Get a new job ;)

Done :-).

> Some products/companies just die, because of bad management that doesn’t react to change

Some don't have to because they get paid no matter what.


Sure, but at some point the last sane developer either quits or retires, and then they are completely f*ed.


15 years of management experience. Oversaw the reduction of personnel expenditures by 90%. They'll do fine.


> The writing was on the wall a long, long time ago!

The vast majority of ASP.NET web apps could not have migrated to .NET Core 1.0; it was missing too many features. It was missing types like SecureString, and had zero support for Workflow Foundation, Windows Communication Foundation, etc...

At the time, .NET Core was also advertised as "an alternative platform for Linux apps", not as a direct replacement for .NET Framework.

It was only in .NET 5 that Microsoft changed their tune and started calling Core the "replacement". At that time, something like half of complex enterprise apps might have been able to migrate across, but they would have encountered a long list of breaking issues with a note saying "we might fix that in an upcoming major release". That's after a partial rewrite.

You can't go to a business that has a web app that's "not broken" and suggest migrating it to a definitely broken platform that's very much still playing "catch up".

Support is improving in .NET 7, and the upcoming .NET 8, but it's definitely not 100% and pretending that it's the "end users' fault" for not jumping onto an incomplete and buggy platform is not helpful.

I'll list some random GitHub issues for you to peruse. For large enterprise apps, many of these are showstoppers for incremental or seamless migrations. The workaround is always "rewrite everything from scratch using wildly different technologies that aren't direct replacements."

    IIS app pool recycle throws 503 errors
    https://github.com/dotnet/aspnetcore/issues/41340

    OData core libraries now support OData v4 
    https://devblogs.microsoft.com/odata/announcement-odata-core-libraries-now-support-odata-v4/
    (.NET Framework is v1-v3, and Core is 4.0+, a breaking change!)

    Workflow Foundation didn't start getting migrated until .NET 6, and not by Microsoft!
    https://github.com/UiPath/corewf

    dotnet-svcutil ignores most of the settings
    https://github.com/dotnet/wcf/issues/4887

    dotnet-svcutil silently failing to deserialize responses
    https://github.com/dotnet/wcf/issues/4163

    Reuse of Types not working with WCF dotnet-svcutil tool for .NET Core
    https://github.com/dotnet/wcf/issues/4277

    Visual Studio 16.8 breaks SvcUtil build targets
    https://github.com/dotnet/wcf/issues/4431


I wasn't saying anything about .Net Core 1.0, I was talking about MVC 1, 2, 3, 4 or 5 that came out 15 years ago. I meant any sane business should have migrated away from webforms before Core even came out.

By MVC 3, which was 2011, it was bloody obvious webforms were not the future of .Net web development.

And you could run both in the same project with a small bit of effort.

Here's a Scott Gu post from 2009 saying so:

ASP.NET 4.0 makes it easy to implement clean, SEO friendly, URLs using both ASP.NET MVC and now ASP.NET Web Forms (you can also have applications that mix the two).

https://weblogs.asp.net/scottgu/url-routing-with-asp-net-4-w...


> you can't run webforms + MVC Core side-by-side

Besides the obvious IFRAME approach, one could have a single solution with both an ASP.NET Web Application project and an MVC (or Blazor) project. Both of them would reference shared models/services via some .NET Standard 2.0 project. IIS can host ASP.NET Core.

Then start refactoring and rewrite the old HTML4/CSS2 into HTML5/CSS4, while the Server Controls and Master Pages become Razor Components.

There is definitely an upgrade path.


That generally requires a reverse proxy or content-switch. That reverse proxy will then have to lie to one or both apps about their URLs, etc...

Microsoft is recommending YARP for this: https://learn.microsoft.com/en-us/events/dotnetconf-2022/mig...

That sounds okay, but you'll hit all sorts of fun technical challenges. Good luck making WCF client certificate authentication work through this! There are also performance gotchas with buffering, etc...

There are not-atypical scenarios where during a migration, an app might be behind 4+ reverse proxies, all of which are different products.

E.g.: CDN -> App Gateway WAF v2 -> Kubernetes Ingress -> YARP -> Legacy App.

You'll quickly discover all sorts of fun interactions between these and 15-year-old ASP.NET code!


The end result is that .NET Framework is the Python 2 of the .NET world, and a major slice of anything .NET that I happen to touch on for enterprise consulting.


ASP.NET Web Forms are a complete trainwreck and an abuse of HTTP and other basic web development standards (e.g. by using javascript: URLs and POSTing forms for every single interaction with the page). It is broken by design. ASP.NET MVC 1.0 came out in 2009, there was plenty of time to modernize those apps.


I'm not sure you're addressing anything the comment above said.

Their complaint was that people invested a lot of resources into a technology that Microsoft promoted, and then that technology hit a dead-end with no good upgrade path aside from "rewrite it all!" What year that occurred is irrelevant.

Ironically, you brought up MVC 1.0, which Microsoft did EXACTLY THE SAME THING TO when they released ASP.NET Core MVC, which also has no direct upgrade path. In fact, it wasn't until the last twelve months that Microsoft even tried offering anything, when they realized it had been ten years and a lot of companies remain stuck on .NET Framework to this day.

Yes, Web Forms was poorly designed. But this is about Microsoft's poor upgrade offerings more than any specific technology, and an important lesson for people investing time/resources into Blazor today (given that it uses a proprietary WebAssembly compilation system and a proprietary back end not dissimilar from Web Forms in terms of lock-in).


It seems you have not kept up with .NET development. Web Forms have been obsolete since many years by now.


Technologies don’t live forever. At some point you need to upgrade, and at some points there will be some major breaking changes.

.NET 4.8 is still fully supported, and there is no end-of-life communicated yet. It is a part of windows server 2022, which will be supported until 2031, that’s probably the earliest possible end-of-life date for .NET 4.8.


If people are looking at what technologies and companies to invest in, knowing that they may be left high and dry with a "full rewrite"-level of breaking changes is a valid and relevant critique. You want to know you'll be supported through difficult transitions, and that isn't unreasonable.

I don't understand why people are coming out of the woodwork to tell everyone that full rewrites are a perfectly normal and expected regular occurrence that you should expect from your tech choice. Python 3 is famous for how much they hurt themselves by ignoring the problem. Many companies never did migrate from 2 to 3, and instead picked tech with better guarantees. I expect the same to occur with .Net Framework.


The big difference between the two is that the classic .NET Framework is still supported for as long as the underlying version of Windows is (and to remind, it still ships in Win11). So old .NET apps don't have to be rewritten to continue running securely.


Do you have an example of any technology that gave you a clean upgrade path from 2005 until now, without the need for major changes?

With .NET you can still use most of your code from 2001; just the UI frameworks (and WCF) are not supported anymore on .NET 5+. They are supported at least until 2031 on .NET 4.8. So you can easily move your business logic to a newer version and leave the UI on an older version. But seriously, who is still happy with a web application built in 2005?


Even many of the desktop UI frameworks are supported on .NET 5+ (on Windows only). Both WinForms and WPF are back in "support" in .NET 5+ (.NET 8 includes [small] updates for both WinForms and WPF). Not every UI component moves forward, but a crazy amount of them are still compatible after decades.


.NET Framework will be supported for much longer than that. It is used in many internal Windows components including MMC snap-ins, ADFS, and other Microsoft technologies like Exchange and Sharepoint.

Or maybe it will not be supported as in "we won't provide support for any applications needing .NET Framework, but they may or may not still work"?


Or to phrase it another way: Microsoft themselves failed to migrate off .NET Framework to .NET Core for the same reasons their customers are also stuck.

PowerShell Core has been available for years but still is not included with Windows.

Similarly, even new web UI components such as Admin Center are written using .NET Framework.


> Microsoft themselves failed to migrate off .NET Framework to .NET Core for the same reasons their customers are also stuck.

I'd wager it is more a question of priorities. I bet Microsoft itself sees Windows Server as legacy tech as they want everyone to move to the(ir) cloud.

Partially it is also an organisational problem. Everything you ship as a Windows component incurs a debt, which is exactly why .NET Framework wasn't able to evolve further.


Imagine it like a Git repo.

Old .NET Framework was a main, Windows only, branch of .NET.

At tag "v4.x", a new branch named 'Core' was created. Core was a multiplatform version of .NET.

There were three tags in the Core branch: v1, v2 & v3.

Then Core jumped from v3 to v5 and was merged back into the main branch. ".NET Core" has replaced ".NET Framework". Also, v5 dropped 'Core' from the name and was named just ".NET 5".

Development now happens on the main branch and has already reached tag v7. A new tag is created each year.


This is a very helpful analogy. Thanks.


I'm traditionally about as far from a "microsoft ecosystem" dude as it gets. The first MS anything I've touched in my career has been the last year or so in csharp, and I love it. Great performance, excellent language, sane architectures. ASP.net is awesome for APIs.

I use Rider from Jetbrains, develop on a mac, deploy to Linux on AWS, it would blow the mind of 2003 me to know this was the case.


I spent a lot of time as a Java developer and have recently moved into a spot where I'm being asked to work with C#. A few things rub me the wrong way, and I'm wondering if it's just ignorance and / or me being stuck in my old ways.

- Unit testing with mocking is kludgy compared to Java and Kotlin (Moq vs Mockito). There's no mocking of concrete implementations for technical reasons, perhaps unless you shell out money for a paid product. This leads to folks putting interfaces everywhere so that code is testable.

- Along the lines of the above, everything feels a lot more corporatized or tied to Microsoft. The open source ecosystem and tooling around C# isn't to the same level of the JVM langs.

- The project structure feels odd. Solutions hide a lot of files from you, whereas JVM langs generally show what's actually there.

When it comes to actually writing implementations, I don't mind it, and it's far ahead of, say, Java 8. ASP.net might even be ahead of Spring in a few ways. But with Java 20 and Kotlin around, I don't feel compelled to move to it as a new default.


There are other libraries besides Moq: NSubstitute, FakeItEasy, etc.

With Moq and most mocking libraries, you generally just need to make a method virtual for mocking concrete classes.

Most mocking libraries use Castle Project's DynamicProxy, so they should be able to inject things without the need for interfaces. The interfaces-for-everything habit comes from the .NET community's obsession with patterns, abstraction, clean architecture, and DDD.
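To make the mechanics concrete, here's a hand-rolled sketch (no Moq involved, so it runs stand-alone, and the class names are made up) of why a member must be virtual to be mocked: interception works by overriding, which is what a dynamic proxy does at runtime.

```csharp
using System;

// A concrete class: only virtual members can be intercepted, because a
// dynamic proxy (what Moq generates) is effectively a subclass override.
public class Notifier
{
    public virtual string Send(string msg) => "sent:" + msg;
}

// Hand-rolled test double standing in for what Moq would emit at runtime.
public class FakeNotifier : Notifier
{
    public override string Send(string msg) => "faked:" + msg;
}

public static class Program
{
    public static void Main()
    {
        Notifier n = new FakeNotifier();
        Console.WriteLine(n.Send("hi")); // the override wins: "faked:hi"
    }
}
```

With Moq the equivalent would be roughly `new Mock<Notifier>()` plus a `Setup` on `Send`; a non-virtual `Send` would silently keep its original behavior instead of being intercepted.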

You can use VS Code with file-based projects. With Rider/Visual Studio you can show hidden folders so long as they are in a folder or subfolder of a project.

Otherwise, you'll need to add a solution folder and items to the solution folder.


Right, marking methods as virtual is the other route I saw. Are most C# codebases opting to do that rather than add interfaces? It feels weird to edit signatures like that for testability, but maybe it's just what I'm used to - I'm putting all of my objects in constructors already for testability's sake, and I'm no stranger to making a factory or two.

VSCode has been my way forward for file-based editing so far. It's not so bad - just feels like I'm doing things that I ought not to be when I'm searching for DLLs or a random "scripts" folder in the root folder of our repo in Rider. In IntelliJ IDEA, all of the folders are just sitting there, ready for quick edits.

I think C# overcommitted on interface-driven design. It's good in some instances, but the more I work in code the more I think that the majority of it is needless and oftentimes harmful to maintaining a healthy code base.

Thanks for the pointers! Nsubstitute looks cleaner than Moq at first glance.


That is a good question. I'm not sure. I've seen a lot of interface usage for testing web apps or backend services at companies when I was doing consulting work.

I personally use stubs for higher reuse, favor integration/functional tests over units, and InternalsVisibleToAttribute to allow the use of internal classes/methods from test assemblies. It's been a long while since I reached for a mocking library. I probably use structs, statics, and extension methods more than the typical dotnet dev. And for DI, I have no qualms just using concrete classes or base classes.
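As a sketch of the InternalsVisibleTo approach (the assembly and type names here are hypothetical), the attribute lives in the production assembly and names the test assembly that gets access:

```csharp
using System;
using System.Runtime.CompilerServices;

// Grants the named test assembly access to this assembly's internals,
// so tests can use internal classes without widening them to public.
[assembly: InternalsVisibleTo("MyApp.Tests")]

namespace MyApp
{
    internal class OrderCalculator
    {
        internal int Total(int a, int b) => a + b;
    }
}

public static class Program
{
    public static void Main() =>
        // Same-assembly callers can of course use internals directly.
        Console.WriteLine(new MyApp.OrderCalculator().Total(2, 3));
}
```

One caveat: for strong-named assemblies the attribute must also include the test assembly's full public key.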

I also built my own extensions to xunit to enable DI in test methods, since I do more integration tests.

Extension methods for interfaces can be very powerful along with the newer default interface feature. So using that where it makes sense, I get.
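A minimal sketch of those two mechanisms side by side (the names are made up): a default interface member plus an extension method, both shipping behavior on an interface without forcing a base class.

```csharp
using System;

public interface IGreeter
{
    string Name { get; }

    // Default interface member (C# 8+): implementers get this for free
    // and can still override it by implementing Greet themselves.
    string Greet() => "Hello, " + Name;
}

public static class GreeterExtensions
{
    // Extension method: also available on every IGreeter, old or new.
    public static string Shout(this IGreeter g) => g.Greet().ToUpperInvariant();
}

public class SimpleGreeter : IGreeter
{
    public string Name => "world";
}

public static class Program
{
    public static void Main()
    {
        // The receiver must be typed as the interface to reach the
        // default member; the class itself doesn't expose Greet.
        IGreeter g = new SimpleGreeter();
        Console.WriteLine(g.Greet());
        Console.WriteLine(g.Shout());
    }
}
```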

In dotnet itself, interfaces are only heavily used for key extension points, so you see them more enforcing specific idioms like IComparable<T>, IReadOnlyList<T> or adapter/extensions for System.Data.x, Microsoft.Extensions.x, and Asp.net core MVC. For the adapter types of things there is generally a base class that implements key interfaces and you can create stubs or mocks from those.

For the files... for stuff in the root folder like .editorconfig, gitignore, etc. I generally create a virtual Solution folder and add those files to it.

For folders with larger amounts of files, I sometimes create an empty project, or use a specialized project type like the Node or PowerShell project types that can be installed as VS extensions, especially if it's for a project where the team lives in Visual Studio.

That said, I do most of my coding in JetBrains Rider or VS Code these days and keep a terminal open. When I'm on a Windows box, I have nano and neovim installed, so I can quickly edit scripts that way or just do code /path/to/file as needed.


Really liking your approaches and wish our code base followed them more as well.

Mocking leads to “interfaces everywhere”, I agree. The latter also results from the attitude of putting interfaces all over, even if there’s just a single concrete implementation that makes sense at any point in time, like a file hashing routine (which of course should just be a function but that’s not a thing in C#). Dependency inversion gone overboard?

My biggest qualm with mocking is setting up a detailed test, where every method called is specified, alongside their order etc. You’re reimplementing the entire method essentially. Those tests I despise.


The best way IMHO is to start simple and then add abstractions and interfaces over time. That will mean that, yes, when you come into a project late there will be lots of those...

There are other practical reasons for using interfaces other than just "patterns" and "testing" though. Perf is one; you can avoid a lot of unnecessary mapping by being able to pass concrete types between modules when the receiver is accepting a shape instead of a concrete type...


Is hot loading the default in JVM-ville circa 2023? God I hope so; I remember downloading third-party builds and 180 characters of args or something to get that going, plus whatever voodoo was needed to make your janky framework respect it, make your IDE use it, etc.


> There's no mocking of concrete implementations

That was always halfway between oxymoron and forbidden magic.


> There is no mocking of concrete implementations

That's an obvious anti-pattern in the first place, especially when you know the optimizer could inline hot code and optimize the function out.

> Everything feels a lot more corporatized or tied to Microsoft

Remember J2EE and the confusing javax stuff? They're the same, and fortunately both are waning.

> The open source ecosystem and tooling around C# isn't to the same level of the JVM

Isn't that largely because of the open source community's refusal, rejection, and resistance toward Microsoft? Not only is open sourcing a first-mover-takes-all game, but Microsoft specifically has been known to be hostile to open source in the past; most prominently, one of Microsoft's former CEOs blatantly called Linux a cancer, and look what we have today (https://www.theregister.com/2001/06/02/ballmer_linux_is_a_ca...)

Also, it is very well known that Microsoft spread FUD in the past to kill Netscape Navigator, and tried to kill Java, to the point of almost being broken up by antitrust action like the Baby Bells (https://en.wikipedia.org/wiki/Breakup_of_the_Bell_System?wpr...). Still, Microsoft was neither entirely friendly nor hostile toward open source, given that they let Mono live despite it touching Microsoft's trademarks and patents (you heard me right, .NET has had several patents and ECMA standards before!)

It's not like we .NET people don't know Microsoft's shady past, but I do believe a convicted criminal can learn from the past, correct itself, move on, and be a better person in the long run.

It's like a big bully who used to beat up a skinny nerd, except now the nerd is as big as the bully, and all of a sudden the bully has found its conscience and begs for forgiveness. It is natural for the nerd not to accept the apology at first. Except here both parties are nerds, so the analogy may sound kind of odd.

But I also understand that not everyone thinks like that, especially in the open source community, which remains wary that Microsoft may attempt FUD again; people have their memories and their reactions, and I have no means to control that. In fact, most ordinary people would still choose to reject a convicted criminal from their community rather than accept them, even after the behavior has stopped, because the social stigma marks that person as high risk.

Meanwhile, over in the Java world, the Oracle situation is getting more and more heated due to its predatory licensing agreements, and people are flocking to other Java distributions. This could tear the Java ecosystem apart; it's the .NET Standard situation again, and I'm pretty sure history repeats. By the way, fragmentation is what ultimately killed MIPS, and I think RISC-V should take note.

Alas, time will tell, as always; it's either hit or miss. We should take the longer view and see how it shapes out in the future.

> The project structure feels odd

It isn't that odd: it's the multi-project structure of Gradle, though I'm not sure you've encountered that, given your reaction. In the JS ecosystem this is known as a monorepo, which means the .NET ecosystem has effectively had monorepos for almost two decades!


What's wrong with mocking concrete implementations? I've worked on pretty large-scale services with AWS and we did it all day everyday. I think it worked fine and don't know if adding an interface on top of them would help.

I don't really remember J2EE, but I do run into weird Javax imports from time to time. +1 on them going away being a good thing.

For the open source stuff, I don't really care that much about the politics or history of it. Maybe I should, but at the end of the day Java just seems to have the libraries I need.

And for the project structure, maybe I should have worded that differently. The structure of nested projects makes sense, but the dependency management through Nuget and the hidden project files in IDEs just seems needlessly indirect - just show me files and let me add text for a dependency like in Maven or Gradle. To be fair, I think this one is partially just me being new to the .NET world.


The Oracle situation doesn’t get heated at all, and that’s just not how Java vendors work — basically everything is 100% open-source OpenJDK, some companies just take it, add some marketing bullshit, possibly backport some bugfixes for older versions, and sell a support license for their “vendor”.

It is pretty much what Red Hat does with Linux.

Here I expanded a bit more on the topic: https://news.ycombinator.com/item?id=35249701


I love the new dotnet ecosystem and use it professionally but am I the only one who thinks two years isn’t long enough for a “long term support” release?

Obviously you’ll be able to run older versions of the framework so long as the underlying OS supports it but depending what type of environment you’re in this means you’re likely going to have to update perfectly working applications every two years.

Do I think that it will take a massive amount of work to update from dotnet 6 to 8, or 8 to 10 in a few years? Probably not, but that could change at any time depending on the whims of Microsoft on that particular day.

I get that the current technical zeitgeist is that if you’re not releasing a new version every other day you’re falling behind but five years seems a bit more reasonable to me.


You actually get three years of support from .NET for Long Term Support (LTS) release. You can find the full support policy here: https://dotnet.microsoft.com/platform/support/policy


The upgrade process from 5 to 6 to 7 has been a total breeze, and I expect things to continue in that manner.

There was a massive amount of churn between the introduction of .NET Core and Core 3, then 3 to 5 was quite a bit less so, and since then it’s been a non-issue. Pretty much just flip a version flag and get new features.


> Pretty much just flip a version flag and get new features.

Sure, but in many cases for an application that is "done" we're not looking for new features, per se - we're looking for security patches. Moving from one major version to the next incurs a whole bunch of testing and validation that otherwise might not be needed (depending on your environment and industry).


Yep 6 to 7 was just changing a 6 to a 7 in a project file
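For reference, that flag lives in the project file; the upgrade is typically just this one property (file name and version numbers shown as an illustration):

```xml
<!-- MyApp.csproj: bump the target framework moniker -->
<PropertyGroup>
  <TargetFramework>net7.0</TargetFramework>  <!-- previously net6.0 -->
</PropertyGroup>
```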


I view frequent upgrades in the same way as Let's Encrypt's short certificate expiry. It encourages you to create a system that you trust to be upgradeable, rather than a beast that sits untouched for fear it rears its head in some ugly manner.

Frankly, places that put off updates have always been a bit of a mess from personal experience and I'd rather have smaller more frequent updates that take a day or two every few years than massive irregular ones that aren't feasible to apply.


> but am I the only one who thinks two years isn’t long enough for a “long term support” release?

No, I feel the same. It really is a completely different mindset compared to .net framework. And I believe it is recommended to keep upgrading, even in maintenance mode. I noticed that Microsoft removes target frameworks from their nuget packages as soon as they are no longer supported. That could be problematic if you need a security update and are on a no longer supported target framework.

An “enterprise” environment that is used to Microsoft products is not used to that.


>> 6 to 8

If you are using BinaryFormatter you certainly will have some work ahead of you.


It's been suggested to avoid BinaryFormatter since well before WCF was built and WCF itself has since been deprecated after a decade or so of service.


It may be a self inflicted wound but a wound nonetheless.


Odd question maybe, my company uses C# and I've been picking up more backend responsibilities. However, I'm a Linux and Neovim user. I do have Windows and Visual Studio available but I just find it awkward.

Is anyone here having success with Linux and Neovim for C#? Even trying to learn more about the language is awkward as so many resources go straight into VS.


https://github.com/OmniSharp/Omnisharp-vim is a thing, but I don’t know how good it is. I would probably go with VSCode or Rider (and their respective Vim plugins), as they are quite productive for .NET.


I’ve been using Rider in Ubuntu for a year or so, and not really missed VS. Only once when looking into the identity UI templates which required some scaffolding stuff in VS.


I strongly recommend Rider, even for those using Windows. It's a great piece of software.


Why pay for half of the experience of the IDE from the OS vendor, especially when you have MSDN licenses already paid for?


Modern .Net is pretty focused on web apps. Many of us never need to touch C++, GPU, or the other things you mentioned. I've been using Visual Studio for almost 20 years and IMO Rider is simply a superior C# IDE. It's been my daily driver for going on three years. Of course I keep VS installed because of legacy tools like the WinForms designer that aren't supported in Rider, but that's something I need maybe once a year on average?


I bet WinDev and MAUI teams have other point of view regarding that.

Also, again why pay twice.

Maybe when I start seeing Rider demos done at BUILD.


What half is Rider missing? And of course you should use VS if you're getting it for free while having to pay for Rider.

I find Rider a much more smooth experience than VS. It's faster in almost every way (loading solutions, searching, etc).


Rider is great. If you haven't tried VS in a while, VS2022 moved to 64-bit and the performance is much, much better than 2019. There appears to be a lot more coming; many of the preview features focus on making as many interactions as possible asynchronous, e.g. loading projects/refactoring operations/IntelliSense completions etc.


Full stack Windows development, across all Microsoft products and various kinds of Windows SDKs, GPU debugging, hot code reload for .NET and C++, visual tooling for architecture workflows.


Rider does full stack (eg MAUI, Blazor, WPF, Xamarin etc for frontends and can publish to various cloud services) and has hot reload. What Microsoft products and SDKs does Rider not support?

As far as I can tell GPU debugging is C++ only, where I thought we were discussing C#, same with the C++ hot reload. However CLion does both.


SharePoint, Dynamics, SQL Server SPs in .NET, Sitecore (not MS but a big player in enterprise CMS), 3rd party component libraries for VS designers, WDK, integration with Azure DevOps workflows, two-way modeling of code (not on Professional), mixed language debugging, REPL for all languages, graphical tooling for parallel debugging, ETW, Docker Desktop replacement GUI, ...

We were discussing what a Visual Studio full install offers, with a single license.


I’m using it on Xubuntu with Rider as well. I’m using the CLI to scaffold identity including UI.


Rider is pretty much the only decent option for me, if Linux is a requirement. However not even Rider supports .NET Hot Reloading (Edit and Continue), because Microsoft hasn't implemented runtime support for it in Linux. Doing so would probably work against their vested interest in rolling in Visual Studio subscriptions. Yet EnC is a significant timesaver for my projects, to the point where I find it hard to work without it.

https://github.com/dotnet/runtime/issues/12409

So the most efficient C# development solution for me is going back to Windows and Visual Studio. As they want it to be.


> So the most efficient C# development solution for me is going back to Windows and Visual Studio. As they want it to be.

Hot reload works mostly fine in Rider on Windows.


Sadly, OmniSharp (the LSP for vscode and nvim) isn't all that great. The performance is incredibly bad, easily orders of magnitude worse than VS and Rider.

There is this alternative LSP, which I plan to try out still: https://github.com/razzmatazz/csharp-language-server

At the moment I'm stuck with Rider, which is terrible for a vim user: there is an official vim mode plugin, but I still have to port my bindings over to its vim config, and Telescope etc. isn't available, which is a sad day.


Can't comment on neovim. VS Code works well on Linux for another option.


I mostly work with Powershell, but dabble in C#. Both omnisharp and the Powershell CoC plugins for Neovim seemed to break often. I've switched to VS Code for Powershell and Visual Studio for C#, as sad as that makes me.


This is one mark I have against C#, it is not very stock vim friendly (plug-ins likely exist however). IDE support is almost a requirement for C#. I find Python better in this regard.

You can always use vs code however.


I use vim plugins for whichever IDE (VS, VS Code, Rider) I'm using - not as full-featured as pure vim, but for what I need it's more than good enough.


I do a lot of F# on Mac/Linux and use sublime. All the nice IDE-like features are accessible through an LSP package for your code editor.


One more vote for Rider on a Mac.


I'd be curious to know what is the actual adoption of .net core. I.e. of all the actively developed applications (not just new projects), what is the .net core / .net framework split.

I found that the upgrade process is not seamless. ASP.NET Core has little to do with ASP.NET MVC. WinForms introduced all sorts of constraints. The BCL is full of small changes or missing features.

People are less vocal than for the python schism, but I'd expect a lot of projects to be stuck on .net framework, not the least because of incompatible dependencies.

Nullable reference type is another massive breaking change coming if they make it more than a compiler warning, which I suspect is the long term intention.

Not breaking your users used to be a laudable goal of the .net team. I miss it.


The move away from web forms has indeed been painful. Mostly because our web forms were pretty stupidly implemented, though. It's a legitimate downer either way. It's been a lot easier to migrate back-end services.

Nullable reference types are completely optional. You can turn them off at the project level. I find they make more sense for some projects (domain logic, etc.) than others (EF and API projects for example).


Another thing to keep in mind is Microsoft shops don’t tend to use anything else. So while I’m sure many have not transitioned, it’s basically a forgone conclusion. The competitors to most .net apps are excel and saas.


I just ported a pretty large domain-specific library from VB on .NET Framework 4.6.1 to C# on .NET 7. It took me probably 3 to 4 days with the assistance of the Instant C# converter.

There are lots of gotchas here and there and it does require a pretty reasonable understanding of both legacy and new frameworks.


I'm interested in this case!

I have a big app: Razor, MVC, EF, custom NuGet packages, reflection, very advanced expressions (LINQ), and my IoC is Autofac (.NET 4.7.2).

101 projects in one solution.

Have you got any pointers on gotchas? I did a quick attempt (1 evening) and got blocked on my NuGets + Autofac.


Getting all your private NuGets onto .NET Standard 2.0 first will help immensely. You can add preprocessor directives for anything strictly incompatible. After that it's just the usual dependency hell that plagues every modern ecosystem.

Also have a look at the MS upgrade/migration guides for .NET. They’re exhaustive and extremely helpful.
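A sketch of the preprocessor-directive approach (the helper class here is made up; the `NETSTANDARD2_0` symbol is defined automatically by the SDK when compiling for that target):

```csharp
using System;

public static class HexHelper
{
    public static string ToHex(byte[] bytes)
    {
#if NETSTANDARD2_0
        // Convert.ToHexString doesn't exist on netstandard2.0,
        // so fall back to the older BitConverter route.
        return BitConverter.ToString(bytes).Replace("-", "");
#else
        return Convert.ToHexString(bytes);
#endif
    }
}

public static class Program
{
    public static void Main() =>
        Console.WriteLine(HexHelper.ToHex(new byte[] { 0xAB, 0x01 }));
}
```

Combined with multi-targeting (`<TargetFrameworks>netstandard2.0;net7.0</TargetFrameworks>` in the .csproj), each build picks its own branch at compile time.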


Thanks, that's indeed exactly what I tried.

I'll have another try later this year probably :p


> 101 projects

Yikes. This is in C# I assume? If in VB, I'd run one project through Instant C# and see how many problems you encounter. There are lots of VBisms that are simply not in C# (the Like operator for one, XML literals, etc...).

Run the simplest project through the `upgrade-assistant` tool. It'll convert it to .NET Core. I saw the sibling post recommend .NET Standard, but if you don't plan on using anything legacy this is more headache than it's worth.

But yeah, I hear you on the nuget dependencies. Some of them were never ported to .NET Standard or Core so you are left trying to recompile them manually yourself.
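For reference, the upgrade-assistant route looks roughly like this (the tool's verbs have shifted between versions, so treat these commands as illustrative rather than exact):

```shell
dotnet tool install -g upgrade-assistant
upgrade-assistant upgrade ./MyLegacyProject.csproj
```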


Hey, that's not nice. Let's bite ;) .

It's C#, yeah. But I believe the project is much cleaner and frankly easier to understand than any other project I've encountered of this size. I'm using DDD, so DDD knowledge is a requirement to navigate this in a breeze :) :

- https://snipboard.io/D03VWg.jpg - General overview of the architecture. Small fyi: Connectors => Autogenerated nugets to call the api's

- https://snipboard.io/9M24hB.jpg - Sample of Modules + Presentation layer

- https://snipboard.io/ybp6EH.jpg - Example of Specifications related to catalog ( = products )

- https://snipboard.io/lE9vcK.jpg - How specifications are translated to the infrastructure ( here I'm using EF, so I'm using Expressions a lot), but plain old SQL is also supported. A query is basically a list of AND/OR Specifications where a hierarchy is possible. This would translate to "(QUERY 1 ) AND ((QUERY 2) AND (QUERY 3))" in the Infrastructure layer.

- https://snipboard.io/7rVBpk.jpg - . In general, i have 2 List methods ( one for Paged queries and one not for Paged queries)

Additional FYI: it's V2, so it has some legacy code. Uses my own STS. Has 2 gateways (the ShopGateway that is used to develop new sites and the BackendGateway for the Backend). The end-user frontend is in MVC for SEO purposes, the customer backend is in Angular (SPA). The basket is a NoSQL implementation on top of SQL Server.

The enduser frontend supports a hierarchy of themes ( so it's insanely flexible to create variations of pages for other clients).

There are more projects involved outside of this solution, e.g. NuGet repos usable across solutions (JWT, Specifications, ...) and "plugins" for a standalone project that is installed for end-users for syncing local data. So it's 101+ projects :)

It's used for eg. https://belgianbrewed.com/


It looks very nicely organized. Looks like your biggest challenge will be converting UI projects.

I even hate to suggest it because it's double the work, but I'd convert all your libraries first to .NET Standard 2. And then make sure they will work fine with your .NET Framework 4.x UI projects.

The reason this sucks (I didn't realize it at the time) is that .NET Standard 2 is stuck with C# 7.3. Which means you are missing out on all the C# cool toys. You can't even use .NET Standard 2.1 (and C# 8) because it's not supported by the .NET Framework 4.x.

Then go on to the UI projects. Automated conversion may or may not work. Probably not if it's complex. The middleware won't simply convert, particularly if you had routing, filtering, etc... You may have to create new projects. I was able to copy/paste lots of code (routing for instance) but how it was wired was different.

If you are successful, you can go back and upgrade the library projects from .NET Standard to .NET Core.

Good luck.


Yeah. .NET Standard also has some System.Net issues as far as I'm aware (noticed at work).

Good news is that the nugets are already on it!

Automated conversion didn't go okay enough in a short timeframe ( even with the Microsoft docs).


> Not breaking your users used to be a laudable goal of the .net team. I miss it

Is it that bad once you are on .NET Core? Asking as someone still on Framework.

Framework -> Core of course is breaking; what did you expect when it went cross-platform and was totally rearchitected?


It’s fine while on core. Maybe adjusted the app setup 2-3 times, fairly trivial stuff, an hour or so in the last year


I am sure cross platform is useful to some users but not to me. So to me it's all breaking changes for little benefits.

I am not saying .net core is bad, just that the migration is a lot of work. And you need to disable all sorts of compiler warnings unless you are ready to rewrite pretty much all your code to make it nullable ref type friendly. And some day those warnings will be errors.


> And you need to disable all sorts of compiler warnings unless you are ready to rewrite pretty much all your code to make it nullable ref type friendly.

Not really. Just disable nullability checking — you can do it project-wide by removing the Nullable tag from your .csproj, and you can continue ignoring nullability.
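Concretely, either of these shuts the warnings off. Here's the file-level pragma as a runnable sketch; the project-wide switch is the `<Nullable>` property in the .csproj:

```csharp
#nullable disable
using System;

public static class Program
{
    public static void Main()
    {
        // With the nullable context disabled, assigning null to a
        // plain string produces no CS8600 warning.
        string s = null;
        Console.WriteLine(s is null);
    }
}
```

Project-wide it's `<Nullable>disable</Nullable>` inside a PropertyGroup, or simply omitting the tag, since disabled is the default when it's absent (newer project templates just generate it as enabled).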

> And some day those warnings will be errors.

[citation needed]. A lot of code is written without nullability in mind. That includes the .NET Core BCL, AFAIK.


> That includes the .NET Core BCL, AFAIK.

The BCL was completely annotated with NRTs for the .NET 7 release, minus some #nullable disable spots.


We moved from ASP.NET MVC on Framework 4.8 to .net core without it being too painful. Mostly we needed to re-work the authentication pipeline stuff, but the netcore way ends up being much nicer anyway


I am mostly a Python person, but I must confess I'm impressed by both modern .NET and Java's Spring Boot. I architect for two teams, one using Java, the other using C#, for building services and the amount of boilerplate code, that used to be a huge turnoff for me, has been impressively reduced.

There's still a lot, but it's now more or less comparable to what you'd expect you'd end up with on a similar app built on, say, FastAPI.


I'd recommend taking a look at F#.


Isn’t there still lots of boilerplate, it’s just generated for you?


The boilerplate is pretty minimal nowadays. In 2020, C# introduced top-level statements, which removed a lot of the ceremony.

You used to have a class, and there would typically be a separate Startup.cs file. Today, this is a complete .cs file for creating an API:

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/", () => "Hello World!");

    app.Run();


Nope. Take a look at Minimal API in .net 6 and onwards.


"Hello world" is a one-liner Since .Net 6.


I love c# and dotnet. I've been holding off on flutter and praying Maui becomes a real thing


Northern Europe runs on C# and .Net. I am not joking. There are more C# jobs in Denmark than Python, Node.js, Ruby, and PHP jobs combined.


It is surprising to me that C# and .NET are comparatively under-represented in North America, where they were born. Nothing but good experiences developing and operating software with it, both desktop apps (Win/Mac) and backend services (Linux).


I like .Net and C#, but I've had a lot of experiences that were not good.

I personally hated working with Visual Studio 2017. 2019 was an improvement, but one so small it was still awful. I recently used it again (2021?) and was cynically surprised that they FINALLY got scrolling that isn't forcibly line-based. Why did they even bother now?

This and some other small things that used to make the experience painful are now solved, but they still haven't really gotten on top of the freezes and crashes.

Visual Studio being bad is one thing, it being the only officially supported IDE is the other. They should not only officially support third party efforts like Jetbrains Rider, in the past I've gone as far as saying they should discontinue Visual Studio in favor of it.

Now for the nice part: Almost all "new .NET" announcements/tutorials focus on VS Code, an IDE that sucks a whole lot less.

On another note: transforming a 20-line T4 template took 4-5x as long as the actual compilation on the last .NET project I've seen.


Weird how experiences differ. I found VS to be one of the best IDEs I've ever used. Of course in conjunction with C#.

Yeah it's big and takes some big chunk of resources, but that has never been a problem in my daily work.


I don't think it is that under-represented in North America. There are large swaths of "flyover country" that are deeply invested in .NET and C#. It was never "hip" or "cool" for Silicon Valley, but a lot of the Fortune 500 outside of FAANG/GAFAM/what-have-you is quietly built on it. Scott Hanselman termed it the "dark matter developer" experience that most .NET stuff just doesn't trend on Reddit or HN, it just kind of quietly exists and gets things done.


The differences were always there, but they change over time. I remember seeing a map ~15 years ago showing market shares of dev stacks in different countries. If I remember correctly, Europe on it was predominantly Java and Delphi, Australia and NZ were mostly .NET, and the US and Canada were a mix of Java and .NET, with the former having a slight edge.


It's kind of hard for me to mentally square that with the amount/types of publications that exist for it, though of course it makes sense on further thought.

What I mean is that I found it very hard to find "interesting" stuff built with it. You get the official documentation and that's it - for .Net Framework, anyway. Few blog posts about issues and their solutions, cool demonstrations or just general musings. It's like people don't care. And maybe they don't - it's just their job, after all, and they can't wait to go home to their wives and kids rather than think about some technology more than they have to.

As a newcomer (forced through an apprenticeship) who does think about technology more than he has to I found this bewildering and demotivating.

Microsoft seems to have realized this and seems to have put a reasonable amount of effort into building a community around the new .Net and making it seem exciting/interesting rather than something that's just in a really slow death spiral. Oh, and building things that one can easily get excited about. Non-nullable types come to mind.


Great observation. Search results for C# are awful. Full of 2000s style, for-profit, consultancy-selling, lukewarm-quality blogs. None of the high grade stuff enthusiastic users from other ecosystems put out. Not to beat a dead horse, but Rust is a good, contrasting example here. In the case of C#, user generated content is written during work hours. It is a worker’s language. In some other ecosystems, content is written on weekends, by enthusiasts (who might not be employed in that ecosystems as there’s no jobs available…).


Same in New Zealand.


Both of those places used to have craptons of mission-critical Delphi code. Going to .NET seems a natural transition from that.


I wouldn’t hold your breath on Maui. I hold no hope for it.


I don’t think I will ever bet on any .NET desktop UI framework again. Even MS is going to Electron or similar for their own stuff. I expect Maui to be put in maintenance mode soon or just to be forgotten.


My problem is that many 3rd-party companies simply are not upgrading from 4.8, which means I’m stuck on 4.8. It’s too much work for me to reproduce all of these 3rd-party controls. Some of these companies are defunct, some don’t see the demand to justify even the small effort; whatever the reason, the loss of backward compatibility has been a huge loss to the commercial ecosystem. Microsoft does not have the power it used to. Practically every large tender I see requires that the software run in the browser. I’m pretty sure everyone is sick of Microsoft.


I have high long term hopes for Maui Blazor. It's such a simple thing, a bridge between native-compiled .NET code and a platform-provided web renderer. There will always be a need for both of those things in .NET, always, so the bridge seems sensible to keep alive.

Not sure about Xaml based Maui...that may well go the way of Silverlight.


Is that like Tauri except it’s dotnet instead of rust?


https://learn.microsoft.com/en-us/training/modules/build-bla...

"In a Blazor Hybrid app, Razor components run natively on the device. Components render to an embedded Web View control through a local interop channel. Components don't run in the browser, and WebAssembly isn't involved. Razor components load and execute code quickly, and components have full access to the native capabilities of the device through the .NET platform."

I'm not familiar with Tauri but skimming it seems similar. Except that Razor syntax allows C# inline with HTML so you can avoid JavaScript altogether. Maui Blazor also uses Xamarin to support mobile, and can compile to WASM to run in a browser.

(What I skimmed) https://tauri.app/v1/references/architecture/
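To make "C# inline with HTML" concrete, a minimal Razor component looks something like this (a generic counter sketch, not taken from any particular template):

```razor
@* Counter.razor: markup and the C# state/handlers live in one file *@
<button @onclick="Increment">Clicked @count times</button>

@code {
    private int count = 0;
    private void Increment() => count++;
}
```

The `@onclick` handler and the `@count` interpolation are plain C#, which is how these components avoid JavaScript for UI logic.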


Avalonia is a real thing and works quite well.


Uno Platform is another popular one. (I haven't tried either.)


I'm too old to believe that, even if it becomes a real thing, it won't die exactly like the rest and leave someone supporting a dead framework or doing a rewrite on the next doomed thing. Those kinds of things depend too much on the rest of the world sharing interest in keeping it alive. Apple and Google aren't going to do that unless it benefits them, and it gives them a EEE attack surface to use against MS.


Unlikely - unless they absolutely crush in terms of performance.


You should also give the Godot game engine a try, it's actually really good at making GUIs


For anyone unfamiliar who just wants to see something running, then assuming the DotNet Core SDK has been installed (see https://dotnet.microsoft.com/en-us/download) the following creates a folder, a solution, a minimal web API (4 lines of code), and starts it running.

  mkdir MyApp && cd MyApp
  dotnet new sln -n MyApp && dotnet new web -n Api
  dotnet sln add ./Api/Api.csproj && dotnet run --project Api

Many devs would want more features out of the box so they'd replace "web" in the "dotnet new" command accordingly (use "dotnet new list" to see the options, eg mvc, grpc, or react). But this is the minimalist option just to 'get something running'.

Edit: For background, new sln creates a new DotNet Solution and new web creates a new (minimal) C# web Project. Then sln add tells the new solution that the new project is part of it.
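For the curious, the "4 lines of code" are the entire Program.cs of the generated minimal web project; as of recent SDKs it looks roughly like this (exact contents may vary slightly by SDK version):

```csharp
// Program.cs as generated by `dotnet new web` (approximate)
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", () => "Hello World!"); // single GET endpoint at the root

app.Run(); // starts the Kestrel server and blocks
```

Running `dotnet run --project Api` then serves "Hello World!" on the port printed to the console.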


As a developer who has been using .NET for many years, C# has given me unprecedented pleasure and efficiency in programming, thanks to a single C# codebase that can be used simultaneously on Linux, Windows, macOS, Android, and iOS, along with excellent tools and IDE support.

C# is still one of my favorite programming languages (including syntax, semantics, and standard libraries). Although it is a bit complex now, it has also created many excellent designs such as async, await, and so on.

VS and VS Code are also IDEs that I still use today, although I rarely write C# code.


As a fan of O'Caml, I have to ask: what's the status of F# on .NET?

...and is it used much compared to C#, or is its use at least growing, or is it stagnating/dying?


I have a side project where, for some years now, I've supported an app built using F# with Xamarin.Forms and the Fabulous MVU framework. Before .NET 6, things were generally pretty great, and I thought this was an impressive achievement considering how many moving parts the combination of .NET with F# + Xamarin + Fabulous entails.

As .NET 6 and MAUI started to come on the scene, stuff went haywire pretty badly, tooling issues like breakpoints in Visual Studio no longer working in my projects, obscure build errors, confusing build warnings, dependency hell particularly with Xamarin.Android NuGet packages and Xamarin.Essentials. I'm still not up to date with all NuGet packages because doing so breaks my app at runtime. I'm in this halfway point in regards to use of the PackageReference project type. Things haven't been smooth lately for F# and Fabulous projects.

Things are slowly getting better though, and I would say that my experience is probably not entirely typical due to the inclusion of Xamarin, which introduces a whole additional layer of crazy. I think if you were to use F# for backend web services for example, then your experience would probably be a great deal more palatable than mine. I don't think F# is stagnating or dying by any means, but I do feel that it is still a second class citizen to C#. I hope MS continues to work towards this not being the case, because with all the "batteries included" of .NET behind it, I think F# is a great functional-first language.


F# works, it gets updates and there is good (not perfect) tooling. Everything you can do with C# can be done with F# too. Although most APIs don’t feel very natural in F#. F# has only a small ecosystem, so you may not find the right library for every task, and need to use something object oriented from the C# world instead.


Seems to be about the same as it's ever been. Still being updated and worked on, and still very few people use it.


> Still being updated and worked on

It is, hands down, my best experience working in a functional language. I do hobby work in it, and it seems super nice, but I'm not sure where everyone is.

> still very few people use it.

Looking at a chart of GitHub and StackOverflow usage[1], OCaml/F# seem almost steady compared to the other functional languages, my suspicion is that Rust absorbed a lot of programmers looking for functional concepts in programming languages.

[1]: https://tjpalmer.github.io/languish/#y=mean&weights=issues%3...


yeah I’d say usage has been steady, maybe with a slight uptick in interest it feels like.

It’s a great platform, in some ways it’s like a secret weapon…

I joke that F# is kinda “easy-button Rust” lol


My impression - the people who use it historically (and I've met a few) aren't the companies that typically champion open source software and the like. They are "closed source" shops (e.g. finance, insurance, etc). Even I am not willing to post on more than a throwaway account. I'm currently using it for large scale production systems in a very large public company powering a large data volume product with very high peaks of customer traffic and it works a treat. We decided to try C# 9 after some F# because "higher management" - compared to F#, the devs have found it verbose and painful still, despite the new features.

The productivity benefits are small and sprinkled - I don't personally think there is one "killer feature/app". It isn't just one thing, it's the little things that add up. Given the team jumped from other ecosystems (e.g. Go/Node/etc) they found F# easier to approach than C#. This is the perspective of the team I run and it comes up in PR comments (I do less code writing these days). Comments like "don't have to do this in F#", or "we need a framework for this because C#" are known to occur. Easier unit test writing, fewer dependency injection headaches, concise function passing, easier inlining of math for perf, easy mocking/stubbing, unions, etc etc.

The big weakness to me is that the people who use it typically don't want to flaunt it, which means good mentoring, the best/simplest patterns to use, and management buy-in are not really public. Communities that you can join are not into large-scale apps, meaning good scalable patterns and lessons learnt are hard to find.


The core F# language is great; however, what tends to suffer are the F#-specific libraries (you can use all .NET libs, but I'm talking about the F#-specific ones) and the community. There are some things that are getting developed, but my experience is that things get abandoned, progress is slow, or things are somewhat clunky. But if you are writing your F# without any need for F#-specific libs then it is great, and you can interface it with C# code easily.


My experience is good: small but very active and engaged community, and interop with C# is seamless. I architect a large healthcare system, 5% F# and 95% C#, and am very happy.


Any tips on how to migrate a very large ASP.NET MVC app (framework 4.8), that is still in active development, with tens of thousands of users, without a complete feature-freeze? Am I able to migrate to .NET 7 in small increments over a long period?


We have a similar sounding app in terms of scale. We just bit the bullet and a couple of us spent a few weeks doing the upgrade in one branch while the rest of the team worked in another. We'd periodically merge in to the upgrade branch and tweak things as needed.

It wasn't too bad, really. Mostly it was updating any new controller methods to use the new attributes.

We originally planned to upgrade incrementally (one project at a time) but to be honest, it looked like that was going to be far more fiddly and annoying than just doing the full thing.


The devblog had a post on something about that last year: https://devblogs.microsoft.com/dotnet/incremental-asp-net-to...


F# also is kicking ass.

We're running .NET 6 + F# in production with ~4 developers and it's been a breeze.


Exactly the same boat. About to switch to a C# role, so not exactly a world of difference, but I am going to miss F#. I can't help but feel that the success of our small team is majorly down to F# and the traps it helps you avoid.


Whatever happened to Xamarin? Is it still tied into this ecosystem?


Xamarin merged into .NET and got rebranded MAUI ("multi-platform application user interface").


It’s a tired thing to say, but the “.NET” naming is so confusing on what’s what.

Which is sad because there’s so much to like about .NET


1. .NET "Core" ... I tell myself "C is for Cross platform." This is usually what people seem like they mean now when they talk about ".NET".

2. .NET "Framework" ... I tell myself "F is for Former." This is the older, Windows specific version.

3. .NET "Standard" ... I tell myself "S is for Specification." This is just the spec which defines what Core and Framework must implement.


I think .NET "Standard" is deprecated. Per this article. https://devblogs.microsoft.com/dotnet/the-future-of-net-stan...

Though reading through it, I am still confused.


.NET Standard specified a common API that was implemented by the .NET Core and .NET Framework runtimes, allowing library authors to easily target both platforms.

.NET Standard solved a problem at the time but as elucidated in the article, it turned out not to be the right solution in the long term. .NET Framework is essentially in maintenance mode and won't be receiving new language features, so new .NET Standard versions don't make much sense given the overhead they introduce. Newer versions of .NET will still be able to build and run libraries targeting .NET Standard but no new .NET Standard versions will be released.

Greenfield projects now typically target what was previously called .NET Core. With the removal of .NET Standard, .NET Core was renamed to .NET to reflect the fact that it's the whole ecosystem as far as any future development is concerned.
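As a concrete illustration of where .NET Standard still fits: a library that needs to serve both .NET Framework and modern .NET consumers can multi-target in its .csproj, something like this sketch (the chosen TFMs are illustrative, not from any project in this thread):

```xml
<!-- Library project file: one target for the old world, one for the new -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <!-- netstandard2.0 is consumable from .NET Framework 4.6.1+ and from .NET Core/5+ -->
    <TargetFrameworks>netstandard2.0;net7.0</TargetFrameworks>
  </PropertyGroup>
</Project>
```

The netstandard2.0 build keeps old Framework consumers working, while the net7.0 build gets access to newer APIs.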


So the bottom line is that .NET Core can't call a .NET Framework library. It must be converted to .NET Standard first. And despite deprecation (let's call it that), .NET 5, 6, 7, 8 will continue to support calling into .NET Standard libraries.


I am not sure this is (still) correct - I am able to call into a .NET framework 4.8 library from .NET 7.


If it's built against .NET 4.8 but only uses the API surface subset that corresponds to .NET Standard, that works - but then it's effectively a .NET Standard library with metadata not properly reflecting that.


Since .NET Fx 4.7 or 4.8, libraries consumed via NuGet are "assumed" .NET Standard 2.0 unless proven otherwise, for maximal compatibility with old code.


That’s super helpful, thank you.

Where it still gets a bit murky is:

> “ .NET has many different implementations, including the .NET Framework, Mono, and Unity. Each of these is a separate platform with separate Base Class Libraries (BCLs) and app models. .NET Core is another separate platform.”


You don’t really interact with or care about Mono as a normal C# dev.


Mono is almost entirely merged in .NET 8. Very few parts of "mono" exist now as a separate runtime.


It's just .NET going forward (for now) if you don't need to worry about any old stuff.


The issue is with Googling (or Binging if you’re so inclined). If you google ".NET foobar", you might get answers for .NET Core, but you might also get old answers for the .NET Framework, written at a time when there was just one .NET, or when .NET Core was not as relevant and not the “real” .NET. Of course, it’s not super important if “foobar” is something simple and generic, like “.NET split string”, but when you get into the murky and obscure stuff, it might be hard to figure out if you’re reading an article about .NET Framework or .NET the-artist-formerly-known-as-Core.


Suggest using ChatGPT instead of google.


Then go ahead and ask it, and tell us the results. A good query would be “How are assemblies loaded in .NET?”, and if the response talks about the Global Assembly Cache (GAC), app.config, or assembly redirections, it’s not applicable to the modern .NET.

I’m skeptical, ChatGPT is not magic, and it was trained on ~15 years’ worth of “there is only one .NET Framework, so we might as well call it .NET” content, ~5 years of “.NET Framework is for serious business, and there’s the cross-platform (mostly-)web .NET Core” content, and 1 year of “.NET 5 is the future of .NET”.


Except .NET Core doesn’t exist anymore - it is just .NET.


True, but I only recently started learning .NET and I had to come up with a mnemonic to help filter out and through older docs and resources. It gave enough clarity to gain traction.


What about Mono and Unity?


"Mono is an open source implementation of Microsoft's .NET Framework." [1]

Unity looks like it uses its own fork of Mono, or IL2CPP, which is a fancy thing that converts the intermediate language produced by C# into C++ code, which it compiles. [2][3][4]

[1]: https://www.mono-project.com

[2]: https://docs.unity3d.com/2023.2/Documentation/Manual/overvie...

[3]: https://docs.unity3d.com/Manual/IL2CPP.html

[4]: https://docs.unity3d.com/Manual/Mono.html


Unity was Mono, and still is in some places. Now, it's moving to the cross platform .Net runtime.


As of .NET 8 most of what "mono" was has merged directly into one cross-platform .NET runtime. There's very little "mono" left, and most of it is on the path to disappearing/further merging, including the parts of mono that Unity uses.


Assuming you're not looking at legacy and are on .NET 5.0+ it's not confusing, it's just the one ecosystem, no core or framework stuff, just .Net.

If you're before that, well, good luck.


> no core

Except the docs, which still say core all over the place, including the url's. ;-)


> It’s a tired thing to say,

And yet, here you are, right on the dot. First comment on any .NET thread.


And in a thread about an article that explains the very thing commenter is "confused about"


Isn't it with .Net 7 it becomes simply .Net and confusion with Framework, Standard, Core is now gone?


Yes. Has been that way since 5.


Which specifically wasn't named .NET 4 because that would be confused with .NET Framework 4.x :) Which I'm currently stuck on for work :(


Isn't Java world a lot more confusing? You have Java language, Java compiler, Java class library, Java Virtual Machine, Java bytecode, Java runtime environment, Java development kit (OpenJDK - is it open? how many forks are there?), Java SE, ME, EE, JavaBeans. What can you use where, what is the license, who made what, what role does Oracle play? So many questions in a universe where google doesn't exist to clear up confusion, no?


I Googled “.NET assembly loading”. The second hit is “Understanding How Assemblies Load in C# .NET”, [0], from July 2020, 4 months before the RTM release of the unified .NET 5. While this post mentions .NET Core by name once, this post is completely irrelevant to .NET Core/5+ (which threw out the Global Assembly Cache, assembly redirections, and generally simplified everything). How do you find articles that are actually relevant to the modern .NET and not the Framework?

The Java stuff is easy. From the perspective of a Java developer, most of the things you listed (except for ME, EE and JavaBeans) are just “Java”. And they always have been.

[0] https://michaelscodingspot.com/assemblies-load-in-dotnet/


These are different things: JDK already includes an implementation of API, compiler, and runtime. OpenJDK has many builds, not forks. Java EE is now known as Jakarta, and "Enterprise JavaBeans" were replaced by Spring. Clarifying just in case; otherwise I agree that the ecosystem might be confusing, especially if you consider the historical development and legacy code.


If you start now, you can ignore everything before .NET 5. And then there is no confusion left. Just don’t read any tutorials/documentation that were written for an older .NET version.


If you could rename it, what would it be and why?


Another good question would be - if you were Microsoft and wanted to plan a roadmap to take the .NET ecosystem forward from a couple of separate implementations (Windows-only .NET Framework, Cross-platform Xamarin) how would you go about this in a way that doesn't cause a big furore like Python 2->3 did?

Because I imagine you end up with what MS did - define a standard that can be used for building common code between new/old, continue support for both, quickly iterate the new while it's still new, be very open about support timelines and document everything pretty thoroughly. You'd just end up with a different set of names for that standard (.NET Standard), and new implementation (.NET Core) that make sense to you, but probably still confuse some people.

I don't think it's perfect, but .NET developers will (or should) all grasp the relationship between .NET Standard/Core/Framework (and now plain ".NET") pretty easily.


The only frustration I've experienced in actually working with it is that it leads to polluted search results.

Trying to find solutions to ASP.NET Framework problems usually requires discarding a whole bunch of irrelevant ASP.NET Core/.NET ones.

It's the same with Visual Studio/Visual Studio Code.


> Trying to find solutions to ASP.NET Framework problems usually requires discarding a whole bunch of irrelevant ASP.NET Core/.NET ones.

My experience was this was always a problem even before later versions of ASP.NET. There were so many backwards-incompatible changes between ASP.NET 3/4/5 and ASP.NET MVC 3/4/5, and searching for ASP.NET MVC 3/4/5/6 would even turn up things that disagreed with ASP.NET 3/4/5 non-MVC recommendations.

If there was a point where googling ASP.NET problems was clean and unpolluted it probably only existed briefly in 1.0.


That's both reassuring and a bit depressing :P

I'm young enough to have only caught the tail end of Framework in my professional career but I do recall a lot of Razor Pages content showing up when looking for tutorials on MVC.

It's certainly not unique to .NET. Being stuck on an old version of Elasticsearch can turn into a nightmare when trying to find a quick solution to a query problem.


It sounds like the issue you have is introducing breaking or backwards-incompatible changes at all while retaining a similar name. Which is a fair point.

I’m not sure what would help in that case other than using something else than “.NET” to identify it.


Precisely. It's certainly not an issue exclusive to the .NET ecosystem but it is a minor annoyance in my day to day life. On the plus side it only affects work on legacy projects that haven't been migrated yet. One day will be the last day that someone needs to add "-Core", "-Code", "-.NET5", "+Framework" etc. to their search queries and that will be a happy day!

Griping about a minor inconvenience isn't to say that I don't think it was the correct decision. The new .NET is already far better than Framework, in how the platform is progressing and the development experience.


This is a similar problem with ChatGPT and AI in general. While some things may not change much, you can't generally say "how do I x in .NET" without the chance of getting a really old answer.


Being more specific is definitely helpful. I've found that ChatGPT is less frustrating than DDG/Google, probably just due to the conversational nature. You can ask ChatGPT to correct itself if it doesn't give the answer you want. e.g. "Could you write that using .NET 7" or "Use <.net7 language feature> to solve the problem".

Whether it spits out the correct answer is still up to chance but the narrowing does seem to work quite well.


Doesn't run on Solaris and doesn't run on FreeBSD. An 'LTS' release means ... about 3 years.

No thanks...


LTS releases are every 2 years and there is a major version release every year.

.NET 6 LTS

.NET 7

.NET 8 LTS

FreeBSD support has been in the works for a while:

- https://wiki.freebsd.org/.NET

- https://github.com/dotnet/runtime/issues/14537


"Long Term Support (LTS)—These releases are supported for three years from their first release."

From my perspective, this is not "long term"; but maybe others view it differently.


A preliminary FreeBSD native build is expected in .NET 8. There are already PRs for that.


How relevant is Solaris these days? Does it even make sense for them to add support when such a tiny fraction people would use it?


It is still used by some, but most importantly it shows a lack of portability. In the past the view that "all the world's a VAX" hampered portability, now we have "Everything is MacOS/Windows/Linux"... yet lots of properly designed software is easily ported to all kinds of platforms, even the BeOS-derived Haiku...


> most importantly it shows lack of portability

If it already runs on two Unix-likes I don't see the absence of Solaris support showing lack of portability. I actually think Solaris usage is the _more_ important piece: software is made for users and if those users don't exist (or there's such a small number as to make the opportunity cost of development not worth it) then what is the point?


[flagged]


> You now have “.NET 4.8.2” applications that cannot be upgraded to “.NET 5”. Try explaining that to a non technical person.

“A common convention in software is that new major version number of a platform indicates that software from earlier major versions may require changes, often quite significant, to function” is pretty easy for non-technical people to understand.


Usually non-technical persons do not give a faint fart about 'major version numbers' (is it a thing, or does it go somewhere?), let alone about 'platforms' and 'software conventions'.

Is it working or not? Usually that's what they care about.

So it was ok in the old system but now there is an improved super shiny trendy best superpower new one and the previous ok is not ok anymore? [puzzled faces]


> Usually non-technical persons do not give a faint fart about ‘major’ ‘version numbers’.

This type of user is actually usually, IME, easier to explain it to:

“It’s a different platform and our software will not work without changes”. The less they “give a faint fart” about details, the easier it is to tell them the effects and have them accept and move on.

At most, you might occasionally need to invoke a car analogy: it’s like trying to use accessories designed for an older model year, when there has been a design change between model years – some things just don’t fit right anymore.


The different platform argument was easy to make when it was called ".NET Core". Almost impossible to make when it's also called ".NET", quite deliberately.


No, really, version numbers within a broader brand that can indicate incompatibility are not something non-technical users with any familiarity with modern consumer society are unfamiliar with.

I get that techies sometimes have preferences for perfect and eternal backward compatibility within a named product line, but trying to pass this off with “how will you explain this to non-techies”, and then treating non-techies as both more ignorant and more concerned with technical details than they are, is not a good way to promote that preference.


"Our app was seamlessly upgraded from .NET 3.5 SP1 to .NET 4.0 and through to .NET 4.8.2. I don't believe you when you say it won't immediately run on .NET 5.0. My PS5 plays PS4 games.

Upgrade it today as I'm off to play golf."


> My PS5 plays PS4 games.

That sounds like an outlier, though. Typically, consoles were not compatible between generations. PS1 games didn't play on PS2. PS2 games didn't play on PS3. Etc.


The first version of the PS3 had hardware PS2 emulation. I think this was removed in later versions.


Why does a non technical person care what version of .NET an application is on?


They are writing the cheques, so if you tell that person that upgrading from 4 to 5 is very complicated and needs time and money, they will probably think something is fishy.


But .NET 4.8.2 isn't the right name, is it? It's .NET Framework 4.8.2. .NET Standard 2.0 has also been around for six years now, so the upgrade path has been around for a while.


That's why I wrote it in quotation marks. .NET Framework was always shortened to .NET in the common office vernacular.


So maybe be accurate when asking someone to write cheques then?


dotnet upgrade assistant or dotnet try-convert can help with that.

- dotnet try-convert: https://github.com/dotnet/try-convert

- dotnet upgrade assistant: https://dotnet.microsoft.com/en-us/platform/upgrade-assistan...


>> Try explaining that to a non technical person

Why would a non-technical person care?



