What the .NET team does on the technical side is almost universally wonderful. They've (ironically?) been a bit more successful than the Java folks at establishing multiple serious languages of different kinds running on and interoperating via the same runtime.
But it's still a Microsoft project, and their sword keeps looming over its head. There have been some heavy-handed anti-developer choices in the past, and one can safely assume that was their EEE genes kicking in. It will probably get worse, not better, the more they build up a community they feel they have (or should have) control over.
Wouldn't want to dive into that pool.
.NET is only as good as the parts Microsoft doesn't have too much control over and that doesn't even include the VSCode extension (or VSCode itself - Use VSCodium and you'll see).
> bit more successful than the Java folks at establishing multiple serious languages of different kinds running on and interoperating via the same runtime
But there are plenty of serious alternative languages running on the JVM: Scala, Kotlin, Clojure. All of which have more adoption than F#.
> They've (ironically?) been a bit more successful than the Java folks at establishing multiple serious languages of different kinds running on and interoperating via the same runtime.
Can you explain more? I'm less familiar with the .net side of things.
not OP, but I'd guess they are referring to F# and C# interoperability. It's trivial to call into each other from within the same project. I'm not as familiar with this on the JVM side myself. How easy/common is it to have multi-language projects? Because in .NET it's very common to have C# interface to F# projects and vice versa with minimal work.
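To make the F#/C# interop concrete, here is a minimal, self-contained sketch. The module and function names are made up for illustration; the point is that an F# module compiles down to (roughly) a static class, so in a real solution the C# call site looks exactly like the one below. To keep this runnable in one file, the "compiled shape" of the F# module is stubbed out in C#:

```csharp
using System;

// An F# module like:
//     module MathUtils
//     let addIntervals (a: int) (b: int) = a + b
// compiles to roughly a static class with static methods. We stub that
// compiled shape here so the example is self-contained; in a real
// solution MathUtils would live in a referenced F# project.
static class MathUtils
{
    public static int addIntervals(int a, int b) => a + b;
}

class Program
{
    static void Main()
    {
        // Calling the "F#" function from C# is just a static method call,
        // with full IntelliSense and type checking across the boundary.
        Console.WriteLine(MathUtils.addIntervals(2, 3)); // prints 5
    }
}
```

Going the other direction (C# from F#) is just as direct, since both sides compile to the same CLR metadata.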
It’s extremely easy. I have no idea what OP is talking about. I currently work on a mixed scala/java project. It’s also one of the biggest features of Kotlin, interoperability is seamless.
I regularly program in both Scala/Java and F#/C#. Interoperability in both is excellent. But I would not call it “trivial” in the Scala/Java case. Calling Scala code from Java is occasionally a pain (the converse is easy); however I don’t think I’ve ever had an issue calling F# code from C#. I think most of the friction wrt Scala lies in the fact that Scala reinvented the wheel in several places (eg collections; “operator overloading”) which are not obvious to use in Java.
Of course. But that isn't the fault of the JVM or Oracle; that's the language designers who made Scala. Look at Kotlin's interop. It is rather seamless.
It kind of is, because the .NET CLI (Common Language Infrastructure) includes more features to facilitate interop (e.g. operator overloading, reified generics, value types). That limits your choices when implementing languages on top of it, but it then makes them usable from different languages.
Reminds me of that meme depicting Microsoft as an org with many divisions pointing guns at each other. DevDiv is the good guys, while the sales dept. is the hated one.
This is what I use, but damn, it does make coding a bit annoying to have to pass around an instance of the filesystem rather than using static classes like Path.
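For context, here is a hand-rolled sketch of the pattern being described (libraries like System.IO.Abstractions do essentially this: wrap the static Path/File/Directory classes behind an interface so tests can swap in a fake). The interface and class names here are illustrative, not the library's actual API:

```csharp
using System;
using System.Collections.Generic;

// The injected-filesystem pattern: code depends on this interface
// instead of the static System.IO classes.
interface IFileSystem
{
    string ReadAllText(string path);
    void WriteAllText(string path, string contents);
}

// Production implementation just forwards to the static classes.
class RealFileSystem : IFileSystem
{
    public string ReadAllText(string path) => System.IO.File.ReadAllText(path);
    public void WriteAllText(string path, string contents) => System.IO.File.WriteAllText(path, contents);
}

// In-memory fake for tests: no disk access, no shared static state.
class FakeFileSystem : IFileSystem
{
    private readonly Dictionary<string, string> files = new();
    public string ReadAllText(string path) => files[path];
    public void WriteAllText(string path, string contents) => files[path] = contents;
}

class Program
{
    static void Main()
    {
        IFileSystem fs = new FakeFileSystem();
        fs.WriteAllText("/tmp/config.txt", "hello");
        Console.WriteLine(fs.ReadAllText("/tmp/config.txt")); // prints hello
    }
}
```

The annoyance the comment describes is real: every call site needs the instance. The payoff is that tests never touch the real disk.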
2MB for a standalone “Hello World” binary in C# is a similar size to one in Go. Do both languages produce similar-sized binaries for larger projects as well?
If so, would C# be a good choice for writing cross-platform command-line applications?
It seems that C# can produce easily-distributable binaries, while also hitting a sweet-spot in language design - not super complex like C++ or Rust, and also not overly simplistic like C or Go.
When Go first came out, it was better than C# in terms of support for concurrency, stand-alone binaries, and cross-platform. It seems that C# has done a lot of catching up since then, though.
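Those small standalone binaries come from Native AOT publishing. A minimal sketch of what enabling it looks like (the project name and runtime identifier below are examples; `PublishAot` is the real MSBuild property):

```xml
<!-- MyCli.csproj: properties to enable Native AOT publishing -->
<PropertyGroup>
  <PublishAot>true</PublishAot>
  <!-- Optional size trims: drop ICU data and debug symbols -->
  <InvariantGlobalization>true</InvariantGlobalization>
  <StripSymbols>true</StripSymbols>
</PropertyGroup>
```

Then `dotnet publish -c Release -r linux-x64` produces a single native executable with no runtime dependency, which is what makes the Go comparison fair.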
I have not begun using Go in earnest yet, but at least in the cloud engineering world it seems to be the language of choice (along with Python) due to the ease of using its concurrency capabilities.
Because it was made to run on Linux from the start. I remember being excited to work with .NET Core on Linux, then discovering that many of the system libraries for network programming and related areas were either unimplemented or badly broken, but you would only find out if you dug deep into low-level stuff. After that, I swore off using it, seeing how much the cross-platform hype didn't match reality. Hopefully they've fixed it by now, but since there were other language choices, it wasn't so bad.
I was looking for similar benchmarks but couldn't find anything conclusive. I found the benchmarks game [1], but it only had .NET 7 AOT (not .NET 8) results, and memory consumption seems too high.
I'd ditch golang in a pinch as soon as I could use a decent type system, and C# seems like a great alternative.
Crystal would be my ideal but its compiler is still too slow, sadly.
I'm very curious to see how they managed to pull this off since from my knowledge of the development patterns, .net has historically been much more allocation-heavy (not that it isn't very efficient with allocated data).
I do have a prototype project, and while it let me get going quickly, I found the remote debugging story useless: Visual Studio 2022's SSH support is a steaming pile of garbage. It doesn't use the system-wide Windows SSH install, but some crappy library that only supports password auth or some wonky certificate format, plus outdated ciphers, which are understandably disabled. I wasted an hour, made the target machine less secure, and finally gave up.
I used SQLite with System.Data.SQLite, because reasons (legacy). It is a bug-ridden piece of work (on Linux), with no sane way to contribute. It is also useless on M1 Macs, as the author doesn't give a shit about compiling it for ARM (or setting up a CI pipeline on GitHub or GitLab, for example, where that is supported).
Had plenty of problems with single file app bundle deployment, as much as it eased deployment initially, lots of libraries broke on it.
It has generally been working well for us (in other, former projects), but personally I miss some things from the JVM, where VM tuning, monitoring, and remote debugging are all a smoother ride in my experience.
Oh, and async broke the C# language (or rather the standard library) and killed CLR interop with other languages; it was a big mistake to release it with these defaults. I really like F#, but async-everything made it uselessly difficult in my opinion. But that has nothing to do with .NET on Linux. It works, and is way better than the Node.js world, but I think more ideas should be copied from the JVM world to ease the ops side, or they should be more clearly communicated.
I just recently switched from VS 2022 remote to VS Code (Windows -> Linux, C++) and it works very well. I'm not sure I have many reasons to keep VS 2022 around right now.
I've been using it for a year and a half.
Bugs occur from time to time, but in comparison, VS 2022 now feels like a big nope.
And that's in a situation where VS 15/17/22 is provided by the company, while I bought Rider with my own money.
Bit snarky PTSD induced rant: Thanks for the unsolicited advice on the situation you have nigh no context on.
A bit of context: I could put decent logging and metrics inside the buggy System.Data.SQLite library to find out why it fails to do its job on remote Linux machines... which would be quite an effort (comparable to forking and fixing it, which might be the goal), but getting information about this problem is really an interactive process. Even opening the database didn't work. Also, logging from inside libraries is... a bit of a divisive topic.
At our shop, logging and metrics are not always the best answers when *developing*; they are mostly useful for *operating* a solution.
I once was CTO of a now defunct startup and we ran a C# backend against a postgres db inside Docker containers on Linux.
This was before MS was on board with Linux and all, but Mono was already super good tech. It wasn’t as fast as real .NET but it was still way faster than eg Python or Ruby, the popular backend languages of the day. There was even a half decent cross platform IDE (MonoDevelop), most of the dev team wasn’t on Windows.
I guess this is not very actionable knowledge anymore but I assume that if C# on Linux was productive and fast in 2013 it won’t have gotten any worse today.
Somehow linux/mac people seem to avoid .NET out of sheer cultural avoidance, and Windows people seem to avoid most of the OSS ecosystem like the plague for awkward “i was at a .net conf and all the talks were about MS tech” reasons but actually the two combine really well. C# is a great language with unparalleled tooling. It’s easy to learn for people with nearly any background. It’s fast, it’s easy to refactor, it’s easy to debug and it runs everywhere. For over a decade now.
My current company runs Elixir because we’re a chat API and the BEAM’s parallelism/robustness story is uniquely suited to our product. If we’d be in a different domain I’d choose C# (on linux, with postgres) again in a heartbeat.
You're right, Mono was pretty good even in the very early days. I had a small C# Mono app deployed on macOS in a similar time period (possibly close to 10 years ago now) and I was amazed at how well it worked at the time.
It also felt like magic somehow, after having my .NET development tied to Windows for so long before that...
There were a lot of caveats then and quite some disparity in how it would run on Mono vs normal .NET Framework.
> I assume that if C# on Linux was productive and fast in 2013 it won’t have gotten any worse today.
I am. I've been developing with .NET since the start and my current workflow is really the smoothest and most productive (and most enjoyable!) I've ever had:
I develop solely on Mac (JetBrains Rider), compiling/debugging Mac binaries locally - no containers - unless I need any external dependencies like eg. Postgres.
On M1 MacBook Pro the experience is blazing fast - and Rider can be pretty heavy but Apple Silicon eats it up. (And to be fair to Rider, it was still pretty good even on Intel machines).
Everything is built and tested on Linux (and sometimes Mac) runners in GitHub.
Then I deploy direct to cloud-based "disposable" Debian VMs. If it's a small/hobby project, I can get really great performance out of even the smallest VMs.
For those interested in the details; I most often run production .NET ASP.NET apps as a systemd service on Debian using the native/in-built .NET Kestrel web server, and almost always (currently) use Cloudflare Tunnel as a reverse proxy for ingress traffic. No inbound ports open in Linux. I've got multiple production systems running like this including some load-balanced using Cloudflare Tunnel load balancing features, and it works really nicely.
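For anyone wanting to replicate that setup, a minimal sketch of such a systemd unit follows. All names, paths, and the port are examples, not taken from the comment above:

```ini
# /etc/systemd/system/myapp.service -- illustrative names and paths
[Unit]
Description=ASP.NET Core app served by Kestrel
After=network.target

[Service]
WorkingDirectory=/srv/myapp
ExecStart=/srv/myapp/MyApp
Restart=always
RestartSec=5
User=myapp
# Kestrel binds only to loopback; the tunnel handles ingress
Environment=ASPNETCORE_URLS=http://127.0.0.1:5000
Environment=DOTNET_ENVIRONMENT=Production

[Install]
WantedBy=multi-user.target
```

With Kestrel bound to 127.0.0.1 and Cloudflare Tunnel (or any reverse proxy) making the outbound connection, no inbound firewall ports need to be opened, which matches the setup described above.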
The .NET team have been laser focused on performance since the early days of .NET Core and they haven't let up on this. The whole experience is night and day compared to "legacy" .NET Framework development.
My entire workflow is now Windows free - and for me there are no longer any compromises compared to developing/deploying on a Windows stack (in the early Mono days, there used to be a lot of compromises compared to a then first-class Windows experience, but no more...)
I have some strong (negative) opinions about what Microsoft is doing to the Windows ecosystem - which is partly why it's been a delight to no longer have to use it in any part of my day-to-day - but I will extoll the benefits of .NET all day long. The .NET team really do an amazing job. *
* For the purposes of generosity, I will briefly overlook MAUI and the slight mess that is the .NET GUI based app story...
My first experience developing for .NET on a non-Windows OS had me running Visual Studio & SQL Server Management Studio in a Parallels virtual machine (VM) running on a Macbook Air with only 8GB RAM, targeting .NET Framework. My most recent non-Windows experience had me running JetBrains Rider & DataGrip natively in macOS on a (Intel, non-Apple silicon) Macbook Pro with 32GB RAM, targeting .NET 6.
The experiences were night and day: while my first experience was serviceable, my most recent experience was great & comparable to the work I'm currently doing in Visual Studio 2022 & SQL Server Management Studio 2019 in Windows 10 on a Lenovo Thinkpad w/ 32GB RAM, targeting .NET 7. And the macOS laptop + multi-monitor experience on a Macbook Pro is so nice that I absolutely prefer coding on Macs now instead of Windows.
> My first experience developing for .NET on a non-Windows OS had me running Visual Studio & SQL Server Management Studio in a Parallels virtual machine (VM) running on a Macbook Air with only 8GB RAM, targeting .NET Framework.
I'm actually amazed that you got away with that on an 8GB (presumably Intel) MacBook Air!! :)
> Rider can be pretty heavy but Apple Silicon eats it up. (And to be fair to Rider, it was still pretty good even on Intel machines).
Rider (like all JetBrains IDEs) gobbles RAM and sometimes needs its time to think.
But what makes it so, so much better than Visual Studio (besides a minimum of effort spent on usable design) is that it doesn't randomly completely lock up the UI for several (sometimes tens) of seconds.
Unlike Microsoft apparently, they know not to do their heavy computation on the UI thread.
I haven’t had VS lock up on me in ages. Probably since version 2019. A ton of work has been done to move everything out-of-process (and in doing so, make it async) and remove synchronous hooks.
Are you using 3rd party extensions? ReSharper? Lots of badly written (even tiny extensions that you’d think would be basically no-ops) can make VS lock up again.
The startup time of JetBrains Rider is so nice & quick. Going back to using Visual Studio 2022 after two years on the latest Rider version felt sluggish in a bunch of ways.
This hasn't been my experience lately. Are you perhaps comparing VS 2022 with ReSharper against Rider? Stock VS 2022 is quite a bit faster than Rider for me. And for the missing refactorings there is Roslynator.
I absolutely agree, .NET is so well designed on many levels: portability, core framework, documentation, support since the beginning. MS has proven that they can provide stable development platforms (well, except UI). If someone rejects .NET just because it's from MS, they are missing out.
Same workflow here on Mac. Working with a few windows devs and don’t have any issues. We deploy into Microsoft .NET containers currently.
Side note, great website! It’s so rare to see such an aesthetically pleasing site on here! Do you happen to have any open source repos where you implement a lot of what you talk about? I’m interested to look under the hood at your code and your versioning system (I’m sure more, too, but I only spent 15 mins on your site so far)
That's very kind of you to say, I appreciate that. It's relatively new and still quite a bit unfinished (in my eyes...)
Having spent the last decade or so buried deep in quite a big project, I have shamefully very little to show in terms of open source - but that will likely change quite a bit over the coming year.
I do plan to talk more on my blog about my approach to versioning and various dev/CI tooling.
Always happy to chat to like-minded people so feel free to ping me a message via my contact form if you'd like to talk more tech :) Happy to share a bit more about versioning - the approach mentioned in my article still works really well for me.
We do, in the few things we haven’t yet migrated away from C#. I’m curious as to why you wouldn’t, it’s much cheaper than deploying on windows since you’ll likely need a license for that.
We use it for a range of things, mostly related to legacy software from when our developers were mostly C# developers. That isn't how things are now, as C# has often stood in the way of our ability to deliver business value at a rapid pace while keeping maintainable systems. Not so much the language's fault as how Microsoft manages a lot of the libraries, like how the model builders for EF and OData (both official Microsoft libraries) don't work together, and how you need to extend basically any standard library to do any real work with them. So while it's actually a decent language, if a bit verbose, and .NET is sort of solid these days, we honestly have very few success stories with it.
Some of our AD (now Entra ID, I guess) integration operates on it while we slowly migrate it to PowerShell (which is .NET but runs in Azure Automation). Some of our APIs run with it. We have a few Azure Function apps (in containers and isolated) which are being migrated to other tech, plus some non-Function apps. So it's not like it's not "working" for us; it just works a lot less well than its alternatives, even despite its huge leap forward in recent years. Honestly, it feels like Microsoft mostly keeps C# around because it sells licenses to medium-sized stagnating companies, considering how half-finished a lot of the libraries are and how poorly so many of them function together.
I agree that many of MS’s “open source” libraries (eg EF, OData) are half-assed, badly run projects.
I’m also still mad at them for competing against amazing stuff like ServiceStack (a real OSS project from the community, and the best way to run REST APIs I’ve ever seen) with “official” but worse stuff like ASP.NET Web API. They do this a lot, some OSS gets popular but instead of embracing it, they make a half-assed ripoff, proudly proclaim it “official” on conferences and all the C# devs jump off the cliff like lemmings, and the original OSS project dies/stagnates. MS might have officially embraced Open Source but they’re absolutely awful at fostering an OSS community.
But .NET itself is amazing! There’s this weird misconception in Microsoft land that if you choose .NET you must also use all MS’s half-assed libraries. Just use Postgres! Use a lean ORM like Dapper, use Redis, use whatever you want from the wider OSS ecosystem. There’s nothing about C# that forces you onto Entity Framework. Get out of that red polo shirt and look around, there’s a lot of cool stuff out there.
Yeah and they don't get how that is damaging for the ecosystem, regardless of how many marketing posts get written, like the recent ones about the value of .NET.
I have had two projects where the customer paid to rewrite their .NET Framework applications into Java, instead of the more sensible idea to migrate to .NET Core, as they saw it was more valuable to them to move completely away from Microsoft stack.
> I agree that many of MS’s “open source” libraries (eg EF, OData) are half-assed, badly run projects.
Just as a counterpoint to this, I have found Entity Framework (Core) to be an extremely nice and very productive dev experience, with a lot of constant improvements in every release. MS don't always get it right, but I definitely wouldn't call EF a "half-assed, badly run project" by a long stretch. (Though there are probably other MS projects I might be more inclined to describe as such!)
That said, they were pretty brutal in cutting off legacy support for those still using .NET Framework and trying to migrate. Through luck, I didn't have to suffer too much pain, but the migration story from .NET Framework to the new world of .NET (including EF to EF Core) has not been the smoothest for some scenarios.
OData is particularly atrocious, even the latest version (8.x). Entity Framework is getting better... EF8 may be the first version that achieves feature parity with (legacy) Entity Framework 6.x.
If you're still using .NET Framework at this point, that's your own fault (or, more likely, your org's). They have been communicating for _YEARS_ that .NET Framework was on its way out, and provided support for .NET Standard 2.0 as a stepping stone to .NET Core & beyond (i.e. .NET 5+). There are some apps & frameworks that don't really have a modern .NET equivalent, but you won't find me shedding any tears over the lack of a direct upgrade path for Windows Communication Foundation (WCF). =P
I agree with everything you said, except for this part:
> There’s this weird misconception in Microsoft land that if you choose .NET you must also use all MS’s half-assed libraries.
The way I see it, it's more the opposite, really. People choose .NET (or more specifically C#) because they want to use its tooling, and if you're not going to use it, then why would you pick C#? .NET on its own is excellent (I'm not sure it's as good as the JVM, but it's definitely gotten very good in recent years), but why would you pick it if you didn't plan on using the "official" tooling?
I'm sure there are some good answers to that question. We just haven't found any.
Because it’s a great language with amazing tooling. It’s fast, robust, easy to debug, easy to refactor, and so on.
It’s also very easy to learn for novices and experienced programmers alike because it’s kind of the middle ground between many popular programming paradigms, and it has few surprising gotchas. It’s a fast way to get a heterogeneous team productive.
> Because it’s a great language with amazing tooling. It’s fast, robust, easy to debug, easy to refactor, and so on.
I agree, but I also think the main issue is that you could say the same about most programming languages in 2023. I'm sure people can have a lot of debate on that but, aside from the fast bit, most programming languages are frankly in excellent places today, and fast is sort of irrelevant for us specifically as we tend to use C/C++ for that.
I was helping some friends with F# homework, and being a Rust programmer, I had to ask: your project file is XML, the order you list the project files in matters, and it won't tell you when you've listed them wrong?
Compile errors are super long lines and don’t show an excerpt of the code with underline under what went wrong with suggestions on how to fix it?
I know I’m asking a lot here, but that level is available with Rust and Elm.
You are touching on some of the reasons why C# is unproductive for us. Because aside from when its "included batteries" decide to fight each other, there are also the parts where the "magic" breaks. This is more anecdotal, however, since it's going to depend a lot on who you and your team are. The guy I was discussing with made a point about C# being very easy to onboard new developers in, which is probably true for them. It has been the opposite experience for us. But I say this as someone who comes from a team with very good results onboarding new developers into our TypeScript environment, and I doubt you'll find many people saying that's a common feat of the JavaScript language.
I didn't particularly mind the new way project files are made up in .NET; it used to be a lot worse, too. But at the same time, it can be cumbersome, as you point out, and for whatever reason it's these small annoyances that so rarely get fixed by Microsoft, allowing them to pile up. I mean, I just said it used to be worse, and it really did, but when they made it better they didn't make it great. Which is something I'll never understand, especially because they did it recently, so it's not like they couldn't have taken inspiration from other languages and done it right.
Parent is talking about two issues in F# which are not present in C#.
But yes .. when you step out of the well-defined area in .net you have an "Apple moment" where trying to do something they didn't anticipate or actively dislike becomes wildly, unreasonably difficult and you get pushed down a hacking rabbithole. I'm quite proficient at MSBuild now but I shouldn't have to be.
You do have some tooling support for ordering files in VS and VSCode using Ionide. It is also not always the case that there is a right way to order the files and checking all permutations would get expensive fast. Basically, in F# that is part of the design process and helps you a lot later since you can just read the code in a linear way the same way the compiler does and not miss anything.
You do get the same information in the IDE from intellisense.
When order of declaration matters, it ceases to be declarative. “Let there be a function” is instead “make me a function”.
It’s entirely unnecessary, and even JavaScript can let you run a function that gets declared later.
The advantage when order does not matter inside the module is that you can list big things at the top of the file and the smaller things it’s made of below, rather than opposite.
The advantage when order does not matter outside the module system is that you can have mutually recursive modules.
The F# compiler reads the source code in a sequential, linear way and throws an error if you reference something defined later. This also applies across separate files, which is why the order matters: if file A is read earlier, you can't reference in it anything from file B, which will be read later.
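Concretely, the compilation order is the order of the `<Compile>` items in the .fsproj file (the file names below are examples):

```xml
<ItemGroup>
  <!-- Compiled top to bottom: a file may only reference files above it -->
  <Compile Include="Types.fs" />
  <Compile Include="Logic.fs" />    <!-- may reference Types.fs -->
  <Compile Include="Program.fs" />  <!-- may reference both -->
</ItemGroup>
```

Reordering `Types.fs` below `Logic.fs` would turn every reference in `Logic.fs` into a compile error, which is the behavior the Rust programmer upthread ran into.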
I see your point and I agree. At the same time, our current backend is Elixir because the BEAM is a fantastic match to our product. It’s been the right choice but I miss C# every day. Never having typos, point and click debugging, renaming stuff in a second instead of 15 minutes of careful grepping, etc. I recognize eg the JVM has all this too but plenty other popular languages do not.
I think .NET's unique strength comes from its batteries-included approach, which is very useful in enterprise environments where every piece of software needs to be audited.
That means Microsoft strives to provide a first-party library for every business case.
That said, you can still use ServiceStack and Newtonsoft.JSON if you wish to.
To be fair though, using Entity Framework at least for migrations has worked really well in my experience (and it doesn't prevent you from using any of those other OSS tools you mentioned!)
Not back in the day. IIRC it was mostly FOSS, and the author gradually shifted to a more commercial model when MS waltzed all over it with ASP.NET Web API. I'm quite impressed he made it through, actually, and I can't fault someone for going full commercial when a giant moloch whose ecosystem you're supporting pulls the rug out from under you like that.
To be fair ASP.NET Web API was just the natural progression of ASP.NET MVC which is based on Monorail more than anything else. You're making it sound like the community was set on ServiceStack and then MS reeled them back in, but the truth is not many people were using ServiceStack back then, they were using ASP.NET MVC which had things like JsonResult and whatnot that would get you a long way.
That said, ServiceStack has transitioned to payware a long, long time ago and I'm pretty sure that is what's caused its demise, not ASP.NET Web API. As a single anecdotal data point, it's definitely the reason I'm not using it.
We’re switching into Powershell for operations. With azure automation.
We’re switching to Go for concurrency. C and C++ for computation (though to be fair, we were already doing this with C# and the integration between them has always been great).
Our generalist language has become TypeScript, however. Not because it's great technically, but because it lets us share resources better, which in turn has made us much more productive in meeting our business needs.
I don’t think C# is bad, I also think it’s much better today than it used to be. If your use of it remains within what works well I think it’ll be hard to find a language with better tooling, but if your needs go beyond that you’re going to have to fight it a lot.
C#'s approach to concurrency has been better ever since async/await was introduced. It also has Channels, which offer a more focused pattern for SPSC, MPMC, etc. scenarios, but neither Java nor Go offers a comparable level of ease of use to C#'s hot-started tasks:
// Both tasks start ("hot") as soon as the methods are called,
// so they run in parallel; await just collects the results.
var dataTask = service.GetData(id);
var userTask = service.GetUser(name);
Handle(await dataTask, await userTask);
Re: the above comment on switching to Go for concurrency - probably one of the most ill-informed things I have read in the last few days. Doing this borders on incompetence due to the lack of analysis of the utilized technology - Go's GC offers far worse throughput and its concurrency story at best is "on par" with C#.
There is no async coloring in Go. In the example you cited above, service.GetData can be a plain synchronous function in Go. You don't need to distinguish between sync/async as this is defined by the caller and not by the callee. Any function can be made asynchronous without changing its signature. That's one of the critical advantages of having first-class concurrency. There is no "function coloring".
The simplicity of this approach and the advantages it brings to code structuring and easy refactoring simply cannot be overstated.
Deferring to function coloring terms usually indicates a skill issue of not understanding that types are meant to represent what the code actually does.
A Task-returning method indicates an asynchronously produced (deferred/delayed) result, a promise. If a language hides the fact that the returned value needs to be awaited, it is grossly misdesigned; at most it can offer not blocking a thread, completely missing the point and leaving 20 years of improvements other programming languages have made on the table.
And it's not even that bad: in many other languages you are forced to manually schedule and then block/join until all of your tasks/promises/goroutines finish. Not in C#, which makes it as easy as it gets, as you can see in my example.
On top of that, you always pay for concurrency in Go, even when you don't use it, it is by definition always "colored" which further contributes to the overhead.
Also, in my example, the tasks will run in parallel. They won't in Go.
> Task-returning method indicates an asynchronously produced (deferred/delayed) result, a promise. If a language hides this fact (that the returned value needs to be awaited), it is grossly misdesigned ..
Very hard disagree. This depends on your computing philosophy. Go was designed based on the principles of CSP in mind and that code is produced by humans for humans first. The CSP architecture has proven to work and fit well for high-concurrent middleware. Not just for Go either.
You are not forced to block until goroutines finish as you claim. You can do other work or yield to the go scheduler. All the necessary busy work is taken care of by the runtime - which is terrific for folks who don't want to fiddle with low level details.
> On top of that, you always pay for concurrency in Go, even when you don't use it, it is by definition always "colored" which further contributes to the overhead.
And that is a completely fair compromise. Supporting easy concurrency natively in the runtime was an excellent design tradeoff since the computing world is moving to more and more cores. There are fewer and fewer uses for non-concurrent software.
> Also, in my example, the tasks will run in parallel. They won't in Go.
You appear to be confusing Go with NodeJS. If you have more than one core and GOMAXPROCS > 1, the tasks can indeed run in parallel.
Your example is not concurrency though, it’s parallelism. You’re consuming two different services at the same time and waiting for the result. What you would want to showcase is how you handle two different consumers calling a single service. The difference is rather important once you have many consumers calling a lot of services and your computation needs to be done on the right state.
As a side note I really don’t think C# does parallelism better than Java.
If two callers call the same method concurrently, whether it requires any synchronization, a more advanced primitive like a Channel (for which C# has a well-written implementation), or nothing at all is a case-by-case choice.
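For the many-producers/one-consumer case being debated here, the Channel primitive from System.Threading.Channels is the usual answer in C#. A minimal, self-contained sketch (single producer for brevity; `CreateUnbounded` also supports multiple concurrent writers):

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Unbounded channel: writers never block; use CreateBounded
        // instead when you want backpressure on producers.
        var channel = Channel.CreateUnbounded<int>();

        // Producer: write a few items, then signal completion.
        var producer = Task.Run(async () =>
        {
            for (int i = 1; i <= 3; i++)
                await channel.Writer.WriteAsync(i);
            channel.Writer.Complete();
        });

        // Consumer: ReadAllAsync drains items until the channel completes,
        // serializing access to the consumer's state (sum) without locks.
        int sum = 0;
        await foreach (var item in channel.Reader.ReadAllAsync())
            sum += item;

        await producer;
        Console.WriteLine(sum); // prints 6
    }
}
```

The consumer owns its state exclusively, so two thousand producers writing into the channel would still update it safely, which is the scenario raised upthread.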
Java's parallelism by definition cannot be better, because they had to retrofit a green-threads design onto an existing ecosystem, only ever addressing the hardware-thread-blocking issue, while all the ceremony and bloat needed to do the most basic actions concurrently and/or in parallel remains in place.
Java requires far more steps to achieve comparable behavior where C# requires nothing, or at most a method call. Dispatching array elements to a consumer in parallel? Call .AsParallel(); and no, Java's Stream API requires more work. Want to fire off two asynchronously completing methods? Just use the example above; neither Java nor Go can hold a candle to it. Methods are CPU-bound and blocking? Easy: just use Task.Run to achieve the same.
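For what it's worth, the three claims can be sketched in a few lines of modern C#. `FetchAsync` is a placeholder method invented for this sketch, not something from the thread:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Two asynchronously completing methods, fired off together and awaited at once.
async Task<int> FetchAsync(int n) { await Task.Delay(10); return n * 2; }
int[] results = await Task.WhenAll(FetchAsync(1), FetchAsync(2));

// Dispatching array elements to a consumer in parallel: one method call.
int[] data = { 1, 2, 3, 4 };
int[] squares = data.AsParallel().Select(x => x * x).ToArray();

// CPU-bound, blocking work pushed onto the thread pool with Task.Run.
int heavy = await Task.Run(() => Enumerable.Range(1, 1000).Sum());

Console.WriteLine($"{results.Sum()} {squares.Sum()} {heavy}"); // 6 30 500500
```

Whether this is genuinely shorter than the Java equivalent is the argument of the thread; the sketch only shows the C# side.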
I’m sorry, but it’s really not the same thing. You appear very confident, and sort of rude, in your argumentation, but there is just such a huge difference between parallelism and concurrency I don’t even know where to begin. I mean, you’re talking about (a)synchronous calls, tasks, and, what not, but what you need to solve is when two thousand sensors want to update the same data through the same service all at once. And they want to do this fairly often.
In reality it’s even more complex than this because a lot of the sensors carry data on behalf of each other so often you’ll receive the same data multiple times and so on, but there isn’t much need to get into that.
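One common C# shape for the many-sensors, one-state problem described here is System.Threading.Channels: producers write concurrently, and a single consumer owns the shared state. A deliberately simplified sketch, with sensor IDs standing in for real payloads:

```csharp
using System;
using System.Linq;
using System.Threading.Channels;
using System.Threading.Tasks;

// Many concurrent producers (standing in for the sensors) funnel updates
// through one channel; a single consumer applies them to shared state,
// which therefore needs no locking.
var channel = Channel.CreateUnbounded<int>();
int total = 0; // "the same data", touched only by the one consumer

var consumer = Task.Run(async () =>
{
    await foreach (int reading in channel.Reader.ReadAllAsync())
        total += reading; // updates are serialized here
});

// 100 "sensors" reporting concurrently. In a real system some would carry
// duplicated data; deduplication would also happen in the consumer.
var producers = Enumerable.Range(1, 100)
    .Select(i => Task.Run(() => channel.Writer.TryWrite(i)))
    .ToArray();
await Task.WhenAll(producers);

channel.Writer.Complete(); // no more input; lets the consumer drain and exit
await consumer;
Console.WriteLine(total); // 1 + 2 + ... + 100 = 5050
```

This is concurrency rather than parallelism in the sense the commenter means: the interesting part is serializing access to state, not fanning work out.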
We do, very successfully (large online retailer). Moving off of Windows as a deployment target simplified our processes significantly. Developers use a mix of Windows and Mac to develop, running the services "natively", before we ship them as Docker images to k8s.
There are some "Windows-isms" buried deep in the older layers of the standard library, but these tend not to be relevant for backend services (anything that talks via the network).
I experience .NET as a very productive tech stack. If I were to found a startup tomorrow, I'd pick .NET for boring web service stuff.
I serve videos from my home Linux server using Jellyfin[0][1] and previously ran Emby[2] (from which Jellyfin was forked). Jellyfin is written in C# and runs on .NET 7.
I am a music collector and I don't see a problem with Jellyfin, apart from the Android app pausing after the first song.
It recognized my MusicBrainz-tagged library immediately without any problem, and it was zero effort to have it fully available on the web with multiple accounts.
IME Jellyfin struggles to deal with a few edge-cases, and the ecosystem for Subsonic is generally better: more applications, more servers, and therefore ultimately more choice. Though Navidrome may be the most popular, there are plenty of clients to choose from (Supersonic, Sublime-music, etc. etc.) where such an ecosystem doesn't really exist for Jellyfin.
I have a single-file executable with the framework embedded, that I run on all of my computers, from desktops and laptops to cloud VMs to SBC's.
- It's designed to be run once, after which it auto-registers itself to always stay running (Linux: systemd, Windows: Task Scheduler), while keeping itself up to date.
- It contains my CI system (runs software from Git, either one run per commit or in "keep running forever" mode, with a UI through a local web server that even has browser Blazor support hacked in). I use this to run my other programs, like servers, crawlers, and Telegram bots, which I write in dotnet and almost always in a way that runs identically on Linux and Windows, x64 or aarch64 (ARM). The CI system installs its own dotnet environment and environment variables to ease dependency management for the programs it runs.
- It constantly reports its state to a relatively simple Azure C# app, which functions as a status webpage for all my machines and provides some basic controls like "restart machine", while sending notifications of unexpected machine on/off states.
- The executable can automatically update itself through the central web app server. It does a dance: it downloads the new version as "MRestartor_new" and starts it; that renames the currently running copy to "MRestartor_old" and starts it; "_old" then renames the newly downloaded one to "MRestartor" and starts that.
- I've half-added a few extra features, like a remote console and a remote VNC-like desktop.
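For illustration, the rename dance in the update step above could be replayed on dummy files like this (file names are from the description; actually launching each phase's process is elided):

```csharp
using System;
using System.IO;

// Replay the self-update rename dance on dummy files in a temp directory.
string dir = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());
Directory.CreateDirectory(dir);
string main = Path.Combine(dir, "MRestartor");
string fresh = Path.Combine(dir, "MRestartor_new");
string old = Path.Combine(dir, "MRestartor_old");

File.WriteAllText(main, "v1");   // the currently running version
File.WriteAllText(fresh, "v2");  // the freshly downloaded update

// "_new" is started and renames the running copy out of the way
// (a running executable can't rename itself on Windows).
File.Move(main, old);
// "_old" then renames the downloaded copy into the canonical name and
// starts it; the retired copy can be cleaned up on the next run.
File.Move(fresh, main);

Console.WriteLine(File.ReadAllText(main)); // v2
```

The multi-phase hand-off exists because Windows locks a running executable's file; each phase runs under a name that is free to be renamed by the next.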
My friend keeps telling me I've re-invented a shitty Kubernetes, but I actually want, and need, to run this on desktop Windows machines (the previous reason was a Unity build server; now I also need it for non-headless puppeteered Chrome), and I'm planning to run some "infoscreen"-like software on it, on computers where that wouldn't be possible inside Docker to start with.
Outside of this system, I've found the "single file executable" option to be a very pragmatic way to bundle a "companion app" with a Unity game, for example: the game starts it, and if the game crashes, it sends a crash log. It works with zero dependencies, which even caught a Unity slip-up where Unity needed an almost-ubiquitous DLL that a virgin Windows install (which our publisher's PC happened to be) doesn't actually come with.
I maintain three different projects for three businesses in .NET Core using Linux Docker containers and it's been very smooth.
The ability to have both C# and F# along with a ton of perfectly serviceable libraries is great.
I'm using ASP.NET Core for the web stack, swashbuckle to auto generate swagger docs, EF for auto generated database models. It's pretty great actually. Adding a new field is as easy as making a migration file, changing the API message, and adding one line to map the database model field to the message field. Everything just moves smoothly.
The resulting web servers just work. I've had effectively no issues with them at all. The only issues I've had were the result of Portainer not correctly shutting down a container, which really isn't related to .NET at all.
We were looking at that, but we found Azure Functions to be an even better overall experience.
Today, we develop with VS2022 on Windows, push to GitHub, and GH Actions takes it to function deployments from there. Then we monitor for any exceptions using App Insights search from Visual Studio. The developer experience is flawless; I don't even have to touch the Azure web UI on the happy path. Finding exceptions using the native VS tooling is much easier.
All the pieces fit together really well now; managed identity alone is probably saving us an FTE role in our org chart.
> we develop w/ VS2022 on Windows, push to GitHub, and the GH actions take it to Azure functions
There's actually likely some Linux in that stack, under the GHA runners and Azure Functions. But this is very much what I find: "is this .NET running on Linux?" is a question where the answer is frequently "yes", but it's also not an important or interesting question.
Our company's app is written in .NET 7 (soon to upgrade to .NET 8) and deployed via Linux containers into Kubernetes.
It is a delightful ecosystem to work with. Productivity is huge, great tooling and IDE support even in non-MS ways. I joined this company 3 months ago and what I've been able to get done on this platform is honestly exciting. And I haven't had to sacrifice modern development principles.
At my company, every API server and Azure Function runs in k8s as a Linux container. The license costs and performance footprint are a huge factor in not running on Windows. And we use Docker images for CI and production.
Linux is just so much better IMHO. I would switch to Linux if Visual Studio (not VS Code) ran there natively.
You could try out Rider on Linux. I've been using it for a few months (on Windows, though) and it is pretty great. I used Visual Studio before, and Rider is definitely competitive.
We do and it’s brilliant. Technically, we are using Microsoft official dotnet Docker images, not bare metal. Building is really simple and we can cross-compile from developer MacBooks. I expect similar success is possible with JVM but F# is too good to pass up!
I use .NET on Linux and the experience with Rider has been great. The workflow transfers really well between Mac, Windows, and Linux, and everything works the way you expect. The only problems I run into are that there are still things that are Windows focused. For example MAUI does not run on Linux which is a shame because we could use another cross platform GUI.
There are still bugs, for example I ran into one with Polyglot Notebooks not working on Manjaro or Pop!_OS https://github.com/dotnet/interactive/issues/3159
but hopefully that will just get better with time.
I think the biggest problem with .NET is that it spent so long being Windows-only. A lot of people just developed workflows in other languages. It also took time before .NET Core (now just .NET) really became first-class on Linux. With .NET 8 this will get a lot better, and to my understanding it will allow the popular game engine Godot to target the web with C# builds. This will let them go back to shipping one Godot binary instead of two.
The thing that has me scratching my head though is I can't believe they still don't have System.CommandLine in the standard lib. We want our cli programs please!
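To make the complaint concrete: out of the box, the standard library hands you only `string[] args`, so even a trivial flag means hand-rolled parsing like the sketch below, which is exactly the boilerplate System.CommandLine (still a separate NuGet package) is meant to replace:

```csharp
using System;
using System.Linq;

// Minimal hand-rolled "--flag <value>" lookup; nothing in the standard
// library does this for you today.
string? GetOption(string[] args, string flag)
{
    int i = Array.IndexOf(args, flag);
    return (i >= 0 && i + 1 < args.Length) ? args[i + 1] : null;
}

string[] demoArgs = { "--name", "world", "--verbose" };
Console.WriteLine($"Hello, {GetOption(demoArgs, "--name") ?? "anonymous"}!");
Console.WriteLine($"verbose: {demoArgs.Contains("--verbose")}");
```

This covers none of the hard parts (help text, validation, subcommands), which is the point of wanting a real CLI library in the standard distribution.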
Not me personally (I'm not a dev), but where I work, we have two teams working on two separate projects written in .net that run in Linux containers on ECS. The devs themselves use Visual Studio on Windows for local dev.
These things work very well and seem to use very few resources. Those systems manage retail kiosks and similar stuff, and one of the two systems is actually on the critical path for operation, so it's not like they only do housekeeping once in a while.
We deploy in Linux containers to cloud run, and we're about to start shipping a cross platform language binary, so people can run Darklang on any system. Dotnet does cross platform builds from any OS.
I'm using the Mono CLR, which is great as a runtime system for statically typed languages: it's lean, efficient, and easy to use and deploy (e.g. compared to LLVM), with an integrated, cross-platform debugger. It runs even on old embedded Linux images. Depending on the .NET framework is possible, but not necessary.
I've used it exclusively on my servers for the last 5 years and am now using it on my laptop.
Server side, it's better than running it on Windows. I don't pay a license fee for the OS, so spinning up new VMs is super easy. And setup, deployment, and long-term operation have all gone off without a hitch. I've got Let's Encrypt certificates, Postgres databases, and so little overhead that I can get by with small server SKUs.
On desktop, I still use Windows at work, so I've built a WPF/WebView2 and a GTKSharp/WebKit web app shell that lets me run code identical to my server apps, just as a desktop app. It was pretty fiddly to set up (mostly because GTK is hot garbage), but now that I have it, it just works.
My department is .NET-only due to dependencies on specific third-party manufacturing software that is .NET, and we use their SDK for a lot of stuff. 95% of our apps run on Windows, 5% on Linux. The Linux part exists only because management heard about it and mandated that we try it, not because it makes any sense: the third-party software runs only on Windows, so even creating 2 dedicated Linux servers (out of several dozen Windows ones) in every plant is just a major pain in the back and a complication that doesn't pay off.
But if you don't have dependencies and limitations to Windows and you can go full swing with Linux, it's a matter of overall cost and productivity, use what is best for you.
I do contract work for NASA and the main project I work on is transitioning from PostGraphile to an ASP.NET Core WebAPI (currently targeting .NET 7, but will likely look at bumping that to 8 once it's released if it's not too disruptive). I work completely in Linux, coding using Neovim, with Omnisharp for language features and code completion, and our applications are fully built and deployed using Linux containers. .NET on Linux has come a long way since the early days of Xamarin and Mono, it's quite enjoyable to use.
I am, for a range of applications including web APIs and background workers.
I've always favoured Linux for server apps (I used to deploy dotnet apps to Linux with Mono), but I also think Azure (and cloud in general) has, in a roundabout way, introduced a lot of .NET developers to Linux.
There's little reason not to deploy to Linux instead of Windows - it's cheaper and just a much better experience to work with for deployment and operations.
Most .NET backend services and web apps are now deployed on Linux (containers). The default .NET PaaS experience on Azure (App Service) also defaults to Linux.
I suppose it depends what you mean by Linux, but if AWS Lambda on Linux counts, then I work at a top-250 UK company that uses dotnet heavily for Lambda workloads, especially async ones. It's been great for us, no complaints. We would likely use it for our sync APIs too if dotnet supported Apollo Federation v2, but sadly it does not.
We don't usually even think about it, but for web apis, function apps and other microservices that process data over http or message queues, it's more or less the default now that the production environment is a linux of some kind. But it isn't a thing that we spend time worrying about.
We are - we are in the process of transitioning our ASP.NET based backend from Windows to Linux. Linux machines are cheaper and easier to administer than Windows based ones.
Most of our backend APIs are written in C#. We deploy them on AWS lambda, so on Linux. Pipelines are Linux and Devs will use a mixture of Linux, MacOS or Windows.
This is for native AOT binaries, not for the typical case. Though this release and the previous ones did contain improvements for trimming binaries, which is what you need to use to make non-AOT binaries smaller.
Curious - why isn't it the typical case? .NET runtimes on Linux itself are rare enough, I figured AOT on Linux would be about the only use case, especially in containerized environments.
You can also publish .NET apps/services directly as container images [1].
Or you can distribute them as a single file, standalone, "ready to run" application, which precompiles your methods and includes the JIT. This results in a larger executable, but keeps all the functionality, including reflection and runtime code generation, intact.
And, of course, you can install .NET core directly on your Linux system, just as you would for Python or Ruby (where you also don't usually rely on the default installation).
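For reference, the distribution modes discussed in this subthread map to MSBuild publish properties along these lines (the property names are real; the values and the exact combination depend on which mode you want):

```xml
<PropertyGroup>
  <!-- single-file, self-contained "ready to run": runtime bundled, JIT kept -->
  <PublishSingleFile>true</PublishSingleFile>
  <SelfContained>true</SelfContained>
  <PublishReadyToRun>true</PublishReadyToRun>
  <RuntimeIdentifier>linux-x64</RuntimeIdentifier>

  <!-- or, instead of the above, full native AOT (limits reflection) -->
  <!-- <PublishAot>true</PublishAot> -->
</PropertyGroup>
```

Publishing directly to a container image is done at publish time (in recent SDKs, via the `PublishContainer` target) rather than through these properties.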
Actually many people seem unaware that .NET has supported AOT since the beginning.
The reason they are unaware is mostly that using NGEN requires really wanting to use it, as it requires dealing with strong-named assemblies, i.e. signed .NET libraries/executables.
It only supports dynamic linking, and still falls back on the JIT for more complex code sequences; its original goal was fast startup time for desktop applications.
Thus before .NET Native (UWP only), Mono (iOS/Android), and now Native AOT, doing AOT compilation on .NET was largely ignored, despite NGEN being available.
NGEN had a lot of problems, though. It was very easy to accidentally invalidate its output. There was no easy, reliable way to "ship" NGEN output, and no easy way to even check whether it was loading. Your best option was to run ngen on the target machine, which is fine for services as a deployment step. But it's not like you could build and publish an AOT .NET CLI tool, for example.
I think not shipping NGEN output was intentional. There was an NGen Optimization Service that would re-optimize code when the .NET Framework was updated, possibly with new profiling data.
Having the .NET runtime on the target machine isn't one of the pain points in a production deploy.
As others have pointed out, having AOT compilation, with its faster start-up time, along with smaller binaries is aimed at a different perceived pain point: the cold-start performance of lightweight function apps / AWS lambda.
.NET uses a lot of reflection, and you need to replace that with other mechanisms for AOT.
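As a small illustration (the type and values are invented): reflection-based System.Text.Json works fine under the JIT, but it is exactly the kind of thing trimming and native AOT can break, and the source generator is the sanctioned replacement:

```csharp
using System;
using System.Text.Json;

// Reflection-based serialization: fine under the JIT, but a trimmed or
// native-AOT build may have stripped the metadata it inspects at runtime.
string json = JsonSerializer.Serialize(new { Name = "box", Size = 3 });
Console.WriteLine(json); // {"Name":"box","Size":3}

// The AOT-safe alternative is the System.Text.Json source generator: a
// partial class deriving from JsonSerializerContext, annotated with
// [JsonSerializable(typeof(...))], which emits the (de)serialization code
// at compile time so nothing needs to be discovered via reflection.
```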
And for modern .NET (previously named .NET Core, the version that also runs on Linux) you can package the runtime with the binary. You can even create single-file executables there, though those are still not AOT.
The main advantage of AOT is startup time, and that didn't use to matter much for the kind of applications .NET was used for. It matters more now with serverless stuff like AWS Lambda.
I doubt it will reduce all that much. It’s still really convenient and it’s way more difficult to write and use a source generator than it is to just use reflection. Source generators will get used for stuff that really benefits from the extra safety and from the increased performance, but that still leaves many use cases for reflection.
I see most foundational libraries doing the effort to move to source generators everywhere possible. It's a big multiplier. Then it's up to the applications authors to choose if they care or not about AOT.
I think that is likely, though the current source generators are often a bit cumbersome to use. That is why they're adding interceptors, which, while a bit of a controversial feature, is kind of necessary to implement source generators without an annoying API.
I looked into making a source generator over using reflection and quickly gave up on it. I swear it's easier to directly generate function implementations at runtime by emitting IL than to process the syntax tree.
I'm about to embark on writing a source generator for XML serdes, because while the system XML one does support source gen, and always has, it has some annoying limitations (same element name must always map to the same class within the same assembly).
Everything dotnet is doing for cross-platform is great and better than alternatives --- except one HUGE issue: There is no official GUI on Linux.
Linux's exclusion from MAUI (while Linux is the primary dev machine of 25% of the world's developers, according to the Stack Overflow survey) creates a huge hurdle for those developing on Linux to build any cross-platform GUI app with dotnet.
If I, as a developer building a Windows/Mac/mobile app with official dotnet, am using Linux for development, I cannot create a GUI app.
Hey dotnet folks! -- let me put a god damn button on the screen!
Sincerely
A Linux dev
[Don't get at me with Avalonia/Uno etc. I am talking about "official" GUI. A graphical user interface is not a specialized domain, or some niche to be left for others to fill. It is a core element of a user facing app. ]
There's no "official GUI" (w/e that means) on Linux for anything though, so it makes sense.
Avalonia exists, and from the looks of it is better than all the crap MS has put out over the years (even on Windows; don't get me started on WPF stutters), so not using it because it's not "official" is silly.
I never said I don't use or don't want to use Avalonia. The issue is not the quality of Avalonia/Uno; they are wonderful. The issue is not the availability of third-party toolkits; they are there. The core issue is being psychologically assured that the first party is interested in supporting my platform with its programming languages and libraries. If MS can support Mac, iOS, and Android, what exactly prevented it from supporting Linux? This conspicuous lack of support, in the presence of support for other non-MS platforms, is the most bothersome bit.
And supporting Linux can't be too expensive either. If Avalonia and Uno, open source with no backing of a huge corporation, can do it in their stride, then Microsoft sure as hell can do it. Again, this time economically: why exactly is MS supporting Mac, iOS, and Android, but not Linux?
I hope you see my point. The absence of MS support is not bothersome technologically. It is problematic attitude-wise. If they are fine with not doing a GUI on Linux (while doing it on all the other platforms), what other features/enhancements will they be willing to hold back from Linux?
One way to look at it is that MAUI isn't more official than Avalonia or Uno; it just happens to be primarily driven by a dedicated team at Microsoft. That does not mean Avalonia or Uno are inferior because of this fact alone.
EF Core in that regard is very similar - you can easily use Dapper instead because it builds on the same primitives from System.Data. Neither MAUI nor other UI frameworks have access to internals of CoreLib or adjacent packages living in dotnet/runtime.
In fact, having multiple libraries solving a particular task is a sign of healthy ecosystem, too much centralization more often than not tends to be harmful.
The reason Avalonia and Uno support Linux but MAUI doesn't is purely technical: Avalonia and Uno draw their controls themselves, e.g. with Skia(Sharp). MAUI tries to be a successor to Xamarin and instead uses native controls drawn by the host. Naturally, this has always been a problem on Linux, and supporting it is non-trivial (what do you target? Do you attempt to make it work with Qt, GTK, something else entirely? What about X.Org vs Wayland?). Realistically, supporting mobile platforms first, alongside Windows and macOS, is much more important and a far better investment of effort (compare how many desktop/laptop users run Linux vs macOS/Windows), and there are only so many people working on this. It is perfectly fine for GUI frameworks to take different approaches and pursue different goals, and Linux systems are much better served by Avalonia.
> The Azure Functions service is made up of two key components: a runtime and a scale controller. ... The Azure Functions runtime can run anywhere. ... The scale controller monitors the rate of events that are targeting your function, and proactively scales the number of instances running your app.
> Kubernetes-based Functions provides the Functions runtime in a Docker container with event-driven scaling through KEDA. ... Using Functions containers with KEDA makes it possible to replicate serverless function capabilities in any Kubernetes cluster.
I've just tried to move an existing project to this, and VS complained that the SDK doesn't support this version, even though I can create new projects with .NET 8 (preview SDKs are enabled). Is that a real limitation, or is VS being weird here?
Hi, you need to make sure you're using "Visual Studio 2022 Preview", not just Visual Studio. Even with "Use Preview SDKs" enabled in the VS2022 Tools and Options, it will not work unless you have the preview version of Visual Studio installed. You can still create .NET 8 projects in Visual Studio, but to build and run them you will need to do it from the command line (i.e. 'dotnet run').
This is a result of their focus on better serverless support. Having done that recently, I find these really welcome improvements, along with the AOT stuff for EF.
Is static linking supported now? It's the one blocker for me to use C# in weird places. I'm good at hacking the underlying environment to make it work, but dynamic loading is not on the table.
I second the request to not use medium.
There are tons of better options.
Neocities, bear blog, mataroa blog, and so on.
There is no reason to suffer with medium and all the nagging.