Somehow I actually fell in love with .NET Core 2.0. While their naming is still confusing and it's hard to know what will come next, I found myself amazingly productive with Razor Pages.
It's really cool if you don't need an SPA, and you can do everything pretty damn fast. It's like WebForms, but without the Microsoft specifics; instead it relies on good old HTML5 to make an app great.
I was talking with a friend yesterday. He got a position as a manager for a group of Java/Oracle devs. We started talking about how being a manager wasn't all it's cracked up to be. We are coders and we are really good at what we do.
What makes coding more "fun" is the .NET framework. It has its headaches, but it's a really nice and polished framework for getting things up and running, especially for web development. .NET 4+, C#, and Core are Microsoft's message to the development world that they hear us.
That is one thing I always admired about .NET from afar: there was generally one fairly polished framework that you could reasonably trust. In Javaland I feel like I sometimes have to second-guess everything to make sure I'm using the "best" implementation of something.
That and C# is an ISO standard which is pretty cool.
I'm a C# developer who has been learning React for the past year, and it really is an exhausting job to look for the right JS library for every case. This is my process:
1. Facing a code task.
2. Asking myself, should I reinvent the wheel or should I look for a library?
3. Google for libraries solving the task, open at least 5 npm/github tabs.
4. For every tab, making sure the library is not too big for my needs, reasonably popular, still maintained, and doesn't have too many open issues.
- Closing the tabs that didn't pass this step.
5. Taking a look at the documentation/examples to determine if it can solve my problem.
- Closing the tabs that didn't pass this step.
6. Testing the library to see if I can make it work.
7. Implementing it in my project.
It's a really different environment, and it has its advantages; it's just that Microsoft has spoiled me a little with their out-of-the-box implementations.
As an unapologetic Java fanboy I can attest to "I have to second-guess everything to make sure I'm using the "best" implementation of something." This is true. The Spring Boot libraries help to alleviate that a bit.
> That and C# is an ISO standard which is pretty cool.
Microsoft publishes older versions of C# through ECMA, but that documentation is always behind the implementations (which are all now owned by Microsoft). The latest ECMA document is for version 5.0 of C#.
For years that was not really true. You need a DI framework? Study all the benchmarks and comparisons for the 10 most common DI frameworks. Need an ORM? Welcome to the NHibernate vs. Entity Framework debate. Logging? Here you go with another 10 choices.
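(To be fair, ASP.NET Core 2.x now ships default answers for most of those debates out of the box. A minimal sketch of what that looks like in Startup.ConfigureServices; IOrderService/OrderService and ShopContext are hypothetical application types I'm using for illustration:)

    // Namespaces: Microsoft.Extensions.DependencyInjection, Microsoft.EntityFrameworkCore.
    public void ConfigureServices(IServiceCollection services)
    {
        // The built-in DI container (Microsoft.Extensions.DependencyInjection).
        services.AddScoped<IOrderService, OrderService>();

        // The built-in logging abstraction (Microsoft.Extensions.Logging).
        services.AddLogging();

        // EF Core as the house ORM story.
        services.AddDbContext<ShopContext>(options =>
            options.UseSqlServer("connection-string-here")); // placeholder
    }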
After a year-long pause in .NET development I got back into it recently and I can only agree. As a Linux dev, .NET core is an amazing product and it's incredibly productive compared to JS, Go or Java.
I'm also fairly excited for Blazor so I can write full-stack C#.
When I heard they bought github, my first thought was that I hope they enable this on github pages. Dynamic github pages would be amazing and it would attract a lot of new developers to Razor Pages.
Actually I'm a Java guy, or to say it differently, a Scala guy, and I mostly use MVC to create my normal web stuff. However, with Razor Pages I'm so much faster at getting stuff done, because it's a little bit simpler to set up. Your model is actually just "code behind," and you can still have a good structure. And as a bonus, the Razor view engine is like Twirl (a Scala project) minus some quirks.
Also, you can still easily add TypeScript, etc., or even make an SPA with the WebApi and SpaServices (heck, I even use the Webpack middleware). It's just amazing how integrated .NET Core 2 feels.
And the easy stuff is already integrated into the Framework, like Authentication/Authorization.
I've never had a secure cookie authentication running that quickly.
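For reference, here's roughly what that setup looks like in ASP.NET Core 2.x; a minimal sketch, where the login path and option values are illustrative:

    using Microsoft.AspNetCore.Authentication.Cookies;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Http;
    using Microsoft.Extensions.DependencyInjection;

    // Startup.cs: minimal cookie authentication.
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddAuthentication(CookieAuthenticationDefaults.AuthenticationScheme)
            .AddCookie(options =>
            {
                options.LoginPath = "/Login";              // illustrative path
                options.Cookie.HttpOnly = true;
                options.Cookie.SecurePolicy = CookieSecurePolicy.Always;
            });
        services.AddMvc();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseAuthentication();   // must run before MVC handles the request
        app.UseMvc();
    }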
Actually, when I looked at .NET Core 1.0 I basically thought it was abandonware, because of the new build system, because I wasn't a huge fan of C#, and because lots of stuff just didn't work. But since we needed to migrate our old WebForms application, I thought that before rewriting everything it might be a good idea to look into it, and it somehow clicked.
C# is really moving fast, and the recent additions make even me, as a Scala programmer, feel more at home.
SPAs too; it will depend on the project, of course, but a lot of projects don't actually need a REST API or a client-side SPA with all the bindings and parameters coming and going. A lot of projects will be perfectly fine with server rendering.
If you want to build an SPA, their boilerplate[0] for it is also excellent and supports several frameworks (Angular, React, React-Redux... unfortunately they deprecated Vue support[1], which is a shame because it was great, but that's easy to bootstrap from the React starter), and they even support server-side prerendering with a sidecar.
It's also like PHP^: whip stuff up quickly and easily, but keep your eye on the ball, or you get hit in the face by spaghetti code.
There's certainly something to be said for an easy way to get running quickly, when you can't be bothered with an SPA.
...but it doesn't scale. :)
(^ The comparison is more apt than you might imagine; Razor Pages are the 'one file per logical page' model for ASP.NET Core, and very, very reminiscent of WebForms, and all the legacy that carries.)
(...and don't even get me started on Razor Components, due in .NET Core 3.0, where every UI interaction (i.e. every single browser event) involves a full round trip to the server to re-render the component state.
I'm just saying: Microsoft has done some excellent work with .NET Core and Kestrel, but that doesn't mean every idea they have is a good one.)
Sure, but... I suggest that it is a trade-off in which functionality is exchanged for convention and simplicity... and perhaps not for good reasons, or in a way that encourages good outcomes by default.
If I were under the impression that the ASP.NET team was on the ball and agreed with the decisions they were making, I'd be more prepared to run with it... but I'm not.
For example, there's an open github issue discussing the concerns people have raised about the ASP.NET team suggesting that razor pages are better than MVC.
Read it yourself and you'll become familiar with the various opinions on the matter:
Well, most of these concerns come from people who never actually tried Razor Pages, which is really sad.
As I said, my background is basically five or more years working in MVC, even partially working on the framework itself, and I would say that Razor Pages is a good starting point for any application.
Currently I think both Razor Pages AND MVC are important, but it's so easy to use both that there is no problem. If something grows too much you can easily split it out into the old traditional MVC paradigm, but actually most of the time I migrate directly to a web API instead of .NET Core MVC.
(At the moment I have 2 MVC controllers, which are basically the same as Web API 2 endpoints (no view/no model).)
Also, Razor Pages is kind of MVVM, which is more common in the JavaScript world (more stuff there is MVVM than MVC; not sure why Java/C# can't follow that as well) and more common for UI-related frameworks (which Razor Pages of course is), but most people didn't realize that in this GitHub issue.
Exactly this: if you've got crazy logic going on, it should be removed from the view and done in the controller. The only code in your view should aid displaying model data and defining the layout output. I usually do my best, but of course there are exceptions, especially when using third-party libraries for views.
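The same principle in Razor Pages terms: the PageModel "code behind" holds the logic, and the .cshtml view only renders what the model exposes. A minimal sketch (names are illustrative):

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.AspNetCore.Mvc.RazorPages;

    // Index.cshtml.cs: the "code behind" PageModel. Logic lives here, not in the view.
    public class IndexModel : PageModel
    {
        public List<string> Items { get; private set; }

        public void OnGet()
        {
            // Any filtering/shaping happens in the handler, not in the markup.
            Items = LoadItems().Where(i => i.Length > 3).ToList();
        }

        // Stand-in for a real data source.
        private IEnumerable<string> LoadItems() =>
            new[] { "one", "two", "three", "four" };
    }

The matching Index.cshtml then just loops over Model.Items and emits markup, nothing more.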
That graph is completely misleading then. I can understand why they don't want to show actual y-axis values, but at least put relative values on that axis. Even "60%" and "100%" would do.
Not a .NET developer, so apologies for the dumb question, but why did MS release a second open source .NET implementation around the same time as they acquired Xamarin? Do .NET core and Mono compete with one another? Do they serve complementary roles? Why would I choose one over the other?
.NET Core and Mono are not exactly analogous. A better comparison historically would have been Mono to .NET Framework (i.e. classic .NET).
.NET Framework is a fairly expansive set of standard libraries bundled with a runtime - it's commonly used and well-supported, and dates back to about 2001, give-or-take a beta or two. There's a lot of current and legacy applications built on this out in the wild.
.NET Core is effectively a do-over for the longer term, in the form of a minimal set of dependencies that imports more of the standard library separately. There are a couple of reasons for doing this: primarily, the parts that have been abstracted out mean that .NET Core runs in a lot more places than the classic framework (including natively on devices like the Raspberry Pi, for example), and it can also be (and is) fully open source (the classic framework is mostly open source these days as it is, but licensing problems meant that not every part of the toolchain could be opened up).
There's also the question of expectations when it comes to community changes. .NET Framework was adopted in many quarters on the basis that it was developed by Microsoft directly, and bureaucratically opening it up to community changes after the fact becomes problematic due to the massive number of stakeholders and their expectations.
I'm sure someone else can add more info but as I understand it this is the gist.
I know I've over simplified this for myself. But the way that I look at it is like so.
.Net Standard is an "Interface": no implementation behind the scenes, just a list of APIs.
.Net Framework, .Net Core, Xamarin are all "classes" or implementations of .Net Standard.
ASP.NET Core only relies on .NET Standard, which is implemented by both .NET Framework and .NET Core, which means it can run on both.
ASP.NET (Framework) does not rely on .NET Standard but links directly to the implementation of .NET Framework, which means those libraries can't run anywhere else and are limited to .NET Framework only.
I'm no Jon Skeet, or tech guru. So please take this explanation with a grain of salt and if I'm wrong correct me as needed. This is just how I've wrapped my head around it.
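Roughly, the analogy in (illustrative, not real) C#:

    // Illustrative analogy only; these are not real .NET types.
    // .NET Standard: an "interface" -- a list of APIs, no implementation.
    public interface INetStandard
    {
        void WriteLine(string text);
    }

    // The runtimes are "classes" implementing that contract.
    public class NetFramework : INetStandard
    {
        public void WriteLine(string text) { /* Windows-only implementation */ }
    }

    public class NetCore : INetStandard
    {
        public void WriteLine(string text) { /* cross-platform implementation */ }
    }

    // A library targeting .NET Standard sees only the interface,
    // so it runs on any runtime that implements it.
    public class MyStandardLibrary
    {
        private readonly INetStandard runtime;
        public MyStandardLibrary(INetStandard runtime) => this.runtime = runtime;
        public void Greet() => runtime.WriteLine("Hello from any runtime");
    }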
David Fowler (Microsoft engineer on ASP.NET Core) posted an "analogy in C#" gist a couple years ago that is similar to yours, but in code. It's a couple years old so is a bit out of date at this point, but the basic idea holds.
This is correct. .NET Standard, as the name implies, is the standard that each framework tries to match. If you make any libraries that are .NET Standard compatible, they will run everywhere. They used to have "Portable Class Libraries," but that was a mess. Also, .NET Standard wasn't as good until recently; thankfully that's changed. The earlier days were a mess, since not every framework supported certain things.
You can't make disruptive changes with the baggage of the current stack.
System.Web itself is over 2 MB and you don't need everything from it all the time. Cross-platform was another feature that was hard to do without starting over.
Hm, curious - why do you say wasm and XboxOne would lean toward mono? Doesn't Blazor target .NET Core? I would've assumed .NET Core w/ UWP would be first class citizens on XboxOne? Thanks in advance!
Edit: maybe I was confused about Blazor. Looks like it can work with ASP.NET Core, but the info I'm finding talks about building with mono.
I guess the more interesting question is, do you think Mono will remain important for the places you've mentioned, or will be replaced more and more by .NET Core? Especially given the UI goals with .NET Core 3 (and my own biased belief that .NET Core is the clear unabashed future of .NET).
For client-side Blazor in WebAssembly it uses Mono's WebAssembly runtime (though .NET Core on the server), as the Mono compile chain for WebAssembly is the most mature implementation (it also produces the smallest output, from their work on watchOS).
If you are using Blazor as a full desktop app (e.g. Electron) then you can use .NET Core on the client and that can talk to the javascript as a local server; but for running in the browser directly as WebAssembly it uses the Mono (WebAssembly) runtime.
Xbox One can run UWP, and for that you can use .NET Core (via UWP) in a "shared" mode; however, it also has a dedicated "game OS" mode (which most boxed/AAA games use) that takes priority over everything and has full hardware access, but that doesn't allow JITting, so you can't use .NET Core in that mode.
Mono and Unity have mature AoT compilers, so they can run in this mode. (Xamarin also needs AoT for iOS and Android, which don't allow JITting either, and Unity needs it for a lot of the platforms they support. Mono and Unity use different approaches to AoT, though.)
Mono now shares C# source code with .NET Core (and it goes both ways), but as far as I'm aware .NET Core doesn't seem to be rushing to fill any of the gaps that Mono serves well (iOS, Android, etc.).
They are working on an AoT version of .NET Core for Windows/Linux/macOS (https://github.com/dotnet/corert), though that seems to be more focused on competing with Go's and Rust's AoT single-exe offerings.
> I guess the more interesting question is, do you think Mono will remain important for the places you've mentioned, or will be replaced more and more by .NET Core? Especially given the UI goals with .NET Core 3 (and my own biased belief that .NET Core is the clear unabashed future of .NET).
I don't know what the future is for Mono. Mono currently has support for platforms that .NET Core doesn't specifically target - iOS, Android, native code through Unity, WASM, etc. It'll probably start to move toward specifically supporting those platforms while continuing to support newer versions of the .NET Standard.
.NET Core/.NET Standard is the future of mainstream .NET. .NET Framework is dead. I've heard it from Microsoft people directly. There probably will never be a 4.9, and there will definitely never be a 5.0.
I'd imagine at the least it will be in maintenance mode for years, but otherwise I'm inclined to believe this. I don't think anyone was really convinced by the early talk of co-existence.
I'm pretty sure there will be updates that implement newer .NET Standard versions, but I doubt it will receive many non-essential upgrades for performance and such in the future.
.NET Framework came first, the major, Windows-only installation. Mono came after, to try to replicate .NET on non-Windows platforms with an open-source approach. Eventually the .NET Framework code was open-sourced (at least read-only in the beginning), which also helped Mono get better.
.NET Core was created to take the aging .NET Framework and rebuild it for faster, easier deployment, a quick release cycle, and cross-platform support on Linux and macOS, without hurting the decades of backwards compatibility.
Mono meanwhile became the base for Xamarin for mobile apps and the Unity gaming engine. Even though Microsoft acquired Xamarin, Mono already had so much uptake in both of these scenarios that it didn't make sense to attempt replacing it, so now we have Mono for mobile + games and .NET Core for the rest.
.NET Framework will still be around for a decade but is effectively retired and .NET Core 3.0 will fill in any remaining gaps for Windows apps that still need the full framework today.
My understanding is that Mono is being billed as a client framework (MonoGame, Xamarin, Unity, Blazor) while Core is more for the server/systems target. To add to the confusion, ".NET Standard" is a library compilation target that should work everywhere (even the legacy Windows-only .NET Framework). Interestingly, the newly MIT-licensed Xenko game engine compiles to .NET Standard libraries, which is how it supports so many platforms (via barebones launcher/shim projects for whichever framework implementation is desired).
.NET Standard is what you build libraries against; it's the abstract contract of the runtime.
The .NET Framework, .NET Core, Mono, Unity, etc. runtimes then implement that contract, so a library built against .NET Standard will work on any of the runtimes.
From what I remember reading in a post or hearing on a podcast, .NET Core is "server-centric" and Mono is "client-centric"; they also differ in relative footprint size.
.NET Core is superior to Mono for server software like ASP.NET web apps, but Mono is more suitable for embedded environments like iOS/Android/PlayStation/Xbox, etc., and is also what's used in Microsoft's new Blazor project for running C# apps in WebAssembly (https://github.com/aspnet/Blazor).
I'm a huge fan of C# and ASP.NET, but this transition hasn't gone as smoothly as I hoped.
I tried .NET core a lot pre-1.0 and it seemed really fragile (especially on Mac and Linux, which was my main excitement around it).
As I was a bit nervous about that, I started a new project in 'classic' .NET with MVC5. However, it seems a lot of projects have recently been migrating rapidly to .NET Core and dropping support for legacy versions.
It does seem to be a giant pain to migrate a legacy asp.net app to asp.net core (very manual). Anyone have any tools/advice on this process?
Porting to 1.0 was a pretty big headache, there was a lot of stuff in full .NET that wasn't in 1.0, and a lot of open source libs hadn't been ported yet.
In 2.0 they had a bit of a rethink about how stripped down Core was going to be and added a heap of stuff in, plus a lot more (most?) big open source libs have been ported. So porting legacy apps is much easier.
However I don't see any point porting a legacy app unless there is something in core you want. Perhaps you'd like to move your app over to running on Linux servers? At my previous company we ported parts of our product so that we could sell cross-platform support without needing our users to install mono.
> Porting to 1.0 was a pretty big headache, there was a lot of stuff in full .NET that wasn't in 1.0
Yeah, this.
For example, originally they weren't going to include SqlDataAdapter or DataTable or any related classes. They really just thought everybody was going to be okay with using Entity Framework for everything and not having a generic, non-ORM database interface.
Most of my coding involves either extremely simple tables, or manipulating tables for third-party applications that have over a thousand tables where I don't have access to the source code. Most of it is ETL or ETL-like, usually in PowerShell or Python, but some of it is in C#. Building an EF model would take ages for some of these applications, especially when the schema isn't relationally sound (but it's third party, so I can't change it). It still just blows my mind that they thought it would be okay to make everybody box and unbox their database into classed objects instead of just letting you manipulate it as a DataRow. As far as I know, they still consider DataTable and the like to be "legacy".
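For anyone unfamiliar, this is the kind of generic, schema-agnostic access being described; a sketch using System.Data.SqlClient, where the database, table, and connection string are hypothetical:

    using System.Data;
    using System.Data.SqlClient;

    class EtlSketch
    {
        static void Main()
        {
            // Hypothetical third-party database and table.
            var connStr = "Server=.;Database=VendorDb;Integrated Security=true";
            using (var adapter = new SqlDataAdapter("SELECT * FROM VendorTable", connStr))
            {
                var table = new DataTable();
                adapter.Fill(table);                 // no entity classes required

                foreach (DataRow row in table.Rows)  // manipulate rows directly
                {
                    row["Status"] = "processed";
                }
            }
        }
    }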
That's not really correct. Raw SQL access (SqlCommand etc.) was always part of .NET Core. And you can't really call DataTable and friends a generic non-ORM data interface; it's an API unique to ADO.NET.
I was an early adopter of .NET Core. It definitely had its quirks in the beginning. Recently I gave it another go and it's really improved a lot; it's production-ready now. I'd say pick it up again.
I remember all of the KRE, KVM, project.json, etc. It was pretty rough, and that's coming from someone who's been using .NET daily since the 1.0 beta many years ago.
It's stable now. Just forget the alphas and 1.x ever existed. If you really want to get the hang of it, don't use Visual Studio; use VS Code and the CLI tools:
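(e.g., a typical workflow with the dotnet CLI; the razor template name is from the 2.x SDK:)

    # scaffold, restore, and run a Razor Pages app with the dotnet CLI
    dotnet new razor -o MyApp    # Razor Pages template (2.x SDK)
    cd MyApp
    dotnet restore               # pull NuGet dependencies
    dotnet run                   # build and serve on Kestrel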
There are some extremely opinionated assumptions underlying your question. VS isn't really bloated given what it's trying to do, and in recent years it's been rock-solid stable. Visual Studio for Mac exists (the old Xamarin Studio) and is coming along. And the engineering team has made its position on 64-bit clear: they don't believe it will bring the performance people like to say it will.
But more importantly, VS and VS Code are two different approaches: IDE vs. text editor. The vast majority of people would say C#, as a compiled language, is best in an IDE. Either way, I'm pretty confident in predicting that there is no way Visual Studio gets "dropped."
VS isn't going to be dropped any time soon, there are so many features and plugins for it that it would be a massive job to replace it.
They have moved more and more features into separate child processes to reduce the memory requirements of the main VS process. VS 2017 uses a lot more memory than 2013 (the previous version I used). My 8 GB machine gets slowdowns every now and then due to SQL Server getting swapped out.
Not necessarily. IIS can also serve as a reverse proxy/load balancer with a Kestrel installation behind it. (If I'm not mistaken, the response header is only set on static resource responses.)
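For context, the default ASP.NET Core 2.x host already wires up that arrangement; a minimal sketch:

    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Hosting;

    // Program.cs: CreateDefaultBuilder configures Kestrel as the server and
    // adds IIS integration, so IIS (when present) fronts Kestrel as a reverse proxy.
    public class Program
    {
        public static void Main(string[] args) =>
            WebHost.CreateDefaultBuilder(args)  // UseKestrel() + UseIISIntegration()
                .UseStartup<Startup>()
                .Build()
                .Run();
    }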
Is it even wise to design a system that uses un-encrypted backend traffic? The Snowden revelations did demonstrate that intelligence services are snooping on those.
Not over an actual network; but localhost or in-process should be fine? Though in-process IIS hosting is ASP.NET Core 2.2, as it got bumped from the 2.1 release: https://github.com/aspnet/IISIntegration/issues/878
It's meant for port sharing: multiple apps or subdomains, switching on either the host header or the path, since you can't have multiple processes listening on the same port (80/443) on the same machine. Or changing which app is run based on the path.
I get the feeling that there has been a lot of low-hanging optimization fruit in .NET that went unaddressed because a) the original compiler was too complicated to change and b) there was no pressure to improve it because it only ran on Windows and there were no real competitors.
a) was addressed by Roslyn, which is apparently much more manageable, and now that .NET is supposed to run everywhere, it has to have competitive performance. Hence the big gains.
Yes, this is an excellent point I forgot. With closed-source, you only get to make perf improvements if your manager decides there's not a more important feature you should be working on instead.
For example, .NET doesn't do escape analysis, though I've read this is less of an issue for .NET than Java because .NET has struct value types instead of everything being a heap-allocated object.
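A small, hand-rolled illustration of that point about value types (my example, not from any JIT documentation):

    // A struct is a value type: it lives on the stack (or inline in its
    // container), so there's no heap allocation for escape analysis to remove.
    struct Point
    {
        public double X, Y;
    }

    static class Demo
    {
        static double LengthSquared()
        {
            var p = new Point { X = 3, Y = 4 };  // stack-allocated, no GC pressure
            return p.X * p.X + p.Y * p.Y;
        }
    }

In Java, the equivalent Point would be a heap object unless escape analysis kicks in; in C# the struct never touches the heap in the first place.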
In what? Bananas per second? No value on the axis means we don't know what _exactly_ is being measured, or have an idea about the overall state. They say 37% improvement but that's largely meaningless. 37% improvement of what?
If latency is 1 minute and they've shaved 22 seconds off (37%), neat. But it's still bad.
If anything, the deliberate omission of scale on that Y axis is really disturbing.
The first commit says "Vectorization of string.Equals". It now uses SpanHelpers.SequenceEqual. How is that vectorized, when from what I see it's just an unrolled loop? Doesn't vectorization mean using SIMD instructions? Or does it also mean improving data dependencies?
Maybe there is an autovectorizer in the compiler that recognizes the shape of that loop and uses SSE to do it 16 bytes at a time instead of 8 at a time?
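For what it's worth, explicit SIMD-style equality in C# can look like this with System.Numerics; an illustrative sketch, not the actual SpanHelpers code:

    using System.Numerics;

    static class VectorCompare
    {
        // Compares Vector<byte>.Count bytes per iteration
        // (16 with SSE2, 32 with AVX2) instead of one at a time.
        public static bool SequenceEqual(byte[] a, byte[] b)
        {
            if (a.Length != b.Length) return false;
            int i = 0;
            for (; i <= a.Length - Vector<byte>.Count; i += Vector<byte>.Count)
            {
                if (new Vector<byte>(a, i) != new Vector<byte>(b, i))
                    return false;
            }
            for (; i < a.Length; i++)        // scalar tail
                if (a[i] != b[i]) return false;
            return true;
        }
    }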
Bing is huge both in codesize and technologies used, but most of it is a flavor of Windows Server 2016 (soon 2019) + http.sys + C# + Razor + TypeScript for Frontend. C#/C++ for middle and lower tiers.
If you know anyone who uses https://duckduckgo.com/ or https://www.searx.me/, then you know some people who use Bing (at least indirectly). Also, some people use Bing (directly) to find porn, apparently...
Ah, yes, I too use Bing to search for images of a specific nature with some frequency. It's quite likely the best search engine available for the... higher arts.
I do not know if they have done it on purpose, but kudos to the Bing team either way.
I use duckduckgo which is bing with a skin and added features. One of those features is "bang commands" which allows you to use a single site to search thousands of other sites individually. If I want to check if my local Home Depot has the particular deck screws I need, adding !homedepot to my duckduckgo query redirects me to Home Depot's search results page. This is very useful as web sites become heavier and heavier which results in a terrible experience just getting to a search box. Ditto Amazon and most other ecommerce sites. Plus it is nice if I don't get the results I am looking for on duckduckgo to just add !g to the query and get redirected to Google's results.
Sort of? It's interesting work but I don't know if I'd frame it as "networking performance potential of .NET".
The title is "How Raygun Increased Throughput by 2,000% With .NET Core (Over Node.js)". From the article:
> In terms of EC2, we utilized c3.large nodes for both the Node.js deployment and then for the .NET Core deployment. Both sit as backends behind an NGINX instance and are managed using scaling groups in EC2 sitting behind a standard AWS load balancer (ELB).
They're comparing their Node.js vs. C# versions behind ELB and NGINX; it's barely touching the networking part of the stack directly. The gains are almost certainly from a stronger compiler story and a better concurrency story.
They say themselves:
> Node is a productive environment and has a huge ecosystem around it, but frankly, it hasn’t been designed for performance.
Definitely interesting... I think this speaks more to .NET Core having a direct-compile option that works well than to the platform as a whole.
Personally, my first pass at most things these days would be with Node.js, simply because of productivity, but it's nice to see more options for performance growth as needed.
Years ago I spent a lot of time trying to convince some of our large bank customers that we could run their stuff on Windows NT instead of SunOS or AUX or whatever by noting that microsoft.com ran on NT. I'm not even sure if it was actually true at the time, now that I think about it.
Barring any bugs, the hope is that the code generated on Windows and Linux for the same architecture will be very similar, modulo calling conventions and ABI.
Then you come down to issues like Linux networking vs. Windows networking, Disk I/O differences which are interesting but from a .NET perspective less so in my opinion.
It would be very interesting to know whether running .NET Core on Linux gives you better performance than on Windows. If there's no performance penalty, moving to Linux saves you a lot in Windows Server licenses! And if Linux is faster, there is really no reason to stay with Windows Server anymore.
They should have renamed .NET to something else. Now we have to deal with the same issues as the Python 2/3 split, but maybe even worse, as it's harder to tell which APIs are available when looking at sample code.
Bing folks, are you running this on Linux? Common problem a few years ago was that perf of the Linux version of Core sucked compared to Windows. The only way to fix that is through serious dogfooding. I’m wondering to what extent Microsoft is committed to such dogfood on Linux.
Windows Server 2016. All the improvements in the post that helped us are the same on Linux. I will agree though there's more dogfooding to be done. It's happening slowly but surely.
Exciting to hear. .NET is truly a diamond in the rough on the Linux side; its only major problem seems to be a catch-22, in the sense that few people use it there, so the ecosystem is slow to develop.
A few years ago is a long time for .NET Core; it was in its infancy then. It's a mature product these days. I've never been a MSFT-platform dev, but I enjoyed fiddling with .NET Core for a while. The only big annoyance back then was that they were caught between two package-manager formats.