Hacker News

I'll share an unpopular opinion: the .NET Core stuff is going to be a train wreck (especially ASP.NET Core). The main reason for this is development anarchy and an "all or nothing" style of problem solving.

Just an example: ASP.NET Core supports so many host servers and runtimes that nobody can tell you what it really supports. A little bit of this, a little bit of that, and in the end nothing really works. This is a huge contrast to good old ASP.NET (Classic), which just works on IIS. Want a website? Cool, IIS + ASP.NET + Nginx/HAProxy and you're golden.

Another example: ASP.NET Core has a WordPress-style request processing pipeline called middleware. The problem is: every module gets every request, and there is no way to lazily route requests based on some criteria, e.g. ".gif" is handled by one module and "gen/.jpg" by another. Ask people who use WordPress and install a lot of plugins. They will tell you that it becomes sluggish as a turtle. Why? Because every plugin handles every request, even when the request doesn't relate to that plugin at all.




It sounds like you're just making this up.

1. .NET Core and .NET Standard are all about making supported platforms very explicit. Anything that targets a given standard is guaranteed to be supported on it. A .NET Core webapp is just the same as a console app now, started and surrounded by a webserver process. There's nothing fancy about it that would cause major support issues; in fact, everything is easier now.

2. Checking routes is really fast. A few if/else statements are all you need for any custom logic in your middleware. This is really not going to be an issue even if you do a few thousand lookups per request.
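A minimal sketch of that kind of check as plain C# (the handler names and path rules here are made up for illustration):

```csharp
using System;

static class RouteCheck
{
    // Hypothetical dispatch predicate: decide which handler owns a path.
    // Each branch is just a string comparison, i.e. nanoseconds per request,
    // which is why a few if/else checks in middleware cost next to nothing.
    public static string HandlerFor(string path)
    {
        if (path.EndsWith(".gif", StringComparison.OrdinalIgnoreCase))
            return "gif-module";
        if (path.StartsWith("/gen/") && path.EndsWith(".jpg", StringComparison.OrdinalIgnoreCase))
            return "gen-jpg-module";
        return "default";
    }
}
```

A middleware would run something like this against context.Request.Path and either handle the request or pass it along.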

WordPress is a completely different beast: badly written software in an interpreted language, often running on slow webservers. It seems completely random to compare it to .NET Core.


>every module gets every request, and there is no way to lazily route requests based on some criteria, e.g. ".gif" is handled by one module and "gen/.jpg" by another.

Simply false. OWIN middleware can decide whether or not to pass execution to the next module in the pipeline. To stop processing, return Task.FromResult(0) instead of calling Next.Invoke(context).
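Roughly like this (a sketch assuming the Microsoft.Owin package; GifMiddleware is a made-up name):

```csharp
using System.Threading.Tasks;
using Microsoft.Owin;

// Short-circuiting OWIN middleware: handles .gif requests itself and
// stops the pipeline; everything else flows on to the next module.
public class GifMiddleware : OwinMiddleware
{
    public GifMiddleware(OwinMiddleware next) : base(next) { }

    public override Task Invoke(IOwinContext context)
    {
        if (context.Request.Path.Value.EndsWith(".gif"))
        {
            context.Response.StatusCode = 200;
            return Task.FromResult(0);   // done: later middleware never runs
        }
        return Next.Invoke(context);     // not ours: pass it along
    }
}
```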

http://stackoverflow.com/questions/18965809/owin-stop-proces...

Some of your other criticisms may be true, but I figure that if Java can be so ubiquitous across platforms and servers, .NET core can manage it.


Here is the code for middleware init:

  appBuilder.Map("/something/something", doit =>
  {
      doit.Use<Pipepart1>();
      doit.Use<Pipepart2>();
  });

See that doit.Use<Pipepart2>()? Such a construct implies that Pipepart2 is compiled and loaded right now. This is simply not scalable. What I really want is Pipepart2 being loaded on demand, only when a matching request is encountered.


Hmm. You seem to be talking about a couple of separate things: lazy instantiation of objects, lazy loading of modules/assemblies into the AppDomain, and early termination of OWIN middleware modules.

You attach the middleware to the appBuilder via Map (or other methods) during application startup, so it simply incurs the assembly-loading penalty at startup. An assembly is loaded the first time it is referenced in executing code in an AppDomain. This is fairly unavoidable in the .NET world, though I agree it would be preferable if it were otherwise.

Secondly, that Use call is in fact executed during the Map: it does all the configuration and setup it needs on startup and is then ready to process requests handed to it. Internally it actually gets set up on an entirely new AppBuilder that the main one delegates to, I believe, so it only gets invoked on a correct path match, not on every request; only the matching logic runs if the request makes it to that point in the pipeline.
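In other words, the configuration cost is paid once, at startup, while per-request the branch is gated by the path check — something like this (names hypothetical):

```csharp
appBuilder.Map("/images", branch =>
{
    branch.Use<GifMiddleware>();   // configured once, at startup
});
// Per request, the Map branch does a cheap "/images" prefix check;
// GifMiddleware.Invoke executes only when the prefix matches.
```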

The early termination was addressed in my previous post, of course.


Yes, assembly loading is exactly what I'm talking about.

>An assembly is loaded the first time it is referenced in executing code in an AppDomain. This is fairly unavoidable in the .NET world

Premature assembly loading was easily avoidable with Web.config, where you specified a request filter and the assembly-qualified type name of a handler. I hope it will be covered in ASP.NET Core / OWIN someday.
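For reference, the classic wiring looked roughly like this (a sketch; MyApp.GifHandler and the assembly name are made up) — the handler is named in config, so nothing is compiled into the pipeline code:

```xml
<system.webServer>
  <handlers>
    <!-- Only *.gif requests are ever routed to this handler -->
    <add name="GifHandler" path="*.gif" verb="GET"
         type="MyApp.GifHandler, MyApp" />
  </handlers>
</system.webServer>
```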

Yes, you are right about the map: it allows you to select who handles what. Still, Web.config remains a much better contender in this particular matter: it allows you to attach modules without changing code. A common scenario like adding a custom auth module to an existing web application is a breeze with Web.config.

A code-based map looks a bit clunky after experiencing the flexibility and efficiency of Web.config.


The exact problem with Web.config modules is that they are never handled in your code, only in the config. It is also very IIS-specific, which precludes it from working easily with other web servers cross-platform. With OWIN, because you start with code, nothing stops you or someone else from writing a config-file-driven middleware loader to accomplish the functionality you're looking for, e.g. app.LoadMiddlewareFromConfig("middleware.json")
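A sketch of what such a loader might look like (everything here is hypothetical, including the extension method name; a JSON version would be similar, this one just reads one assembly-qualified type name per line):

```csharp
using System;
using System.IO;
using Owin;

public static class ConfigMiddlewareLoader
{
    public static void LoadMiddlewareFromConfig(this IAppBuilder app, string path)
    {
        foreach (var typeName in File.ReadAllLines(path))
        {
            var type = Type.GetType(typeName, throwOnError: true);
            app.Use(type);   // register the middleware type with the pipeline
        }
    }
}
```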


The only thing that precludes Web.config from working cross-platform is the absence of a reliable cross-platform web server implementation. For example, I don't even consider Nginx reliable on Windows; some nasty effects show up. The best home for Nginx is Linux. The best home for IIS is Windows. Cross-platform sounds sweet in theory, but in reality it goes downhill pretty fast with subtle defects.


What's stopping you from using IIS on Windows and Nginx on Linux? It's not like you're committing to one server for one application.


Hmm, fair. I personally prefer the OWIN approach, since it allows for a lot more customization/configuration options, but I see your point.


I believe you are referring to the OWIN pipeline. Yes, every middleware object will "get" every request, but it can simply call the next middleware in the chain immediately if it wants to. All the middleware needs to do is inspect the request any way it wants. It's as lightweight as it can possibly be.

I personally advocate API-first development. The current Web API stack does this amazingly well and is a huge improvement over the monolithic ASP.NET MVC and Web Forms stacks of yesteryear. IOW: It just works, as you state about the original ASP.NET.

I suggest looking a bit deeper into the OWIN pipeline. You'd discover just how lightweight it is compared to the old monolithic ASP.NET Web Forms stack. Additionally, Web API isn't reliant on System.Web anymore if you choose to self-host; that means the stack from http.sys to endpoint code is even lighter weight.


> Yes, every middleware object will "get" every request, but it can simply call the next middleware in the chain immediately if it wants to.

This implies that a middleware must be loaded before it can decide what to do with a request. All those middlewares are loaded at the initialization stage. Now imagine a portal with a number of areas. It could easily reach, say, 100 or 200 middlewares in the pipeline, all of them compiled and loaded at init. They may reference other types too, which implies compiling and loading those as well, and so on. The website's startup time may become very heavy very soon.


The current ASP.NET stack is exactly what you're describing: hundreds of modules loaded by default, and it takes a lot of effort (and is sometimes impossible) to avoid loading what you don't need. It's still pretty fast, though.

.NET Core is the other way around: you load only what you need. Whether that's 2 or 200 modules, that's fine. This model, combined with the lightweight hosting, is what allows for the million-req/sec benchmarks now.

200 modules isn't much at all. Servers today are also very fast, so this is not going to be a problem. What exactly are you concerned about?


At best this is conjecture. You assume that 200 middlewares will run slowly. There are too many assumptions here and throughout this thread. The benchmarks for ASP.NET Core seem almost contrary to the FUD showing up around here.


> It could easily reach, say, 100 or 200 middlewares in the pipeline, all of them compiled and loaded at init.

The flexibility of the new model means you can opt into that if you want, or not. In other words, if you can get by with 1 or 2 modules, that's all you need to load. This is IMHO nicer than the current model.


I'm confused; how is this different from any other web framework? PHP supports plenty of host servers (Apache, Lighttpd, Nginx, IIS...).

Most web frameworks support middleware, and IIS does as well. You specify filters in your web.config and it invokes them, or you can filter in your modules, depending on how it's configured.

No framework can patch around a lack of developer knowledge while still supporting all the weird edge cases we encounter, like having to patch third-party software without altering it.


At first glance, .NET Core is not any different.

But when you start to dig into the details, you find out that there is no Web.config anymore. Gonsky. There is that middleware thing, which is configured from code. Filters? Not anymore.


What? Have you even read the documentation or actually used .NET Core?

Filters are still there and even better now. There's no need for web.config because project.json does everything you need. Code-based configuration is great, especially for a statically typed language. You can still load things dynamically if you want, as you always could in .NET.

You just listed changes in the framework as if they were bad without actually giving any reasons - or even being accurate.



