.NET Core RC2 – Improvements, Schedule, and Roadmap (microsoft.com)
157 points by choudeshell on May 6, 2016 | 109 comments



So happy .NET Core was delayed: a very cool idea, but trying to write a library that targeted a few platforms was an absolute nightmare. Just knowing the difference between dnxcore50, netcore50, and netcoreapp was confusing. Needing ASP.NET to run unit tests was strange when I wasn't targeting the web at all, and the dotnet CLI didn't exist yet.

The standardization around the .NET Standard seems great. dotnet as a CLI is great. Kudos to the team for having the humility to push back deadlines and deliver an end-to-end solution that actually improves .NET development.


I feel your pain. I'm trying to get Scientist.NET working and it's far from easy on RC1. The issue is linked from my list of ASP.NET Core Library and Framework Support (https://anclafs.com).


Exciting stuff!

I'm going to be sticking with 4.6 for the foreseeable future for serious stuff because I'm a big fan of ServiceStack, and Web API feels like a step backwards, even in .net core.

Even for 4.6, the development of .NET Core has still resulted in some nice gifts, such as the Task Runner Explorer in Visual Studio for Gulp/Grunt, and the auto-installing of npm and Bower packages when a package.json or bower.json file is detected.

For small/side projects though, .net core it is!

If Microsoft is still on a spending spree, they should really purchase ServiceStack and use it to make some serious improvements to Web API.


I'm surprised that Microsoft chose to go with an ecosystem that is as volatile as nodejs. By the time you read this post npm, gulp and bower will be obsolete. Grunt is already ancient news.


Both npm and Bower seem to work OK in .NET Core. My only issue is the amount of data they dump into your project. The packages take up a lot of space, so they take a while to restore, and you need to be careful not to add them to source control. For example, by default the node_modules folder takes 18MB of a 20MB project and has over 3,000 files in it.


Yes, if you focus your vision on only the new and hot tools. NPM is still the package manager for node.js and that's not going to change for the foreseeable future. Grunt is still in use by lots of people and companies and definitely has its uses, but might not be everyone's first choice for a new project. Enough with the hyperbole.


I don't think there was any choice; the forefront of web development is the Node.js/npm asset pipeline. If they chose any other solution they'd just be forever playing catch-up.


I'm not sure Web API is that bad. I usually use OData lately though (for REST queries in the URL).

But my API looks like my database objects, and some have DTOs on top of them, together with AutoMapper (or manual conversion for performance reasons).

It's separated into a different .NET project, so I have no issues with the difference between web controllers and API controllers.

I do plan to check out ServiceStack though, just because I see a lot of positive opinions here. But I'm not sure if it's enough to make me switch.

Note: Watched the presentation... I like the versioning in ServiceStack. Had trouble with it on Web API :)


As a fan of .NET Web API who has not worked with ServiceStack, may I ask what I am losing by not embracing it?


There are just too many features in https://servicestack.net to cover cohesively in a single comment. If you're interested, I'd start with the project's home page https://github.com/ServiceStack/ServiceStack which tries its best to summarize the features in a short description (with links to further details).

Essentially ServiceStack is built with a completely different messaging methodology and simplicity-focused mindset; a lot of the benefits of its approach are covered in the docs linked from the wiki: https://github.com/ServiceStack/ServiceStack/wiki#documentat...

Or if you prefer to see what ServiceStack looks like in code instead, check out the small, focused and documented examples listed at:

https://github.com/ServiceStackApps/LiveDemos

Disclaimer: I work full-time on ServiceStack.


You get to replace the over-engineered P&P crap that is WCF, Entity Framework and WebAPI with an extremely practical, elegant, fast and no-nonsense set of libraries focused on doing their specific job extremely well.

If you hate the current MS direction of having to reference a million DLLs into your project (i.e. à la npm), then you will love ServiceStack's cleaner, less cluttered approach.


Our team has been using ServiceStack for 4 years. We are still locked into the pre-licensed version, but it delivers what we need. It is a pleasure to use. We still use Web API on occasion, but every time I have to use it I'm left frustrated and longing for ServiceStack instead! Big thumbs up. It is one of the best .NET external libraries out there.


That doesn't answer the question at all. The question is "why would you use ServiceStack instead of the incumbent options?".


True, but I could see that various others had already explained the architectural benefits of it, so I was simply voicing our satisfaction. I suggest you give it a try. V3 is still free as in beer as far as I am aware. V4 has a commercial licence cost attached. I think it pays for @mythz to work on it full time.


I'll do my best to explain!

I feel as though ServiceStack was built from the ground up to be a framework for building a RESTful API, whereas Web API feels more like an afterthought to MVC.

If you follow the best practices, you end up with a well structured API from a programming perspective. - http://stackoverflow.com/a/15235822/969613

It's opinionated, and as a result you have a clearly defined place for your services, your request/response DTOs, your logic, etc. It's all very loosely coupled and testable, which I like.
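To give a flavour, a minimal ServiceStack service looks something like this (a sketch using the canonical Hello example; the names aren't from any real project):

    // Request DTO: the route and the contract live on the message itself
    [Route("/hello/{Name}")]
    public class Hello : IReturn<HelloResponse>
    {
        public string Name { get; set; }
    }

    // Response DTO
    public class HelloResponse
    {
        public string Result { get; set; }
    }

    // The service holds only the logic; wiring is done by convention
    public class HelloService : Service
    {
        public object Any(Hello request)
        {
            return new HelloResponse { Result = "Hello, " + request.Name };
        }
    }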

In contrast, Web API doesn't make it clear how to split up your MVC controllers and your API controllers. You are kind of left to your own devices, and I find that every developer comes up with their own system, and every Web API project is structured differently. With Web API, you also tend to keep all of your API controllers, models etc in the same project, which I think can become more difficult to maintain over time, and is also harder to write tests for.

For dependency injection, it comes out of the box with Funq, which I find to be really capable, but you can use any other DI framework, and it's simple to swap Funq out for your preferred one.
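Registration happens in your AppHost's Configure method; a rough sketch (IGreeter/Greeter are hypothetical types):

    public override void Configure(Funq.Container container)
    {
        // Register a dependency; ServiceStack autowires it into any
        // Service that exposes a matching public property
        container.RegisterAs<Greeter, IGreeter>();

        // Or register a specific instance
        container.Register<ICacheClient>(new MemoryCacheClient());
    }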

For ORM, ServiceStack provides OrmLite.net, which I find to be fit for purpose most of the time. If you need the more powerful Entity Framework, that's easy to swap out too, but I find OrmLite to be fast, and easy to work with.
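A rough sketch of what OrmLite usage looks like (Person is a hypothetical POCO and the connection string a placeholder):

    var dbFactory = new OrmLiteConnectionFactory(
        connectionString, SqlServerDialect.Provider);

    using (var db = dbFactory.OpenDbConnection())
    {
        db.CreateTableIfNotExists<Person>();
        db.Insert(new Person { Name = "Alice", Age = 30 });

        // Typed expressions instead of hand-written SQL strings
        var adults = db.Select<Person>(x => x.Age >= 18);
    }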

For caching, you can easily pull in Redis and use it instead of RAM for storing user sessions.
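That swap is just a couple of registrations in Configure (a sketch; the connection string is a placeholder):

    // Back sessions/caching with Redis instead of the in-memory default,
    // so they survive restarts and are shared across servers
    container.Register<IRedisClientsManager>(
        c => new RedisManagerPool("localhost:6379"));
    container.Register<ICacheClient>(
        c => c.Resolve<IRedisClientsManager>().GetCacheClient());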

There's a bunch of authentication providers; you can add authentication for Google/Facebook/LinkedIn/GitHub etc. with one line of code, then just do the configuration with the third party. - https://github.com/ServiceStack/ServiceStack/wiki/Authentica...
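The "one line" is registering the AuthFeature plugin with the providers you want; roughly (which providers you list is up to you, and the OAuth keys come from your app settings):

    Plugins.Add(new AuthFeature(() => new AuthUserSession(),
        new IAuthProvider[] {
            new CredentialsAuthProvider(),          // username/password
            new FacebookAuthProvider(AppSettings),  // keys read from app settings
            new TwitterAuthProvider(AppSettings),
        }));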

One thing I really like is validators. I know in Web API you have attribute validation on models, but validators in ServiceStack give you a lot more control. They allow you to intercept an API request before you even hit the service, and apply any logic/rules you need to.
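They're FluentValidation-style validators; a sketch (CreateBooking is a hypothetical request DTO):

    // Runs before the service is invoked; failures never reach your logic
    public class CreateBookingValidator : AbstractValidator<CreateBooking>
    {
        public CreateBookingValidator()
        {
            RuleFor(x => x.Email).NotEmpty().EmailAddress();
            RuleFor(x => x.Rooms).GreaterThan(0);
        }
    }

    // Enable the feature and register validators in AppHost.Configure
    Plugins.Add(new ValidationFeature());
    container.RegisterValidators(typeof(CreateBookingValidator).Assembly);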

Swagger is easy to configure too: basically one line of code and then drag and drop the Swagger HTML/CSS/JS, and it works. You can also document Swagger using attributes on your DTOs in C#.
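If I remember right, it boils down to something like this (CreateBooking again being a made-up DTO):

    // Requires the ServiceStack.Api.Swagger package; UI served from /swagger-ui/
    Plugins.Add(new SwaggerFeature());

    [Route("/bookings", "POST")]
    [Api("Creates a booking")]
    public class CreateBooking
    {
        [ApiMember(Description = "Number of rooms", IsRequired = true)]
        public int Rooms { get; set; }
    }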

It has a built-in mapping library, a built-in JSON parsing library (which is faster than Json.NET), and logging is easy to do as well.
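The JSON library is ServiceStack.Text; usage is just extension methods (Person being a stand-in for any POCO):

    var json = new Person { Name = "Alice" }.ToJson();
    var copy = json.FromJson<Person>();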

The developers are really good too. Literally any problem you come across, you will find a well-thought-out Stack Overflow answer by one of the main developers, usually Demis (mythz).

I'm probably not explaining this very well! Maybe somebody more articulate than myself will come along and provide a better answer, but I recommend checking it out. It's massive (they call it ServiceStack because it is literally a stack of services) and when I need to use Web API, I just find it extremely lacking in comparison.

Hope this helped!


Awesome response! Thanks, I think I'll take a look at it more in depth tonight!


For building an HTTP API, why would you tie yourself to a closed-source, non-free framework when there are so many good alternatives out there? Besides dealing with the ServiceStack licensing, you now have to deal with all the Windows licensing too.

I mean, I doubt it has some secret sauce that makes it better over everything else for building a REST API.


ServiceStack isn't closed source; the framework, IDE integrations, and client libraries are all on GitHub.

https://github.com/ServiceStack

Licensing means there are dedicated developer resources working on regular improvements all the time. Disclaimer: I am one of those resources :).

Regarding 'secret sauce', it's not a secret at all; as others have said, it has a big focus on a simple message-based approach. A recent SO question sums it up.

http://stackoverflow.com/questions/36962263/can-i-use-servic...

Edit: grammar


The guys who work on it do an incredible job, so I don't mind paying for the quite reasonable indie license to help them out.

If I was to do it myself, I would probably end up writing something similar, but not as good and with more bugs.

I work pretty much exclusively with startups so Windows licensing is covered by Bizspark.

Also, as has been mentioned, it is open source.


Whilst ServiceStack is a commercially supported product, it's not closed source. It's available under a dual commercial and AGPL/FOSS Exception (i.e. Free for OSS) licensing model and all its source code is available on GitHub. Whilst Windows is our Customers' most popular platform, ServiceStack has been running cross-platform on Linux/Mono for over 5 years and we have several customers that host on Mono, e.g: http://blog.geni.us/2016/02/25/the-geniuslink-technology-sta...

It requires paid full-time developers to further develop and support all of ServiceStack's 60 NuGet packages and multiple IDE tooling integrations. Unfortunately we don't benefit from the Windows Server Licenses or Azure Hosting that pay for development of all past and future MS frameworks, which clearly benefit from a superior business model. A commercial license is the only way we can sustain full-time development on ServiceStack; without it, it'd just end up as another framework destined for the abandoned .NET graveyard, as has nearly every other .NET Web Framework before it. I believe Nancy is the only non-MS alternative Web Framework that's still actively developed with decent market share.

As far as licensing models go, we aimed for the least-friction option with a perpetual, royalty-free, per-developer licensing model where any licensed developer can deploy anything they create with ServiceStack to unlimited servers at no additional cost. All our client libraries are also free to use to consume ServiceStack's HTTP, MQ or SOAP Services, giving ServiceStack Services the maximum utility possible.

Obviously the bar is much higher for a commercial product and we need to deliver more value and productivity to justify the cost of the License. We're fortunate that enough Customers see value in ServiceStack to let us continue making the most productive Framework and surrounding library ecosystem we can. We're even more fortunate that we're able to be 100% focused on development: we've never paid for any marketing, never paid evangelists, never paid for referrals, never called a Customer, never attended a Tech Conference or given a talk since transitioning to a commercially supported product ~3 years ago, etc. Yet we've been bootstrapped and profitable since Day 1; all the success we've had has been through word of mouth from happy Customers. Whilst we realize this isn't optimal for a commercial company and may change in future after we've completed the development tasks we've set out to do, it means that for several years we've been solely product focused and have poured all our efforts and energy into further developing ServiceStack to make it the best development platform it can be. We've seen nothing else like it, with a lot of its key features being natural extensions made possible by its message-based Services architecture, which we believe already offers compelling value and advantages over the MS frameworks you might be used to.

In short, we believe ServiceStack offers productivity and value over and above its licensing cost and is the reason why our Customers see value in it.


A lot of what you described is in the new MVC Core. It would be interesting to see a comparison with Core.


I'd be keen to see that too.

The last time I looked at Core, I couldn't even add authentication to endpoints on a pure Web API project. That was a couple of betas back though; hopefully it's gotten better.


If ServiceStack is great and useful, then it should be ported to .Net Core. I'd recommend doing that sooner rather than later.


It's currently the most requested feature request: https://servicestack.uservoice.com/forums/176786-feature-req...

Which will be our top priority after it's officially released. Unfortunately .NET Core has been such a continuous stream of breaking changes since RC1 that it's pretty much radioactive until we can be sure of a stable and complete surface area that we can bind to with confidence. We have a strong focus on, and expectation of, frequent, incremental and backwards-compatible releases, so trying to chase the .NET Core roller-coaster of breaking changes isn't an option for us.

The volatility is also holding up a lot of the .NET ecosystem and the 3rd party libraries we depend on, which are also waiting until the dust settles before committing to a port.

The end goal and promise of .NET Core is alluring, but I'm expecting the interim transition to be messy and confusing as to which 3rd party libraries are and aren't supported and how stable/complete the support is. I'm hoping that after .NET Core 1.0 is released, the focus will be on providing the necessary Framework features and support to make it easy for existing .NET libraries to support both .NET 4.x and .NET Core platforms in parallel, as there's a real fear it can fragment the ecosystem in the same way that's plagued Python 2.7/3 over the last several years. It looks like .NET Standard will be able to simplify the effort required in theory, but given how unstable everything's been thus far, it remains to be seen whether it will be complete and stable enough in practice to rely on to support existing .NET 4.5 Customers.

Anyway I hope .NET Core can deliver a stable platform and enable a smooth transition with the necessary support to enable the quick uptake of the surrounding .NET ecosystem. Not being able to confidently run production systems on Linux has hindered .NET adoption and limited its appeal to the vibrant Startup/Hacking community (like this one) for several years. If .NET Core can deliver it will be one of the best engineered development platforms available for the most popular platforms, but it needs to be great in order to compete with the momentum that the JVM, LLVM and Go have.


> If .NET Core can deliver it will be one of the best engineered development platforms available for the most popular platforms, but it needs to be great in order to compete with the momentum that the JVM, LLVM and Go have.

I would say any momentum is only among startups and HN/Reddit readers.

On my little piece of the globe it's Java and .NET as usual.

LLVM only plays a role in our iOS and Android projects.

Go is still "that language from Google".


Java/JVM has historically been more popular than .NET in OSS, Startups and Enterprises, and it also seems to offer the highest-paying Finance jobs - but this may be anecdotal. Based on experience and from what I've seen on job boards, I'd guess that .NET is likely a close 2nd for developing Enterprise LOB Apps. I've also seen a lot of interest from Ruby/Python and PHP, but this may just be regional as I don't know what demand is like outside the U.S. Java has also seen a lot of resurgence thanks to Android, as has Objective-C for the same reasons thanks to iOS.

LLVM is the backend for the Clang C/C++/Objective-C compiler suite that's quickly replacing GCC, not to mention the back ends for both Swift and Rust, and it's also used by Emscripten; its appeal and usage extend far beyond just iOS and Android.

Both Rust and Go are the most likely replacements for C/C++ in future systems programming and the primary candidates for the recent calls to rewrite existing UNIX utils in a safer, modern language.

Go is also a force on the server: https://github.com/golang/go/wiki/GoUsers - that's moved a long way from just that little language from Google. From what I've read from Hacker News and Reddit, it's getting a lot of converts from C/C++/Python/Ruby and node.js.


But that is what I mentioned.

HN and Reddit are a startup bubble. I seldom use most of the cool stuff that gets mentioned here, or see it mentioned in job adverts for that matter.

At least on the Fortune 500 corporate world that we work on.

For example, besides iOS and Android NDK projects there are zero uses of LLVM.


I had started a project with .NET Core about 8 months ago. I figured that I should start a new project using the latest goodies, and there'd not likely be THAT many breaking changes. Boy, was I wrong. I went back to .NET 4.5, and now will wait for the release before I begin using it. But I assume that projects like ServiceStack don't have that luxury, and just have to deal with all the breaking changes. Hopefully that aggravation has slowed down this year.


I was finding it difficult to work out which 3rd party libraries and frameworks supported .NET Core so I've created a simple list with the details [0]. Feel free to send a pull request to add ServiceStack [1]. I left out libraries that I couldn't find an official commitment for, such as an open issue.

[0] ASP.NET Core Library and Framework Support - https://anclafs.com

[1] https://github.com/jpsingleton/ANCLAFS/compare


I have the same question as tucaz. Can you please enlighten us on what's so great about servicestack?


Attempted to answer above, hope I was able to help!


I am concerned that .NET Core has put .NET on a very dark path.

It feels like the developers have built this from a theoretical programming paradise point of view: Everything's "just" a handler in a pipeline. All dependencies are now "micro". Nothing has a hard dependency on anything else.

The result of all this is a framework that might be beautiful at its core, but becomes gradually more prickly as you get closer to the development surface. It feels like their answer to usability is "Oh, we'll just throw together a metapackage for that." It scares me that they think they can wrap up all the complexity they've created in a way that won't lead to the developer having to dig through the .NET source to make stuff work.

It's worrisome to see how brittle the system is at this late stage. It shows that this has been built from the inside-out rather than outside-in with a focus on usability. All the ugly stuff that should be buried inside the framework has been pushed out to the edge of the system for the developer to deal with. Watch any of the recent ASP.NET streams to see numerous illustrations of this whenever they open up a Core project.

ASP.NET used to be a reprieve from all the hideousness of other tools like NPM and Node. Now I'm seeing all the same problems in ASP.NET. Have you tried deploying a DNX / Core app? Good luck if any file path is longer than 255 characters. Tried pulling in the dependencies from a project.json behind a corporate firewall? Good luck. It might work half the time, but it needs to work 100% of the time to be practical.

All these tiny metapackages will lead to dozens of projects all in intermediate undefined states, whereas in the past we had a clear distinction - that repository from 2010? It's MVC3. OK, we know how to upgrade that because the process is well-defined. I expect ASP.NET Core will be more a case of deleting all your semver numbers from project.json, executing the update command and holding your breath. I dread this the way I dread opening any Javascript/NPM project that's more than a year old.

Given the rise of Javascript clients with REST servers I question whether it's even worth sticking with Core when other languages can do the same thing with fewer pain points. I really want to see Core succeed but I'm just not sure about it given what we've seen so far.


I've used Core RC1 for a few months now and I haven't seen any of the issues you're worried about. I've had way more pain with TypeScript, VS, and node integration than anything .NET specific. From my point of view the .net piece just works, even with the RC bits.

It's just a bit more granular than before, but VS helps you a lot. All you have to do is type a class name and VS can suggest to import a library that you never included in your project. This alone, in my opinion, makes it a lot easier than before - I wouldn't want to go back.


The fact that one person hasn't experienced these issues doesn't invalidate my argument. I am sure some people will have no issues, but there will be some percentage of people who do have issues and I suspect that percentage will be quite large. I also suspect a lot of people who have not encountered issues aren't actually building software for an employer. As someone else mentioned, a lot of the more vocal people tend to be unemployed hobbyists who have no deadlines or who don't venture off the beaten path because they don't have any requirements.


You seem to suspect a lot without bringing any evidence; nothing but FUD. I can just as easily suspect that you're full of it.


I'm with Rener on this. He mentioned important areas where .NET Core falls apart. I'd like to see that change, but as of now almost nobody is going to use .NET Core for anything serious. .NET Core is selfishly designed for .NET Core developers, not for .NET Core customers.


OK, to address those: the micro packages are Microsoft packages, which means they'll be testing them thoroughly, and interdependencies between them will be recorded via NuGet. The whole "delete numbers and update" bit makes no sense given the NuGet update command.

The point of current ASP.NET Core development is that they are being open about the sausage-making process in order to avoid massive breaking changes in the future. Opening an MVC3 project has a defined update process because 3->6 is a massive update. The team is aiming to only add new functionality going forward (at least for a very long time); they don't want to have to release Core 2.0, but 1.18. Things may break at the moment, but that is almost certainly because you're hooked up to the MS CI build MyGet repository (certainly that is what the ASP.NET live streams are doing), so it is hardly surprising that things don't always work. They've just committed to 6+ weeks of fixing stuff before the RTM - stability will improve.


Have you raised bugs when you hit issues with Core?


As a developer working on a site with thousands of concurrent connections that recently went to production with RC1 and is implementing back-end applications as well, I've been looking forward to RC2. All this talk that .NET Core will never work for X or Y reasons is amusing - it works right now on large sites/projects, and has been a refreshing change for our team(s) with a minimal amount of headaches.

I would suggest trying it before knocking it at a theoretical level, especially the MVC Core stuff that combines 4.X's MVC and Web API and enhances it.


I've been using .NET Core in "production" (I get less than a thousand users a day... so not heavy production) for almost a month now. I love the platform, and I love how it keeps getting better. Glad to see this timeline, and I'm really glad to see Xamarin a part of this.


Release Candidate 1 really should not have shipped.

I had a prototype product using DNX "Release Candidate" and then DNX was killed.

It was a prototype and nothing of real value was lost. I'm not holding a grudge, and I'm sure the people at Microsoft would have avoided it if they had a crystal ball.

Personally I love .NET, find C# extremely enjoyable and look forward to RC2 but the way RC1 ended left a sour taste in my mouth.


I have been using MVC since version 2, but I will definitely not use Core for my first project. There are so many breaking changes that I cannot imagine a 1.0 will cut it.

Also, .NET Core is a clusterfuck built from an architectonaut ivory tower; just look at all the GitHub issues around DataTables. Yes, they are perhaps old and outdated, but there are a million production items relying on them: DevExpress, open-source Excel serializers, etc. After a decade in the framework they have earned a spot, and outright refusing to include them does NOT help you gain traction. Because they contained anti-patterns du jour or something, whatever. I have a ton of production code relying on them that is and has been serving me and my customers very well for the past 10 years.

Also, the DataTables discussion raised issues about the database schema. The proposal from a softie with a 5-min stab at a generic albeit typed system for covering all use cases surrounding the data and column types in a database is just mind-bogglingly naive. What is the average age of the people doing .NET Core?


People relying on features that aren't in .NET Core can keep using full .NET, and that's the intended result. Full .NET is going to be supported for the foreseeable future.

However for those projects that don't need them .NET Core is an interesting alternative especially with cloud deployment to Linux or a BSD host.

I work at a large .NET based company whose core product could not work on Core, but we certainly are considering it for newer projects. I'm also many years past being a junior developer.


Well, it's fine if features are ported later, or in separate packages, or in some other/better format, but what I understood around System.Data is that vital pieces are missing, without a clear vision on how to deal with them. And if they are not added before 1.0 it may have far-reaching consequences. See my comment below for references.


The data story in general for Core is something they have repeatedly said is going to be worked on incrementally. Entity Framework Core is explicitly being sold as a "use only if you have to run on Linux/Mac" otherwise stick with EF6.

They're also making it very clear that features can and will be added/ported to .NET Core in the future based on what developers need and want. The thing they are absolutely keen to avoid is any breaking changes from 1.0 onwards - the plan is that you'll see 1.1, 1.2, 1.3, not a semver 2.0, for a very long time. To that end, the approach to the legacy of DataTables is "how can we do this better", not trying to support decades of old libraries. The 4.6 Framework exists to do that and isn't going away if that is what you need.


Can you go into detail or point me to links about the database problems? I've not heard of this. Are they ditching System.Data entirely or something?


Datatables are not gonna make 1.0, and I think it is underestimated how much they are still in use:

A long read which gives a nice overview: https://github.com/dotnet/corefx/issues/1039

DbProviderFactory: https://github.com/dotnet/corefx/issues/4571

Schema stuff: https://github.com/dotnet/corefx/issues/3423 https://github.com/dotnet/corefx/issues/5024

I don't care that much for schema as I'm not an ORM writer, but I have used DataTables a lot as a convenient data-mangler type for importing legacy data, converting data, etc. They may be "dirty", obsolete or whatever, but they get the job done.


I read it all. Seems like schema is more-or-less getting taken care of.

But I don't get the big problem here. If someone _needs_ DataSet/Table, why don't they just pull in the reference source or Mono? That beats bringing that into the new Core API and avoids bloat. If it turns out it's too critical, put it in 1.1 or something.


Mono might work, but the reference source license doesn't permit you to do what you suggest. You can't copy the reference source code into your project and use it. They're very specific about what "reference use" means.


Did you intentionally pick the one thing I didn't say was broken? Friggin' writev() still doesn't work, Scott! That breaks about 20,000 tools, often subtly. The fix somehow didn't make it into the latest insider build.

And I can't tell you how much I look forward to updating every Windows component then rebooting twice so I can maybe run cmake in a console window. Geeze, Scott, you've lost the rabbit.


You can't be personally uncivil like this on HN. We ban users who do it repeatedly, so please stop doing it.

We detached this comment from https://news.ycombinator.com/item?id=11701610 and marked it off-topic.


I'll share an unpopular opinion: .NET Core is going to be a train wreck (especially ASP.NET Core). The main reason for this is development anarchy and an "all and nothing" style of problem solving.

Just an example: ASP.NET Core supports so many host servers and runtimes that nobody can tell you what it really supports. A little bit of this, a little bit of that, and nothing really works at the end. This is a huge contrast to good old ASP.NET (Classic), which just works on IIS. Want a website? Cool, IIS + ASP.NET + Nginx/HAProxy and you are golden.

Another example: ASP.NET Core has a WordPress-style request processing pipeline, which is called middleware. The problem is: every module gets every request and there is no way to lazily route them based on criteria like ".gif" is handled by one module and "gen/.jpg" by another. Ask people who use WordPress and install a lot of plugins. They will tell you that it becomes sluggish as a turtle. Why? Because every plugin handles every request even when the request does not relate to the given plugin at all.


It sounds like you're just making this up.

1. .NET Core and .NET Standard are all about making supported platforms very explicit. Anything that works against a certain standard is guaranteed to support it. A .NET Core webapp is just the same as a console app now, just started and surrounded by a webserver process. There's nothing fancy about it that would cause major issues in support; in fact everything is now easier.

2. Checking routes is really fast. A few if/else statements are all you need for any custom logic in your middleware. This is really not going to be an issue even if you do a few thousand lookups per request.
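In ASP.NET Core terms that check is just a path test at the top of the middleware; a sketch (RC-era APIs, and the /images path is made up):

    app.Use(async (context, next) =>
    {
        if (context.Request.Path.StartsWithSegments("/images"))
        {
            // handle image requests here
            await context.Response.WriteAsync("handled");
        }
        else
        {
            await next();   // cheap hand-off for everything else
        }
    });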

Wordpress is a completely different combination of badly written software in an interpreted language often running on slow webservers. Seems completely random to try and compare to .NET Core.


>every module gets every request and there is no way to lazily route them based on criteria like ".gif" is handled by one module and "gen/.jpg" by another.

Simply false. OWIN middleware can determine whether to pass execution to the next module in the pipeline or not. Simply return Task.FromResult(0) rather than calling Next.Invoke(context).

http://stackoverflow.com/questions/18965809/owin-stop-proces...
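A minimal sketch of that short-circuit pattern (the .gif check is just for illustration):

    using System.Threading.Tasks;
    using Microsoft.Owin;

    public class GifMiddleware : OwinMiddleware
    {
        public GifMiddleware(OwinMiddleware next) : base(next) { }

        public override Task Invoke(IOwinContext context)
        {
            // Not ours: hand straight off to the next module
            if (!context.Request.Path.Value.EndsWith(".gif"))
                return Next.Invoke(context);

            // Ours: handle and stop; Next is never invoked
            context.Response.ContentType = "text/plain";
            return context.Response.WriteAsync("handled");
        }
    }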

Some of your other criticisms may be true, but I figure that if Java can be so ubiquitous across platforms and servers, .NET core can manage it.


Here is the code for middleware init:

    appBuilder.Map("/something/something", doit =>
    {
        doit.Use<Pipepart1>();
        doit.Use<Pipepart2>();
    });

See that doit.Use<Pipepart2>()? Such a construct implies that Pipepart2 must be compiled and loaded right now. This is simply not scalable. What I really want is Pipepart2 being loaded on demand, only when the corresponding request is encountered.


Hmm. You seem to be talking about a couple of separate things: lazy instantiation of objects, lazy loading of modules/assemblies into the AppDomain, and early termination of OWIN middleware modules.

You attach the middleware to the appBuilder via Map (or other methods) during application startup, so it will simply incur the assembly penalty at startup. An assembly is loaded the first time it is referenced in executing code in an AppDomain. This is fairly unavoidable in the .NET world, though I agree that it would be preferable if it were otherwise.

Secondly, that Use thing is in fact executed during the Map - it does all the configuration and setup that it needs to do on startup, and is ready to process requests that are handed to it. Internally it actually gets set up on an entirely new AppBuilder that the main one delegates to, I believe, so it only gets invoked on a correct path match, not on every request; only the matching logic runs if the request makes it to that point in the pipeline.

The early termination was addressed in my previous post, of course.


Yes, assembly loading is exactly what I'm talking about.

>An assembly is loaded the first time it is referenced in executing code in an AppDomain. This is fairly unavoidable in the .NET world

Premature assembly loading was easily avoidable with Web.config where you specified the request filter and assembly qualified type name of a handler. Hope it will be covered in ASP.NET Core / OWIN someday.

Yes, you are right about the map: it allows you to select who handles what. Still, Web.config stays a much better contender in that deep matter: it allows you to attach modules without changing code. A common scenario like adding a custom auth module to an existing web application is a breeze with Web.config.

Code-based map looks a bit clunky after experiencing the gifts of Web.config flexibility and efficiency.


The exact problem with Web.config modules is that they are never handled in your code, only in the config. It's also very IIS-specific, precluding it from working easily with other web servers cross-platform. With OWIN, by starting with code, there is nothing stopping you or someone else from making a config-file-driven middleware loader to accomplish the functionality you're looking for, e.g. app.LoadMiddlewareFromConfig("middleware.json")
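To be clear, LoadMiddlewareFromConfig doesn't exist; here's a sketch of what such an extension might look like (reading one assembly-qualified type name per line instead of JSON, to keep it short):

    using System;
    using System.IO;
    using Owin;

    public static class MiddlewareConfigExtensions
    {
        public static IAppBuilder LoadMiddlewareFromConfig(
            this IAppBuilder app, string path)
        {
            foreach (var line in File.ReadAllLines(path))
            {
                var type = Type.GetType(line.Trim());
                if (type != null)
                    app.Use(type);  // Katana's convention-based Use(Type) overload
            }
            return app;
        }
    }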


The only thing that precludes Web.config from working cross-platform is the absence of reliable implementation of a cross-platform web server. For example, I don't even consider Nginx reliable on Windows; some nasty effects are in place. The best home for Nginx is Linux. The best home for IIS is Windows. Cross-platform sounds sweet in theory but in reality it goes down hill pretty fast with subtle defects.


What's stopping you from using IIS on Windows and Nginx on Linux? It's not like you're committing to one server for one application.


Hmm, fair. I personally prefer the Owin approach, since it allows for a lot more customization/configuration options, but I see your point.


I believe you are referring to the OWIN pipeline. Yes, every middleware object will "get" every request, but it can simply call the next middleware in the chain immediately if it wants to. All the middleware needs to do is inspect the request any way it wants. It's as lightweight as it can possibly be.

I personally advocate API-first development. The current Web API stack does this amazingly well and is a huge improvement over the monolithic ASP.NET MVC and Web Forms stacks of yesteryear. IOW: It just works, as you state about the original ASP.NET.

I suggest looking a bit deeper into the OWIN pipeline. You'd discover just how lightweight it is compared to the old monolithic ASP.NET Web Forms stack. Additionally, Web API isn't reliant on System.Web anymore if you choose to self-host; that means the stack from http.sys to endpoint code is even lighter weight.


> Yes, every middleware object will "get" every request, but it can simply call the next middleware in the chain immediately if it wants to.

This implies that middleware must be loaded to be able to decide what to do with a request. All those middlewares are loaded in the initialization stage. Now imagine a portal with a number of areas. Surely it can reach, say, 100 or 200 middlewares in the pipeline. All of them are compiled and loaded at the init stage. They may reference other types too, which implies compiling and loading those as well, and so on and on. The website startup time may become very heavy very soon.


The current ASP.NET stack is exactly what you're describing: hundreds of modules loaded by default, and it takes lots of effort (and is sometimes impossible) to not load it all if you don't need it. It's still pretty fast though.

.NET Core is the other way around and you load everything you need. If that's 2 or 200 modules, that's fine. This model combined with the lightweight hosting is what allows for the million req/sec benchmarks now.

200 modules isn't much at all. Servers today are also very fast, so this is not going to be a problem. What exactly are you concerned about?


At best this is conjecture. You assume that 200 middlewares will run slowly. Too many assumptions here and throughout this thread. The benchmarks for asp.net core seem to be almost contrary to the FUD showing up around here.


> Surely it can reach, say, 100 or 200 middlewares in the pipeline. All of them are compiled and loaded at the init stage.

The flexibility of the new model means that you can opt into that if you want, or not. In other words, if you can get by with 1 or 2 modules, then that's all you need to load. This is IMHO nicer then the current model.


I'm confused, how is this different to any other web framework? PHP supports plenty of host servers (Apache, Lighttpd, Nginx, IIS...)

Most web frameworks support middleware; IIS does as well. You specify filters in your web.config and it invokes them, or you can filter in your modules depending on how it's configured.

No framework can patch around lack of developer knowledge while still giving the support to all the weird edge cases we encounter, like having to patch third party software without altering it.


From a side view, .NET Core is not any different.

But when you start to dig into the details, you are going to find out that there is no Web.config anymore. Gonsky. There is that middleware thing which is configured from the code. Filters? Not anymore.


What? Have you even read the documentation or actually used .NET Core?

Filters are still there and even better now. There's no need for web.config because there's project.json which does everything you need. Code based configuration is great, especially for a statically typed language. You can still load things dynamically if you want, like you always could in .NET code.

You just listed changes in the framework as if they were bad without actually giving any reasons - or even being accurate.


Like early OneDrive and Windows 8, I really hope a couple people got fired over this debacle. But like today's OneDrive, Windows 10 and .NET Core, I'm glad it's finally moving forward. I know HN has an anti-Microsoft bias from pg on down, but this is serious, proven tech that's moving in the right direction. Say what you want about MS, but their R&D budget is larger and more focused on developers than any other company, decade after decade.


Maybe my C# dev bias is showing, but I have to second SHanselman's question: what debacle? They've been working on it solidly for a couple of years now with good progress and transparency in the Github repositories. Hell, MS is even putting their C# language design meeting notes online - you'd have never seen something like that 10 years ago.

If you're talking about the tooling not being ready, I think this is a far better outcome than just holding off release until all the tooling is ready. I'll personally be sticking with ASP.NET 4.6 for a while, but once we've got a stable VS integration, probably move at some point.


You've been working with .NET core for a couple of years? And found it solid? It was announced 17 months ago and it's been a frustrating clusterfuck to follow, with the product and the core tools themselves renamed at least twice.

C# guy here! Love .NET! And found the open development wankery around .NET core (or whatever they finally call it) incredibly frustrating.

EDIT: Putting strategy documents online, documentation online, and then throwing it away isn't helpful. Maybe you dig the churn. Maybe that's why you're sticking with .NET 4.6. If you're in the trenches trying to build stuff against these bits (especially after they were called RC), you may have a different opinion on the impact of the way this was handled.


>You've been working on .NET core for years? And found it solid?

Please learn to read the text on your screen before commenting.


Wait, what debacle? Why do you want us fired?


Yeah, no debacle whatsoever.

And Scott, don't forget to give F# some love here, especially as it is not on Roslyn, and it's important its project system stays current with whatever the son of MSBuild becomes.

When .NET Core really starts to take hold outside of the Windows environment, and it will, there are people in those environments looking for a better functional-first programming solution than what they have, and it will be F#.


Totally agree on F#. I'm 100% paying attention to C#, VB, and F# and making sure everyone kicks butt.


Totally honest question: in the world of Javascript, Python, Ruby, etc. is Visual Basic still relevant? I understand the desire to have it around to drag VB devs to .NET, but is there a market anymore? Wouldn't it be better to just throw it overboard and focus on C# / F# and maybe Typescript as a JS superset?



There's one pretty cool feature that VB.NET has that C#/F# don't, and that's XML literal syntax... I wrote a service interface to a Flex client a few years ago (Flex also having an XML literal syntax) and it was pretty nice going. Of course E4X didn't take hold and the syntax didn't make it into C#, but it was nice at the time.

These days I'd lean towards JSON.Net before anything XML based if I can avoid it. WCF could use a little love too.


Pretty much.

For example, we have a customer in the health care area, where many of their researchers are using VB.NET for prototyping.

The type of stuff people on HN would use Python or R for, they use VB.NET.


Thank you for what you do for the .NET community! Especially for what you guys are doing with https://live.asp.net. I really enjoy all the discussions and banter between you, Jon, and Damian. It brings a much-needed human element to these big framework changes. I do feel like there have been many surprises along the way, but you guys handled them just fine.

Just try to get some sleep Scott. ;)


You do great work. But it is sad seeing the people that run Windows licensing and OneDrive destroy all the good you do.


Not really sure exactly where OP is coming from, but kudos to you guys for all the hard work. Just worked on a MVC6 app recently and I really like the new DI and front-end tooling features. Thanks!


Scott, love what you guys are doing, keep it up! I did C# for 7 years and recently switched to Python but with the latest, thinking about switching back.

Couple of questions: 1) will non-Windows platforms be fully supported and performant in RC2 and RTM? 2) is there a good place to follow along with the non-Windows work?


I owe you a long, private email (I'm a fan of your work... and an ex-Microsoft dev manager) but the way this was handled was not good. An RC that wasn't, public fumbling, throwing away schedules, and you can't even come up with a sane name for the damn thing. Too early, too weird, and poorly managed. For such an amazing, important technology that's such a massive leap forward in size and interoperability!

It smelled like OneDrive. And I'll provide this quote to demonstrate the type of insanity that leaks out of my beloved Redmond on occasion...

"Prior to Windows 8.1, we had two sync experiences. One used on Windows 7/8/Mac to connect to the consumer service, and a second sync engine to connect to the commercial service (OneDrive for Business). In Windows 8.1 we introduced a third sync engine..."

Current beef is the BashOnWindows alpha. Why was this pushed out so early? Quite literally nothing works on it. The forward progress even in the past few weeks is impressive but... it's bad. Ubuntu Trusty was a weird release to begin with, and now it's at a weird point in its lifecycle where it's very difficult to get modern versions of gcc, clang, Python 3.5 or even ffmpeg. Not that you could run cmake anyway. But meanwhile you shipped bits where "apt-get update" itself failed right out of the box. I don't get it.

EDIT: HN is limiting my ability to reply to you Scott but my frustration with Bash on Windows is both. The quality of the early bits is poor and I find the timing weird. Likewise Trusty itself is at a maximally frustrating stage of its lifecycle for anyone new jumping in (Stack Overflow is already filling up with newbie Bash/Ubuntu questions that are often tied to Windows bugs. And geeze, just let the insanity of that sentence sink in.) From a technical perspective I don't love the alpha. From a dev/engineering/coder perspective I don't love Trusty. From a strategic perspective, just like .NET Core, I'm wondering WTF is going on, thus the rant.


Maybe we'll have to agree to disagree. The public asked for open and we gave them Open with a pretty capital O. When this project started we didn't know we were buying Xamarin, so that required a pivot. Yes it looked messy because it was messy. It's hard to be Open AND Organized. Node is a mess, remember io.js? Software Development is messy and this was a peek into the kitchen.

I can't speak to OneDrive, but it's clearly a problem. Unfortunately, I work in DevDiv, not Windows.

As far as Bash is concerned, I think "literally nothing works" is not fair. I helped with this release. I presented on it for 90 minutes this morning and installed and brought in build-essentials, worked on redis-server, g++, ruby, and it worked fine. Yes there's rough areas, but we can update it often with WU. Also, you're complaining about Trusty in the same breath as Bash on Windows. You'll be able to update to 16.04 later so that might help. And, you're certainly able to Hyper-V any Linux and SSH in as well.


I for one have enjoyed the dnx/dotnet cli journey and have engaged in communities I never knew existed. Thanks.


Keep up the good work, I love being able to see and play with the beta or alpha code. Things may not work perfectly, but that is why it's not production code.


I'll give a polite tip of the hat but warn that cutesy redis demos are starting to wear thin. The C++ support in the tools you get after installing build-essentials on Trusty smells like Visual Studio 2010.

As for .NET core, you'll get plenty of warm fuzzy "just happy to be part of the journey" crap here but don't forget that for every unemployed enthusiast chatting in an issues thread on github there are 100 professionals working their asses off trying to ship solutions. That's why you exist. Don't lose that.

"Things may not work perfectly, but that is why it's no production code." Ugh. They called it a Release Candidate! And it was... crap.

Tools support isn't like horseshoes, almost doesn't count. Likewise you need breadth and depth with your Linux support or this is going to be a total friggin' debacle for devs.


So is it BashOnWindows that is the problem, or a frustration with Trusty itself as a distro?


I'm in the bracket of the "100 professionals working their asses off trying to ship solutions" as my day job. What have the "unemployed enthusiasts" chatting on a thread got to do with me shipping solutions? Let the children boogie and I'll join in my free time :)


What demos do you want? If redis is "cutesy", what isn't?


Something hip like one of the neural toolkits? I guess that would be hard to do since you can't install CUDA, R packages or the JDK at the moment. Maybe run Docker? Nope. Heck, I'd settle for being able to run tar or rar. Those don't work either. Cmake? Nope, broken. Valgrind? nope. Mono? nope.

He's welcome to run the same redis demo he does every tradeshow and pretend everything's hot and ready for action, but it's misleading at best.


We're focusing on mainstream developer scenarios to start with (esp. Ruby, Java, Python, etc); we'll get to more advanced, esoteric, and exotic technologies later.

FWIW, many core Linux tools (e.g. tar, gzip/gunzip) work* well, and tools like gcc/g++, Mono and CMake work* well in current insiders builds.

* By "work", we mean, they work in our scenario testing. If you find issues, please log bugs at https://aka.ms/winbashgithub.


I'm running the latest insider build. The bugs reported are based on personal experience and I verified they were currently open on GitHub as well before I posted. Tar hangs, cmake can't find a compiler, Valgrind goes nuts, and mono doesn't run. This is right at the top of the current issue list on GitHub with Microsoft annotations confirming the bugs.

I do what I can to report issues but frankly I do pretty basic development and hit an immediate brick wall with this stuff. I can't believe I'm celebrating Cygwin both for stability and breadth of packages. It's nuts.

I don't expect you guys to demo tensorflow. I'm simply saying that getting Redis limping borders on false prophecy.


"He's welcome to run the same redis demo he does every tradeshow and pretend everything's hot and ready for action, but it's misleading at best."

Here, I just built TensorFlow on my Surface while sitting here in an airport. Here's a screenshot: http://i.imgur.com/WlNNuVt.png

I'll try some more complex TensorFlow examples on the plane.

I'm sorry you're having (or had) issues with the build on your machine, but your negativity is kind of a bummer. We're happy to help chase down filed bugs.


writev() still doesn't work, Scott. That breaks countless tools, often subtly. The fix somehow didn't make it into the latest insider build.

My solution? Wait for Windows update to change every Windows component, in hopes that I can then maybe run cmake in a console window.

Like I said, I appreciate the recent progress, but this is exactly the type of goofy situation you used to jump up and down about years ago.


Who are you?


UPDATE: Looks like the TensorFlow MNIST (no GPU) test data set works: http://i.imgur.com/n4zDz3a.png


Have you got any example tar files that definitely hang on the latest build? I'm sure I've been able to tar -xf on the first and current builds.


I did a "make dist" and then tried to untar. It hung two builds ago. This build it hangs -- or worse, the files are corrupted. It's sort of ridiculous.

EDIT: Maybe it's bug #313? Anyway the Microsoft I used to know wouldn't release stuff like this. Step 1 fire all the testers. Step 2 "let the community vote." Step 3 is not profit.

https://github.com/Microsoft/BashOnWindows/issues/313


I absolutely love the Open and transparent way of developing software and the progress you've made is amazing.


Timing is a "problem" completely in your control - just don't use it.

Also applies to the other problem of it being bad or you not liking it - just don't use it.

I do agree that OneDrive sucks and is the worst syncing software available.


When I am presented with a tool, a ship date, and a release candidate -- I set my timing and expectations accordingly.

When those expectations are not met, you're correct -- I don't use it. I also provide what the Redmond boys call "feedback." Having worked there it's easy for me to spot the knucklehead behaviors they tend to fall into. Inability to name or ship a product plus goofy tech decisions (like shoving everything they can't quite figure out into this "middleware" hole) usually means developer pain ahead.


PM for Bash on Windows here. In answer to some of your questions:

"Current beef is the BashOnWindows alpha. Why was this pushed out so early?"

We released this feature early because we wanted to get it into the hands of the community in order to learn how they'd use it, what tools they'd run on it, how it would stand-up to real-world use, etc. And the community has been AWESOME, filing lots of bugs (https://aka.ms/winbashgithub) and engaging in an overwhelmingly positive and supportive manner. Thanks to all of you that have provided your feedback.

"Quite literally nothing works on it."

This is just not accurate. There are a large number of things that do work surprisingly well on Bash/WSL, even in this early stage in its development. While Scott and I were on Skype, prepping for //Build, I curl'ed the Redis source tarball, unzipped it, apt-get install build-essential, built, and ran Redis, within minutes, without any modifications.

Are there many things that don't work? Yes. Absolutely. We've been very, VERY transparent about this and, in fact, state explicitly in our intro video (https://aka.ms/winbashav) and our announcement (https://aka.ms/winbashann) that there will be many gaps and that lots of things won't work.

But let's be fair here, we're talking about an early beta feature in early builds of a beta OS that you choose to download for free. If you don't know that you're going to experience issues, you should not be running Insiders builds.

"Ubuntu Trusty was a weird release to begin with"

No, it's not.

That you don't like it is merely a reality of the Linux ecosystem: ask 100 Linux devs which distro to use and you'll get perhaps 20 suggestions - Ubuntu, Debian, Arch, Fedora, CentOS, RedHat, SUSE, Gentoo, etc.

WSL is designed to be distro-agnostic, but we had to start somewhere, so we chose a stable, supported distro that is very popular with developers - Ubuntu 14.04. We plan on supporting other distros as we expand and improve our implementation of WSL.

"it's very difficult to get modern versions of gcc, clang, python 3.5 or even ffmpeg"

Really? You're suggesting you don't build everything from source??? ;)

Realistically, we're aiming to benefit the most developers possible with this first release - there are far more developers using 14.04 right now than newer versions. And there's FAR more existing code targeting 14.04-era toolsets than the latest and greatest.

We'll get there, but you'll have to be just a little patient while we improve the breadth and depth of WSL's implementation.


Is there any way to restore Bash on Windows to its factory settings as it were?

I've enjoyed playing around with it, although I will echo what others are saying in that it certainly does still seem rather broken.

It would be nice to have a reset switch of some sort for when it breaks.


lxrun /uninstall /full (then reinstall by typing "bash" again)

... but even that seems to have bugs. It doesn't manage to delete all the files in the home directory!

https://github.com/Microsoft/BashOnWindows/issues/310

They will get this stuff working, and it will be great. Because Microsoft. But whatever they're doing now is just stupid. I worked with some pretty chaotic teams at MS in high pressure situations and development was never approached like this nor was it this "messy." They're pretending this is how the sausage is made. This is not sausage.


First of all: software development 101. Is WSL even remotely feature-complete? So let's not call it a beta.

I'm not sure why you guys have such a hard-on for Redis, but... whatever. It is clean code that compiles on old toolsets. Much other code does not. Keep rocking those Redis demos, setting and getting values is fascinating. :)

Debating distros is pointless but I have noticed that Ruby (and oddly, Microsoft folk) tend to land on Ubuntu. Fine, no big deal. But Trusty was a weird release for them -- and the Canonical packages sort of suck. It's mid-2016. How much C++14 support would you expect to get from installing build-essentials? Shouldn't it compile code I'm writing in Visual Studio 2013? Would you expect to have Python 3.5 available? Would you expect to be able to type "apt-get install ffmpeg"? For various reasons these basic things do not work on Trusty. And then you made it worse by shipping bits that didn't even handle apt-get properly.

I'm not anti-Ubuntu other than to point out every bag o' bits reaches a point in its lifecycle where it's a total pain in the ass to set up a modern development environment. Trusty is past that point. It's an unfortunate thing to launch with. Not being able to move forward is also unfortunate. Don't punt on that, trust me. If WSL actually worked, people would be screaming about that instead of all these noise bugs. You'll see.

If you want to triage this, get Canonical to offer some more modern packages. Again, I'm not sure why Microsoft hopped into bed with Canonical (especially on the Azure/container side)... but ok. Now get them to ease some pain. I shouldn't have to install a half-dozen shady PPAs to try the stuff that hits the HN homepage every day.

And I truly despise this "let's see what the community wants" fake project management technique. You know damn well what features you need to implement. You knew damn well what C99 stuff was needed 10 years ago. "Send us a note if it doesn't work" is lazy, shortsighted and damaging to all parties. It's community management, not engineering. Project Management via squeaky wheel is just stupid in 2016. It didn't work prioritizing C99 features and it's a particularly stupid way to implement a shim library for Linux userspace. This is Open development done wrong.


Literally the only person I've met that's made an issue out of this.



