While I think these C#7 proposals are all cool features (Non-null, tuples, pattern matching), it feels almost like it's too late to make improvements of this kind to C#. We have had F# for a very long time, do we really need non-null and pattern matching and so on in C#? The features sure make sense for C#, but do they make sense now?
As more and more features come into C#, all of the BCL would have to be updated to use them if they are to deliver their full value. When generics entered in .NET 2.0, the generic and non-generic collections API were both included in the BCL, and they still are. I can't see how it would be as easy to make the BCL take full advantage of proper tuples, non-nullables etc., in a way that doesn't feel like a complete afterthought.
I'm all for adding features, and I'd be happy to get breaking changes and update thousands of lines of code to get them. But I fear that isn't going to happen; instead things will come in as optional features, the BCL won't be updated to return tuples where it should (i.e. wherever it would have if the feature had been around forever), and so on.
F# syntax is too verbose for OOP. Actually, the functional parts are too, like lambda syntax, LINQ, etc. Unless that's improved, C# is bound to remain the mainstream choice.
Can you explain? In C#, for example, type inference barely works. You have to do e.g.:
Func<int, int> inc = x => x + 1;
In F#, this is just:
let inc x = x + 1
Based on a simple test (I coded the same thing in both languages), C# requires about 20x more type annotations. Fields, methods, type parameter constraints -- C# simply doesn't implement type inference in most places.
You're right that F#'s lambda operator (fun) should be shorter. I'd prefer \. i.e. map (\x.x+1)
But verbosity isn't the reason C# devs don't switch to F#. Hell, it's still a "thing" to decide if using "var" is ok.
That's not a "problem" in type inference, that's a design decision: lambda syntax can represent either delegates or expression trees. This is how LINQ-to-SQL can take a LINQ expression and turn it into a SQL statement.
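To make that concrete, here's a minimal sketch (class and variable names are mine, just for illustration): the same lambda text compiles to either executable code or inspectable data depending on the target type, which is why the compiler can't just infer one:

    using System;
    using System.Linq.Expressions;

    class LambdaDemo
    {
        static void Main()
        {
            // Compiled to IL: a delegate you can invoke directly.
            Func<int, int> asDelegate = x => x + 1;
            Console.WriteLine(asDelegate(41)); // 42

            // Compiled to data: an expression tree a query provider
            // (e.g. LINQ-to-SQL) can walk and translate to SQL.
            Expression<Func<int, int>> asTree = x => x + 1;
            Console.WriteLine(asTree); // x => (x + 1)

            // var inc = x => x + 1; // error: no target type to pick from
        }
    }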
Yeah, I know the reasoning, I just find it faulty. It's confusing to not know if your code is code or turned into an expression tree.
So with that, local functions will have different syntax than normal functions? I don't envy y'all on the C# team; it's gotta be difficult to try to advance it while maintaining backcompat.
I just wish MS would put more money and clear marketing into F# instead of pretending like hacking up C# syntax is the only way to continue.
That's an excellent post, and Eric's a smart guy, yet he misses the real reasons.
Eric's wrong about Swift, and the reason is illuminating. Swift has the full, unconditional support of Apple. Look at the splash pages for Swift[1][2]. There's no comparable page for Obj-C. The docs on Swift[3] make it clear that you're totally set and don't need to worry. It's obvious to any developer that choosing Swift is a good, safe, correct choice and you won't be left hanging. Microsoft chose not to do this for F#.
Now compare to how MS markets C#[4] vs F#[5]. C#'s clearly sold as the general dev language for "rapid application development", whereas F# is to "solve complex problems ... such as calculation engines and data-rich analytical services". This sums up MS's attitude, which percolates to customers. F# is just for hyper-intelligent folks doing mathy stuff - not for us normal devs! C# should instead be pitched focusing on its "familiarity and userbase" just like VB has "English-like syntax which promotes clarity"[6].
Inside VS and .NET overall, it's clear that F# is simply not getting the same level of support. So while Eric might be pointing out that "normal people", meaning the bulk of MS customers, are staying with C#, this is just MS fulfilling its marketing goals. MS refuses to reassure customers that F# can and should be considered for any application where C# is considered.
Until that attitude changes, I would not be surprised if customers continue to do what MS says to do. I doubt it'll change. C# has no real competitor (Java's lightyears behind). They've got a proven track record of ignoring F#. Add in the politics and face issues going on (going off the history as well as comments from third parties that have been involved on both sides) and well, what can we really hope for?
I'm not sure \ is better than "fun", fun does have a certain clarity when reading code, and it's only three extra keystrokes if you include the space, all easier to type than backslash which needs a pinky stretch. Honestly I could go either way though.
Also note that C# loses brevity as soon as there are more args than one (such as with commonly used Seq.iteri or folds) because of having to type the "(,)" for C# (the parens annoyingly requiring shift keys) instead of just a space.
As for fields and other class-level type declarations not using type inference, I guess that's a very conscious decision, considering that those constitute an API, which should be carefully changed, if at all. Just having a method return type silently change when you change something in the method body sounds a bit off to me in that context.
I don't think so. Eric Lippert says it's due to internal compiler limitations[1]. Though now that they rewrote the compiler and haven't fixed it, who knows. Rust (incorrectly) deliberately requires type annotations for top-level items in the name of clarity. (Which, ironically, makes it more difficult for beginners, as they now have to make sure they got the type syntax correct just to write functions. Plus it adds visual noise that should be up to the programmer.)
As for things changing with type inference... if you run into that problem then add annotations. I'd be surprised if things continue to build after changing e.g. return types. I'd imagine this would only really be a problem with types that have implicit conversions (yet another reason to not have implicit conversions.)
I doubt your experience with Rust if you refer to its decision to opt for function-local type inference as "incorrect". Whole-program type inference works for scripts, but for programming in the large it's sheer horror. I want my functions to provide certain contracts, and I don't want to have to read the function body to ensure those contracts are upheld (and neither does the typechecker). Typed function signatures keep errors local and are the feature that keeps implicit return values from becoming a footgun.
Most of my experience is with F#, but I've simply not found it to be a problem. At module boundaries, I'm free to annotate as much as I want. The point is that it's up to me to decide what I need. I'd be surprised to find many cases where return types change around and the program still compiles but is now incorrect.
Ideally, items and expressions would have the same syntax and usability. I find in F#, I just start writing a long function with lots of nesting, then copy pieces out to refactor as needed.
Well, that's disappointing. It'd be nice if C# allowed more succinctness, and eliding type annotations is a major part of that. It feels like those aspects of C# were chosen just to implement LINQ, then forgotten.
Also don't mistake my tone in my comments. While I am annoyed with the direction MS is taking, C# is still obviously better than most languages and the tooling is fantastic. I'm happy it is still being improved, despite detractors saying it's too kitchen-sinky or changing too fast.
Unfortunately, since people are dumb[1], we're stuck with C#. So they might as well improve it as much as possible. This feature could be layered on top of the BCL as extra metadata. Sorta like TypeScript allows you to add type annotations to third-party libraries, C# could take that approach with null. (Yes, this is sorta ugly.)
With tuples, C# can take F#'s approach and special-case common patterns (like TryParse that use out parameters). There's probably only a few patterns where this matters and you'd get most of the benefit. (And I suppose you could always provide annotations to note how to match certain non-tuple-returning functions.)
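To make the TryParse point concrete, a small sketch (the tuple-returning shape at the end is hypothetical, just to show what could be special-cased):

    using System;

    class TryParseDemo
    {
        static void Main()
        {
            // Today's out-parameter pattern in C#:
            int value;
            if (int.TryParse("42", out value))
                Console.WriteLine(value);

            // A hypothetical tuple-returning shape C# could
            // special-case instead (not real syntax today):
            //   (bool ok, int v) = int.TryParse("42");
        }
    }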
1: There's really no reason to use C# when we have F#, unless the app is some crap LOB app where programming style/syntax would be the highest barrier to entry. Even then I am suspicious. Anyways there's a lot of untalented developers that apparently cannot cope with FP, or much of anything really, and managers that are scared of different languages.
I completely agree with this proposal, especially since this seems to be opt-in, otherwise a lot of "probably bug free" code would suddenly break.
If this makes it into 7, we should at least attempt to convert a lot of code to this to cut down on an entire class of bugs.
Microsoft is making me happy that they're taking the whole "languages should make the job of the programmer easier, not harder" philosophy seriously.
I only used C# for a few months, a few years ago, but I immediately loved the language. I wasn't thrilled about the documentation (Why aren't return types listed in [1]?), VS (OK, vim key bindings made more bearable), or just dealing with the MS ecosystem in general, but I loved the language.
It felt like everything Java was supposed to be, but is failing at.
Because it's a statically typed language, it's all in the IDE. Almost no one who uses C# needs to check the documentation for that kind of information; it comes up when you're typing, it's a ctrl-space away if your caret is on a method, or you can right-click and go to definition.
You can pretty much right click on anything and go to definition, which will show you all the method signatures and return types of everything.
My only beef with the documentation is that they show examples of the code in use, but annoyingly never the output. So, for example, it's never clear even from the examples in the documentation whether a directory path or a URL includes a trailing slash or not. And because there's no REPL, you have to either build/compile/run or use an external program (I used to use some LINQ tester program that was basically a REPL, but it's contextually almost as expensive as simply trying it in an existing program).
Working on it -- you can expect a public pre-release soon. The code is in master right now: https://github.com/dotnet/roslyn/tree/master/src/Interactive.... I'll have to check with the dev to see how installing it is supposed to work, but he demo'd it on Friday and it looked very slick.
> Because it's a statically typed language it's all in the IDE.
What if you don't want to use VS directly? Also, if it's "all in the IDE", why have it online at all? And if it is online, why not just include the return type?
My guess is that they only include the types of the parameters to disambiguate between overloads. Since you can't overload based on the return type they don't include it.
C# is fantastic compared to Java and seems to avoid a lot of the crap. But F# trumps C# in just about every aspect, apart from having lots of funds for tooling.
C# has "computation expressions", except they are hard-coded into the compiler. It has duck typing, but it's hard-coded into the compiler. So instead of understanding a general concept, you learn specific instances. That's certainly less elegant, and I'd guess just as complicated - but I'd say that's more magic on C#'s side since in F# it's just simple concepts repeated.
I don't know if type providers are more complicated than partial classes, and they probably should have been some generic macro system instead of just type generators.
I don't have a good definition for complicated, but I'd be surprised if C# is actually less complicated. (Something like the number of core concepts and their interactions.) Certainly if we take expressiveness relative to complexity, F# comes out far ahead.
I don't see what it will break? Sure you might get a lot of warnings if you upgrade (if I understand the proposal correctly), but everything is going to work just as before.
It would be impossible to _actually_ have a feature like this as opt in, since it would effectively create two different versions of C#. What the proposal does is emit warnings (and not errors), which is another shame.
I'd much rather have it opt in at the class declaration level like:
`strong class Foo`
Where strong means references to it act like value type references with regards to nullability.
Alternatively, they can just fork C# and remove all the weird parts they've accumulated over the years and stay with a sub-language that's more powerful and they can add useful features on top of.
> would be impossible to _actually_ have a feature like this as opt in since it would effectively create two different versions of C#
You still haven't said what's wrong with the proposal. There are multiple versions of C# already. Emitting warnings seems to be a good compromise, given there is already an option to treat warnings as errors.
>`strong class Foo`
This is a slippery slope. Next time they add another feature for correctness, they'll have to add 'stronger class'. This is akin to "use stricter" proposed by SoundScript (JS).
I worked on some fairly large C# projects where the teams had zero-warning policies. It's not that difficult to achieve with a bit of discipline. Warnings in third-party dependencies only show up if you build them from source, but these days it's a lot easier to get your third-party dependencies from NuGet instead.
That being said, I don't think warnings would be a good way to go in this case, because the sheer number of warnings generated by these new rules in existing, large (and mostly bug-free) projects would make it very difficult to reestablish a zero-warnings state. Developers on the team will just get undisciplined due to the broken windows effect.
This does not belong in C#; this belongs in the CLR. If they don't do this as an upgrade to the CLR, then they are effectively admitting that "common language" is dead and a joke, even more so than now. (Where libraries, even MS-shipped, make API decisions based on C# compiler implementation details.)
On the plus side, they're finally talking about something that should have been addressed back in 1999.
The downside is that C# seems to be getting a lot more complicated due to having to graft this stuff on in an ugly way. I agree the language needs tons of work. I just wish they had better fundamentals so it was neater. F# already has most of what C# can aspire to be, but it feels far less complicated.
I'm not sure what you mean exactly, but there are already differences between the various .NET languages. There are things you can do with C# that you can't really do easily with VB.NET, even simple things like bit-wise operators, or the fact that you have literal control of overloads in VB, which you lack in C#; the list goes on and on.
Since the languages themselves work quite differently, the interpreter and the CLR can't really treat them the same way. Heck, if you make something simple like a hello world in VB and C#, the MSIL won't be identical.
And with the current nullable support, Nullable Value Types in VB.NET and Nullable Types in C# don't really work exactly the same way, because they still need to conform to the specification of their language first and then to the CLR.
The idea is that major type features should be exposed via common metadata and represented in the IL. This allows other languages to unambiguously use such types. Non-nullable references are definitely a big enough feature. Conceptually, it's bigger than Nullable<T> which is just a wrapper you can toss around any struct and could be done purely as a library (though I understand there's some delicacy as far as unboxing null and converting that to a Nullable<T>).
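To make the "just a wrapper" point concrete, here's a minimal sketch of Nullable<T> as a plain library type (MyNullable is a made-up name; the real BCL type additionally gets compiler and runtime help for conversions and the boxing case mentioned above):

    using System;

    // Two fields: a flag and a payload. That's essentially it.
    public struct MyNullable<T> where T : struct
    {
        private readonly bool hasValue;
        private readonly T value;

        public MyNullable(T value)
        {
            this.value = value;
            this.hasValue = true;
        }

        public bool HasValue { get { return hasValue; } }

        public T Value
        {
            get
            {
                if (!hasValue)
                    throw new InvalidOperationException("No value.");
                return value;
            }
        }
    }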
I'm not sure which interpreter you're referring to, but languages don't need to emit similar IL for executable code (why would it matter?). What counts is that the type defs end up exported in the same manner. If making a class in VB emitted substantially different IL than C#, that'd be an issue.
Correct me if I'm wrong here, but those things have to be done at the CLR level anyway. Otherwise the compiler couldn't warn about wrong usage of method declarations in external libraries (as one example in the proposal shows). For that to work, the types or parameters would need to be attributed in some way in the IL. And then it's not only a C# compiler feature anymore.
The CLR already supports arbitrary attributes on methods and parameters. So one could implement this entirely in the language and still target old versions of the CLR.
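A rough sketch of what I mean (NotNullAttribute is a hypothetical name, not from the proposal): the compiler would emit and read ordinary custom attributes, which any existing CLR version can already carry in metadata:

    using System;

    // Hypothetical marker the compiler could emit for non-nullable
    // parameters and return values; old runtimes just ignore it.
    [AttributeUsage(AttributeTargets.Parameter | AttributeTargets.ReturnValue)]
    public sealed class NotNullAttribute : Attribute { }

    public static class Lookup
    {
        // A nullability-aware compiler could warn on Find(null)
        // by reading the attributes from the referenced assembly.
        [return: NotNull]
        public static string Find([NotNull] string key)
        {
            return "value for " + key;
        }
    }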
They can smuggle it in via attributes. But if they don't make this part of the CLR spec, then not really cross-platform and more like C#'s extension methods.
Ideally these features would be designed at a higher level, in the CLR, aimed to target as many languages as possible. I.e., they shouldn't be proposed as C# extensions.
Case in point: you can define a type as an enum, a discriminated union, a record type, an interface, an abstract class, a class or a struct. You can have a primary constructor for a class or struct, or additional constructors written like methods with the new() keyword. You have namespaces and modules.
F# also has first-class tuple types. But the better way to look at it is that you can have records and can put them in sums and products. Then for compat, there are the types for .NET's OO system. But sure, that's an increase in complexity in F#.
Yet in C# you have hard-coded keywords for things like System.Threading.Monitor (the lock keyword). This doesn't exist in F#-the-language, but is implemented via a simple function so it doesn't count. (Just like you wouldn't consider System.Math.Max part of C#-the-language).
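To illustrate, here's roughly what the hard-coded lock keyword expands to under the covers (the C# 4+ expansion; Account and gate are made-up names):

    using System.Threading;

    class Account
    {
        private readonly object gate = new object();
        private int balance;

        public void Deposit(int amount)
        {
            // lock (gate) { balance += amount; } expands to roughly:
            bool lockTaken = false;
            try
            {
                Monitor.Enter(gate, ref lockTaken);
                balance += amount;
            }
            finally
            {
                if (lockTaken)
                    Monitor.Exit(gate);
            }
        }
    }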
In C#, you have hard-coded operators, whereas in F# they're essentially just functions (compile-time type parameters work similarly to _Generic in C11). In C# you have hard-coded LINQ and async syntax, whereas in F# you don't (just one general construct for computation expressions).
People cite stuff like F# pipelining, but that's not a language feature, it's just a function. C# and F# both have quotations, but F#'s are more complete, not a one-off to implement LINQ-to-SQL. That reduces complexity, or is at least a tie. (In the way that arithmetic over integers is less complex than arithmetic over int32.)
Type extensions are another case of this. C# has them, but only partially implemented (extension methods), whereas F# implements the feature in a less complex way that provides more coverage.
OTOH, F# does stuff like transforming Int32.TryParse to match as a tuple (nice, but a complication). And it also has really ugly things like this one: explicit parameter names bind more tightly than the = operator. Example:
Foo(Bar = Baz) // Calls Foo with argument Bar with value Baz
Foo((Bar = Baz)) // Calls Foo(bool) with result of (=) Bar Baz
My gut feeling is that once you subtract F#'s stdlib and add in C#'s plethora of edge cases, you're not actually significantly less complex in a meaningful way using C#, and F#'s improved uniformity wins out.
I have not even started using C#6, which introduces the null-conditional operator. I expect this alone to clean up a lot of that verbose null-checking code, and to avoid null reference exceptions simply because checking for null gets so much easier.
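A small before/after sketch of what I mean (Customer and Profile are made-up types):

    class Profile { public string Name; }
    class Customer { public Profile Profile; }

    static class NameLookup
    {
        // Pre-C#6: explicit null checks at every step of the chain.
        static string GetName(Customer customer)
        {
            if (customer != null && customer.Profile != null)
                return customer.Profile.Name;
            return null;
        }

        // C#6 null-conditional: the chain short-circuits to null
        // as soon as any link is null.
        static string GetNameTerse(Customer customer)
        {
            return customer?.Profile?.Name;
        }
    }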
To me, as a C# dev, this proposed change seems rather quick (let C#6 get some use first), and it also seems inconsistent with nullable value types: with a reference type you'll get a compiler warning, where the same mistake on a value type will give an error.
>To me, as a C# dev, this proposed change seems rather quick (let C#6 get some use first) ...
Not really. The non-nullable references feature request has been on the radar for many years. (E.g. this July 2009 stackoverflow post[1]). There are also some very old MS channel9 videos where Anders Hejlsberg acknowledged that the community wants this safety feature and discusses the difficulties of implementing it. I don't have a link to the video but this 2008 stackoverflow post mentions a Computerworld interview on the topic.[2]
Non-nullable references is not a shoot-from-the-hip feature addition, and the C# team has been thinking about it for a very long time.
Don't mistake the dev workflow for what the customer sees. Now that we're open source we've pulled back the curtains, but this is how it's always worked. We plan new language features years in advance.
No, but Visual Studio 2015 came out July 20, only 7 weeks ago. I've been using it 3 weeks now. I find it difficult to judge a proposed feature when there was so little time to get used to the current situation. Maybe it's a great idea, I don't know. It looks like the next step beyond the Null-Conditional operator. I'll be migrating a lot of code to use that in the coming weeks. After that I'll have a much better idea how much I like this idea.
It's a proposal. It doesn't even have to be implemented in the language for the next version. The C# 6 proposals were done a long time ago as well, and some didn't make it into the language after all.
What would you have the language designers do? Wait two years before even thinking about what to work on next?
It was released 7 weeks ago, but has been in preview for several months prior to that. This could be a result of looking at feedback for the preview, and deciding that this feature wasn't a good fit for 2015, but was something that should be discussed soon.
I always advocated for this kind of change in C#, and now that I'm using Swift I'm definitely sure that non-nullability should be available in any kind of OO language. There aren't that many use cases that require nullability of a reference, and there are many cases where the forced nullability bites you.
Unfortunately adding it to C# now will mean that either the language breaks or the feature will be an add-on that isn't as powerful as it should be (like in Swift, where it's really the base for everything).
Personally I would say that a reference type can be declared explicitly non-nullable with an exclamation mark (!) and if they're properties they either should have a default value or they should be set while the class is constructed.
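In the meantime you can approximate that in today's C# with a wrapper struct; here's a minimal sketch (NonNull is a made-up helper, and the default(NonNull<T>) loophole shows why real compiler support is needed):

    using System;

    // Checks for null once at construction so downstream code
    // can skip repeated null checks.
    public struct NonNull<T> where T : class
    {
        private readonly T value;

        public NonNull(T value)
        {
            if (value == null)
                throw new ArgumentNullException("value");
            this.value = value;
        }

        public T Value
        {
            get
            {
                // Known hole: default(NonNull<T>) bypasses the
                // constructor, so value can still be null here.
                if (value == null)
                    throw new InvalidOperationException("Uninitialized.");
                return value;
            }
        }
    }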
I thought the idea behind .NET CLR was to make it easier to use whatever language makes sense for the problem without worrying about whether it can be integrated into the overall system. By making C# into a language for everything, MS isn't using strengths of .NET to its advantage. I would much rather learn 10 easy languages that work together than a single, complicated monster of a language that I can never wrap my head around (read C++).
I'd argue that C# is still much simpler and easier to understand than C++. But in any case, one major problem with .NET and multiple languages in a single project is that a compilation unit is an assembly. So you can write one assembly in one language and another in a different language, but that's about the extent of being able to mix languages in a project. Which just may be not fine-grained enough.
Actually assembly is the minimal unit of deployment, not compilation. The minimal unit of compilation is the netmodule[1]. You can compile from different languages as netmodules, then link the resulting netmodules together in an assembly.
By the way, this is not a scenario supported by Visual Studio, so it requires using command line tools. So for any practical purpose, you're right that multi-language solution require the use of multiples assembly projects, which is tedious.
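If memory serves, the flow looks roughly like this (file names are made up; csc, vbc, and al are the C# compiler, VB compiler, and assembly linker):

    csc /target:module /out:CSharpPart.netmodule CSharpPart.cs
    vbc /target:module /out:VbPart.netmodule VbPart.vb
    al /target:library /out:Combined.dll CSharpPart.netmodule VbPart.netmodule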
Oh, didn't know that. Thank you. But it seems indeed like this scenario is neither really advertised, nor supported by IDEs, so adoption is probably very low.
Not only that, but when you do mix languages in the same solution, VS has a history of getting confused... I used to bring in VB.Net when I had to deal with a lot of XML after they added XML literal syntax (similar to E4X), but it was never added to C# (though LINQ was, go figure).
This is probably a stupid question, but why do we have null as a concept? Nullable booleans especially grate on me. Surely it is true or false, but not null. Otherwise you surely have the wrong type? Does anyone else have these deep down feeling that null feels wrong?
If that's surely what you have, then you have a regular "bool" and not nullable "bool?".
In C#, "bool" and "bool?" are not the same type. (Although C# provides some convenient implicit conversions that minimize tedious verbosity when working with both -- but that convenience also has the effect of blurring the lines. Nevertheless, they are different types.)
A "bool" can represent 2-states. A "bool?" can represent 3-states: true -or- false -or- unknown/empty/indeterminate/invalid/etc.
Your premise that I quoted is constrained to 2 states -- therefore use the 2-state type which is plain "bool". Unless you think nullable "bool?" as a syntax seems to defy some kind of airtight mathematical logic? What would be the alternative?!? Have everyone create a custom enum that with 3 enum values "{falsy = 0; truthy = 1; unknown = 2}"? Why would reinvention of those semantics in everyone's redundant code snippets be better than C#'s standard nullable "bool?"?
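A quick sketch of the distinction in code:

    using System;

    class TriState
    {
        static void Main()
        {
            bool flag = true;      // exactly two states
            bool? answer = null;   // three states: true, false, null

            if (answer.HasValue)
                Console.WriteLine(answer.Value);
            else
                Console.WriteLine("unknown");

            // C# lifts & and | over bool? with three-valued logic:
            Console.WriteLine((answer & true).HasValue); // False: null & true is null
            Console.WriteLine(answer & false);           // False: null & false is false
        }
    }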
(Another issue that may confuse the "null" discussion is the 2 separate concepts of "null": #1 is "null" as an indicator of pointing to invalid memory and #2 is "null" as deliberate semantic placeholders for application/business logic of unknown values. The #2 concept is what is modeled by C# nullable<T> and SQL database fields.)
Edit: "Logicians have proved that a large class of MVLs are not truth-functionally complete[.] They have also proved that if such an MVL is made complete (for instance, by adding the Slupecki T-function), it becomes inconsistent! Therefore[,] we can have either truth-functional completeness or consistency, but not both."
The last time I looked at this, most of the arguments were focused on the mathematical rigor of comparing NULL to NULL. Are they equal or not? (The RDBMSs of Oracle, MSSQL, etc. don't all agree.) It doesn't matter which interpretation one favors; it will eventually lead to a logical contradiction.
If you took away the Nullable<T> concept to conveniently add 3-value logic to value types, the same philosophical problem still remains. If you force programmers to manually reinvent their own 3-value logic (e.g. manually add in an extra bool variable "HasValue" to signify whether another variable's content is "valid"), you still have the same conceptual problem your pdf is grappling with.
- the mathematical rigor, as you suggest, which besides the issue of contradictions also introduces problems for the implementation of things like query optimization.
- non-null solutions are possible, f.ex. normalization, which besides avoiding the null issue also comes with other benefits.
However, I'm not sure Nullable<T> is problematic in the same sense. It is, after all, an explicit value domain, and not part of any logic at all. Notice how the debate about relational systems doesn't talk of a Boolean value domain (MSSQL doesn't even have a Boolean type, only a single bit number type) but of the predicate logic resulting from existing and non-existing rows.
I guess a comparable situation would rather be, if you have a Nullable<T> where T:class, how do you treat the difference between boxed and unboxed nulls?
In Scala, f.ex., it's perfectly valid to have Some(null) != None, which means you need to handle the null value even for non-None values.
Null is wrong. It was implemented because it was easy, as it maps directly to how object references are typically implemented in memory: http://lambda-the-ultimate.org/node/3186
That is null as a reference. Null as a type, however, is useful. An off the top of the head situation is suppose you have a ballot. On this ballot, one can vote for one of x, or they can choose to not vote for any at all. In verifying and tabulating the votes, having a null value makes a lot of the rest of the process easier.
For a type of "an object" or "a value", null is wrong.
For a type of "maybe an object" or "maybe a value", null is not wrong.
There is nothing fundamentally wrong with letting null exist in some form, especially if you hide it behind an Option<> interface.
The problem is when people use the wrong type.
> Surely it is true or false, but not null. Otherwise you surely have the wrong type?
Nullable boolean is not the same type as boolean. Problem solved.
The deep sense of wrongness is probably because so many languages force you to use nullable objects even when that's the wrong type. Not because nullables are inherently nonsensical.
Making null assignable to anything results in all kinds of nonsense. For example, to say that any class implements an interface is now nonsense - as long as nullable values are assignable to that type, there is no guarantee at all.
You now have this nice and fancy type system... with this giant gaping hole that lets all hell break loose. And then we wonder why people are skeptical of the claim that statically typed languages help with correctness.
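A minimal illustration of that hole (IGreeter is a made-up interface):

    using System;

    interface IGreeter { string Greet(); }

    class Program
    {
        static void Main()
        {
            // The type system says g "is an IGreeter"...
            IGreeter g = null;

            try
            {
                g.Greet(); // ...but the promise evaporates at runtime.
            }
            catch (NullReferenceException)
            {
                Console.WriteLine("no guarantee after all");
            }
        }
    }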
>For example, to say that any class implements an interface is now nonsense - as long as nullable values are assignable to that type
You're talking about a different concept of "null" from the parent you're replying to (junto).
You're talking about "null" as invalid object reference (similar to uninitialized pointer in C/C++).
junto was asking about "null" as a sentinel value for semantics of "missing" or "unknown" data. This is a very desirable language feature to have. Your complaint about "null references" is valid but it's further confusing the misunderstanding junto appears to have about Nullable<T>.
Nullable<T> isn't about invalid object references. Instead, it's a convenient language feature that combines two booleans to expose the desirable semantics of a tri-state boolean. The 1st bool is the user-named variable. The 2nd bool is the .HasValue property. Instead of explicitly setting .HasValue=false, C# just lets the programmer use "null".
I always liked Swift for the pragmatic solution to null reference issues, perhaps the most attractive aspect of that language to me, so this is nice to see!