F# implemented this years earlier (about six years ago), and as a library: the language provides the general feature, computation expressions (workflows), and ships a default async implementation built on top of it.
In comparison, C# adds special compiler keywords for this one specific case, just like it did with LINQ. That seems rather ugly IMO. Providing building blocks and letting libraries fill things in is a lot nicer.
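As a rough sketch of what the keyword-based approach looks like on the C# side (the method name here is made up for illustration): await only works inside a method marked async, and both keywords are wired into the compiler and tied to Task/Task<T> rather than coming from a library.

    using System.Net.Http;
    using System.Threading.Tasks;

    class KeywordSketch
    {
        // 'async' and 'await' are compiler keywords; the method below is
        // rewritten by the compiler at the await point.
        static async Task<int> GetLengthAsync(string url)
        {
            using (var client = new HttpClient())
            {
                string body = await client.GetStringAsync(url);
                return body.Length;
            }
        }
    }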
This is more a case of "C# finally catching up with basic features".
I don't think it's fair to characterize this as "catching up". Both F# and C# are developed by an overlapping group of people at Microsoft. And, until recently, the bulk of the work on Haskell's GHC was done by SPJ within a closely collaborating group at Microsoft Research.
The correct characterization is to view this as a pipeline from a research language, to a specialists' language, to a common man's language.
Six years isn't really all that long to wait for a specialist feature to be 1) motivated, 2) conceived, 3) prototyped, 4) validated, 5) justified, 6) implemented, 7) tooled, 8) released, and 9) marketed. Given that there are hundreds of ideas and only so much time, the "minus 100 points rule" [1] basically means that it's no easy feat for a feature like this to show up in a mainstream language. When you consider the quality bar, level of IDE integration, the magnitude of the education effort, and all the other odds and ends, it's something of a minor miracle.
C# operates entirely differently, I understand. The IDE work is massive and necessary. Having said that, a lot of this stuff is "catching up", i.e. implementing ideas from the 70s. To be clear, it's not like type inference or closures were invented with Haskell, F#, or C#. Stuff like that is pretty well-known PL material, isn't it?
People would be upset if C# didn't have for loops; why aren't they upset that its type inference is nearly useless?
People would be upset if C# didn't have for loops; why aren't they upset that its type inference is nearly useless?
Can you honestly not comprehend the answer to this? There are plenty of languages without type inference and shit gets done fine. People don't rely on it. People do rely on for loops.
Probably because they don't know it's useless. I use and like C#. The type inference seems useful to me. Avoiding explicit generic parameters on almost every LINQ extension method is a huge saving in comprehensibility, and var x = new SuperDuperLongClassName(); is a nice saving in redundancy.
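A small sketch of the kind of inference being described (variable and class names are just illustrative): var on locals, plus generic-argument inference on the LINQ methods.

    using System.Collections.Generic;
    using System.Linq;

    class InferenceSketch
    {
        static void Demo()
        {
            // 'var' avoids repeating the long type name on the left.
            var lookup = new Dictionary<string, List<int>>();

            // The generic arguments of Where/Select are inferred from the
            // lambdas, so none of them need to be written out.
            var squaresOfEvens = Enumerable.Range(1, 10)
                                           .Where(n => n % 2 == 0)
                                           .Select(n => n * n)
                                           .ToList();   // List<int>
        }
    }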
Where can I see an example of useful type inference?
Yes, C# tends to copy F# features in the form of compiler syntactic sugar. I wonder if Type Providers will be next. Async/await has been around for a couple of years and I haven't heard of the next big C# feature, other than Roslyn.
They seem to be pretty busy with Roslyn; Anders recently admitted it's taking longer than originally expected. So perhaps we need to give 'em a break. The only thing I've heard about C# 6 so far is that it may get more compact class declarations, a la F# or TypeScript.
C# 2 added generics (courtesy of the same people who did F#) and closures (albeit with syntax as verbose as JS's).
C# 3 added LINQ, which was a major breakthrough for end users, although I'm not fond of the query syntax. So really, C# 3 just added some basic features you'd expect from proper languages. I do understand this required a huge amount of work, especially with the tooling involved.
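For reference, a rough sketch of the two surface forms of LINQ; the query keywords are just compiler sugar over the extension-method calls shown below them.

    using System.Linq;

    class LinqSketch
    {
        static void Demo()
        {
            int[] xs = { 5, 1, 4, 2, 3 };

            // Query syntax: from/where/orderby/select keywords.
            var queryForm = from x in xs
                            where x > 2
                            orderby x
                            select x * 10;

            // Method syntax: the calls the query form compiles down to.
            var methodForm = xs.Where(x => x > 2)
                               .OrderBy(x => x)
                               .Select(x => x * 10);
        }
    }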
C# 4 added dynamic (F# provides a ? operator you can supply your own implementation for, if you really feel that strings look ugly). Oh, and it finally backpedalled on not having optional parameters (although the optional parameters use the same broken C-style implementation, with defaults baked into the call site).
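A quick hedged sketch of the two C# 4 additions mentioned here (the method names are made up):

    using System;

    class CSharp4Sketch
    {
        static void Demo()
        {
            // 'dynamic' defers member lookup to run time; a misspelled member
            // still compiles and only fails when the line executes.
            dynamic text = "hello";
            Console.WriteLine(text.Length);     // resolved at run time -> 5

            // Optional/named parameters; the default value is compiled into
            // the call site, not looked up from the callee at run time.
            Greet();
            Greet(times: 3);
        }

        static void Greet(int times = 1)
        {
            for (int i = 0; i < times; i++) Console.WriteLine("hi");
        }
    }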
C# 5 added async (F# had a more flexible implementation six years earlier).
What else? C# seems to have stagnated, although I understand that's a feature for some of its users. C# still lacks type inference in most places, making it extra verbose. C# expression trees are still very limited. C# still can't easily do tuples. Not sure they deserve a break; this was MS's "flagship" language.
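For what it's worth, a sketch of what "can't easily do tuples" means in C# 5-era code: the only option is the Tuple<> class with positional Item1/Item2 members (the example names are hypothetical).

    using System;

    class TupleSketch
    {
        // No lightweight tuple syntax: the full Tuple<,> type and its
        // positional Item1/Item2 members are all you get.
        static Tuple<int, string> Lookup()
        {
            return Tuple.Create(42, "answer");
        }

        static void Demo()
        {
            Tuple<int, string> result = Lookup();   // no names, no deconstruction
            Console.WriteLine(result.Item1 + ": " + result.Item2);
        }
    }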
OTOH, the CLR itself doesn't seem to be getting any upgrades either: IL has stayed locked at v2. It's as if they realised they've already done better than the JVM can ever do (generics via erasure just sucks), so why bother pushing it further?
I'm not sure what you mean by a break, but the comparable timeline for Java and C++ would show far less improvement in terms of adding programming-language features (in the academic-discipline sense, e.g. lambdas, closures). They've done a far better job than most languages of actually advancing the language conceptually, not just adding libraries/features.
To me that seems to be a good thing. The language has picked up a lot of great features that make it nice to program in, but that is also making the language huge. How do all these new features interact? What are the emergent properties of the language? Personally I could use a few years to a) get legacy code caught up and b) explore what already exists.
In F#, workflows are just syntactic sugar for monads, much like Haskell's 'do' notation. You can get a continuation monad (workflow) in F# easily. I don't know what ClojureScript uses, but it doesn't seem very likely that it has a more powerful mechanism :)
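To make the "sugar over a monad" point concrete in C# terms (since that's the language under discussion), here's a hedged sketch: an Option type whose SelectMany acts as bind, so LINQ query syntax plays roughly the role that 'do' notation or a workflow builder plays. The Option type and all names are made up for illustration.

    using System;

    struct Option<T>
    {
        public readonly bool HasValue;
        public readonly T Value;
        private Option(T value) { HasValue = true; Value = value; }
        public static Option<T> Some(T value) { return new Option<T>(value); }
        public static Option<T> None { get { return default(Option<T>); } }
    }

    static class OptionLinq
    {
        // The monadic bind, shaped the way the C# compiler expects so that
        // query syntax (below) can be used over Option<T>.
        public static Option<R> SelectMany<T, U, R>(
            this Option<T> source,
            Func<T, Option<U>> bind,
            Func<T, U, R> project)
        {
            if (!source.HasValue) return Option<R>.None;
            Option<U> inner = bind(source.Value);
            if (!inner.HasValue) return Option<R>.None;
            return Option<R>.Some(project(source.Value, inner.Value));
        }

        static Option<int> Demo()
        {
            // Desugars to chained SelectMany calls, i.e. monadic bind;
            // short-circuits to None if either side is None.
            return from a in Option<int>.Some(2)
                   from b in Option<int>.Some(3)
                   select a + b;                 // Some(6)
        }
    }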
Clojure/ClojureScript now has core.async, which is a Go-style implementation of CSP with coroutines and channels.
Like C#, core.async uses a lexical compiler transform to produce a finite state machine for the coroutine. Unlike C#, Clojure can achieve this with a user-level macro instead of a compiler change. Both C# and core.async differ from Go, in that Go's coroutines have dynamic extent, by virtue of being heap-allocated stacks with a custom scheduler. In practice, this has a minor impact on higher-order usage of coroutines, but it's a smaller problem than you'd think; it's generally advisable to minimize higher-order usage of side effects anyway.
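Very roughly, and purely as an illustration (this is not the actual compiler output), the transform turns each await point into a state of a small object whose MoveNext-style method is re-entered when the awaited task completes. A hand-written analogue in C# might look like this:

    using System;
    using System.Threading.Tasks;

    class StateMachineSketch
    {
        // Hand-written analogue of the rewrite applied to something like:
        //   async Task<int> F() { var s = await fetch(); return s.Length; }
        private int _state;
        private Task<string> _pending;
        private readonly TaskCompletionSource<int> _result =
            new TaskCompletionSource<int>();

        public Task<int> Run(Func<Task<string>> fetch)
        {
            _pending = fetch();                        // state 0: start the awaited work
            _state = 1;
            _pending.ContinueWith(_ => MoveNext());    // resume when it completes
            return _result.Task;
        }

        private void MoveNext()
        {
            if (_state == 1)                           // state 1: the code after the await
            {
                _result.SetResult(_pending.Result.Length);
            }
        }
    }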
Both C#'s and Go's approaches can be implemented as monads, yes. However, monads are a significantly more abstract thing than either CSP or C#-style Tasks. Do-notation is barely concealed continuation-passing style, which is generally less pleasant to work with than traditional imperative constructs for side effects such as send and receive. "More powerful" isn't really a useful measure for practical purposes.
As Brandon alludes to below, monadic designs generally have allocation overheads, which is why C# uses state machines. So while they may be equivalent in some abstract sense of "power", one ends up being more efficient in practice.