From whatever little I see, I am already hating it :-) This looks like Java for Javascript, if that makes any kind of sense. We are in the year 2011, PL design has progressed so much since 1990 that such an anachronism is unpardonable. I sincerely wish that companies like Google focus on hiring the right kind of person for designing programming languages.
What is frustrating is that there are so many people who can do this job right, yet we are stuck with amateurish programming languages designed by companies like Google. Please try and hire people like Oleg Kiselyov, Simon Marlow, Erik Meijer or someone from the possible hundreds who have worked deep in PL theory for literally decades. On the face of it Gilad Bracha has great credentials, credentials that would make the HR folk happy, but one look at some of the stuff he writes and we know things are not going to be good.
http://gbracha.blogspot.com/2011/06/types-are-anti-modular.h...
I think Phil Wadler gave him more respect than was due in the comments :-) Microsoft hired Anders Hejlsberg for .Net, which I don't think was a good idea; he is a great engineer but not a PL theory expert. However, they "rectified" the situation by hiring Erik Meijer later, and his impact via LINQ and F# and improvements in C# is obvious. Not to say that Hejlsberg is not good, but Meijer is better. Getting back on track, I may be wrong but Dart makes me groan. It is one thing to design such a language in 1990, but quite another to do it in 2011.
Overall: uninspired. Below what I was expecting. Probably good for Google goals (tooling, migrating developers, etc)
The language is somewhat interesting, but unfortunately saddled with an incredibly boring syntax. At this point, I am thinking that they would have been better off just going with the Go language for this.
Looks like something good to migrate fleets of java engineers to, but not something that would inspire "hackers".
Now I feel like it is not worth creating any "Brightly" IDE whatsoever. Just go with Eclipse.
I was planning to rush to the language immediately, but now I've switched to wait and see.
Misses:
- Incredibly uninspired, boring Java-like syntax.
- Not everything is an expression.
- Java-like classes, instead of anything more interesting.
- Lacks simplicity, symmetry and beauty.
- Semicolons.
Some notes:
In Dart, you can often create objects directly from an interface,
instead of having to find a class that implements that interface
What?? Why blur the concepts of classes and interfaces and introduce a quirk? Beauty comes from simplicity and symmetry.
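For what it's worth, here is a rough sketch of what the feature buys (hypothetical names, written in present-day Dart with a redirecting factory constructor rather than the launch-era interface syntax): callers construct the abstract type directly and never name the implementing class.

abstract class Cache {
  void put(String key, String value);
  String? lookup(String key);

  // Redirecting factory constructor: `new Cache()` actually builds a
  // _MemoryCache, so callers never name the implementing class.
  factory Cache() = _MemoryCache;
}

class _MemoryCache implements Cache {
  final Map<String, String> _data = {};
  void put(String key, String value) { _data[key] = value; }
  String? lookup(String key) => _data[key];
}

void main() {
  var cache = new Cache();     // constructed "from the interface"
  cache.put('lang', 'Dart');
  print(cache.lookup('lang')); // Dart
}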
named constructors:
var greeter = new Greeter.withPrefix('Howdy,');
Mildly interesting.
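For context, the declaration side of that call might look something like this (my own sketch, not code from the article):

class Greeter {
  final String prefix;

  Greeter() : prefix = 'Hello,';   // default constructor
  Greeter.withPrefix(this.prefix); // named constructor

  String greet(String name) => '$prefix $name';
}

void main() {
  var greeter = new Greeter.withPrefix('Howdy,');
  print(greeter.greet('Dart')); // Howdy, Dart
}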
Every type in Dart is public unless its name starts
with an underscore ("_")
I laughed at this. I have always hated the practice of starting names with "_" for supposedly private variables in languages without true private members. I like the way it's done in Dart. My favorite little syntax feature so far.
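Concretely, the underscore is enforced by the language rather than being a convention - something like this sketch (hypothetical names):

// _count is library-private: code outside the declaring library simply
// cannot see it, so no "please pretend this is private" convention needed.
class Counter {
  int _count = 0;
  int get value => _count;        // public read-only view
  void increment() { _count++; }
}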
I agree, this syntax makes little sense in dynamic languages where classes/types are first-class values that share the namespace with functions and ordinary values... Even in JavaScript, where it sort-of-kind-of makes sense, it introduces more problems than it solves.
Nitpick: you can make a constructor look like a regular method that happens to return a new instance, but you cannot make it one. How would that constructor make the instance that it returns?
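A sketch of that distinction (hypothetical names): a static method can look like an ordinary call that returns a new instance, but underneath it still has to delegate to a real constructor to allocate the object.

class Point {
  final num x, y;
  Point(this.x, this.y);                    // the real constructor

  static Point origin() => new Point(0, 0); // looks like an ordinary call,
                                            // but still delegates allocation
                                            // to the constructor
}

void main() {
  var p = Point.origin();
  print('${p.x}, ${p.y}'); // 0, 0
}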
I think the overhead of adding that 'new' is worth it. Without it: in var foo = Bar(), is Bar a type or a function? If you decide it should be colored like a type, your syntax colorer needs deeper information about the code, making it harder to write, slower, etc.
Not true; look at Objective-C for an example of how it's done. It's broken up into two methods: the first is a class method that creates an uninitialized instance, and the second is an instance method that initializes the instance and returns self (or, rarely, nil):
+ alloc // reserves memory and creates an instance
- init // initializes the instance
- initWithSomething: // can have multiple inits
I go back and forth on whether I like this. On the one hand, the clean split makes it conceptually simple what's going on. On the other hand, you can have allocated but uninitialized objects, which makes me uneasy.
You always chain together `alloc` / `init` calls, and so you're hardwired never to have a situation where you don't initialize something you've allocated. I think the compiler even knows to warn you about it, now.
Why not? It makes no difference whatsoever. Make it so that the only constructor in the system is Object.new(), and so everything else that inherits behaves just like a method.
>Without it: in var foo = Bar(), is Bar a type or a function?
That's easy. Make it so that only classes and consts can be capitalized.
I agree with you about the syntax - I was hoping for something more like Python.
Perhaps this is by design, though? Presumably the syntax doesn't look too bad to people who spend all day writing JavaScript (i.e. the target demographic for this language).
Presumably some bright spark will eventually write a Python-to-Dart compiler and then we'll all be happy :-)
When you're trying to change things, it's best to appeal to the early adopters and to give them a real reason to switch. Making it look the same and appealing to the same "user base" won't work, because then they might find little reason to switch in the first place.
I learned LISP and Python before Java and C#. The latter two are just as useful nevertheless.
The world's experience outside of HN makes me think Dart will not be at all inhibited by its Java like syntax. If it doesn't catch on, it will be for other reasons.
Me too! We already have a very readable, simple and elegant language, let's use that. Anyone know why that wouldn't have worked? (Not using the current CPython implementation but maybe the same syntax but with a different (web-only) set of libraries, that is...)
I agree. Yes, I would have liked to just have coffeescript as their new language. Maybe they could have just supported direct native execution on Chrome. Maybe Firefox would have followed.
CoffeeScript solves most of the problems with JavaScript's syntax, but the problem they're trying to solve here isn't the syntax, it's the structure and semantics.
Sounds like Java's anonymous classes. Quite a useful feature, actually.
It's only useful because Java lacks first class functions and syntax for lambdas. This is the single most irritating thing about Java to me. When this is fixed with the release of 8, the only thing Java will be missing is a really good implementation of persistent data structures a la Clojure.
I agree that the syntax and semantics are uninspiring (you could say the same about Scala).
Where it shines IMHO is in the core library (http://www.dartlang.org/docs/api/index.html). Look there and you will find some interesting classes that deal with concurrency and asynchronous operations (Isolate, ReceivePort, Promise<T>). Also, unlike standard Javascript, there are collection classes for specific use cases (like Queue, Set, LinkedHashMap). This is where it is superior to both Javascript and mainstream static typed languages like Java.
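For the curious, a minimal sketch of the isolate/message-passing model. Note this uses today's dart:isolate API, which replaced the launch-era Promise<T> style, so take it as an illustration rather than the 2011 surface:

import 'dart:isolate';

// Runs in the spawned isolate: squares the number it is sent and posts
// the result back on the SendPort it was given.
void square(List<Object> args) {
  var reply = args[0] as SendPort;
  var n = args[1] as int;
  reply.send(n * n);
}

Future<void> main() async {
  var port = new ReceivePort();
  await Isolate.spawn(square, [port.sendPort, 7]);
  print(await port.first); // 49
}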
Edit: Someone's in disagreement with me, so I'll try and clarify. I'd classify boring code as code where the intent gets lost in the syntax/language or other constructs the maintainer of the code doesn't care about but are required anyway.
Trying to open and read a file in Java is probably a better example, as the intent (open this file and read the contents) is buried in about 6 lines of boilerplate code.
I use Java as an example because in a previous life I used to write it and saw this as perfectly acceptable.
Indeed they do not translate into the same thing. But neither makes your program more or less portable in itself. It all depends on what you want to use it for. Some files and protocols should always have \n some always \r\n while in some cases it should be the system line separator. I think Python, Perl, C, and Ruby have fixed that quite well by making the IO library do the conversions. I do not remember if it is the same in Java.
But is it something people actually type? I haven't played with Java for almost ten years now but I do C# for a living which is pretty similar and I can't remember the last time I had to type something that long, Visual Studio does it for me. I can see the point if you say it's longer to read but it's also much clearer, isn't it?
It's kind of a misnomer too, because only the second Circle is typed for you, so you had to type Circle at least once... which is the same as if you were typing var. So not only is it not saving you much effort, it's uglier. And with more complex types it's downright evil (to not use var).
Default interface implementations are nothing new. In Scala for example, if you create a Map, you can get it to automatically construct a MapImpl, without having to know it. A really common case in many code bases is 1 implementing class, and 1 mock/testing class, and this fits the common case really well.
I think "beauty" is really a subjective consideration when it comes to programming languages.
Also, string interpolation using only double quotes has bothered me in Rails. That was the one feature I saw that they supported, although now that I think about it, Rails probably adopted it as a convention to prevent excessive escaping.
That's Ruby, not Rails.
It allows you to influence how your content should be escaped, so you write '\n' instead of "\\n" and '"' instead of "\"", it's actually pretty handy in practice.
I believe this interpolation difference in Ruby originates from Perl, which in turn got it from bash and other UNIX shells. I actually like the fact that Ruby and Perl have so many ways to quote a string, since I have always hated excessive escaping.
> This looks like Java for Javascript, if that makes any kind of sense.
It completely does. And I love that it manages to be completely incoherent with Go, the other "Google Language". That's so symptomatic of what google does on that front.
I said "coherent" not "identical". They could be coherent on shared features, such as type specifications or variable declarations, which have little to do with the "class" of the language. They're not.
I don't see any reason why the syntax of a systems language couldn't be the same as the syntax of a web language. They can have different libraries available to them for example but syntax doesn't have to be different.
One of the design goals: "Ensure that Dart delivers high performance on all modern web browsers and environments ranging from small handheld devices to server-side execution."
One has pointers and no security model and produces binary processor-dependent artifacts, the other has to support untrusted code, mobile code that runs on any device.
Go allows you to take the address of any value. You just can't do pointer arithmetic like you can in C. That is, you can't address uninitialized memory.
Go doesn't define "lvalues," in the spec, but it does define "addressability." My comment would be more accurate if I had said "Go lets you take the address of any value in memory." (stack or heap)
I imagine this optimization can only happen once escape analysis is performed. In the general case, address-of moves the data to the heap, or causes it to be stored on the heap in the first place. Correct?
If they had taken JavaScript (or, if you prefer, a more CoffeeScript-like syntax) and added Go's best features (interfaces-as-duck-typing, channels as fundamental types, string iteration over runes rather than bytes), then they would have had a nice little language.
The only thing I like is underscore for private variables. Go does the opposite (which I think is a safer approach): anything you want public has to start with a capital. It really works—it is both easy to do and easy to read.
Systems language and web language are not just a different name for the same thing --like Voiture and Car.
And, no, the fact that their function, defined in abstract and totally generic terms, is the same doesn't make them the same thing.
"move data from point A to point B" can be said for any programming language. As such, it's not particularly enlightening when comparing a language's suitability to a specific task.
Turing-completeness aside, your argument misses the point that a particular language's design, compiler, library, toolset, (heck, even a particular language's community) can make it better suited for system programming or for web programming or for some other field.
Avion: transport des personnes du point A au point B (Plane: moving people from point A to point B)
Boot: Menschen bewegen von Punkt A nach Punkt B (Boat: moving people from point A to point B)
We are talking past each other :) Here is your first comment interpreted through your second one. There is no reason to have different languages, but it's OK to have different jargon.
I might not get the joke. Do you mean this ironically?
There sure is reason to have different programming languages, and it's called specialization (see: necessary engineering compromises).
My first comment says: two objects having the same generic functionality does not mean that one and the same object can implement their specific (non-generic) functionalities.
My second comment says: the same thing, basically.
I mean that (programming) language is a special kind of object that has enough versatility to express any idea in a reasonable form. Historically, our programming languages weren't that good in versatility and made various utterances a universally agreed-on pain. But we are getting better and there is no law in the universe saying that we'll be forever stuck in Babel.
To some extent, your point is that Shakespeare is better in English than in its German translation for style reasons. My point is that it doesn't really matter and that a universal language is better than Babel because of network effects. Life is too short to erect artificial communication barriers.
"Historically, our programming languages weren't that good in versatility"
Well, we're not yet at the point where the barrier between systems and web programming languages has been eliminated. You seem to imply that the problem is the inflexibility of the languages, but to me it's not a problem, it's a feature: I want different abstractions to work in different problem domains (e.g. systems vs web). So it's not that the languages are not flexible enough, but rather that we, as language designers and users, have MORE flexibility to use a different tool for a different job.
Also: yes, Shakespeare is better in English than in its German translation. And it's not just the language, it's also the cultural universe that Shakespeare presupposes. For business use, maybe, but for culture I don't like universal languages, network effects be damned. Life is too short to reduce world languages and communication to a lowest common denominator (and that is inevitable, because any wannabe universal language will lack the historical and cultural ties and shared substructure of any particular population).
> Well, we're not yet at the point where the barrier between systems and web programming languages has been eliminated.
We are one or two iterations away. Look at this thread. A lot of people were expecting something looking closer to Go. Go itself is an example: statically typed, garbage collected "systems" language. If you ignore the differences in the type system and the slice syntactic sugar, that sounds conspicuously close to Java. And Dart itself sounds closer to Java than Javascript as well. Not that I believe we'll converge on Java, given the glacial pace of evolution in that community.
But the edges of the state-of-the-art are getting closer in this generation. Next generation (5-10 years from now) will be even closer, if not identical.
On a technical note: what exactly do you have in mind when making a distinction between "web" and "systems"? Backends qualify as "systems" in my book and I'd be a fool if I'd want to develop in two distinct languages when there is just one app. Thanks to GWT and now Dart, I don't have to.
Well, by "web programming" in the context of the original comment, I was talking mostly about web backends. If you mean that, then yeah, we'd develop those in one language, in fact we already do in some case (node.js et al).
But I don't qualify that as "systems programming", to quote Wikipedia:
"System programming (or systems programming) is the activity of programming system software".
and:
"System software is computer software designed to operate the computer hardware and to provide a platform for running application software".
The language does look less cumbersome than Java though. In terms of how interesting I find it, I would place it between Java and Haxe. While I cannot see what it buys you that CoffeeScript or even JavaScript do not, I do not have Google's vantage point to be able to adequately judge. I am only at the foot of the mountain they already scaled. And then tunnelled.
Not only did they create V8, they write boatloads of JavaScript, so they are well placed to address both the technical and human pain points of scaling JavaScript. And in introducing a new language they can't stray too far from the mainstream if they wish for large adoption. All that said, I am not moved.
Also, F# is Don Syme's work not Erik Meijer's. Although F# probably gained from some Haskell people in Cambridge (the original one).
I'm in two minds about it. At first I was hoping for something like a simplified Scala or a slightly extended Go. It's not either.
On the other hand, the surprise of seeing it come out looking so mainstream (i.e. it's a mix of JavaScript and Java) is a reminder that really the language doesn't matter that much. The blub paradox is not half as important as one intuitively gives it credit for. See Peter Norvig's comments about how C++ programmers can be as productive as Lispers. Look at something like the iPhone. The implementation behind the shiny interface is boringly mainstream: C and Objective-C with a standard OO GUI framework. Like it or not, Google and Apple are about engineering and getting things done. From that point of view something like Haskell might improve things slightly, but not half enough to make a significant difference to the end product. i.e. it just doesn't matter.
In summary when I saw the language this morning my first thought was: "Nothing to see here". But as a result my second was: "Get back to work". Maybe that's not so bad.
It sounds like there are a lot of PL innovators† here on HN, who value new technology for its intrinsic value, rather than for its benefits.
I would expect PL theory to deliver great benefits at about the same rate as theory does for other fields, e.g. as pure mathematics does for physics - some of it does, though it's common for it to be reinvented independently by people trying to solve specific problems.
BTW: Notwithstanding the over-general flame-bait title, I think his "types are anti-modular" is really just making the point that, while interfaces reduce your dependence on implementation, now you depend on interfaces.
i.e. The problem with using types to hide decisions that you think will change is if your prediction about what will change is wrong. "On the Criteria to Be Used in Decomposing Systems Into Modules" http://www.cs.umd.edu/class/spring2003/cmsc838p/Design/crite...
Perhaps one solution is to specify all the types you use in a module, internally; and provide a mechanism for converting between equivalent types at the boundary, to remove the dependency while facilitating interoperation. This mechanism acts as a buffer or glue (or middleware) - a kind of interface between interfaces if you will.
† Although the technology adoption lifecycle gives the impression that a new technology with interesting properties is just the beginning of great things, the vast majority of new things do not become massively successful - it's just that the lifecycle is based on those that were. http://en.wikipedia.org/wiki/Technology_adoption_lifecycle
It is easy to agree that there MUST be essential dependencies between modules in a system designed and separated in a certain way. (Ignoring some philosophical arguments on identity, separation and reference point...)
The issue is whether the interfacing technique adds in non-essential coupling. A technique that has at least the constraints of another technique will also trivially have at least the non-essential coupling of the other technique.
Types do have benefits, though. And direct modularity is seldom the only concern; correctness and reliability also enhance modularity in an indirect way - an erroneous module definitely decreases modularity.
> The issue is whether the interfacing technique adds in non-essential coupling. A technique that has at least the constraints of another technique will also trivially have at least the non-essential coupling of the other technique.
I'm not sure exactly what you mean. Could you elaborate please?
I'll clarify my view with an analogous example: you can think of a database as having a set of types - its schema. Think of each of the applications that use that database as also having its own set of types - its classes or data structures in general. Now, an example of the "mechanism" I mean is that SQL provides a way for each app to translate data in the database's schema into its own data structures. If its data structures change, the SQL can change to adapt to that; similarly, if the DB's schema changes, it can also adapt by changing SQL (or even creating a virtual schema with another layer of SQL - a view). Of course, this only works if these are only changes of the representation of the information, and the information content itself is the same. If the information changes, in a way that the DB or an app depends on, then there's no avoiding that dependency.
A similar example is JSON/XML APIs, where the JSON/XML is like the DB, and the data binding code is like SQL. The "type" of XML is often explicitly defined with an "XML Schema" document; JSON doesn't usually have that, but it is of an implicit expected form - which is still a type, just informally defined. There are also tools for converting between XML types - XSLT, and many GUI "XML mappers" that display two XML schemas and let you draw lines between them.
tl;dr These enable "modules" with different types to interoperate without dependency. Each defines its own types at its own boundary, and so can be compiled independently. The middleware glue (SQL/databinder/XSLT/datamapper) facilitates interoperation by converting between types at the respective boundaries of two modules - provided those types are equivalent, i.e. contain the same information, just in a different schema/type.
Does your concept of "non-essential coupling" include the requirement of precisely identical types at the boundary? If so, by loosening that to only require the same information, this mechanism avoids that particular kind of non-essential coupling.
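A toy version of that boundary glue, with hypothetical types (a sketch, not anyone's actual API): each side keeps its own type, and only the adapter knows about both.

// Module A's type.
class GeoPoint {
  final double lat, lon;
  GeoPoint(this.lat, this.lon);
}

// Module B's type.
class MapPixel {
  final double x, y;
  MapPixel(this.x, this.y);
}

// Boundary glue: only this adapter knows about both types, so A and B
// never have to import each other's definitions.
MapPixel project(GeoPoint p) => new MapPixel(p.lon * 100, p.lat * 100);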
Typing adds constraints - the type must be represented by and implemented in a particular type system. A trivial anti-modular constraint is that interacting modules must all use the same type system. The type system matters too - in Java, choosing from easily implementable types can easily add a lot to the complexity of the data structure, leading to a complex interface (in a more expressive type system, less of this happens).
More trivially, adding types IS adding constraints that are verifiable at compile time.
If compile-time verification of that sort is not essential (or not desired, which is the more general form of not essential), then the constraints are non-essential (or not desired).
---
Whether you choose to
1. Choose a particular data-structure and implement both modules to pass and receive that data-structure.
2. Specify an interface: The interface requires the data-structure to have certain properties (e.g. some methods work on them). The modules then build to the interface.
3. Specify that an adaptor exist to convert the data-structure to whatever the module requires.
is a somewhat different issue, although the kind of type system used does affect the implementation complexity and effort required.
I've got to say, I'm looking at it and thinking "That's it?". It looks embarrassingly like C# to me.
Other than that: it's got coffeescript's fast initializer syntax, but it's not as concise. It's got types but AFAICT no type inference. No destructuring either.
Only thing I can see it's really got going for it is some basic types which could have been provided as a standard Javascript library (and probably will become one...)
I've thought about this, and I don't really think the "early days" defence is really acceptable in this case. If Dart isn't better in some appreciable way than Javascript, it has no reason to exist whatsoever. Equally, why not just adopt C# wholesale (it's even registered with the same standards body as Javascript) and give it a new library suitable for web programming?
Sorry man, accidental downvote. It was actually a +1.
I agree with you; the first impression of the language is terrible, the syntax is too reminiscent of Java. It seems that at Google Java is popular for web apps (GWT, Closure, ...). This must have had an influence.
I still fail to see what the improvements over Javascript are (while they managed to make the syntax worse). I guess a concurrency model, optional static typing, interfaces. Did I miss anything?
void doSomething(String a) {
// Do something 1
}
void doSomething(Integer a) {
// Do something 2
}
doSomething("1");
doSomething(1);
That's the sort of thing they mean by "types don't change how programs behave." You can't easily change or remove the type system because it's tightly coupled with method resolution. If you have an optional type system, the evaluation semantics of the language cannot depend on the type system, because you don't want the program to do something different depending on whether or not you are using the type system.
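A sketch of how the snippet above has to be written in Dart, given that there is no overloading by parameter type - dispatch branches on the runtime value instead, so stripping the annotations can't change behaviour:

void doSomething(Object a) {
  if (a is String) {
    // Do something 1
  } else if (a is int) {
    // Do something 2
  }
}

void main() {
  doSomething("1"); // both calls resolve to the same function;
  doSomething(1);   // the branch taken depends on the runtime value
}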
The fact that type errors are only manifested as warnings is a (perhaps odd) design decision, but it doesn't stop them being type errors. They could just as easily have made them errors, and String a = 1 would give a compile-time error.
Hopefully there'll be a 'warnings as errors' flag for those of us more used to type errors being compile errors.
I like it. It still gives dynamic types like Javascript, but catches a huge class of errors immediately upon compilation. It's exactly what I wish Javascript had.
Have you seen ES.next? It's worse. I want to see TC39 just start over and aim for the same elegant JS-style solutions that Dart has shown for things like class declarations and a separate int type. Instead they're adding things nobody gives a damn about, like debuilderstructors and shit.
C# had a hard constraint to start with - it was supposed to be an easy replacement for Java, it actually started from Microsoft's own Java implementation and it had to ship as fast as possible.
Even so, the language evolved nicely as they left room for improvement in a forward-thinking manner.
Also, C# 1.1 did have five things which I terribly miss in Java - delegates, P/Invoke, stack-allocated types without "special" exceptions, object properties and the GAC. From a theory standpoint, none of them are groundbreaking, but we are talking about a language that's supposed to be grounded in real-world constraints.
In my opinion, Go is an anomaly in a company dominated by Java developers. I mean, this is a company that has gone so far as to write what is essentially a Java to Javascript compiler. http://code.google.com/webtoolkit/
I wouldn't be surprised if "performance" of the language meant "able to get our vast fleets of mediocre Java developers to write web apps faster" rather than "able to execute faster".
"I wouldn't be surprised if "performance" of the language meant "able to get our vast fleets of mediocre Java developers to write web apps faster" rather than "able to execute faster"."
Yes, because Google is notorious for hiring "mediocre developers".
Or are Java developers especially considered "mediocre" by definition?
Or does dabbling in Ruby/Python/Lisp/Clojure/Scala/Haskell make you "non mediocre" by definition?
It's the skills, it's not the language.
I would like to see something with the scope, ecosystem and functionality of, say, Eclipse, written in one of the non-mediocre languages (no, Emacs doesn't come close).
Again this is my opinion, and I say this as a very mediocre developer in a past life and having managed many developers, both mediocre and ninja awesome.
I agree that there are some amazing Java devs out there.
Most Java devs are not amazing (and heck most devs aren't amazing). But Java devs are not amazing with a frequency and depth that both boggles the mind and is entirely expected as that's the point of the language -- accommodate mediocre developers in a large shop producing boring enterprise code.
If you look where it's publicly known that Google uses Java, it's pretty much in Ads. The most boring, enterprisy kind of dev job Google has, but it's the money maker so it has to work despite having unmotivated "I'm just here for the paycheck" developers hacking on it.
Yeah, I know, it was used on the backend of Wave, and via GWT for the Wave web client (which was dog slow btw, I loved Wave, but have yet to encounter a web app as slow). And look where that got us.
As an aside, here's an interesting little writeup about why GWT is bad (with bonus example by pg of all people)
And, at least publicly that's about it. If a dev can't be bothered to just learn the syntax of another language in a couple of weeks in order to properly support their target platform, they are a mediocre developer almost by definition.
Oleg Kiselyov works for the U.S. Navy. Don't think he was available. His essays on programming and computation are examples of an astounding mind - http://okmij.org/ftp/Computation/
> Please try and hire people like Oleg Kiselyov, Simon Marlow, Erik Meijer
All brilliant people, but you have to remember that not everyone thinks types are all that matters. I like type systems, but the majority of the world's shipped code was written in languages with unsound type systems.
I have little knowledge of the latest trends in programming languages, but if your example of a good PL theorist is someone who designed LINQ, then I am going to have to take your opinion with a grain.OfType(Salt).Where(m => m.Value.Contains(nothing)).ToList<Salt>().OrderBy(m => m.irrelevance).FirstOrDefault<Salt>();
var q = from m in grains.OfType<Salt>()
where m.Value.Contains(null)
orderby m.irrelevance
select m;
return q.FirstOrDefault();
which fits in 80 characters on one screen. From this post and your reply, it seems like your complaint is that LINQ is hard to understand if you write it poorly. But that's true of most programming.
Yes, if you write it like that, it removes one of my complaints, but it leaves the other. You now have inconsistency in the language, because LINQ is basically a new kind of language added to an existing language. You are working with two languages at once. Why stop there? Why not add Python to C#? And every other language. The super language that has everything from everywhere. It is a total mess.
LINQ solves a real business need -- how to access sets of data, regardless of the source, in a uniform, teachable way. If you can teach people how to query data structures using one syntax that is integrated into the language, then you don't have to teach them a different API for every different data source. By encouraging the few API designers to target a standard, the many API consumers can build more reliable code faster.
If you approach it as what it is, yes, a language-integrated query syntax, and avoid the fluent-chaining style most of the time, then you'll see an information-dense productivity booster instead of a "total mess".
The LINQ syntax fits on 1 sheet of paper:
http://www.albahari.com/nutshell/linqsyntax.aspx
Personally, I look at the two-tier language and library support as a case of "the easy things should be simple, and the hard things possible." That's just my own take, but I find that it guides thought process a little better. I look for a syntax solution first, then rewrite it using a chaining style if I need to (e.g. SelectMany), or possibly split into multiple queries. I don't think I've ever written one longer than 10 lines, properly formatted.
Now, as far as python and other languages are concerned, LINQ is less "python in C#" and more "PEP 202 in C#".
Yes, it requires a wide-screen monitor. I don't think I can answer the other question as I don't see any value in it. Turning loops and so on into a giant, inflexible (unless you want two wide-screen monitors) daisy-chain of methods does not improve the readability or maintainability of code. It also encourages programmers to perform actions that should otherwise be in "repository" classes, in inappropriate places.
The from item in blah syntax is just SQL rearranged, and SQL isn't as expressive as programming language constructs, so why try to emulate it?
It encourages you to write the grain of salt statement I mentioned earlier, which requires programmers to read through and understand the implementation of what you are doing, instead of it being embodied in a meaningful method and called. Of course you could just wrap that in a method, and slowly grow the line to 2-3 wide-screen monitors as you add more conditions.
The more tools you add to the core of a language, the more of a monstrosity of a kitchen sink it becomes (c#/.NET). Now you can loop over a list in 6 different ways! Hooray! That is why Dart looks good - a few basic concepts that can be used to build things suited to a particular problem.
Well, if your example were meant seriously, it is needlessly verbose. ToList is not necessary in this context, and the type arguments to ToList and FirstOrDefault are not necessary since they can be inferred. And anyway, you can just insert line breaks if you think the chain is too long. So:
Could you enumerate what actually made you hate it? Familiarity is not necessarily a bad thing, and it seems it was one of the main goals of the language.
A delightfully blank page :-) Yes it mentions classes, interfaces, optional types, libraries, tools, structured yet flexible language. But all of these are a given. Is there nothing else?
What about Generics, Covariance/Contravariance? Type inference? Odersky thinks type inference in the presence of subtyping is untenable. Is it the same here? Are interfaces linearized as in Scala? What about immutability? It might be too much to ask for rank-2 polymorphism and Haskellish Type Classes, but what about support for delimited continuations? Let me guess, these aren't "design goals". All I am saying is this isn't 1990 either.
For example:
" Dart supports optional typing based on interface types.
The type system is unsound, due to the covariance of generic types. This is a deliberate choice (and undoubtedly controversial). Experience has shown that sound type rules for generics fly in the face of programmer intuition. It is easy for tools to provide a sound type analysis if they choose, which may be useful for tasks like refactoring."
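A small sketch of the unsoundness being described (an illustration in current Dart syntax; the launch-era behaviour differed in details between checked and production modes):

// List<int> is treated as a subtype of List<num> (covariant generics),
// so the static checker accepts this call; the bad write only fails at
// run time.
void addHalf(List<num> xs) {
  xs.add(0.5);
}

void main() {
  var ints = <int>[1, 2, 3];
  addHalf(ints); // statically fine; throws a TypeError when run
}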
Almost none of those things are relevant to a dynamically-typed language. If you want a statically-typed language use a statically-typed language.
What Dart brings to the table is a dynamic web language that's somewhat more structured than Javascript by using a class mechanism and optional typing. This is a great region of the design space, inhabited by languages like Common Lisp and Dylan.
Types are indeed anti-modular - types create huge amounts of coupling across multiple module boundaries in ways that are not easy to wrap up, to componentize, if you will. If you create a module A, but the interface of that module (functions, classes, whatever you might have put in it) uses types from modules B and C, is it not anti-modular?
Designs are anti-modular. Types are descriptions of the design. They document the (anti-)modularity. Take the types away and you have the exact same anti-modular design as before.
In so far as (a) the language has a static nominative type system and (b) these types describe values that pass between modules, all modules that work with types defined outside the module will require other modules that use the module to also include (implicitly if not explicitly) the module dependencies.
You can get away from this by having a structural type system, or by being dynamically typed (duck typing ultimately being another form of structural typing). But within the constraints of the type system, you can't get away from this by merely changing the design - unless you put everything into a single module, which is the ultimate in anti-modularity.
> And I thought nobody in their right minds would question Hejlsberg's chops or the work he did for .Net.
Why not? C# 1.0 was terrible and most of the versions since have been exercises in trying to fix it by piling more stuff on, in order to replace previous tentative fixes which did not work for any value of "work" worth using, because they lacked generality.
> We need less of them in mainstream language design and more pragmatism.
I hope that's a joke. PL theory experts are nowhere to be seen in mainstream language designs (some have managed to get a claw or two into C# to add actually useful features like... lambdas...), and the "pragmatists" have a field day reinventing problems (not solutions) instead (hello, Go).
C# 1.0 was "terrible" for the reasons of being practical. You can design the most beautiful language in the world, but if all of the features you designed into it add exponential degrees of complexity to the rest of the system (parser, compiler, runtime, base framework, etc) then you're never going to release.
What language do you think was perfect at its first release?
> C# 1.0 was "terrible" for the reasons of being practical.
No, C# 1.0 was terrible for the reasons of being a slightly improved Java, without some of the baggage (before it created its own, because you can't be a good java descendent without building a pile of legacy garbage) and nothing more.
Fucking hell, I loathe this bullshit about "practical"; it's the most meaningless term since "strongly typed". You know what else is practical? Computed gotos, FORTRAN IV, Superzap, front-panel switches, punch cards, tape decks and magnetic-core memory.
Releasing C# without generics and half-deprecating all of the collection hierarchy (but leaving it in an undead state by not actually migrating users of old collections) a version and 2 years later was not "practical", it was a lack of foresight. Not having iterables was not "practical" it was "a pain", taking 2 releases to get properties (in the first place, and not so verbose you wanted to stab your eyes out) was not "practical" it was "whelp let's get this shit out now, who cares", having nullable references still isn't practical to this day.
> What language do you think was perfect at it's first release?
None, I've yet to see a perfect language at all. But there's a gap between a terrible language and a good language, or even an interesting language. C# 1.0 was nowhere near a good language. It wasn't even interesting.
And it's not like this shit's new, most of it is multiple decades old at this point (the only language aiming at mainstream I've seen do anything even remotely novel as of late is Rust and its typestates). We're talking about making mistakes which have been solved for 20 years.