Some Java folks are looking down on C#? I don't believe it :))
In my experience it's always been the opposite: C# guys have always looked down on Java folks. And rightfully so: the language itself is nicer, the standard library is much nicer, and the CLR+CIL are vastly superior to the JVM, especially in the memory management department.
>CLR+CIL are vastly superior to JVM, especially in memory management department.
This is an extremely disingenuous statement. You may be referring to the fact that the CLR/C# supports the notion of structs and allows native memory management. However, as someone who has experience working on both the JVM and the CLR, I can say that the CLR's GC is primitive compared to the JVM's.
The biggest advantage off the top of my head is that the JVM's garbage collector has far more tunable options. Different applications' heap profiles necessitate tuning them in high-performance applications. I don't believe the CLR allows you to specify the size or number of the various generations, or the number of collecting threads, all of which are possible with the JVM collector(s).
Either way, G1 trounces both the CLR's and the JVM's current collectors, so the Java 7 JVM will make the advantage even more apparent.
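For reference, a hypothetical HotSpot invocation showing the kind of knobs being described; the flag names are real Sun JVM options, but the values and the app.jar name are made up for illustration:

```shell
# Flag names are genuine HotSpot options; sizes/counts are example values.
java \
  -Xms512m -Xmx2g \
  -XX:NewSize=256m -XX:MaxNewSize=256m \
  -XX:SurvivorRatio=8 \
  -XX:ParallelGCThreads=4 \
  -XX:+UseConcMarkSweepGC \
  -jar app.jar

# Trying G1 on Java 6u14 and later (experimental at the time):
java -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC -jar app.jar
```

The first invocation pins the young generation size, survivor-space ratio, and collector thread count, the sort of per-application tuning the CLR does not expose.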
I'm less familiar with the internal workings of JIT compilers. I don't believe the CLR supports static analysis type optimizations, but perhaps the CLR team focused their research in other areas. If anyone knows more details on the differences in their JIT strategies I'd love to hear more.
Meh. GC isn't terribly interesting. It's just a tool to manage the internal heap of a process. The issue with the JVM is simple: it's a memory hog and it hates to reboot, because the JVM thinks of itself (and behaves) as a "computer" instead of a "process": it slowly boots instead of starting, it doesn't fork nicely, it doesn't share any code with its own clone running side by side, it allocates more RAM than it needs on startup; basically it behaves like a drunk asshole in a bar, hence no Java on the desktop. Dalvik, on the other hand, is "JVM done right".
CLR processes are just processes. They're integrated into the native VM, they are capable of code sharing, and they use more compact in-memory representations of built-in types. The CLR was designed to run on the machines we're actually running it on, not with the goal of selling more "big iron". The CLR is great for general-purpose software development, without cowardly "long-running" or "on a dedicated box" qualifiers attached.
I agree that the JVM is a bigger memory douche than a vodka-and-Red Bull addict with a new haircut, but I disagree that GC is a "meh" problem. In fact it seems that the CLR is still evolving in this area and can have serious issues with multi-GB heaps, as does Java unless you incant the right incantations.
Of course, one might say "don't use such big heaps dummy!" But that's a workaround. Due to fragmentation, sub-second GC latency is still a tough problem even for 1-2 GB heaps.
If C# would lead the way in a GC-friendly memory management scheme (maybe regions or something similar) they might win a lot of Java converts.
Anything I have tried to use Mono for has not worked. Maybe there is a subset of .NET stuff for which Mono works perfectly, but I do not think we should say that .NET or Silverlight are cross-platform based on an incomplete Mono implementation.
After reading about "Explicit virtual methods" I lost all my interest in C# forever.
I don't want this, seriously do not want. I don't want a language "with lovely features from C++". Explicit virtual, structs - please take those and leave me alone.
Lambdas are nice, some syntactic sugar is nice, but not that kind of feature creep.
Please don't take this as flamebait. I know a handful of languages, I do backend development, and I just absolutely don't need some language features, and I fear that somebody would use them in their little library and would POISON the lives of anybody who touches it.
Java has some "do not want" features too, like checked exceptions, but those are more or less solved these days.
You're not the only person in that position. I'm a developer from the Java camp who moved over to the C# camp (by work necessity, Oracle, etc.), and of all the features of C# that I enjoy (properties, attributes, lambda expressions, etc.), I absolutely abhor the idea that I, as a developer, have to know, perfectly, what future subclasses are going to want/need to run properly. It has never made sense, and it always felt like it was taking away one of the core tenets of object orientation (though I know that neither Java nor C# is a true OO language). I know it probably technically doesn't, but that's how I've felt ever since I heard about doing it "another way".
As a library developer, you can explicitly declare all your methods as overridable. The key point is that in large scale software development, in Java, you can have the following undesirable situation:
Version 1, you import a third party library:
public class ThirdParty
{
    public void calculateAndPrintResult()
    {
        System.out.println("1");
    }
}

public class Mine extends ThirdParty
{
    public void calculateAndPrintResult()
    {
        super.calculateAndPrintResult();
        this.printResult();
    }

    protected void printResult()
    {
        System.out.println("2");
    }

    public static void main(String[] arguments)
    {
        Mine mine = new Mine();
        mine.calculateAndPrintResult();
        // Prints 1, 2
    }
}
Version 2, third party has refactored and introduced printResult. You swap in the new third party library and suddenly the behavior has changed:
public class ThirdParty
{
    public void calculateAndPrintResult()
    {
        this.printResult();
    }

    protected void printResult()
    {
        System.out.println("1");
    }
}

public class Mine extends ThirdParty
{
    public void calculateAndPrintResult()
    {
        super.calculateAndPrintResult();
        this.printResult();
    }

    protected void printResult()
    {
        System.out.println("2");
    }

    public static void main(String[] arguments)
    {
        Mine mine = new Mine();
        mine.calculateAndPrintResult();
        // Prints 2, 2 (wrong)
    }
}
While this example is trivial, bear in mind that when you are programming in the large, i.e., large codebases with a lot of third-party libraries and many teams of developers, you would rather not have surprises like this pulled on you, or pull surprises like this on other people.
It can take a lot of work tracking down this kind of bug.
When you finally realize this type of issue arises despite everyone following "best practice" OOP, the reason is that "best practice" OOP is not actually best practice.
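One defensive option for the library author, sketched here using the names from the example above: route internal calls through a private method, so that dynamic dispatch can never reach a subclass's printResult from inside the library.

```java
// Sketch: version 2 of ThirdParty written defensively. Internal work goes
// through a private method, so dynamic dispatch can never pick up a
// subclass's printResult, and a subclass like Mine keeps printing "1, 2".
class ThirdParty
{
    public void calculateAndPrintResult()
    {
        printResultImpl(); // private: never dispatches to a subclass
    }

    protected void printResult() // still overridable for those who want it
    {
        printResultImpl();
    }

    private void printResultImpl()
    {
        System.out.println("1");
    }
}
```

This costs the library author an extra method per hook, which is exactly the kind of discipline C#'s non-virtual-by-default removes the need for.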
It prints "2%n2", where %n is the platform-specific newline; but that's not really important, as your point is still made.
I can see how that would become a problem, a potentially terrible one, but for a situation like that, wouldn't private methods solve it?
I can see more complex situations arising from this, and they'd be annoying to break down. I'm not throwing out my position on this, though, as good documentation could help in at least a few of these cases. Also, newer versions of NetBeans encourage the use of the @Override annotation, to help ensure things like this don't happen, I imagine (I don't actually know what the @Override annotation is for, as I've only used it in large Java projects where I was the only developer).
Not if the third party developer intends to override it in his own subclasses.
Documentation may solve the issue, but this goes against the "just works" philosophy.
The root of the problem is versioning. Your class was developed against version 1 of the base class. You expect that further changes in the implementation of the base class should not materially affect your running code.
To achieve this, one of the sacrifices is that methods are not overridable by default (i.e., not virtual); instead, they have to be explicitly declared virtual AND the subclass has to explicitly state that it wishes to override the base class.
The problem runs deeper. When version 2 of the ThirdParty class ships, suddenly your class doesn't work. If another team has subclassed your class and overridden your printResult routine, you are effectively stuck. You can't take in bug fixes that exist in version 2, and the other team effectively can't move forward either. Remember, this is best-practice OO, and there really haven't been any changes to the interface. So effectively this is like version 1.01.
It is namespace pollution of a very subtle and devious kind.
@Override is typically used to detect when you intend to override some method of the superclass but mistakenly don't. Such as when you misspell a method or something. By default the 1.6 compiler will treat it as optional and the situation described above is possible. However, you can tell the compiler to treat a missing @Override as an error which would give you an indication of the problem above.
edit: unless you misspell the method you intend to override and forget to include the annotation, in which case the compiler can't help you :-)
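A minimal sketch of what the annotation catches; the class names here are made up for illustration:

```java
class Base
{
    protected void printResult()
    {
        System.out.println("base");
    }
}

class Sub extends Base
{
    // Typo: "printResutl" silently declares a NEW method instead of
    // overriding printResult, and the compiler accepts it without a word.
    protected void printResutl()
    {
        System.out.println("sub");
    }

    // With the annotation, the same typo becomes a compile-time error:
    //
    //     @Override
    //     protected void printResutl() { ... } // error: does not override
}
```

Calling printResult() on a Sub instance runs the Base version, which is exactly the silent failure @Override is meant to surface.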
As a long-time Java developer, I absolutely abhor the assumption that everything can be safely subclassed and overridden unless it is explicitly declared as final.
Implementation inheritance breaks encapsulation. It creates an intimate entanglement between the internal implementation details of the super- and sub-classes. Every time you extend a class or override a method that wasn't explicitly and carefully designed to be subclassed and overridden safely, you are _writing hacky code_.
There are, of course, places for hacky code. If you understand why something is hacky and feel it's a worthwhile trade-off then by all means you should be able to do it. Java, though, encourages people to pepper their code with such hacks before they have learned why they shouldn't.
I thought Java had this same issue, but if methods aren't explicitly virtual you're effectively always in this state. So clearly Java must deal with construction/virtuals differently.
Considering that for the last several years nearly every significant improvement to Java has come from them copying from C#, if anyone should be looking down it is the C# folks.
I've been working in C# for the last five years or so, and I couldn't consider going back to Java again; every time I look at some code, even my own, I just see all the nice things that are missing.
There is so much that might seem like syntactic sugar at first, but when you get into it, you can express yourself in a lot less code compared to Java. Sadly, there also seems to be a culture around Java of making things needlessly complicated, whereas the C# culture is about writing less.
I've started using Jython along with Java at work to get some of the syntactic niceties described for C#. But it would be nice to get the syntax without having to cross language boundaries.
Exactly! In fact, I know of people (myself included) who look at C# in envy!
I think Microsoft has managed the language really well, even taking painful decisions like backward-incompatible versions to properly support generics. Kudos to the team!
Pardon my misunderstanding, if there is one, but didn't C# start as essentially Microsoft's clone of Java? Seems if Java wants to return the favor, it's fair enough.
It'd be tough to argue that Java is better than C# (the programming language). The benefits of Java come not from the programming language, but from the ecosystem, the JVM, the libraries, and the platform.
To be honest, I'd say a lot of the Java ecosystem exists to make up for failings in Java itself. A lot of other languages are just plain better out of the box, with either more robust built-in features or language design that avoids common problems. For example, C# doesn't need the equivalent of Spring and Hibernate because there are equivalent, or indeed superior, built-in or first-party options in the form of LINQ and ASP.NET MVC. Similarly, there's less need for third-party build tools like Ant because MSBuild is available right out of the box.
I think you mean LINQ to SQL rather than LINQ - LINQ is a query language, not the ORM/provider. Last I heard MS were trying to deprecate LINQ to SQL in favour of the entity framework.
In practice a lot of developers end up using open source build tools like nant and rake in preference to msbuild as they tend to be more flexible in other build environments like TeamCity/Hudson.
Nitpick: You're right that the C# designers don't have to worry about a JCP-like standardisation process, but they do put some effort into producing a formal language standard (I think through ECMA). Unlike earlier Microsoft proprietary languages, where the language definition was whatever the compiler said it was, the C# team do try to pin it down, make sure all the edge cases are considered, and so on. The compiler implementers try to write to this spec. They don't always succeed, but from what I read on compiler guy Eric Lippert's blog, when they do diverge, he regards that as a bug in the compiler, not an error in the spec.
That said, the standard is very much owned and written by Microsoft. You're absolutely right that they don't have to work through a JCP-like process.
Yes, you are right. Despite its proprietary nature, C# enjoys a team that is very rigorous and has a great deal of compiler experience. Eric Lippert is super cool. In fact, C# remains one of the reasons why I still root for MS.
Sadly, Microsoft is one of the reasons C# isn't living up to its full potential. If they released an official version of .NET for Mac or Linux, or at least officially supported the Mono project in a more decisive way, C# would become a lot more popular. I don't think they gain anything by this, though (apart from Moonlight, which lets them advertise platform compatibility), which is why they will never do it :(
Even though MS is a large money-making behemoth, I don't necessarily think C# would be served any better by MS trying to support other platforms. A large part of .NET depends on a well-supported class library, and class libraries can be very operating-system sensitive: backslashes vs. forward slashes, directory separators, non-blocking IO, the GUI stack, etc. It is hard to do this well, particularly on a platform that belongs to your competitors.
Microsoft has pushed Silverlight as a .NET VM that runs on the Mac, with a smaller framework library. So far, this has met with little traction. I don't think it will encourage MS to push this further and wider.
More to the point, C# has integrated an excellent map-reduce framework in the guise of (P)LINQ. Java still doesn't have anything analogous to that, nor does it plan to, if the latest roadmaps are any guide.
I'm not certain that Java really needs this. Developers that want a more functional style should consider making a transition to Scala or Clojure.
Scala can evolve at a much faster pace than Java as well, given that the core language is tiny and most of the language constructs are implemented as libraries.
He didn't say they weren't useful, just that Java the language doesn't need them, since other languages that interoperate with Java on the JVM already have these features. .NET isn't as language-rich as the JVM currently, unless I missed some major changes.
I get that you can call across to another language that has more functional features. But that's not the same as having little bits of it right there. e.g. when you want to find the latest item in a list, instead of a loop, you do:
and carry on. It's great. You don't tend to have that fine granularity when working across languages.
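The actual snippet was elided above (presumably a LINQ expression). As a rough analogue, the streams API that later Java (8+) gained offers the same granularity; a sketch with a hypothetical Item type:

```java
import java.util.Comparator;
import java.util.List;

// "Item" and its timestamp field are made-up names for illustration.
class Item
{
    final String name;
    final long timestamp;

    Item(String name, long timestamp)
    {
        this.name = name;
        this.timestamp = timestamp;
    }
}

class Latest
{
    // The "find the latest item" one-liner, instead of a hand-written loop.
    static Item latest(List<Item> items)
    {
        return items.stream()
                    .max(Comparator.comparingLong(i -> i.timestamp))
                    .orElseThrow(() -> new IllegalArgumentException("empty list"));
    }
}
```

The point stands either way: at the time of this thread, the inline one-liner was something you got in C# but not in Java without crossing a language boundary.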
I don't know if you missed F# or not, but if you want a fully functional language in the ML / Ocaml lineage on .Net, you can write some code in that and interoperate.
For example, in the type system we do not have separation between value and reference types and nullability of types. This may sound a little wonky or a little technical, but in C# reference types can be null, such as strings, but value types cannot be null. It sure would be nice to have had non-nullable reference types, so you could declare that 'this string can never be null, and I want you, compiler, to check that I can never hit a null pointer here'.
Fifty percent of the bugs that people run into today, coding with C# in our platform, and the same is true of Java for that matter, are probably null reference exceptions. If we had had a stronger type system that allowed you to say 'this parameter may never be null, and you, compiler, please check that at every call by doing static analysis of the code', then we could have stamped out whole classes of bugs.
Instead of being able to send a null DateTime, the WS dictates that the client must send something. Therefore, the client sends the Epoch date, and the code must handle this properly.
What about dealing with databases, where NULL actually means something (in certain situations, we do want to put NULL instead of some random value/junk)?
IIUI Anders isn't suggesting that null isn't necessary, but that you should be able to say in code: this type does not allow nulls (or this type does allow nulls). We have this with value types, int or int?, but not with reference types. Adding this in at the start would have removed many chances for bugs.
E.g. I frequently need to check if params are null and then throw an exception at run time. If I could declare a param as requiring a value then the error could usually be caught at compile time.
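The runtime check being described looks something like this, using java.util.Objects.requireNonNull; the Greeter class is a made-up example:

```java
import java.util.Objects;

class Greeter
{
    private final String name;

    // Today the best we can do is a runtime check that fails fast at the
    // call site. A non-nullable reference type would move this failure to
    // compile time instead.
    Greeter(String name)
    {
        this.name = Objects.requireNonNull(name, "name must not be null");
    }

    String greet()
    {
        return "Hello, " + name;
    }
}
```

The check fires at the first bad call rather than at some later dereference, but it is still a runtime failure, which is exactly the gap the comment is pointing at.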
> At the end of the day, it all depends on your developers.
Sort of. In your WS example, if you had better developers then they could have defined the interface to accept Nullable<DateTime> instead of DateTime. So the language does let you express that null is an allowed value.
The feature request for the language is to make it possible to express Nonnullable<MyClass>. Regardless of your developers, the best that they would be able to do is throw an ArgumentNullException (which I see thrown for all over the place) without that support.
I'll accept that there may be a situation where a client sends junk to a service, and the service treats the junk as null, but that requires both sides to be in on it - in which case, they would be able to agree to a signature change instead. But they can only agree to the signature change if the language supports it.
I do not see the need to constantly evolve a language and/or push for rewrites from scratch. Java as a language succeeded precisely because it was relatively simple, yet powerful enough. For example, I believe that adding generics was a mistake, especially in a crippled form (due to compatibility with previous language versions).
I favor switching the language/environment once you feel that you need more expressing power. E.g. use Scala, Jython or Clojure if you need JVM, or perhaps Erlang for server-side.
The author makes some fair points. Java generics are a bit clunky. C#'s more liberal application of autoboxing can be nice. Some other options for string literals would be helpful, especially for writing regular expressions without a ton of backslashes.
Checked exceptions are not something that I'd happily give up. We do have unchecked exceptions (RuntimeException), and the JRE generally uses them where appropriate. But what about methods like Inflater.inflate, ImageIO.read, or Class.getMethod? If these didn't throw checked exceptions, we would sometimes forget to catch exceptions which should almost always be caught. It would be a step in the direction of a weaker type system, unhelpful errors, and less graceful recovery.
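A sketch of the ImageIO.read case: the checked IOException forces the caller to decide, right at the call site, what a missing or corrupt file should mean (the loadOrNull helper name is mine):

```java
import javax.imageio.ImageIO;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.IOException;

class ImageLoader
{
    // The compiler refuses to compile this without the try/catch (or a
    // "throws IOException" clause), so the failure case can't be forgotten.
    static BufferedImage loadOrNull(File file)
    {
        try
        {
            return ImageIO.read(file); // declared as "throws IOException"
        }
        catch (IOException e)
        {
            System.err.println("Could not read " + file + ": " + e.getMessage());
            return null;
        }
    }
}
```

With an unchecked exception, the same omission would compile cleanly and surface as a crash far from the cause.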
Regarding initializers, we can do the same thing in Java by giving an initializer block to an anonymous class:
List<Integer> L = new ArrayList<Integer>() {{
    for (int i = 0; i < 10; ++i)
        add(i);
}};
Too bad he didn't get into any of the stuff that really makes C# nicer than Java, in my opinion: lambda expressions and closures, LINQ, expression trees, dynamic typing, type inference, P/Invoke vs JNI.
Also the event / delegate thing makes C# really nice for GUI stuff.
P/Invoke is the worst part of C# ;-) In Java, JNI and even JNA are difficult to use, on purpose. So if you really need native code you can use it, but it is hard to do, and thanks to this most developers have never used anything native.
In C#/.NET, P/Invoke is so easy that almost all projects use it. But this makes those projects Windows-only, which results in all the problems with things like scalability and being bound to one vendor.
If you need to move your app to another system, it will cost a lot, and the results aren't certain.
With Java, moving an app to another system is quite easy. I have moved some apps from Windows to the point where they could run on both Windows and *nixes, and it was rather easy; you only need to be careful when working with files.
And because of this, most big enterprise projects are on Java, not on .NET. Maybe Java isn't the best language on Earth, but the JVM, and what stands behind the JVM, is (as for now) the most enterprise-friendly platform on the planet ;-)
The fact that behind Java you now have two big vendors, Oracle and IBM, while behind C#/.NET you have only one, Microsoft, also makes a difference.
You are right that C# is nice for GUI stuff, but... again, only for Windows, not even for mobile, because as of now WinMo is far behind iOS and Android... and you can use Java on Android and BlackBerry ;-)
Delegates + events are another C# innovation, and a much cleaner approach than Java's anonymous classes + interfaces, IMHO. I think anonymous classes are good in both languages, but Java is really missing a standard delegate/event system.
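For comparison, a sketch of the Java pattern being criticized: a listener interface plus a per-handler anonymous class, where C# would wire a method to an event with +=. All names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

// A hand-rolled event: the interface, the subscriber list, and the
// broadcast loop all have to be written out by the class author.
interface ClickListener
{
    void onClick(String source);
}

class Button
{
    private final List<ClickListener> listeners = new ArrayList<ClickListener>();

    void addClickListener(ClickListener l)
    {
        listeners.add(l);
    }

    void click()
    {
        for (ClickListener l : listeners)
        {
            l.onClick("button");
        }
    }
}
```

Subscribing then takes an anonymous class per handler, e.g. `button.addClickListener(new ClickListener() { public void onClick(String s) { ... } });`, versus C#'s one-line `button.Click += handler;`.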
I always enjoy these Java vs. C# threads. They are like dispatches from a world where there are only two, absolutely terrible, languages. It makes me happy that I live in a world where I can choose between dozens of languages, some of them actually decent.