... and you have to work really hard to make C or C++ run any faster than the JVM. At all normal levels of programmer effort there is no difference, or the JVM is faster.
Makes sense: the JVM is fast and stable. It comes with a huge array of tooling and its own built-in debugging system. It is well supported and cross-platform. Why would you not want to take advantage of all that good stuff?
Doesn't make that much sense to me: the JVM is complex and has disadvantages (slow startup times, etc.), so if you can compile PHP to native code, why compile to interpreted/JIT-compiled bytecode instead? You're just wasting cycles again.
The advantages of the Java ecosystem can be accessed through Thrift if necessary, but the C/C++ world is far from being dead.
It might be interesting for the security and debugging aspects of the JVM, but that's a bit meagre for such an effort.
And some people might not like to hear it, but Oracle has really injected new excitement into the JVM with the G1 GC and the JRockit additions.
And with clarity around the future roadmap and the ever-growing list of non-Java languages on the platform, it has never looked better, IMHO.
Oracle moving on from the 1.6 doldrums was a huge plus for Java and the JVM. Now the buzz is around 1.8, with lambdas and invokedynamic performance improvements.
The JVM has a huge advantage over the CLR: Mono is nowhere near as well supported as the JVM is by Oracle, and Microsoft's CLR is Windows-only, which makes it far too expensive for cloud computing.
If you're looking to stretch a budget I'd look to companies like Hetzner rather than looking to the cloud.
I'd try to increase the revenue my server generates rather than decrease the cost of servers. But then, I'm the kind of guy who thinks it's possible to pull more than 5 cents an hour in revenue from a server. If you only get 5 cents per hour, then it would indeed be important to use Linux so you maintain your 2-cents-per-hour profit margin.
I see your point, but then money is money. So, why pay MS anything? Why is computing treated in this unique way where minimizing cost is not standard practice?
In this case a team could work across the project: one on the parser/lexer, one on AST translation, one on code generation, two on the runtime, and one on build/CI. I think that makes six.
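That split follows the classic compiler pipeline, where each stage hands a well-defined artifact to the next, which is what makes the work divisible. A toy sketch of the stage boundaries, for an addition-only language (all names and the JVM-ish mnemonics are illustrative assumptions, not anything from an actual PHP-to-JVM compiler):

```python
import re

def lex(src):
    # Tokenize: integers and the + operator
    return re.findall(r"\d+|\+", src)

def parse(tokens):
    # Build a left-nested AST of ('+', lhs, rhs) tuples
    node = int(tokens[0])
    for _op, rhs in zip(tokens[1::2], tokens[2::2]):
        node = ('+', node, int(rhs))
    return node

def translate(ast):
    # Lower the AST to a flat stack-machine IR
    if isinstance(ast, int):
        return [('push', ast)]
    _, lhs, rhs = ast
    return translate(lhs) + translate(rhs) + [('add',)]

def codegen(ir):
    # Render the IR as JVM-flavoured bytecode mnemonics
    out = []
    for ins in ir:
        if ins[0] == 'push':
            out.append(f"ldc {ins[1]}")
        else:
            out.append("iadd")
    return out

print(codegen(translate(parse(lex("1+2+3")))))
# → ['ldc 1', 'ldc 2', 'iadd', 'ldc 3', 'iadd']
```

Each function only depends on its predecessor's output format, so the five stages really can be owned by different people once those interfaces are agreed.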
You either never read the book or are interpreting it wrong.
The premise is that as you add more people to an EXISTING project, the amount of time it will take to complete will increase. However, if all of those people are involved from the beginning, then you are not subject to the same phenomenon - within reason of course.
So while it might not be "6" developers working on it for a year, it could potentially be 9 very devoted developers cranking this out with good results towards the end of the year.
Brooks also talks about another issue: in a team of 6 there are (6 choose 2) = 15 edges in the communication graph. I think this is what kyriakos was referring to.
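Brooks's communication-overhead point is just the handshake count: a team of n people has n(n-1)/2 pairwise channels, so overhead grows quadratically, not linearly. A quick illustration for the team sizes mentioned in this thread:

```python
from math import comb

def channels(n):
    # Pairwise communication paths in a team of n people: n choose 2
    return comb(n, 2)

for n in (6, 9, 20):
    print(f"team of {n:2d}: {channels(n)} channels")
# → team of  6: 15 channels
# → team of  9: 36 channels
# → team of 20: 190 channels
```

Going from 6 to 20 people more than triples the headcount but multiplies the coordination edges by almost thirteen, which is the mechanism behind "adding manpower to a late software project makes it later".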
Errmm, if you actually read the book properly you'll realise that the baby/mothers analogy applies to inherently sequential work, workloads that cannot be broken down into tasks that run in parallel. Once the project is architected to a suitable degree, I'd imagine there are at least a few streams of work that could be carried out concurrently.
AFAIK they have about 20 people on this... unless I've misconstrued a friend of mine's elliptical comments. In other news: when is the VM that Google hired all those people from the CLR team to build going to be released?