The Modern Java Platform – 2021 Edition (jamesward.com)
266 points by craigkerstiens on March 17, 2021 | 253 comments



There's some great stuff on the JVM today, but Spring Boot is recapitulating all the problems of J2EE. Everything is extremely "decoupled" to the point that you have no idea where anything comes from or why, and just adding a new dependency to your classpath will radically change the behaviour of your application (oh, you added a dependency on a library that has a transitive dependency on the MongoDB client? Guess that must mean you want a MongoDB connection pool spun up and running).

I swear I'm going to add a bitcoin miner to my libraries and list it in spring.factories so that it autostarts whenever someone starts a Spring Boot application with my library on the classpath. There'll be an undocumented config property to turn it off or make it mine to a different address. That's standard practice with every other library that uses Spring Boot, so it's perfectly ethical, and it's not like anyone using Spring Boot would notice that their application was wasting a bunch of resources.
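(For anyone who hasn't seen the mechanism being joked about: a rough sketch of how a library opts itself into auto-configuration. The spring.factories key and the @Conditional* annotations are the real Spring Boot pieces; the Miner class, package and property names are obviously made up.)

    // In the library jar: src/main/resources/META-INF/spring.factories
    //   org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
    //     com.example.miner.MinerAutoConfiguration

    package com.example.miner;

    import org.springframework.beans.factory.annotation.Value;
    import org.springframework.boot.autoconfigure.condition.ConditionalOnMissingBean;
    import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    // the "undocumented" off switch: active unless someone discovers the property
    @ConditionalOnProperty(name = "miner.enabled", havingValue = "true", matchIfMissing = true)
    public class MinerAutoConfiguration {

        @Bean
        @ConditionalOnMissingBean // politely back off if the app already defined its own Miner
        public Miner miner(@Value("${miner.address:1DefaultWalletAddress}") String address) {
            return new Miner(address); // spins up as soon as the application context starts
        }
    }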

Maybe someone else did this already. That would explain a lot actually.


I've been through the whole evolution from Spring with XML configuration to the current 'convention over configuration' approach of Spring Boot.

I actually liked the idea behind the XML configuration when I first got to know Spring in 2007. With this you could decouple the composition of your application from the actual code. You could deliver a jar and the user could decide with his own XML which parts to use and which not, for instance if he wanted to use your software in test mode or if he wanted to use another database. Otoh, most people who were using Spring were just following the cargo cult and not using any of the freedom that XML offered, and then all the XML configs were just massive overhead.
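(Roughly what that looked like: the user's own XML wires up classes shipped in the jar, so switching databases or running in test mode is just an XML change. The bean and class names here are made up.)

    <!-- applicationContext.xml, owned by the user of the jar, not by its author -->
    <beans xmlns="http://www.springframework.org/schema/beans"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
           xsi:schemaLocation="http://www.springframework.org/schema/beans
                               http://www.springframework.org/schema/beans/spring-beans.xsd">

        <!-- swap this single bean to point at another database, or at an in-memory test double -->
        <bean id="dataSource" class="org.apache.commons.dbcp.BasicDataSource">
            <property name="url" value="jdbc:postgresql://localhost/orders"/>
        </bean>

        <bean id="orderService" class="com.example.orders.OrderService">
            <constructor-arg ref="dataSource"/>
        </bean>
    </beans>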

Then came Spring annotations. For the cargo cult followers this must have been a big relief. For those of us who really used the XML application configuration, the decoupling of code and configuration was now gone. There was still the option to use XML but it was frowned upon by the community.

And finally there was Spring Boot. The advantage of Spring Boot is that if you just want all the default Spring choices you can have a full-fledged service up and running in no time. This looks really nice in blogs and demos. With 'convention over configuration' you only have to adjust the parts where you don't like the defaults. This might seem nice, but if you're working on a larger software project this can quickly turn into a major headache. All kinds of choices are made for you by conditional Spring beans you might not even know exist, and there can be complex conditions determining their behaviour that might change all of a sudden when you add a new jar to your classpath.

Spring has one big advantage though. It is so very popular and widely used that as a Java developer you only need to get good at this one framework and you will get plenty of job opportunities for years to come. And the reverse is also true for employers: just choose Spring as your framework and you will have no problem finding new developers.


There's a next stage after annotations. The current thinking is to replace annotations with function calls. It makes more sense if you use Kotlin, because Java is a bit verbose when you do this and in Kotlin you get to create nice DSLs. This cuts down on the use of reflection and AOP magic that Spring relies on and also enables native compilation. It also makes it easier to debug and to understand what is going on, at the price of surprisingly little verbosity. Kofu and Jafu are basically still experimental but work quite nicely https://github.com/spring-projects-experimental/spring-fu/tr...
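(Not Kofu/Jafu themselves, but a minimal sketch of the plain-Spring functional registration they build on, GenericApplicationContext.registerBean, with made-up service classes. Everything is an explicit call you can step through in a debugger.)

    import org.springframework.context.support.GenericApplicationContext;

    public class FunctionalWiring {

        static class Repository { }

        static class OrderService {
            private final Repository repo;
            OrderService(Repository repo) { this.repo = repo; }
            void placeOrder(String id) { System.out.println("placing " + id); }
        }

        public static void main(String[] args) {
            GenericApplicationContext ctx = new GenericApplicationContext();
            // plain method calls instead of annotations or classpath scanning
            ctx.registerBean(Repository.class, Repository::new);
            ctx.registerBean(OrderService.class,
                    () -> new OrderService(ctx.getBean(Repository.class)));
            ctx.refresh();
            ctx.getBean(OrderService.class).placeOrder("demo-1");
            ctx.close();
        }
    }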

Another trend is native compilation. Spring Native just went into beta (it uses the Graal compiler). That still relies on reflection, but they re-engineered the internals to be more native friendly.

Spring Boot basically added the notion of autoconfiguring libraries that, simply by being on the classpath, self-configure in a sane way. It's one of those things that makes the experience a bit more Ruby on Rails like. Stuff just works with minimal coding and you customise it as needed (or not, which is perfectly valid).

Compared to XML configuration, Spring has come a long way. Separating code and configuration is still a good idea with Spring but indeed not strictly enforced. @Configuration classes can take the place of XML and if you use the bean dsl, that's basically the equivalent of using XML. Only it's type checked at compile time and a bit more readable.


> Stuff just works with minimal coding and you customise it as needed (or not, which is perfectly valid).

You can't though, because the usual pattern is that the classpath-based self-configurer imports a non-public class that contains all the actual configuration. So you can't customize or extend the autoconfigured version - you have to either accept it as is or replicate the entirety of it from scratch.


You can disable configuration classes and then set things up manually. I did this recently with the mongo driver.
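(Roughly what that looks like. MongoAutoConfiguration is the real Boot class to exclude, and there is a related MongoDataAutoConfiguration you may need to exclude as well; the connection string is a placeholder.)

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.boot.autoconfigure.mongo.MongoAutoConfiguration;
    import org.springframework.context.annotation.Bean;

    // opt out of the Mongo auto-configuration, then wire the driver by hand
    @SpringBootApplication(exclude = MongoAutoConfiguration.class)
    public class App {

        @Bean
        public MongoClient mongoClient() {
            // fully manual setup: no conditional beans deciding things behind your back
            return MongoClients.create("mongodb://localhost:27017");
        }

        public static void main(String[] args) {
            SpringApplication.run(App.class, args);
        }
    }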


Right, but you can't customise the automatic config. It's all or nothing - you have to either use the whole autoconfig as-is or reimplement everything that it does yourself.


While nowadays I wouldn't touch Spring with a ten foot pole if I had the choice, back then (must have been mid 2000's as well), Spring helped me really develop as a software engineer; everything I know now from (Java-style) interfaces, contracts and unit testing comes from that one six month internship where I was left to my own devices to build an application similar to what Sonar is nowadays (or was a few years ago). I still have the code somewhere, so many comments <_<.


Looks like I came up the same path as you: XML in Spring 2, with Spring 3 adding lots of support for annotations. Once Spring 4 came around, I was... okay with abandoning _most_ XML, but some things just made way more sense in XML, such as Spring Security configuration.

I first saw Spring Boot demoed by Josh Long at a Pivotal office in Toronto, and my first reaction was to retch at autoconfiguration, since it was extremely apparent where it would lead. The team I was working with at the time were a hard NO on annotation-driven config, which I thought was extreme at the time; however, several jobs later, I saw the proliferation of Boot and the autoconfiguration cancer it caused. Some projects were explicitly re-written with a hard technical requirement to not use Spring Boot, and those code bases ended up cleaner and more readable as a result.

The current gig is steeped in Boot and I've just given up and instead tried to use TypeScript for anything new, simply to avoid the Spring ecosystem.

What's more terrifying is watching new grads and green developers use this magic trash and have zero concept of what's actually happening under the covers. When I say zero concept, I really do mean that they have no idea what the servlet spec is, let alone containers or reverse-proxies.


I guess it depends upon what you mean.

For many projects, DI frameworks are a giant cargo cult. Whether you use annotations or XML. More broadly, Java's obsession with "flexibility" (really, false flexibility) is a giant cargo cult.

If you're talking about webapps specifically... using XML in Java was not a cargo cult. It was the only option in 2007. Even if you didn't use Spring.


My thoughts exactly. I started with the XML flavor, and although I hate XML, it was (IMO) a far superior mechanism than annotations. All the config was in one spot. You didn't need to recompile to change it. So much easier, all in all.


One advantage of Spring Boot's non-XML-based configuration is that you can use an IDE like IntelliJ for refactoring/navigation. XML is hard to refactor/navigate if you're dealing with thousands of files.


You don't need Spring Boot for that. You can use plain Spring with annotation-based config, no XML needed. And your IDE navigation will work, whereas with Spring Boot it frequently finds the wrong bean definition because of all the dynamic magic (ConditionalOnClass, ConditionalOnMissingBean etc.).
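(A minimal sketch of that setup: plain Spring, annotation config, no Boot and no spring.factories. Package and class names are made up.)

    package com.example.app;

    import org.springframework.context.annotation.AnnotationConfigApplicationContext;
    import org.springframework.context.annotation.ComponentScan;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.stereotype.Component;

    @Configuration
    @ComponentScan("com.example.app") // only scans what you point it at; no classpath surprises
    class AppConfig { }

    @Component
    class GreetingService {
        String greet() { return "hello"; }
    }

    public class Main {
        public static void main(String[] args) {
            try (AnnotationConfigApplicationContext ctx =
                     new AnnotationConfigApplicationContext(AppConfig.class)) {
                System.out.println(ctx.getBean(GreetingService.class).greet());
            }
        }
    }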


Errr, so today would you go for Spring or Spring Boot?

For a large eBay-sort-of backend with lots of REST APIs for both front end and back end integrations?


I would choose it.

Bottom line, organizing and maintaining large codebases on which lots of developers are collaborating is going to be painful no matter what your stack is. There is no technical fix for overcoming all the dependency and coordination problems created by large, complex software.

As nothing is going to remove that cost from you, the best you can do is transform one set of pain points into a different set of pain points. The most dangerous choice is then the one where the pain points are not well understood, even to the point where you think they aren't there. Trust me - they are there, lurking - waiting for you to start tripping over them.

At least with Spring there is a well understood approach with a large pool of developers and some accumulated wisdom. That's better than most alternatives for real world use.

OTOH, if you are building a small project with a small team, it doesn't matter too much which framework you use, just use whatever your team members are most comfortable with. If your intention is to grow into a massive project, then finding devs who have experience in your stack will matter more down the road.


There's a very good reason why Spring is used so widely.

Complex enterprise apps are often complex because the use case and the environment is complex.

E.g.:

- integration testing and unit testing is required

- transparently pluggable backends for message queues, so that locally you can use SQLite as your pub/sub storage but in production it's Google P/S

- standardized health check endpoints for all your apps

... The list is infinite. You can make a decision for each point or you can agree with the team that you're using whatever Spring provides.

In a team with 100 devs, simplifying and unifying decisions is extremely important. And Spring _works_. I don't like it either, but it works.

The alternative is: solving all the problems that Spring solves with different tooling. And no, you can't avoid dependency injection in a 700 kLOC medical application, because you need to test the hell out of it.


> Complex enterprise apps are often complex because the use case and the environment is complex.

Uh maybe... the real question is where that complexity comes from. Is it intrinsic to the problem or just bureaucratic slop? Given a framework so popular, what are the incentives to go uphill and challenge assumptions - with the likely risk of being fired - rather than just concede and add your little contrived contribution to the problem?


From my experience, enterprise apps are generally complex because they are a combination of:

- environment: integration of a large amount of services - and a good amount of them are legacy and idiosyncratic

- use case: enterprise apps are at the intersection of real life and the virtual world: the rules are messy, illogical and carry 20-30+ years of baggage. Thus they cannot be changed at all. This is IMO the main difference between a "pure" greenfield startup kind of project and the enterprise one.

- add another layer of bureaucracy and a complex environment to navigate

And with that you got the enterprise app world :).

In the end whatever framework is chosen, the most important property is the availability of common language/patterns.


> use case: enterprise apps are at the intersection of real life and the virtual world: the rules are messy, illogical and carry 20-30+ years of baggage. Thus they cannot be changed at all.

This is in my experience exactly the problem. They can be changed and should be changed as it would save everyone boatloads of time and money. Engineers need to advocate these business process simplifications, and managers need to illustrate the ROI of making such simplifications.

One has to challenge nonsensical requirements: it’s a critical part of engineering.


Enterprise as in "sold to enterprises" or as in "created by enterprises"?

Both are complex, but for different reasons.

Software that is sold to enterprises is complex because they compete on the number of features (in checklists), and there is no pressure for quality since the software's users have little say in what software gets bought.

Software that is created by enterprises for internal use is complex because the enterprises themselves are complex. They are full of rules, created by different people with very different goals, that add up with time, and the applications must deal with them.


> Software that is created by enterprises for internal use is complex because the enterprises themselves are complex. They are full of rules, created by different people with very different goals, that add up with time, and the applications must deal with them.

Maybe it's just me and my scarce disposition for forgiveness, but after several years I tend to believe complexity emerges as a consequence of superficial understanding, diffuse aversion to analysis and outright poor analytical skills.

:/


agree on poor analytical skills. top talent devs tend to work on front-office apps that generate $, and not the backoffice intranet type apps for internal employees and processes. i think it is a function of the compensation difference as well and dev self-selection


Spring proper is (relatively) fine; you can make applications with it that are more-or-less maintainable. Spring Boot is very different. It's a write-only framework, and while businesses may have enthusiastically adopted it, it's new enough that they haven't (yet) had to pay the maintenance costs and realise how bad they are.


I agree with this, having recently gone through the exercise of creating some "onboarding" apps for new grads using Spring 5 and XML, then again with annotations. All the underpinnings for a non-Boot application are still there and mainly sane.

Spring Boot absolutely yields write-once throw away, and you _must_ consider the source: Pivotal is a contracting shop, so it's in their interests to hook people into an ecosystem that they happen to be experts in. I'm surprised this fact is lost on most.


What do you mean by write-only framework?


Write-only in this context means that the cost of maintaining something vastly exceeds the cost of rewriting that something from scratch.

I don't know Spring and Spring Boot well enough to pass judgement, but this situation generally arises in software when you have a language or framework that's reasonably expressive, where the language has a lot of sigils or keywords and "shortcuts" that you have to grok before you can really understand codebases that use them (like annotations or heavy/excessive use of design patterns). It leads to software solutions that are only well understood by the original authors, due to their specific knowledge and problem-solving approach, and anyone who comes along later must have the exact same overlap of knowledge and skills; otherwise they'll find it easier to start fresh. This includes the original authors, if the time lapse has been long enough.

This often then leads to the next phase of miserable software jobs, the second-system effect.


WOF == you understand your software while you're writing/developing it but give it a rest for a Trump's "two weeks" and you'll have no clue what you did.



The "different tooling" you mention should be the Java language itself. Let's not kid ourselves, literally no one writes enterprise applications in Java without some kind of a framework, and lives to tell the tale/doesn't go crazy. Since Spring is a de-facto "official" framework to write Java apps in, we might as well clean it up for inclusion into the core Java language, with attention paid to transparency to tooling in IDEs.


> In a team with 100 devs, simplifying and unifying decisions is extremely important.

I agree with this.

Unfortunately, Spring is often chosen for projects with much smaller teams too. Just 1 to 5 devs on what is fundamentally just a CRUD app. Wrong tool for the job.


I disagree. I can spin up a CRUD service with Spring Boot in an hour including validation, health checks, db migrations, API documentation and what not.

It lets me move fast while taking care of the boring stuff.

Nothing to do with team size.


I have the same experience as you. I have used Spring Boot in different environments for 5-6 years (first at a complex global enterprise and later at a small startup). Spring Boot has been enabling me to be very productive. I haven't had any major issues that I can blame on the "magic" that Spring Boot provides.

I have read startup experiences with other frameworks where they write blog posts about all the issues they had to spend time fixing that are trivial to solve using Spring. There is a slight learning curve in the beginning, but it is worth it in my opinion.


That’s all fine and nice, except when the magik doesn’t work and you’re ctrl-clicking for hours trying to figure out what darn annotation is breaking the whole incantation.

Or you realize that the tiny, tiny configuration change you need isn't contemplated by the code supporting the auto-magik, so you start adding overrides which turn off the autoconf, and you have to manually configure the whole beast by yourself (discovering all the undocumented gotchas along the way.)


Luckily this doesn't happen all that often. Unfortunately this is gonna happen in every framework. Compromises are what you always get when reusing somebody else's framework/program/anything, basically, because use cases are never 100% similar.


A good framework lets you gracefully progress from 100% autoconfigured, to 95% autoconfigured / 5% customized, to 90% autoconfigured, and so on. It does this by working in a consistent way, where framework-provided defaults behave the same as custom implementations of the same thing, and you can use the same tools and techniques to understand what the framework is doing as you use to understand your own application.

None of that's true of Spring Boot. The autoconfigured stuff comes in in its own fashion that's hard to relate to your normal configurations, and it's encapsulated in such a way that as soon as you want to replace part of it you find you have to reimplement all of it.


Well, as someone else on this thread wrote, it does happen all the time when you're not preparing for a presentation or a blog.

If you're building something marginally different from the routine the curtain is drawn, and IMHO you're probably better off buying a shrink-wrapped ready-made SAAS.


Same. Spring boot + jooq is a quick, simple way to spin up backend services. Like any tool though, one needs to know Spring boot.


You know what should be taking care of all of these things? Java! For a language that bills itself as the "enterprise language #1", it's abysmal in supporting actual enterprise features like you listed above in a lightweight fashion. Instead, a whole third-party framework has to be tacked on just so you don't have to reinvent the wheel. Java doesn't even support dependency injection, something that I would consider an absolute minimum for even a small project.


>Java doesn't even support dependency injection, something that I would consider an absolute minimum for even a small project.

For small projects, DI is easily done by passing things via the constructor. If multiple things need to get wired together, pull that wiring logic out into its own class or method (FooBuilder.buildDefault()). If things start to get tedious, that's a really good time to stop and reflect on the design choices. That stop-and-reflect opportunity is often lost when things can be simply AutoWired together.
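(A sketch of that, with made-up class names, following the FooBuilder.buildDefault() idea above: the wiring is just code in one place, and tests can bypass it entirely.)

    class Database { /* real connection handling elided */ }

    class OrderService {
        private final Database db;
        OrderService(Database db) { this.db = db; } // the dependency is explicit and swappable
        void placeOrder(String id) { /* uses db */ }
    }

    // one small class that knows how the default pieces fit together
    class AppWiring {
        static OrderService buildDefaultOrderService() {
            return new OrderService(new Database());
        }
    }

    public class Main {
        public static void main(String[] args) {
            OrderService service = AppWiring.buildDefaultOrderService();
            service.placeOrder("demo-1");
            // in a test: new OrderService(new FakeDatabase()), no container needed
        }
    }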


Jakarta EE (née Java EE) has CDI, which bears a striking similarity to Spring's dependency injection capability: http://cdi-spec.org/
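(A minimal sketch; depending on the EE version the packages are javax.* or jakarta.*, and PaymentGateway is a made-up bean.)

    import javax.enterprise.context.ApplicationScoped;
    import javax.inject.Inject;

    @ApplicationScoped
    public class OrderService {

        @Inject // the CDI container supplies the bean, much like Spring's @Autowired
        PaymentGateway gateway;

        public void placeOrder(String id) {
            gateway.charge(id); // charge() is part of the made-up example
        }
    }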


The real thing that is missing is a runtime annotation scanning API.

Spring does this by examining the classpath, casting it to a URLClassLoader, finding all the zip/jar files, unzipping them and then parsing the byte code.

Interestingly, the servlet spec added annotation scanning too. And now, if you're not careful, your jar files get extracted three times: the JVM, the servlet container and Spring all repeating the same work.

No wonder people think java is slow to start up.


I don't think there is a need to blur the line between a standard library and frameworks. Maybe @Inject should have been in the JDK, but on the other hand it can be added to any project easily and is supported by several frameworks.

In general I think, it is wise to keep the standard library small, because innovations are easier to implement in libraries. Rust is a good example for this style.


If Java had had first-class functions to start with, things like Spring would probably have never got off the ground. Up until Java 8 you couldn't even pass a reference to a constructor as an argument without defining a whole class to carry it around in.
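(What that looked like in practice, with a made-up Factory interface: before Java 8 you wrote a class just to carry the constructor around; since Java 8 the constructor reference itself is the factory.)

    public class FactoryDemo {

        interface Factory<T> { T create(); }

        static class Connection { }

        // Before Java 8: a whole (anonymous) class just to pass "new Connection()" around
        static final Factory<Connection> OLD_STYLE = new Factory<Connection>() {
            @Override
            public Connection create() { return new Connection(); }
        };

        // Java 8+: the constructor reference is the factory
        static final Factory<Connection> NEW_STYLE = Connection::new;

        public static void main(String[] args) {
            System.out.println(OLD_STYLE.create());
            System.out.println(NEW_STYLE.create());
        }
    }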


But first-class functions are a completely different thing than dependency injection (DI). Supporting functional programming is an aspect of the programming language itself, while DI is part of the infrastructure.

I agree with you that Java should have had first-class functions from day one. Ironically this was considered by the language designers, but they decided that it would be too exotic for the average Joe. OOP was all the hype back then ...


DI is a design technique, you don't have to use a framework to implement it. Reflection-based frameworks became a popular way of doing it in Java because doing it in plain Java is cumbersome, and that's due to the limitations of Java as a language.


I don't know about that. Dmitri Sotnikov, joint author of "Web Development With Clojure", worked on a similar app and didn't even need types or OOP let alone a monster like Spring.


Enterprise apps are complex because they don't focus on solving the business problem, they focus on the tools and frameworks.

Spring is not the cure, it's the disease.


Enterprise apps are super complex, frequently a lot more complex than startup apps.

Someone basically comes with 1 thick tome worth of business knowledge plus 1 thick tome worth of legal restrictions and you're supposed to codify all of that, to the letter, in software. Some of the business logic can make you cry.

Software dev: "But that's not clean/elegant".

Business owner: "Reality doesn't care about elegant/clean, it cares that we build stuff our customers want and that won't get us sued, so we have to implement it to the T".


I'm sorry, having extensive experience writing enterprise apps I have to refute this. Almost all complexity I've ever seen was rooted in infrastructure, not the inherent complexities of said enterprise.

I would even go so far as to say that the enterprise logic was so straightforward that most programmers spent their time inventing problems, which is how I look at the enterprise market. There's a lot of "inner platforms", meta-problem solving and needlessly complex deployment environments.

Contrast this with my current industry, gaming. Here the problems are real, tangible and hard. Suddenly overcomplication is much less of a problem, because the extra cognitive load becomes too much when your core problem is already hard.


Just as a counterpoint, I've done a little bit of game programming and currently work on a tax reporting system (mostly). Both are full of random shit you have to know for no reason. The game random shit I've had to learn was often tied to very specific platforms at very specific times, and for the game stuff I very often had to delve into actual math + algos.

With the tax reporting system I get bogged down and mired in the literal thousands of edge cases and exceptions due to the interactions of all of the laws, sometimes written maliciously by some political entity; or sometimes some local municipality goes against the federal laws, leading to literally impossible scenarios. On top of that you have things like your company booking transactions one way that they shouldn't have, which is now too much of a pain to change, so it has to get reported differently at the reporting layer (which itself is a problem, as to why this wasn't noticed initially). Very often the gov't specs themselves are contradictory.

Here is an example: stocks can sometimes pay debt interest, did you know?! That doesn't come up in any of the fixed income documentation I had to read. But right there in some list of bonds the IRS publishes every year are non-bond things. wtf?! So do you fabricate some new type, "debt interest paying thing", mostly composed of bonds and once in a while stocks, in your data model? Do you keep your data model clean and fabricate an internal bond to represent the debt part of the stock in these cases and link it to the actual stock? You will have to consult with your legal/tax department and work out what the resultant fines and loss of customer goodwill would be if you were to misreport this income (due to a higher chance of them being audited, paying more tax etc). Do you conclude that your internal team cannot keep up with the ever marching and changing tax law (FATCA anyone wtf?!) and outsource this to a tax reporting company? The scope of our tax reporting is nearly unlimited since the US government seeks to capture all of human endeavor. The results of getting this wrong are very real. People get audited and lose money and sue our company and can go under due to litigation costs. In games, if we fuck up, our company goes under and everyone loses their job, but I didn't have to worry so much about potentially ruining the jobs of people outside of our company.

This is actual complexity your software will have to deal with no matter your language/platform.

In my, admittedly limited (never AAA), experience with games I always seemed to be bumping up against physics (time, memory, latency/speed of light) but with enterprise apps it's almost always bumping up against the sum total of human stupidity past and present.


I'm not refuting that businesses can be complex, but I am saying that most of the complexity I've seen hasn't been inherent to the business.


I understand, I should have been clearer. My experience has been the opposite. The companies I've worked for had actual complexity to tackle. I guess I got a little defensive because I've encountered "game programmers are gods, everyone else sucks" attitudes before from forums and fellow programmers and projected that onto your response. Sorry.


No worries. The gaming industry has its own set of problems for sure, so I'm not claiming it's inherently better in any way :)

I'm glad to hear that your major challenge in the enterprise was to solve actual problems and produce real value. My journey was mostly learning one over-complicated framework after another, and finally coming to the conclusion that it was mostly for nothing. I was the local Spring expert, but at the same time its biggest critic. The "code", or more accurately the configuration, gets very concise, but you risk ending up with only a handful of people in the building who know how to debug the app properly.

I started coding Java when Sun marketed it as a very pragmatic choice, focusing on "simply writing code". The influence of IBM and the whole JEE movement (including Spring) still looks at the problem of coding from the wrong angle IMO.

Creating a good development environment is not about creating an all-in-one runtime environment or methodology, nor about simplifying the problem space for developers by letting them write plugins to large servers.

It's about establishing a fast RAD cycle and offering a buffet of good libraries, to simplify the writing of code.

Java used to be the language which allowed you to "just program", nowadays the mainstream choices are golang or node. To me it's become very clear that Java is on the wrong track here.

There are an infinite number of distractions as a coder, including Microservices, responsive design, functional purity, patterns etc.

If you managed to duck most of these and end up in a place where you were producing value effectively, you were very lucky judging by my experience :)


Extensive experience writing enterprise apps in many different industries? To me it sounds like you're trying to refute a generality with anecdotal experience. Have you ever worked for a heavily regulated industry like healthcare or insurance? The business logic is heavily tied to the regulations which vary by state/country and can be quite... cumbersome.


That is true. I don't really refute that there are instances of complex rulesets. I've worked in some regulated markets, but not healthcare or insurance.

That said, I stand by the point that most complexity I've seen has come from infrastructure. Because although the rulesets might be complex, the way they are encoded is usually the source of the problem IMO.

For example, if the rules of the business are encoded in such a way that unit testing them is straight-forward, it puts the business at the center.

But when every rule is testable only in a complex deployment, the complexity of the app is no longer tied to the complexity of the business.

And the latter has been more the rule than the exception in my experience.


This is absolutely not the case. Enterprise app developers (etc.) are not stupid. The domain and the business problem are often extremely complex and hard to understand.


My observation from my long career of writing enterprise apps is that frameworks are super complicated with many hidden variables, so when you apply them to a complicated business problem you end up with the square of the complications of both systems.

We ended up in framework hell. In response, we ditched all the frameworks we were using, went to pure JavaSE, and ended up with a (much!) faster, more reliable enterprise application that was far, far easier to maintain.


I had this experience with the so-called "app servers" at our company. We also have a daily batch processing system that has tons of operational support, db + service monitoring, a very useful web UI, logging split out, an authorization system etc. It's built around common *nix concepts like files, pids, pipes, etc. Thousands of these batch jobs are little JavaSE programs that launch, do their business, record their progress and report success or failure. Some are python programs and behave the same way.

Our JEE apps were deployed to these app servers with very little internal support or expertise, which very often had devs physically logging onto production hosts to figure out what in the world was wrong, restarting the app server, grepping logs and in general having to learn about these app servers (and usually just shrugging their shoulders at what went wrong). The sum total of things you had to know to keep the JEE deployments up was as much as you had to know about *nix processes PLUS you still had to know all the *nix stuff, except none of the JEE infra was built out, since the app servers were just these monstrous processes with hundreds of db connections and thousands of threads.

My team long ago ditched JEE style deployments and have our apps all managed with the batch processing system, and have been all the better for it. Some of those apps use Spring/Boot, but we've been doing a decent job in the code reviews of just rejecting anything too auto-magikal. Other teams that have stuck with these massive app servers have stagnated, since the deployments are so frail that no one dares make large changes. This is mostly an institutional problem, but still, the end result was that sticking to simpler JavaSE stuff has led to way more productivity.


Yeah, your experience absolutely mirrors mine. Don't even get me started on JPA; I've used exactly the same phrasing you did: when you use JPA, now you have to know SQL, JPA and JPQL. It makes no sense!


Heh, I found the opposite. JPA is one of the few parts of the ecosystem that actually work and deliver enough value to be worth it.


> Enterprise app developers (etc.) are not stupid.

Enterprise developers aren't stupid but enterprise software is some of the worst software I have encountered in my career. When a development team has one captive customer you get the results you would expect.


That's not Spring Boot, that's the whole of Spring (and Google Guice, etc), starting from the idiotic XML for bean wiring, to the equally idiotic annotations, and finally, after 15 or however many years, they realized that plain Java can be used for object creation. What a discovery! Still, they introduced the @Configuration boolshit, etc.

(There are cases when such features may be useful, e.g. systems that are extended by 3rd party plugins, like Maven, although it may be done without that as well).

For normal applications that's nothing more than bloat and limitations. The problem is that the majority of users follow cargo cults without understanding what they are doing. As a result the ecosystem is full of bad practices.

AbstractArgumentBuilderFactoryFactory was in fashion for the same reason.

Otherwise Java is a neat little language and a good platform.


I'm no fan of regular Spring, but it offers some legitimate value and has relatively comprehensible behaviour. It doesn't break grep: when a class is instantiated by Spring you can find a reference to that class, and most IDEs can also follow those references. Adding a new dependency doesn't change its behaviour.

Spring Boot is a real step change in comprehensibility, in the wrong direction. It's on a similar level to adding COME FROM to the language. You're not wrong to complain about Spring/Guice in general, but implying that they're remotely comparable to Spring Boot is thoroughly misleading. It's not the same thing at all.


Good to see another big fan of Spring Boot like me.

> Spring Boot is a real step change in comprehensibility, in the wrong direction.

With VMware's Tanzu crap for containers and the Spring Native initiatives, the idea is complete lock-in to the VMware ecosystem, from developer desktop to running service. The goal seems to be that no one should have any visibility into their own systems except VMware consultants.


I work for VMware, so I am not disinterested, and of course we want it to be easy to use stuff we sponsor or contribute to.

But what exactly about stock Kubernetes and the fully-OSS-for-nearly-two-decades Spring Project strike you as "lockin"?

This is a bit like accusing Red Hat of lockin for shipping a Linux kernel.


Fully agree. I've inherited an insane desktop application that uses the spring XML bean configuration. Then there's classes used to configure the configuration. And three separate property class types to pass the properties into the context, only one of which will persist the properties. And factory beans to instantiate a different bean depending on the properties passed in. And different XML configurations for each subproject which somehow join together but sometimes complain about dependencies being defined in the wrong place... Total nightmare.

It's basically impossible to know what will be instantiated, extremely hard to inject and configure things. When I find the time I'm going to just create things directly from Java with constructor injection and ditch the whole sorry mess.

Edit: did I mention how unbelievably slow it is as well?!


I've sometimes recreated the dependency injection in 200 lines, because I had been stuck on an abstruse error for an entire afternoon. Sometimes one has to fix the right problem. (It was with Dropwizard, not Spring Boot, but it's a recurring theme in the Java ecosystem.)


Was "boolshit" a typo or is it a magnificent new pejorative?


it's an amazing new type description where some shit is either true shit or fake shit.


Typo :)


Also amazing new pejorative.


ah, I am prompted thus to reminisce upon my boolshitten youth spent vainly stabbing out crud most fervid and amorous to dump the vilest cores to jest with that naivest child me to know it better than to presume that my brittle logic could affect it so profoundly to do my bidding...


This is true in most languages really. It starts with people just wanting to speed up their development by turning common tasks into some macro, or annotation, or even library. But where does it end? It ends in a convoluted mess of dependencies, weird syntax, and hopeless stack traces.

So much work can be done with plain old Java. Or plain old JavaScript. Or plain old C + stdlib.


Agreed. Setting up Spring's OAuth client was so complicated, and getting the configuration right was taking so much time. I replaced it with a simple Filter that just did the query itself. It's easy to go to that filter and add whatever security logic you want (at least for me it is).
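(Not the actual code, just a sketch of the shape, assuming Servlet 4+ where Filter's init/destroy have default no-ops; the validation logic is a placeholder.)

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletRequest;
    import javax.servlet.http.HttpServletResponse;

    // One small filter instead of the whole Spring Security OAuth configuration maze.
    public class TokenAuthFilter implements Filter {

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;
            String header = request.getHeader("Authorization");
            if (header == null || !isValid(header.replaceFirst("^Bearer ", ""))) {
                ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
                return;
            }
            chain.doFilter(req, res); // token looks fine, carry on down the chain
        }

        private boolean isValid(String token) {
            // placeholder: verify the signature, query the provider's introspection endpoint, etc.
            return !token.isEmpty();
        }
    }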

There is a balance here that might be hard to get right. Spring and other frameworks make things easier until they don't. At some point it might be easier to just write code instead of configuring your way through these frameworks. Many times I'd rather write code than go configuration hunting.

What I do like about Spring is that they offer a lot of hooks and interception points to overwrite with your own logic.


And you end up writing yet another custom framework for yourself when you realize you are writing the same things over and over...


But you end up writing just the bits you need, so you end up with a framework that serves your use cases, that you grew yourself so you understand it, unencumbered by the use cases of others. It'll make you more productive... but not necessarily the person you hand it off to.


Yes totally agree with that, it is not only negatives.


> e. g. systems that are extended by 3rd party plugins,

That's most non-toy desktop software


I simply don't understand how people favor a stringly typed custom DSL in a reasonably typed language like java. Why is it better to write Spring annotations instead of actual java code? There is literally no upside.


Annotations are fairly typesafe. The parts that aren't safe are parts that Java offers no way to do at compile time (e.g. "are all the arguments to this constructor of types that are in this set of existing services?")


One possibility is that the code is just Java at the end of the day. You could run it independently of Spring processing the annotations.


Totally agree, which is why I use https://www.dropwizard.io/en/latest/ with Guice; it's generally waaayyy more explicit about what's going on


Also very lightweight and pleasant to use are Javalin and SparkJava. I've used them for all kinds of projects with great results. Basically a Java version of Python's Flask - small and concise but stable and performant enough for many (or even most) web projects.

https://javalin.io/

http://sparkjava.com/


Our team has had a lot of success with both of these too. The single architectural choice I completely disagree with, though, is their use of static methods to hook things together.


I used Dropwizard many, many years ago. It was what made me like Java. Such an awesome tool.

I'm happy to see so many years later it is still alive and kicking and powering so many systems.

We definitely need more of this.


And it starts a lot faster. We used that combination at the last startup I was employed at, and it worked great.


Yeah, no runtime classpath scanning is a requirement for me; we do compile-time EclipseLink weaving etc., which makes startup way faster


I would like to draw your attention to Vert.x [1]. It's an Eclipse project with well-thought-out APIs and documentation, and no magic.

[1]https://vertx.io/


This.

I spent so much time this week chasing magic buttons in that over engineered piece of stink.

I'd rather do raw HTTP servlets at this point.


There's always https://javalin.io/


Plain servlets, Jetty, HAProxy for TLS, JSTL, Postgres, Apache DBCP. Add in whatever specialty libraries for the project and you can get away with 1/100th of the dependencies.


I built a blog engine that almost has that stack (Payara instead of jetty, apache dbcp).


Agreed, though I’d also add MyBatis.


You can even pare that down a little further if you use the built-in HttpServer that's been in the JDK since 1.6.
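(A minimal sketch with the com.sun.net.httpserver API that ships with the JDK; the port and handler are arbitrary.)

    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;
    import java.nio.charset.StandardCharsets;

    public class TinyServer {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/", exchange -> {
                byte[] body = "hello from the bare JDK".getBytes(StandardCharsets.UTF_8);
                exchange.sendResponseHeaders(200, body.length);
                try (OutputStream out = exchange.getResponseBody()) {
                    out.write(body);
                }
            });
            server.start(); // no container, no framework, no extra dependencies
        }
    }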


Even servlet containers are the wrong approach. I have found from experience that it's best to stick to libraries. You know where you are when you call a function. The servlet spec now includes annotation scanning by default, which is unnecessarily complicated and slows down start up.


I read a comment once, saying that Spring is a lot like the COMEFROM statement [0]. You are in a class, but you have no idea how you got there, or how to get somewhere else. Your IDE is no help, because everything is "decoupled".

[0] https://en.wikipedia.org/wiki/COMEFROM


I’m just learning about the COMEFROM statement, this exactly captures my reluctance with very modern apps. I especially despise checking permissions or caches using annotations, because you are never sure they are applied. I hope one day COMEFROM is as forbidden as GOTO.


My frustrating experience with J2EE and Spring is what led me to choose https://sparkjava.com/ as the Java framework for my SaaS.

The Spark framework is so light and refreshingly comprehensible. I can actually tell what's going on when a problem occurs.


I used to really like Spark but the primary maintainer seems to go through periods of disinterest. I switched to using Javalin https://github.com/tipsy/javalin which was built by one of the top Spark committers a few years back.


This 'magic' is one of the reasons why we switched to https://quarkus.io/

Couldn't be happier.


Friends don't let friends use Spring*.


I felt strongly enough about this topic to write a full blog post: http://sreque.blogspot.com/2019/08/the-autumn-manifesto-why-.... TLDR: so-called DI frameworks are really just frameworks for creating and consuming global variables and have very little to do with the actual principle of DI.

That said, I think the jvm and java the language are in a great spot. It's the frameworks and community that need a shift in mindset.


I disagree with that blog post. It may be technically possible to hijack the classloader mechanism to make instantiating classes do dependency injection, but it's not easy or idiomatic, and it's not good for maintainability either; a reader can't tell the difference between a global service and a value object if both are just "new Foo()".

DI, in the sense of separating the instantiation of long-lived service objects from the classes containing business logic that accesses those long-lived service objects, is a great thing for testability and maintainability.

Autowiring mechanisms where you have some kind of global bag of (pseudo-singleton) services by type, and wire service dependencies implicitly by type rather than explicitly, are a legitimate tradeoff that's appropriate for some cases.

Don't conflate Spring with Spring Boot. One is a framework that offers some legitimate value even if it makes some questionable tradeoffs; the other is a fractal of bad design.


In response to:

"DI, in the sense of separating the instantiation of long-lived service objects from the classes containing business logic that accesses those long-lived service objects, is a great thing for testability and maintainability."

You don't need a DI framework to do any of what you described. Also, I believe that what you are saying doesn't fundamentally describe DI, though it is related. In this post I go over what DI really means: https://sreque.blogspot.com/2019/09/dependency-injection-101...

I have never seen a case where using autowiring forms a legitimate tradeoff; it has always resulted in worse, harder-to-maintain code with little benefit in return.

I conflate Spring with Spring Boot because Spring Boot is built on Spring and most of my problems with Spring Boot apply equally to Spring.


Your blog post is, frankly, wrong. That's not what DI is usually used or understood to mean.


Which one is the fractal of bad design?


Spring Boot


The mindset that Java is often blamed for was already in full swing when other technologies ruled the enterprise.

Architecture astronauts will produce the same designs regardless of the programming language.


> Everything is extremely "decoupled" to the point that you have no idea where anything comes from or why, and just adding a new dependency to your classpath will radically change the behaviour of your application

What does "decoupled" even mean anymore? That sounds like the opposite of "decoupled" to me.

(/not a Java programmer).


Everything is wired up using Dependency Injection. So you just state your claims and are given appropriate objects that may be created by some third party factory factory. It's really neat until it isn't.


Is there an unspoken rule about how much dependency injection is too much dependency injection? Like, I can see how it could get bad, but how do you know when you've gone too far?


Probably as soon as you don't know why an object appears in your tree. I'm not at all an expert on DI, as on the only project I worked on that used it a lot, I was also responsible for refactoring it so that Spring DI was optional, via annotated constructors, because Spring startup is atrocious and we wanted to use part of the application in Lambda.

I've not used it since, because the project I inherited suffered extremely from NIH syndrome.


The logic seems to be that having X use Y if it's available at runtime is less coupled than having X hard-depend on Y at build time. But IME the end result is similar to a "distributed monolith": you haven't actually decoupled it, you've just swept the coupling under the rug.


I guess it means that dependencies are hidden now.


As someone who loves Python/Django, dabbled in Ruby/Rails, learned a multitude of front-end JS frameworks (Backbone, React, AngularJS, and Ember), and currently works with Java in an enterprise environment, I just want to add two points:

- If you're going to have config/setup files, make sure they utilize a language that is Turing Complete. YAML looks pretty, but for all practical purposes, is it really better than XML?

- I've said this before, and I will say it again: I doubt writing an import statement ever killed anyone.

Edit: the two points are related. Having a Turing Complete config/setup file makes it easier to add a level of indirection between your code and library/framework code, so you can e.g. utilize different implementations for different ENVs.


XML is not any more or less "turing complete" than yaml, is it? They both express static data structures. I suppose you can make turing complete languages that use the syntax of either one (XSLT is one; i'm sure there are others in YAML), although it gets pretty iffy.


I agree, static data structures are far from ideal. Practically speaking, there's not much more you can accomplish with YAML or JSON compared to XML, even if some people find the syntax more palatable.

Also, trying to shoehorn conditions and loops into a non-Turing Complete language is cumbersome. I would imagine it being a nightmare for platform and framework maintainers, as well. Rather than create a config or dependency DSL for every platform, why not just put the language to work for you?


> there's not much more you can accomplish with YAML or JSON compared to XML, even if some people find the syntax more palatable.

Sure, and vice versa.

I think there are arguments to be had for whether you want your config language to be "turing complete" (or in general capable of containing logic or just static data). I am not sure I am convinced.

But you seemed to be saying that XML was preferable to yaml for some reason related to turing completeness/logical power, which I'm not seeing. You can "shoehorn" conditions and loops into YAML or XML if you want (by defining a semantics on top of either one), and it's going to be cumbersome, yup.


I didn’t mean to imply that XML was preferable to YAML. I meant that neither is Turing Complete, so using either for dependencies and config is likely to be cumbersome. YAML looks nicer, and if you’re not using a decent IDE, you’re less likely to mess up with closing tags, etc., but that’s about it.


It is telling that today modern JavaEE is in many ways a breath of fresh air compared to Spring :-|


One of the best decisions I have made, when I need to use my Java hat, is to have kept using JEE.

Spring even needs a web app to help configure its plethora of behaviors.


This. I tried to bootstrap a WebFlux application without Spring Boot or Kofu or anything, and you have to move through so many different classes to get to the appropriate bean for the handlers that I just gave up. There are also multiple similarly named classes like ``WebHandler`` and ``HttpHandler``.

I don't have much other experience so I thought it was just me.

DI is also a pain, and you never know where something is coming from unless you are familiar with what every single function adds to the ``CONTEXT``.


I too think that Spring Boot's opt-out autoconfiguration is an issue.

Take a look at https://github.com/spring-projects/spring-boot/issues/25742#...

I included a workaround allowing you to opt in to autoconfiguration instead of opting out. I have used the filter for more than 2 years without issue.


I feel like Spring got leapfrogged by plain Java EE a bit; I can imagine that in the early '00s Spring was a big improvement over Java EE 1.4 or whatever, but modern Java EE has at least to some degree caught up, with JEE 8 and 9 being relatively pleasant to work with, along with MicroProfile, while Spring feels more stuck in the 00s.


> I swear I'm going to add a bitcoin miner to my libraries and list it in spring.factories

That would be a great service to this world. Considering the endless turds Spring/Boot generates at runtime, bitcoin mining might be the most ethical thing to do.


> "Gosling".equals("Satoshi")

true


I first used Spring while doing an internship in 2007. I remember writing a bunch of XML bean configuration and wondering.... why???

Sounds like the ecosystem didn’t get much better.


Really the future of the Modern Java Platform is Graal - https://www.graalvm.org/reference-manual/embed-languages/

Java is not Spring Boot.

For example, this is Python 3.8 compliant runtime on top of Graal - https://www.graalvm.org/reference-manual/python/

You can also compile your application into a native image (like Go?) - https://www.graalvm.org/reference-manual/native-image/

you can try it in the next 5 mins

1. docker pull ghcr.io/graalvm/graalvm-ce:latest

2. docker run -it ghcr.io/graalvm/graalvm-ce:latest bash

3. gu install python

4. graalpython -m venv myvenv
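
And, hedging a bit (this assumes the native-image component can be installed via gu in that image), the native compilation path looks roughly like:

5. gu install native-image

6. echo 'public class Hello { public static void main(String[] a) { System.out.println("hi"); } }' > Hello.java

7. javac Hello.java && native-image Hello

8. ./hello (the image name is the class name lowercased by default)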


What is more impressive is the graaljs implementation, that when warmed up, has comparable performance to god damn v8. All that with a complete polyglot runtime, so you can call python code from js and vice versa and the best part: it will inline and JIT compile over language boundaries. Also, truffleruby (graalvm’s ruby implementation) is/was two times faster than regular ruby - that’s how much engineering went into the JVM.

And of course well-performing (but worse than JIT) AOT compilation is also a possibility with it, though I feel it is not needed as often as people think.


Doesn't stop there.

There's also project sulong that tries to bring LLVM IR into the graal ecosystem. Giving you the ability to integrate C/C++/Rust/Fortran/etc. Whatever has an LLVM backend could possibly run side by side with Java/javascript/python all without major FFI penalties and possible cross language optimizations.

https://github.com/oracle/graal/tree/master/sulong


Never heard about Graal before. When reading the description it reminds me of WebAssembly. Can anyone explain how they compare?


WebAssembly is lower level with raw memory management, meant primarily for running low-level languages in the browser.

Graal instead is a bit more complex to describe, since it incorporates many things. Perhaps the most important part of it is an abstract syntax tree-based interpreter, which can be used to implement a dynamic language with ease, and Graal can basically convert such an interpreter to a language runtime that uses the many many advancements behind the JVM like advanced JIT, GCs and the like. Such an implementation for small languages can easily surpass the “host” runtime in performance, for example R, Ruby.

What makes graal even more interesting is that this intermediate AST is language agnostic, it basically maps the guest language to JVM built-ins — this allows completely polyglot code bases (with JIT compilation between boundaries) and even has an llvm ir-based interpreter because dynamic languages often have C-based standard libs (and since it can inline between deps now, it may be faster than native FFI). But the whitepaper titled One VM to rule them all could give a much better overview than I could.

Since it is primarily a JIT compiler, it can be used as AOT as well.

(Also, there is graal wasm I believe as well)


The simplest way to think of GraalVM is that it's not a VM, it's just a plug-in replacement for HotSpot's JIT. HotSpot is written in C++ and hard to change, so once the Java JIT is written in Java it will be easier to innovate.

But then they took the base and used the java to machine code capabilities to be able to do ahead of time compilation to machine code.

But the most amazing part is that if you use the Graal APIs to define an interpreter for a language (any language), then Graal can generate a compiler from that interpreter. It will compile all that is known at compile time and leave to runtime what needs to happen at runtime. So essentially it does partial compilation.


Yup, they are very similar as both are used to run code at high performance.

WebAssembly was designed for the browser. Users want their websites to load quickly, so startup time is crucial. Also, users want to browse sketchy websites, so security is also crucial. WebAssembly is more like an instruction set for a "fake" CPU. It's very low-level and is not ideal for running high-level languages like Ruby (though it's possible!).

On the other hand, GraalVM isn't constrained by these browser requirements. This lets Graal do nifty tricks so that highly dynamic languages - like Ruby or JavaScript - run as fast as possible.


Quarkus (and its dependent techs) has been in my radar for a while, and recently I've started using it, and I must say I'm impressed.

Code in modern Java (lambdas etc) -> build native Linux exe -> package as Docker image -> deploy in Google Cloud Run. All wiring from the CLI, so CI/CD friendly (next is to use Google Cloud Build). Since it's native, memory usage is small and boot time negligible. Since it's managed, it auto-scales (up and down, to zero cost).

My complaints:

* Quarkus is very opinionated. I'm used to this coming from Google App Engine.

* Building _native_ exe is slow. Like 1995 Java slow.

* Scala support is limited.

If you're used to App Engine, you know exactly the dream I'm living in. Without App Engine's limitations.


I had the opposite reaction. They decided to reimplement almost everything and push hard on reactive streams, but their documentation is horrendous and the API is super complicated. After a month of tinkering I just gave up and used Spring Boot.


Please don't use Reactive in Java. Stick to plain imperative - and you'll be compatible with Project Loom in the future.


I don't see how using Reactive makes you incompatible with Project Loom.


I meant using the Reactive API types, like using Publisher<Response> in REST APIs. These will be unnecessary in the future.


I almost fell into the same trap. Luckily the documentation for the reactive approach is very lacking, and I'm lazy, so I stuck with the traditional approach.


I think that you have the option with Quarkus to use the traditional JVM with byte code for development, and only generate executables for stage or production deployment. Best of both worlds.


If my experience with cross-platform tools (Flutter, React Native, PhoneGap, and now Quarkus) tells me anything, it's to check that everything (still) works everywhere after every small change :)


I find reactive stuff the best improvement in my productivity since Spring.

I use both RxJava and Reactor, and I sometimes have a one page function that could have been an entire application in the past.

My manager once asked me if we could have a complex endpoint that pages data from multiple collections on multiple MongoDB clusters (the paging is for backwards compatibility with some clients) and does it efficiently. This requires sorting query results on each of the clusters, then merging the streams of data into one sorted stream, and then doing the paging manually on the resulting stream. You also need to deal with various error handling and retry requirements.
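Roughly, the core of it looks like this with Reactor (a sketch with made-up names: queryCluster is a hypothetical helper returning a Flux already sorted by the sort field, and the field name, paging math and retry numbers are only illustrative):

    import java.time.Duration;
    import java.util.Comparator;

    import org.bson.Document;
    import reactor.core.publisher.Flux;
    import reactor.util.retry.Retry;

    Flux<Document> fetchPage(long pageNumber, long pageSize) {
        return Flux
            .mergeComparing(
                Comparator.comparing((Document doc) -> doc.getDate("createdAt")),
                queryCluster(clusterA, filter),   // hypothetical, each already sorted
                queryCluster(clusterB, filter))
            .skip(pageNumber * pageSize)          // manual paging on the merged, sorted stream
            .take(pageSize)
            .retryWhen(Retry.backoff(3, Duration.ofMillis(200)));
    }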

He asked this 20 minutes before our 1:1 and I had a working endpoint before we started it.

Does it have a steep learning curve? Sure. But it is totally worth it.


I liked this article, but it felt to be as much about Scala and Kotlin as about Java. A more accurate title would've been "The Modern JVM Platform".

I recently began doing a side project and, for the first time in a while, picked Java over Kotlin. I mostly did so because books on the deep details of the JVM are mostly books about the deep details of Java.

Plus: Java is moving along at a fair clip these days. Records just landed in 16, Project Loom and Project Valhalla are coming over the horizon, plus lots of other niceties that have shown up lately.


> Project Loom and Project Valhalla are coming over the horizon

A: They say it's coming over the horizon. What's the horizon, B?

B: That's the imaginary line where the land and the sky meet that you cannot reach however fast you go.


Things can come over the horizon towards you though.


Pair programming with colleagues, I often get frustrated by them wasting time doing things the hard way. Like using print statements instead of the debugger, or setting a breakpoint but then having to skip it twenty times to get to the correct case (instead of using a conditional breakpoint), or restarting the whole server instead of hot-reloading the changed class in a second.

So I think many don't know how smooth Java can be if used correctly. But that also makes me wonder whether there's low-hanging fruit / obvious stuff I don't know about that could boost my own productivity.


Conditional break points are definitely a great secret weapon. But don't underestimate the value of the humble print statement. It's convenient and can often lead to surprising insights "hmm, why is this getting printed so damned much?" in a way that is sometimes more accessible than a debugger.


This right here: print-statement debugging gets far more of a bad name than it deserves. If that's a developer's only tool then sure, that's a problem, but there's no need to hate on a very simple and effective technique just because it's often the only one people know when they're starting out.


Print debugging results in a printed log of this happened, then this happened, ..., and finally this happened. And you can trivially rewind it and do random looks at the log. Debuggers have difficulty giving you this much context at your fingertips. The downside is that if you missed one of those critical "this happened" items in your log you have to rerun. But debuggers have the same problem. If you missed a breakpoint you have to re-run.


IntelliJ's debugger has a 'drop frame' feature that I use a lot for going back in time when I missed something.

It also shows you the state of variables at various points alongside the code, which is basically the context I want to log anyway.


You can make breakpoints that just print stuff (at least in Visual Studio)


GraalVM truly has the potential to become the universal, interoperable VM. It would be rational for, e.g., the Julia folks to migrate to the GraalVM ecosystem and contribute to it instead of living on their small code island. They would get an unimaginable amount of benefit in the process.


How does a VM (JVM, GraalVM) compare to something like LLVM's IR? From your description of it as a ``universal, _interoperable_ VM`` it seems like they have some similar goals. What are the reasons to write a compiler targeting a VM instead of LLVM IR?


Good question: GraalVM supports taking LLVM IR as an input (https://github.com/oracle/graal/tree/master/sulong), which means that GraalVM can support any LLVM language almost out of the box (in theory).

LLVM IR, however, has not been designed with inter-language interoperability in mind, or at least not enough. E.g. Rust has no transparent, complete, seamless and efficient interop with Swift, Go, C++, etc.

But in theory LLVM is just an AOT compiler, and both AOT and JIT can enable true interoperability, which is why GraalVM supports both a JIT mode and an AOT mode. However, a JIT enables better performance than an AOT compiler, at least for high-level, GC'd languages.

Moreover, GraalVM, through the Truffle framework, enables unprecedented language-designer productivity. Through high-level constructs, designers can be much more productive than with a standard VM/AOT approach, which explains how, with only a few engineers, Oracle has managed to reimplement Java, Ruby, Python, JS and R in parallel in only a few years...


I don’t think LLVM IR is really interoperable, is it?


I'm not super knowledgeable about LLVM, but as far as I know, while it did at some point have a VM runner, it is rather meant to abstract hardware for native compilation. Java bytecode is much less hardware-specific. Also, LLVM IR is not really backward compatible, while Java bytecode is.

(By the way Graal has sulong which actually runs LLVM IR on top of the JVM)


No language with a good quality C FFI is living on a "small code island".


I've never seen or heard of anyone major using GraalVM in production. I only hear Oracle and a small bunch of other folks hyping it. Tech also lives on hype, and a long time until a tech reaches critical mass is generally a bad indicator (i.e. if something doesn't reach the mainstream in less than N years, it never will).

There are some exceptions (Ruby took off after Rails was launched, Python was adopted by Linux distributions and it also had some decent web frameworks such as Django), but there's a reason we call exceptions exceptions. It's because they're exceptional, they're not the norm. Plus in the internet age (I don't count anything pre-2000 as internet age, since dial-up wasn't something most people wanted to live through) it's a lot more likely for something with a great future to be adopted quickly.


Twitter runs graal in production.



Interesting, their core systems?


yes


Also Shopify.


It is interesting that Java was supposed to be write-once-run-anywhere already some 20 years ago.


Do you realize the paradigm shift that enables language polyglotism? It is nothing like WORA.


Language polyglotism was the original selling point for Java when it was first released.

It didn't quite pan out that way, but back then OOP was "the future" and everyone sort of assumed that only the classic style of OOP would ever be needed in the future.


I'm not sure I do. Isn't polyglotism about being able to write in different languages, and Java VM making that possible?


WORA is about coding in one language and running the code on any OS/hardware.

OpenJDK is one of the only language VMs to support multiple languages (along with the CoreCLR).

However, the languages that compile to bytecode are not automatically interoperable with each other. Usually languages (Scala, Groovy, etc.) have partial interoperability with Java and almost zero interop with each other (Scala <-> Kotlin, Kotlin <-> Groovy). Kotlin stands out by being the only language truly seamlessly compatible with Java.

GraalVM, however, is next generation because it enables languages to become easily and seamlessly interoperable with ALL other platform languages at once. GraalVM is also revolutionary for its productivity (it's a framework for building languages) and for how easily it delivers performance.


> Kotlin stands out by being the only language truly seamlessly compatible with Java

Not sure I would agree with that - I think Groovy has at least as good, maybe even better, compatibility than Kotlin.


Interesting, I don't know Groovy much, but Kotlin has two major points: in addition to interop, it has idiomatic interop:

1) The Kotlin standard library is the Java standard library.

2) Kotlin feels similar to Java and has analogues to almost every Java feature, e.g. SAM conversions.

The fact that Groovy is dynamic makes it much less suitable for hybrid code bases (half Java, half X).


I see. GraalVM sounds great. Thanks for the explanation


I'll believe it when I see the performance.


Here we are: mostly one developer, in only a few years, has managed to create roughly the fastest R runtime thanks to the GraalVM infrastructure (https://github.com/oracle/fastr), and it gets polyglotism for free.

See also graalphp: https://github.com/abertschi/graalphp/blob/master/results.md

Oracle has managed to create a language framework that enables unprecedented performance and polyglotism, but there are no human resources allocated to actually tuning the implementations. Without such a framework, expecting one developer to implement a language that outperforms a reference VM built by an army of people over decades wouldn't be realistic, but now anything's possible. Though it would be nice if companies other than Oracle understood the extent of this technological leap. Actually there is only one: Shopify, which invests in TruffleRuby.

https://pragtob.wordpress.com/2020/08/24/the-great-rubykon-b...


Last I checked (years ago), FastR was a one-developer research project that was way behind GNU R in terms of feature availability. Has that changed?


Java is a great language. It keeps evolving at a good pace, increasing developer productivity and adapting to modern patterns, while keeping a clear syntax, a scalable VM, and a relatively fast compiler, all in a mature ecosystem. It might be at the healthiest level it has ever been.


> in most ways, the core Java platform technologies work like other free and open programming platforms. There are, however, a few parts that are proprietary.

I thought OpenJDK was TCK-certified to be Java compliant, and 100% Free and Open Source software. Is that mistaken?

> To label a custom JDK with the Java brand (which is owned by Oracle) it must pass the tests in the Technology Compatibility Kit, which must be licensed from Oracle for such purpose.

Right, except there's a special exception for OpenJDK and derivatives. [0] Apparently though this doesn't always work out. [1]

[0] https://openjdk.java.net/groups/conformance/JckAccess/

[1] https://adoptopenjdk.net/quality.html


People are critical of Spring because they have seen it in production and in practical projects, and they fall into the naive trap of thinking that the tools and libraries are somehow the reason a real-life software project is messy.

However, Spring has stood the test of time and is used in thousands of actual projects. Spring itself emerged out of practical software development, with plenty of competing alternatives (Guice, classic J2EE and many others).

The alternatives mentioned in the article, Micronaut and Quarkus, are not much beyond the demo stage, and frankly are based on some questionable ideas: compile-time dependency injection, and the "cloud native" idea that somehow things run better in the cloud if they are compiled into a binary.


I'm not critical of Spring because real-life software projects are messy. I'm critical because I rarely if ever do greenfield development, and when I'm debugging a production exception caused by something that Java's type system was perfectly capable of treating as a compile-time error but which has instead become a runtime error, I'm a little bit sad. And the sheer frequency with which I encounter these indicates that just getting more disciplined developers isn't really a solution here.


The number of times I've seen an error that amounts to `this instance of Foo should have been a FooBar, not a FooBatz` is really depressing.
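A hedged illustration of that class of failure (hypothetical names, not any specific framework): the dependency comes out of an untyped registry, so the mistake compiles fine and only surfaces at runtime.

    import java.util.HashMap;
    import java.util.Map;

    class Foo {}
    class FooBar extends Foo {}
    class FooBatz extends Foo {}

    public class UntypedLookup {
        public static void main(String[] args) {
            Map<String, Foo> registry = new HashMap<>();
            registry.put("foo", new FooBatz()); // wired up wrong; the compiler can't see it

            // Throws ClassCastException at runtime: FooBatz cannot be cast to FooBar.
            FooBar bar = (FooBar) registry.get("foo");
            System.out.println(bar);
        }
    }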


The real solution is to bite the bullet and build Spring's features into Java. Make as many failures as possible detectable during compilation, and go back to hard typing.


Or maybe if you just want to hardwire your dependencies, you shouldn't use dependency injection.


I guess at the end of the day I just don't feel like I need Spring's features.


Micronaut gives you that. Compile time validation of your bean wiring.
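For example, a hedged Micronaut-style sketch (hypothetical beans, javax.inject annotations as used at the time): the wiring below is resolved by an annotation processor at build time rather than by reflection at runtime, which is where that compile-time validation comes from.

    import javax.inject.Singleton;

    @Singleton
    class GreetingService {
        String greet(String name) {
            return "Hello, " + name;
        }
    }

    @Singleton
    class GreetingController {

        private final GreetingService greetings;

        // Constructor injection; the bean definitions are generated during compilation.
        GreetingController(GreetingService greetings) {
            this.greetings = greetings;
        }

        String hello() {
            return greetings.greet("world");
        }
    }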


Spring is a flow of control obfuscation framework.

Dependency Injection is for people who reject the clarity of composition.


The one thing missing is the innovation happening on the Java front end. Frameworks like TeaVM (https://blogs.oracle.com/javamagazine/java-in-the-browser-wi...) let you extend your Java app all the way to the browser.


For more info on full-stack Java, including options for Java based single-page apps and Java-based mobile apps, visit https://frequal.com/FullStackJava/


From that article "and produces smaller JavaScript that performs better in the browser".

Not according to my benchmarks. That may be true if you use the GWT Widget library, but if you code 'to the metal' using Elemental, GWT produced substantially smaller code than TeaVM when I benchmarked it a few years ago with the Bench2D (Box2D) benchmark. Maybe it got better since then, but I doubt it given the progress GWT's successor made. Using J2CL (the successor to GWT), the following Java class

    public class Main {
        public static void main(String argv[]) {
            window.alert("Hello World");
        }
    }

would actually compile down to just (JS)

window.alert("Hello World")

TeaVM doesn't support the code splitting that both GWT and Closure Compiler support, or cross-module code motion.

https://github.com/konsoletyper/teavm/issues/333


You might be interested in https://vaadin.com/.


Recent and related:

Java 16 - https://news.ycombinator.com/item?id=26477144 - March 2021 (276 comments)


With Project Loom coming soon, I think it's a mistake to write new application code in the reactive style. The imperative style is much more straightforward, gives access to a wider world of libraries, and (IIUC) will be just as efficient when Project Loom reaches production.
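A rough sketch of what that imperative style looks like on Loom's virtual threads (the API shown, Thread.startVirtualThread, is from the early-access builds and may change; fetchUser/fetchOrders are stand-ins for ordinary blocking calls):

    public class LoomSketch {

        public static void main(String[] args) throws InterruptedException {
            for (int i = 0; i < 10_000; i++) {
                // Plain blocking code: each blocking call parks a cheap virtual
                // thread instead of tying up an OS thread.
                Thread.startVirtualThread(() -> {
                    String user = fetchUser();
                    System.out.println(fetchOrders(user));
                });
            }
            Thread.sleep(1_000); // crude wait so the demo doesn't exit immediately
        }

        static String fetchUser() { return "user-42"; }                  // stand-in
        static String fetchOrders(String user) { return user + ": []"; } // stand-in
    }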


Reactive style is written for reactive style, not to somehow bypass concurrency limitations.


Project Loom (structured concurrency) is a game changer.


> When I use developer tooling in other platforms I’m usually disappointed. I shouldn’t need some special native library installed on my system so I can install a dependency. It shouldn’t take me hours to get my local developer toolchain set up.

I found this comment odd - it does take some time to get a modern Java dev env up and running from scratch. E.g. you have to at least download/install Gradle or Maven after the JDK. And then the author goes on to talk about the Testcontainers project, for which it appears you need to set up Docker. And so on. I wonder if I missed the meaning there.


The Gradle wrapper is how you let Gradle download itself upon first execution. It stays in your git repo, and then you don't need to install Gradle yourself.

https://docs.gradle.org/current/userguide/gradle_wrapper.htm...


But surely if you are starting off you need to install/initialize the wrapper itself, no? From your link,

Generating the Wrapper files requires an installed version of the Gradle runtime on your machine as described in Installation. Thankfully, generating the initial Wrapper files is a one-time process.


If you're using an existing project, you don't need to install Gradle. For new projects you can copy a few files from any other project. That "generation" just copies a few files into your project and adds a tiny config file. It should really be available as a separate tiny download IMO. Gradle makes it look harder than it is.
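Concretely, the wrapper is just a handful of files checked into the repo; gradle-wrapper.properties points at the Gradle version to download (the version below is only an example):

    gradlew
    gradlew.bat
    gradle/wrapper/gradle-wrapper.jar
    gradle/wrapper/gradle-wrapper.properties

    # gradle/wrapper/gradle-wrapper.properties
    distributionUrl=https\://services.gradle.org/distributions/gradle-6.8.3-bin.zip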


You can bootstrap a fresh project using this:

https://gradle-initializr.cleverapps.io/


I think it's a shot at Ruby and Node - they have a lot of dependencies that rely on underlying C/C++ libraries, so you can end up having to manage both. Things like that exist in Java (e.g. some OpenGL libraries) but they're relatively rare.


That first sequence diagram in the article reminds me of similar diagrams that I have seen for microservice based systems recently. Progress? Not so sure.


Java Play Framework is great. It's what Spring should be.

https://www.playframework.com/


My biggest gripe with Java is developer productivity. It has gotten better but is still far behind interpreted languages like Python and Ruby. Class hot-loading and things like that have made it better, but if you change a method signature or an interface you have to stop your process, recompile, redeploy, and restart. For big codebases it's brutal compared to the aforementioned languages and their attendant frameworks.


That's interesting.

I'm not the biggest Java fan, I can tolerate it. But recently, working on a Python project, I find I'm massively unproductive compared to something like Java or, more specifically, Scala (which I use with Spark).

There are no strong types and only limited type hinting. I have no idea what things are: is it an int, a string, an object, etc.? Clicking through in an IDE to see code/docs isn't as great either, since it's harder for an IDE to inspect the code compared to a strongly typed language.

Not having a compiler means I need unit tests doing what a compiler would do, or I find out about syntax errors and other errors at runtime. This massively throws off my development flow, as I can't lean on the compiler to catch trivial errors. Rather than compiling, I have to hope I have a test case with high enough coverage to exercise the code and surface a syntax error, or I have to start/deploy it to get the error.

There are a great many issues with dependency management. Venv, Docker etc. help with this, but it's a bunch of extra stuff on top I now need to worry about, and I sometimes still hit errors.

The only time I find I'm relatively productive in Python is writing a short 100-200 line script for one-off tasks / sysadmin / devops-type work.

The fact that if I change an interface or rename a method I need to recompile is, I find, a huge advantage: everything breaks, I work through the errors one by one, and once there are no more errors I have confidence the refactor is complete and the compiler has verified it.


Types almost never save the day and they don't add much to the understanding. The code should be structured and documented in such a way that you are able to understand it, reason about it and swap implementations of stuff in the places where such flexibility is handy. E.g. in Clojure, I can trivially test my functions in the context of the application using the REPL. I don't have to reload anything, I do it right in the namespace I am modifying. I can also just look at the data there. Also Clojure is way more consistent and you can probably learn all the functions in clojure.core and some of the typical libraries by heart. These functions then work basically on anything you will do and are almost 100% transferable to ClojureScript.

For sysadmin-like tasks, for most stuff you can use Babashka, which is a limited Clojure + some frequently used libraries implementation with very fast startup thanks to GraalVM. For the rest, Python/ Perl etc. will probably still have a bit better standing because of all the libraries e.g. working with SNMP.


> Types almost never save the day and they don't add much to the understanding.

There’s a saying in strongly typed languages “make illegal states unrepresentable”.

Java doesn’t let you get all the way there but it’s better than nothing.
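For instance, a rough sketch with the records that just landed (hypothetical domain types): a payment result is either captured with a timestamp or failed with a reason, so the nonsensical "captured and also failed" state simply cannot be constructed.

    import java.time.Instant;

    public class PaymentResults {

        interface PaymentResult {}

        record Captured(Instant capturedAt) implements PaymentResult {}

        record Failed(String reason) implements PaymentResult {}
    }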

When you look at strongly typed languages with a good type system, often when you compile it just works.

Besides hello world, I've never written a program beyond a few lines that has just worked in a dynamic language.

...I do like clojure and the repl workflow and how everything is data. Haven’t done much besides play with it but have a friend who writes clojure professionally on a large codebase and their feedback is most of the errors they now see wouldn’t exist with a type system.


Can you be more specific about your friends codebase? How large, what industry/ problem space?

Of course, you can write bad/ unmaintainable code in any language. I have seen such code in Clojure(Script) as well. But some languages really encourage bad code where other languages already feel like you are doing something wrong when you write bad code. (e.g. Using many atoms in Clojure, doing boolean transformations etc. just from the top of my head.)

Strongly typed languages in my experience don't help anything, make programming harder for everybody and the benefit is (for most software) questionable. Compilers should handle most types for us and only give us a hint something could be better, if we were more specific e.g. with a type hint. Humans should think about the problems not about deep implementation details (like integer vs float vs double), when not strictly needed.


Schema systems (like spec and malli in the Clojure world) work well for checking states in data and are more powerful and ergonomic for that than type systems in mainstream languages.


That's strongly dependent on how big the code base is.

I'm a huge Python fan, and I don't like/use IDEs on my personal projects. But eventually I still gravitated towards TypeScript, because its typing plus the compiler and unit tests gave me a huge boost in my ability to extend a project beyond a certain size.

I think the threshold is around 50-100 files, but smaller code bases also benefit from the typing structure.


At OrgPad, we are well over 100, maybe even 200 namespaces in Clojure and ClojureScript and I don't think we have any trouble keeping up. Most of the problems we have are frankly not in any capacity connected to Clojure or ClojureScript but rather half-baked, inconsistent technologies and their implementations like CSS, browser interoperability/ APIs mostly. The server-side (Clojure) is rather boring currently, so can't say much about that - most of the stuff seems to be just ok. There seems to be a lack of libraries for transforming image/ video formats in Java for the newer formats like WebP, AV1. Probably most people drop to imagemagick/ ffmpeg? That is kind of a problem, but not really pressing. We have lost much more time to CSS and bad browser APIs.


> I have no idea what things are, is it a int, string, object etc,

What if it's some kind of a String? Only a String starting with "_". You may define UnderscoreString. But now it's not obvious what this is. You have to go look it up either way. A compiler may stop you from passing a regular String. If you're very lucky it may even stop you from casting an obviously wrong literal. But beyond that you're probably out of luck unless it's a crazy language. Once everything is a custom type how is remembering all the types different from remembering what each function does?

I'm not sure how unit tests help here much either. Why would you come up with an example that breaks your code in a unit test but couldn't think of it beforehand? Unit tests are used just as much in environments with a helpful compiler. They mostly help stop new breakage affecting stuff that used to work.

I wouldn't write off checking at runtime. You can actually define exactly what it is and it's easier. You can do as little or as much of it as you want. It's not compile time but depending on your software you may be able to get fast feedback. This level of checking would not be compile time either way. If you really want to be sure you'll be doing this in your typed language too. You're only worse off if you'd really benefit from the simple stuff.

If wrong data hitting a function can cause multi-million dollar losses, would a single person think the compiler is good enough? What if it can cause major data loss? A big embarrassment? It's pretty clear to me one will only trust the compiler with bugs that don't matter in the first place.


> Not having a compiler means I have to have unit tests doing what a compiler would do or finding out I have syntax errors or other errors at runtime.

Nope, it's not the only way. In practice you'd have type hints all over your code and a linter to perform static analysis. Of course this can be integrated into your IDE, so it's easy to do.

> Not having strong types and limited type hinting. I have no idea what things are, is it a int, string, object etc, clicking through in an IDE to see code/docs isn't as great as it's hard for an IDE to inspect the code compared to strongly typed language.

Again, with type hints you'll find your IDE does a pretty good job. Heck, Pycharm even gets it right without type hints sometimes.

Not that I don't see the value in compilers, but the case is not as clear-cut as you present it, far from it.

> There's a great deal of issues with dependency management. Venv, Docker etc help with this but it's a bunch of extra stuff on top I now need to worry about and sometimes still hit errors.

Not sure what Docker has to do with Python here... I don't use it and we're doing fine. Venv is something you have to learn, sure, but that's what it takes to become productive on any platform, be it Python, Java, Go, Whatever: learn the good practices associated with it.

And while the dependency management is not stellar with Python, that's really not something to love from Java either.


Type hints are optional.

Type hints are very different from a strongly typed language with a good type system. Haskell, as an extreme, feels like algebra: you have a type a, you need a type c, you have a function a to b and a function b to c, you just compose them together and from your a you have a c. Tools like Hoogle can tell you the function to use for the given args to get to the final type you want.
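The same idea can at least be approximated in Java with java.util.function (types picked arbitrarily for illustration): given a -> b and b -> c, composition gives you a -> c, and the compiler checks that the types line up.

    import java.util.function.Function;

    public class Compose {
        public static void main(String[] args) {
            Function<String, Integer> length = String::length;               // a -> b
            Function<Integer, Boolean> isEven = n -> n % 2 == 0;             // b -> c
            Function<String, Boolean> lengthIsEven = length.andThen(isEven); // a -> c

            System.out.println(lengthIsEven.apply("hello")); // false (length 5)
        }
    }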

Ignoring Haskell, even TypeScript lets you express things that are hard to express in Python, with ADTs etc.

Docker has nothing to do with Python, but if you want to produce an artifact you can just run anywhere, it solves the dependency issue in that the dependencies are in the image and you don't need to run pip install on a deployment server.


The thing is, type hints are optional. The vast majority of existing projects don't use them. Third party dependencies don't use them.

Python is a toddler in terms of typing system support. Probably a 2 year old.


Indeed. But way to go... third-party dependencies won't support Python < 3.6 forever. And it's still very useful for the application code alone.


You could use Clojure of course and drop to Java for the few performance-supercritical functions if needed (most likely not for like 95%+ of codebases). The productivity and performance may be even better than Python or Ruby. Another positive is, quite a bit of the code can be taken 1:1 and used in ClojureScript if you do back-end and front-end at the same time. (Of course you can program in ClojureScript targeting not the browser but Node.JS e.g. using shadow-cljs which makes working with npm easy.) Also, I haven't spent much time on StackOverflow when programming in Clojure/ ClojureScript because the language is very clear and consistent and usually the documentation and maybe one or two examples there is enough. Or I just play with the function in the REPL for a bit, maybe do a quick (time (dotimes [_ 1e3] (some-function-under-bench))) and dirty benchmark for the more involved stuff.

Clojure is a superpower especially for large codebases because you can maintain your state and ship your adjustments from the editor to the REPL in development which is almost instantaneous. This works for production code also, if you need a very, very quick hot-fix or want to look at live data in the live database without copying possibly confidential data around. Of course, you have to be careful. With great power... and all that.


In big codebases, people don't change an interface (that's a contract, btw) that often. They would do a proper "migration" by offering a path to the new contract/API.

If you have a huge codebase in Python or Ruby, you'll rip your hair out more often, because it's riskier to make changes (dynamic language), depending on your use case :).


Sure, according to some OO professor. In real life these things change all the time. And it’s not just methods and interfaces that force a recompile, those are just examples.

As for dynamic language projects, they should include a test suite to mitigate such issues, but often do not.


A big chunk of the issues you're writing that test suite for, and constantly rerunning, are solved, out of the box, by having a decent type system. The type system also gives you superior code completion (with proper IDE support), which already outweighs any potential reload time costs in Java vs Python/Ruby, for me personally.


If your development productivity is entirely reliant on how quickly you can reload your changes, then you're doing something wrong. Though I guess you'd have to write mounds of unit tests and constantly rerun them to prevent common issues that Java's type system solves for free.


Actually, Java, its syntax, the explicit types and how it's often taught only obscure solutions that could be much more obvious. Since I've seen Clojure, I've been wondering why nobody showed it to us right at the beginning when I was at university. Back then, Clojure was already an established, stable language, and that was a decade ago. I could have just skipped the C, Java and other classes and would have been a much better real-world problem solver/engineer much earlier. If I needed the specifics, e.g. for embedded or legacy applications, I could always look up/learn the details of C/Java etc., but that is not needed for more than 95% of the tasks a software engineer encounters in the real world.


You're not saying anything specific about what's wrong with Java. Java is not perfect, but you haven't actually given a single good reason. The type system alone is a huge benefit over languages such as Python/Ruby. Lisp has existed and been taught for a long time, and I like Lisp languages such as Clojure, but the allusion to Clojure being some kind of magic bullet is also pretty baseless.


Well, I did say at least 2 specific things (syntax, explicit types). But maybe let's approach this a bit differently and talk about what is great in Clojure that is not good in Java:

- persistent data structures that can easily be made transient in specific performance-critical cases

- consistent syntax

- the language itself is a data structure, so manipulating code is very easy and you have a serialization format (again, EDN) basically for free

- the REPL

- most of the code transfers 1:1 to ClojureScript; that is not the case for Java and JavaScript, which are completely different languages with very different strengths

- dynamic types, but out-of-the-box type hinting is available for some corner cases where it improves performance or makes interoperability a bit clearer, and e.g. using clojure.spec you can with some effort make something approaching dependent type systems / very strong tests, incl. generative testing

- Clojure/ClojureScript interoperability with the Java/JavaScript ecosystem

- developer productivity

- actually, performance, especially compared to Python/Perl/Ruby; very carefully written Java would win microbenchmarks, but in Clojure you can probably improve the overall performance of a large codebase, because in the same time as you would need with Java you can do many, many more iterations and therefore explore the optimal solution

- you can become very proficient in Clojure in about 6-12 months, where with Java you probably need maybe 3-5x that time to tackle the same problem space

Yes, some of those things are not so specific to Java. If you only care about performance in micro-benchmarks Java would probably win but probably 95% of the problems in the real world are way more complex. Also, good luck writing correct multi-threaded code in Java vs Clojure. Clojure is uniquely positioned for multi-threaded workloads thanks to persistent datastructures, atoms, agents etc.


I don’t think op was stating developer productivity completely relies on reloading changes, but developer experience does matter. Many folks end up working with legacy code bases that don’t have clean code, weren’t designed with testability in mind, among other issues. Reloading changes, writing to a logger, etc. end up being common techniques. Depending on the complexity and how much your org is willing to invest in such a system, you can be in a tough spot.


Well... there are always commercial tools like JRebel (https://www.jrebel.com/products/jrebel) that one can leverage for live reload.


JRebel is really, really good, but it's just way too expensive. Maybe worth it for some folks though.

http://hotswapagent.org/ is an open-source alternative and covers about 90% of what JRebel does.


If you change an interface in Python you need to manually verify that everything is correctly using that new interface. That’s much more of a devprod hit.


With interpreted languages you pay the price in terms of productivity (best practices, design patterns) and performance (at scale) in the long run, which is way more than what you gain in the short term, in my opinion.


Maybe you should try Quarkus; its hot reloading is slick. Even with a large code base, I could just do a change->save->refresh cycle (about a second).


Go? Archaic? I suppose you are referring to the paradigm it employs...


Author here. Yeah, the Go language feels very archaic when using more modern languages. Some things I miss when I use Go: immutability as a default, monadic error handling, type classes, higher-kinded types, high-level collection operations (map, flatmap, filter, etc), ADTs, extensive pattern matching, expression-orientedness, and explicit null handling.


Guess that’s why kubernetes and so many other CNCF projects were written in Go.

It’s archaic enough that everyone can get things done instead of worrying about esoteric new features.


Kubernetes was prototyped in Java and rewritten in Go due to the Go bias of some team members, who forced the rewrite.

Source: A couple of talks at FOSDEM.

Don't attribute to technology the outcome of political decisions.

I also used to get things done in TASM.


Have you stopped to consider why those team members had a bias towards Go? Maybe it’s for a reason.

Maybe the people behind Istio, InfluxDB, Docker, Traefik, Terraform, etc also chose the “archaic” Go for “political reasons”.


You're assuming that tech is a meritocracy (it very much is not).

You're implying that if a tool is archaic, things cannot be done with it (also not true).

You're also explicitly saying that the features James mentioned are esoteric, which is easily disproven by the fact that many mainstream languages have them nowadays.


Some of the world's most incredible software is written in C and JavaScript and both are languages with incredible deficiencies from a language design standpoint. This doesn't take away from the software that was written in them.


Maybe the author isn't a fan of explicit pointers? Because yeah, though it's obviously C-like (as is Java), I can't think of anything else one might consider archaic that isn't also in the Java language or virtual machine.


There are sooo many nice language features that have proven themselves over the last 3-4 decades and have become popular but are missing from Go that I think archaic is appropriate.


There's archaic and there's minimalist, and they're definitely not the same. Languages with buckets of features have steeper and longer learning curves, and if they're still actively developed, what you do learn may not last long.

Go is what I'd call minimalist. COBOL or fortran are what I'd call archaic.


Why does Go need explicit pointers?


It’s less about need and more that it’s a feature to help you more easily understand your memory allocations and when you are getting a reference to something vs a copy


How do you get started with the whole Java ecosystem in 2021? It feels so vast.


> I really enjoy writing Scala code and the continual improvements. It feels very cutting edge and my personal productivity feels significantly better than with more archaic languages like Go.

Google seems to really be hiring clueless marketers lately. This article is indeed full of hilarious nuggets like the one above.


What’s the modern standard for a full stack JVM app these days? Something rails-esque, or is it still just a split between Play and Spring Boot?


Spring Boot pretty much dominates. I think Play's market share is a bit more in the long tail of alternatives. Also, it's very Scala-centric and not widely used without Scala.

Spring is increasingly Kotlin-centric and the combination is pretty nice. There's a wide variety of other frameworks such as Vert.x, Ktor, Quarkus, etc. They are popular, but quite niche compared to Spring.

With Graal and Kotlin Native, the whole space is becoming less JVM-centric as well. E.g. Spring Native just went into beta, and Ktor has been inching closer to working on the Kotlin Native compiler for a while now (still some missing pieces). Particularly for serverless, this is relevant due to reduced startup time. Overall performance is not significantly better, though.


You have Grails as a Rails alternative, though I don't recommend it. Grails has come a long way but still has a lot of inconveniences and is becoming more obsolete by the day; it became a monster with tangled and twisted code, where everything (views, models, queries) has 2 or 3 reimplementations that don't quite work and are not finished. I used to develop more than 5 services in it, and frankly I don't know how I managed to stay sane or sleep at night.


Spring (Boot) is the biggest.

After that, in terms of being standard and widely-used, Java EE, now known as Jakarta EE. It has a comparable level of magic to Spring Boot; maybe some of it is done better, some of it isn't.

DropWizard is still going.

I suspect that a larger proportion of Java programmers are working on headless data-munging backend apps than, say, Ruby or Node programmers. Those kinds of apps often either don't need a framework at all, or need some more specialist framework. Hence, Rails-esque frameworks are less of a priority for the Java community as a whole. Which is a bit of a shame, because it would be great to have a really strong alternative to Spring.


I keep using JEE, or one of the Java CMS for more complex stuff, like Liferay or Adobe Experience Manager.

However Quarkus and Micronaut are also quite appealing for small projects.


Check out https://www.jhipster.tech

It uses Spring Boot, but comes with a lot of sane defaults.


Job listings in the searches I am subscribed to are almost universally Spring and Spring Boot.


Java backend = Spring Boot. It dominates the market.


What does modern mean in this context?


Probably relative to things like J2EE as mentioned in the beginning, and as mentioned toward the middle:

> Non-blocking / reactive is one of the central demarcating elements of “modern” vs traditional.


I'm hoping to skip modern (i.e. reactive) and go straight to post-modern (i.e. Loom).


Post-modern, pre-future



