Eclipse OpenJ9 – Open-source JVM (github.com/eclipse)
322 points by jsiepkes on Sept 16, 2017 | hide | past | favorite | 104 comments



It's the IBM J9 donated to the Eclipse foundation: https://en.wikipedia.org/wiki/IBM_J9


The "Welcome to the Eclipse OpenJ9 repository" is just brilliant. So many product blogs and source code repositories fail to introduce the product first.


I'm confused. Do you think they're doing it right with that as an intro?


On the first point, I can see that it's a high-performance, enterprise-grade JVM. It's a good introduction. There are many projects that are difficult to understand even after scrolling or clicking around.


There have been a few projects that I followed a link to, possibly from this site, looked at for one to two minutes, and quit looking at after being unable to figure out what the project is, what it does, and whether I should be interested in it.


It's funny what becomes refreshing after so many fly-by-night résumé repos.


This is really interesting: CUDA support out of the box [1]. I've never seen this before; does any other JVM do it?

[1] https://github.com/eclipse/openj9/tree/master/runtime/cuda


We've actually implemented something like this ourselves with our linear algebra library.

We wrote our own gc for cuda: https://github.com/deeplearning4j/nd4j/tree/master/nd4j-back...

as well as: https://deeplearning4j.org/workspaces

It also integrates with the JVM via weak references.

I can attest to https://news.ycombinator.com/item?id=15269537 as well: you really need this as a third-party library because of how often CUDA changes. We usually end up having to support the two most recent CUDA versions.
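For anyone curious how weak-reference integration works at the JVM level, here is a minimal, generic java.lang.ref sketch. It illustrates only the mechanism (a native allocator can free device memory once the GC clears the reference); it is not taken from nd4j's actual internals:

```java
import java.lang.ref.WeakReference;

public class WeakRefDemo {
    public static void main(String[] args) throws InterruptedException {
        byte[] nativeBacked = new byte[1 << 20];  // stand-in for a GPU-backed buffer
        WeakReference<byte[]> ref = new WeakReference<>(nativeBacked);
        System.out.println("reachable: " + (ref.get() != null));

        // Drop the only strong reference; the GC may now clear the weak reference,
        // which is the signal a native allocator would use to free device memory.
        nativeBacked = null;
        for (int i = 0; i < 100 && ref.get() != null; i++) {
            System.gc();
            Thread.sleep(10);
        }
        System.out.println("cleared: " + (ref.get() == null));
    }
}
```

Note that clearing is at the GC's discretion; the retry loop makes the demo robust on typical JDKs.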


Very cool, thanks for the reference!


All of them do, via third-party libraries like JCuda. Bundling that into the JVM is arguably even a disservice to the user, because it is much harder to keep up with the newest CUDA features.


It seems like such a long time ago since Sun/Oracle and Apache Software Foundation fought over the licensing of the Java Technology Compatibility Kit (Java implementation test suite) for Apache Harmony, their JVM and Java class library implementation which was used as part of Android for a long time.[1,2]

Hopefully OpenJ9 will meet with better success. On paper, it seems to meet the restrictions that Oracle currently imposes for TCK access, namely that it is available under a GPL license and that its class library is derived from OpenJDK.[3,4] And it comes from IBM, which has been able to put aside its differences with Oracle on OpenJDK in the past, when it became clear that Oracle would never support Harmony (which IBM previously backed).[5]

But when it comes to Oracle and intellectual property, who knows?

[1] https://en.wikipedia.org/wiki/Apache_Harmony#Difficulties_to...

[2] http://www.apache.org/jcp/sunopenletter.html

[3] http://openjdk.java.net/groups/conformance/JckAccess/

[4] http://openjdk.java.net/legal/octla-java-se-8.pdf

[5] http://blog.joda.org/2010/10/no-java-7-end-game_4619.html


> namely that it is available under a GPL license

Not exclusively from what I see, seems you can choose the Eclipse Public License. From your #4 link:

> Licensee may not: [...] (iii) distribute a Licensee Implementation under any license other than a GPL License

I personally hope they revoke their TCK license (an ex-dev below stated it passed). I would hope implementations would then stop supporting this "Java" naming/certification single-company gatekeeping and instead support and contribute to the open test suite.


So IBM has OpenJ9 and OMR, Oracle has Graal and Truffle.

The JVM space is suddenly getting very interesting. But my question is: why now? While OMR did open up a year ago, it didn't seem to gain any traction.

Then last week Java EE was also opened up and moved to Eclipse, which is strange because it has always been IBM going to Eclipse and Oracle going to Apache.

Not to mention the OpenJDK builds https://blogs.oracle.com/java-platform-group/faster-and-easi...

What is going on in Java Land? Because I don't for a moment believe these companies are doing it for the good of Java. At least not IBM and Oracle.

Or am I reading too much into it?


Hi,

I don't work at any of the major JVM vendors, but I can point to trends in the space. Disclosure: my business relies heavily on the JVM, and I'm heavily involved in the Java community on the AI side. We also do a ton of Java systems development and have had conversations with many of the JVM vendors over the years, including Azul, Oracle, Red Hat, and IBM.

Azul started offering a "pure OpenJDK" distro a while back that they provide support for. They also provide a licensed embedded version. They differentiate with a pauseless GC.

Oracle uses the JVM in a lot of their products. App servers are slowing down in favor of microservices now. This unbundling started with spring boot and supporting Java EE annotations.

We can see this with Java EE migrating to eclipse now because the annotations themselves have become commoditized.

Red Hat started supporting JDK 7 in RHEL and was probably a bigger proponent of the "open" bits.

Overall here, just of note: Java has always had the JCP. https://jcp.org/en/home/index

I'm guessing that enough of the member organizations started pushing more for opening of the JVM.

Finally, we can see that even at Java conferences a lot of conversations are moving toward "JVM as a platform", with Scala and Clojure now included in the conferences.

So overall, I would say it's largely a shift in both the thinking and the way you monetize the JVM.


It makes more sense to have platforms such as Java and the JVM be open as opposed to proprietary.

J9 was largely used for IBM products, which means it wasn't exposed to or tuned for outside workloads. Opening it up is a very good thing.

OMR is the foundation for J9, and it tackles the issues that come up again and again in dynamic runtimes:

1. Having a good threading, networking library, etc...

2. Having a battle tested industrial strength GC.

3. Having a JIT with easy to use optimizations out-of-the-box.

4. Having solid tooling for debugging and performance introspection.

Other runtimes such as Ruby, Python are at various stages of improving their GC or adding a JIT. The goal with OMR is to turn the battle tested components of J9 into pluggable components for these other runtimes. If tomorrow somebody wants to make a new Ruby or Python they shouldn't have to write their own GC or JIT, the same way nobody writes their own filesystem these days.

Oracle and IBM both see this and Truffle/Graal are different ways of attacking the same problem.


> What is going on in the Java Land

Oracle and IBM are realising that WebLogic and WebSphere licensing revenue isn't going anywhere, and if they want to grow revenue and achieve lock-in, they have to find a new way.


That's awesome. It was always a weird situation: an abstract JVM specification but only one de facto standard implementation that everyone used. I hope J9 will be a viable alternative, so everyone wins.


How so? There was BEA JRockit, IBM J9, and Azul; Oracle and HP also have their own JVMs, IIRC.


I've been a Java dev for a decade, and I never once used anything but Sun / Oracle's version (now OpenJDK, which is almost the same thing).

I really have no idea who ever used these other JVM implementations. I'm guessing it would be companies with specialized hardware such as IBM mainframes or finance firms paying a lot of money to squeeze out performance with WebLogic. Either because of poor marketing or some other reason, no developer or company I worked with had a reason to use anything else.


Back in the early 2000s, the IBM JVM was freely available and frequently used as a drop-in replacement for the Sun JVM, since it was often faster and less memory-hungry.

IBM also had an open-source Java compiler written in C++, Jikes [1], that was considerably faster than Sun's javac. However, it was eventually abandoned. Jikes is coincidentally also the name of IBM's open-source research JVM [2], which is still under development.

Azul is apparently popular in areas requiring low latency, such as financial trading.

As an aside, there are many niches that most people haven't heard about. You might be surprised about all the kinds of software that are hiding under rocks — invisible to most people because they're not working in that industry. Things like MUMPS, K/kdb, Fortran, Delphi — lots of obscure stuff that has left the mainstream (or never entered it in the first place) but is still in use.

[1] https://en.wikipedia.org/wiki/Jikes

[2] https://github.com/JikesRVM/JikesRVM


Jikes RVM was first known as Jalapeño; they changed the name to Jikes RVM due to a name clash (the Jikes Java compiler already existed at the time). I'm not sure it is still active, though: a lot of researchers left IBM a few years ago (around 2012), and Java ceased to be the hot topic anyway.


At least WebSphere 6 was using the IBM JVM. Not sure if they still do.

We had to use the IBM JVM for running tests and precomputing stuff; otherwise things wouldn't work.


Websfear 8.5 still is using it (as of last year when I had a gig at an insurer).


I used JRockit for some stuff. We found it beat HotSpot (at the time) in some of our use cases. Eventually it was brought in-house and HotSpot gained those speed improvements. One thing that was really useful with it was Mission Control, which didn't exist to the same degree for the HotSpot VM.


JRockit's deterministic GC was awesome.


Didn't it make it into HotSpot?


HP had PA-RISC and Itanium so they ported Sun/Oracle's JVM (presumably under some commercial license agreement as this was before OpenJDK) to those platforms and distributed/supported it with their HP-UX. Oracle DBs on Itanium used that port of the JVM too IIRC.


Wow. It's been about 6 years since I last used J9, but at the time it was extremely good. I was running on Power, though, which is its sweet spot.


Hi all,

I'm one of the authors behind https://www.adoptopenjdk.net (CI at ci.adoptopenjdk.net, project at github/adoptopenjdk) where we are providing nightly and release builds of OpenJ9 (as well as a host of other OpenJDK derivatives). We've recently been granted the TCK (as the London Java Community) and so you'll shortly have professionally tested 'You can call this Java' binaries.


Let's get straight to the really important question: can J9 or AOT help, maybe just a little, with Clojure startup-time woes?


I tried some quick tests using a fully AOT-compiled, short-running thing we run in-house. Here the cost is JVM set-up plus loading of Clojure stuff (CentOS 7 Vagrant VM on my Mac).

J9:

    time ./jdk-9+181/bin/java -client -jar l2i-0.1.0-SNAPSHOT-standalone.jar

    real    0m1.987s
    user    0m3.383s
    sys     0m0.161s

    time ./jdk-9+181/bin/java -server -jar l2i-0.1.0-SNAPSHOT-standalone.jar

    real    0m2.949s
    user    0m5.452s
    sys     0m0.167s

OpenJDK 8:

    [root@localhost ~]# time java -server -jar l2i-0.1.0-SNAPSHOT-standalone.jar

    real    0m1.545s
    user    0m2.510s
    sys     0m0.175s

    time java -client -jar l2i-0.1.0-SNAPSHOT-standalone.jar

    real    0m1.456s
    user    0m2.309s
    sys     0m0.182s

----

For whatever it's worth, this is a repeated execution of 10 runs together for J9:

    real    0m17.341s
    user    0m26.783s
    sys     0m1.344s

And this is the same thing for openjdk version "1.8.0_144":

    real    0m15.169s
    user    0m24.573s
    sys     0m1.711s

So I'd say the answer to your question is no: they are in the same class for a short-running Clojure app dominated by startup time, unless there is some special tweak.


The OpenJ9 infrastructure is somewhat different (for the better) from what I'm used to internally, but I have only ever seen the shared class cache enabled when the -Xshareclasses [1] option is passed to the JVM. Do these numbers change significantly (especially after a warm-up run) if you add that option to J9's option set?

[1] https://www.ibm.com/support/knowledgecenter/en/SSYKE2_8.0.0/...


see above


Changes with -Xquickstart and -Xshareclasses:

    # time for i in `seq 1 10`; do ./jdk-9+181/bin/java -client -Xquickstart -jar l2i-0.1.0-SNAPSHOT-standalone.jar; done

    real    0m18.571s
    user    0m30.429s
    sys     0m1.374s

    # time for i in `seq 1 10`; do ./jdk-9+181/bin/java -client -Xshareclasses -jar l2i-0.1.0-SNAPSHOT-standalone.jar; done

    real    0m16.642s
    user    0m19.483s
    sys     0m6.307s


You're showing the sum of all times, but do the times at least look better after the first run (with -Xshareclasses, anyway)?

Depending on how smoothly your experimentation went, you may also have to destroy a pre-existing cache before you really measure (java -Xshareclasses:destroy), as it could contain stale classes from an earlier run.

Would you be willing to open an issue with more details at http://github.com/eclipse/openj9/issues so we can look into it?


Maybe I'm missing something, or I'm just blind, but where is the AOT compilation in your example? You seem to run the same jar with the same command-line options.

That said, I'm not that surprised that AOT class compilation didn't really speed up a Clojure app.


The Clojure app was AOT-compiled (Clojure to bytecode), not with the JVM's AOT.


Apparently there's an option in J9 called -Xquickstart; have you tried that?


see above


Thanks for checking, even if the result is not as we hoped.


For OpenJDK, startup time appears to be dominated by the number of classes loaded[0]. I was told that .class files are read out of the JAR files one at a time, rather than being slurped into memory in a batch and then processed.

I'd be interested in seeing whether that holds true for OpenJ9.

[0] https://github.com/dsyer/spring-boot-startup-bench/tree/mast...


Benchmark results aside, the JVM has very little to do with the startup time. Most of the time is spent wiring Clojure namespaces together: every namespace must be evaluated, every Var initialized. That takes time. The core reason is that namespaces and Vars are dynamic, not immutable, and the Clojure runtime has to evaluate everything on each startup to uphold language semantics.


Curious how its performance compares to the default VM from OpenJDK.


There was a Red Hat-sponsored benchmark released 3-4 years ago. I should note, however, that J9 was developed by IBM, which owns the Power architecture, and this test was run on Intel.

http://www.principledtechnologies.com/Red%20Hat/RHEL6_rhj_06...


Disclaimer: I work on the IBM J9 JIT + OMR compiler team, so I am in no way a neutral authority, nor do I speak for my employer.

'Even' on Intel, we go back and forth on benchmarks, depending on what each side is focusing on. Some metrics take longer than others to flip-flop but both sides have some very smart people constantly working to make performance better.

And FWIW, we have a team dedicated to making x86 perform well.

Two things to consider about this particular result:

1) This is running on what I expect is our pxa6470sr4-20130207_01 release (if I'm interpreting 'java-x86_64-70-4' correctly). There has been a ton of work on J9 since then, and I would be very leery of treating that result as canonical for 2017.

2) SPECjbb2013 was retracted due to a flaw: https://www.spec.org/jbb2013/defectnotice.html . I don't know what the implications of that flaw would be for these benchmark results (if any), but I would want more data before concluding anything.

That said, the 'other side' does do some really good work, and does score wins. We work hard to do the same. Even on Intel ;)


Interesting news in Java world.

Does this support AOT?


http://www.eclipse.org/openj9/

"Shared classes and Ahead-of-Time (AOT) technologies typically provide a 20-40% reduction in start-up time while improving the overall ramp-up time of applications. This capability is crucial for short-running Java applications or for horizontal scalability solutions that rely on the frequent provisioning and deprovisioning of JVM instances to manage workloads."


In my experience, J9 needs to be finely tuned to reach maximum performance for use cases other than enterprise application throughput. Try the -Xquickstart flag if you want to improve startup time.


Any chance of an official unikernel distribution, or a single entry-point mechanism for container images?

This, along with AOT-cached classes baked into a VM/container image, would be interesting.


Wonder if it will be possible to compile it under OS X. Curiously, OS X binaries have never been available for J9.


Eclipse OpenJ9 builds on top of Eclipse OMR. Eclipse OMR has OS X builds ... What a great thing it would be to have OpenJ9 building and running on OS X :) !

Disclaimer: I am a project lead for both Eclipse OpenJ9 and Eclipse OMR.

OS X has been talked about, but we have a lot of plumbing to connect up at the project right now and that has high priority for us. If someone wants to kick that work off, I would happily encourage it :) .


I opened an issue where discussion on OS X support can happen: https://github.com/eclipse/openj9/issues/36


It's always nice to see project members in threads like this. Thanks!


I expect it's just a matter of time now. The build is out there.</mulder>


Congrats! This is really cool!


Is it a move by IBM to sidestep Oracle? (I have no idea, just asking.)


So is this OpenJDK-derived? If so, too bad, I've been hoping for a truly independent implementation. If not, does it pass the TCK?


This is IBM's formerly-proprietary J9 JVM, which I believe was developed entirely from scratch, independently of Sun's HotSpot JVM.

J9 actually has its roots in an earlier Smalltalk VM (VisualAge Smalltalk, I think). The source copyrights go back to 1991. Eclipse, of course, was what IBM started after abandoning VisualAge.

It's not a JDK/JRE. J9 must be combined with OpenJDK to be able to run apps.


> It's not a JDK/JRE. J9 must be combined with OpenJDK to be able to run apps.

Well, yes and no. If you didn't use anything from the standard class library (which is impossible), then no. I'd also guess you need a Java compiler, since the docs say it uses an IBM-created "ROM file" generated from Java bytecode.

The most interesting thing about this VM is the shared class cache, which works like a *.so file: if one program has already loaded a class library into memory, another program using the same library won't consume additional memory. That's a huge win, especially for memory-hungry Java applications (and it also speeds up startup dramatically).


They already have a compiler, ECJ, which is included in Eclipse and Tomcat among other things.


Does ECJ support invokedynamic and newer bytecode? I don't think so, but I guess OpenJ9 already supports Java 9 bytecode. I'm not sure about the actual bytecode version; I think it was 52 for Java 8, but I'm not totally sure.

I mean, ECJ is also not included in OpenJ9, so if you want to strip out every OpenJDK dependency you would need at least a really basic class library and a compiler (ECJ or anything else). (It also sounds like OpenJ9 includes a compiler: https://www.youtube.com/watch?v=96XoG6xcnys. At the end he only says that they use the class library from OpenJDK; he doesn't say anything else, and it looks like that's all it uses.)


ECJ is in Tomcat? What for?


Compiling Java :-)

To be more precise, a compiler is required for JSP, which is a mixture of HTML and Java that is compiled to servlets.

Theoretically you can compile them ahead of time (before deployment), but as far as I can tell no one does.


Those who want to ship only class files and no JSP code use the JSP pre-compilation approach. The JSP files get compiled to Java servlet bytecode, the same as at runtime.


> J9 must be combined with OpenJDK to be able to run apps.

No, IBM has its own full JDK that you can download, but the parts of it other than the JVM are not being open-sourced at this time. That's my understanding, at least.


The IBM SDK for Java is based on OpenJDK, just like most (not all) other Java releases that are out there. IBM does not have "its own full JDK".


Ex-J9 engineer here: J9 was built completely in a clean room. It does pass the TCK tests.


Will they now be denied use of the TCK going forward, like Harmony was? If not, why? Because they were proprietary first? Because Oracle likes the Eclipse license better than the Apache one?

Oracle has stated that for open source, only OpenJDK derivatives get access to the TCK.


I interned at the IBM Toronto Lab 10+ years ago, adding instruction-set support to this compiler/JVM. It was a fucking awesome experience back then!


What is the 9 for? Jackknife? Jabbering?


There's some mythology here, perhaps another J9 greybeard can answer. The story I've heard so far goes as follows...

There was some constant for 8K that was mistyped. Something like, instead of:

    #define 8K 8192

they wrote:

    #define 8K 8096

This caused all sorts of bugs and was known internally as the 8K bug. When the time came to name the Java VM, they wanted a name meaning "post-8K-bug", so they chose K9. But K9 sounds like a dog (woof), so they decremented the K to land on J9.

At least that's what I've heard...

shrugs



This is close, but not right. Dogs never entered into the equation. I'll post a blog somewhere with the history and come back here with a link.


Better late than never.


Will a Java expert (JVM developer) please answer this question? Thank you.

How impossible would it be to add a delete keyword to the JVM, and why?


Java does memory recovery strictly by garbage collection. To delete an object, remove all references to it and wait.

A "delete now" operation imposes requirements on implementations that are not strictly required.

Any advantages gained from destructors can be realized via explicit cleanup methods and (as a hedge against misuse) state checking for initialized and destroyed objects. A delete operation also adds the risk that a deleted object may be referenced again (although with esoteric stuff like PhantomReferences you can almost do it).
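To make the PhantomReference remark concrete, here is a minimal sketch of the "almost delete" pattern: a cleanup action keyed off an object becoming unreachable. This is illustrative only, not production code:

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

public class PhantomCleanupDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<Object> queue = new ReferenceQueue<>();
        Object payload = new byte[1 << 20];
        // The phantom reference is enqueued after the GC decides the payload is dead;
        // that is where a finalizer-free cleanup (e.g. freeing native memory) would run.
        PhantomReference<Object> ref = new PhantomReference<>(payload, queue);

        payload = null;  // no strong references remain
        boolean enqueued = false;
        for (int i = 0; i < 100 && !enqueued; i++) {
            System.gc();
            Reference<?> r = queue.remove(10);  // wait up to 10 ms for enqueueing
            enqueued = (r == ref);
        }
        System.out.println("cleanup triggered: " + enqueued);
    }
}
```

Note the crucial limitation: you can run cleanup when the object dies, but you can never force the object to die at a specific moment, which is exactly why this is "almost" a delete.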


I could see this being useful for reducing GC load in an application with (very) soft real-time requirements.


What about the "automatically gets cleaned up when the object goes out of scope" advantage of destructors?


You can do that with try-with-resources.
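For readers unfamiliar with the pattern: try-with-resources gives deterministic cleanup at scope exit, even though the object's memory is still reclaimed later by the GC. A minimal sketch, where the Resource class is hypothetical:

```java
// A hypothetical resource with an explicit cleanup method.
class Resource implements AutoCloseable {
    private boolean closed = false;

    void use() {
        if (closed) throw new IllegalStateException("use after close");
        System.out.println("using resource");
    }

    @Override
    public void close() {
        closed = true;  // release files, sockets, native memory, etc.
        System.out.println("resource closed");
    }
}

public class TryWithResourcesDemo {
    public static void main(String[] args) {
        // close() runs deterministically when the block exits, even on exception.
        try (Resource r = new Resource()) {
            r.use();
        }
    }
}
```

The closed flag plus the state check in use() is the "hedge against misuse" mentioned upthread: a use-after-close fails loudly instead of corrupting memory.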


Would it be impossible (or prohibitively difficult) to add it to an implementation, though?


Prohibitively difficult. If you really truly need delete then you really truly don't need Java and should use something like C.


It's not impossible. HotSpot/OpenJDK already has one: it's a native method/compiler intrinsic rather than a keyword, but it amounts to the same thing.

http://www.docjar.com/docs/api/sun/misc/Unsafe.html#freeMemo...


That frees a block of native memory, not an arbitrary Java object...


It's far from impossible. I would even say it's trivial to add a C++-style delete keyword, but then you'd lose memory safety because of potential use-after-free bugs, and it wouldn't even increase performance: the real problems with garbage collectors are the stop-the-world pause, the lack of off-heap allocation for very large data, and the missing value types needed to take full advantage of it.

If you want manual memory allocation, you should instead look at sun.misc.Unsafe.

http://www.docjar.com/html/api/sun/misc/Unsafe.java.html

    public native long allocateMemory(long bytes);

    public native void freeMemory(long address);
Of course, you still have to calculate the field offsets and all that stuff manually, because it's not part of the Java language; but you only asked about the JVM.
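For illustration, here is a hedged sketch of the usual reflection trick for obtaining the Unsafe instance and doing manual allocation. sun.misc.Unsafe is unsupported API; this works on common JDKs today but may warn or break in the future:

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeDemo {
    public static void main(String[] args) throws Exception {
        // The constructor is private; the canonical hack reads the static singleton.
        Field f = Unsafe.class.getDeclaredField("theUnsafe");
        f.setAccessible(true);
        Unsafe unsafe = (Unsafe) f.get(null);

        long addr = unsafe.allocateMemory(8);  // raw native memory, invisible to the GC
        unsafe.putLong(addr, 42L);
        System.out.println("value: " + unsafe.getLong(addr));
        unsafe.freeMemory(addr);               // manual "delete": use-after-free is on you
    }
}
```

This is exactly the trade the thread describes: you get C-style allocate/free, and in exchange you give up every memory-safety guarantee the JVM normally provides.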


Why would you want to add a delete keyword? Are you trying to add explicit memory management to simplify the work of the garbage collector?


One example would be game development, where people end up doing object pooling to avoid GC pauses dropping frames.
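The pooling idea can be sketched in a few lines: reuse objects across frames instead of allocating fresh ones, so the GC sees no per-frame garbage. The names here are illustrative, not from any real engine:

```java
import java.util.ArrayDeque;

class Bullet {
    double x, y;
    void reset(double x, double y) { this.x = x; this.y = y; }
}

class BulletPool {
    private final ArrayDeque<Bullet> free = new ArrayDeque<>();

    Bullet acquire(double x, double y) {
        Bullet b = free.poll();
        if (b == null) b = new Bullet();  // allocate only when the pool is empty
        b.reset(x, y);
        return b;
    }

    void release(Bullet b) {
        free.push(b);  // recycle rather than letting the GC reclaim it
    }
}

public class PoolDemo {
    public static void main(String[] args) {
        BulletPool pool = new BulletPool();
        Bullet a = pool.acquire(0, 0);
        pool.release(a);
        Bullet b = pool.acquire(1, 1);
        System.out.println("reused: " + (a == b));
    }
}
```

Once warmed up, a steady-state frame performs zero allocations, which is the whole point: the GC has nothing to collect, so it has no reason to pause.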


There are better ways to eliminate GC pauses than manual memory management (even ignoring the relatively new "pauseless" GCs). See the new RTSJ (real-time Java) specification here: https://www.aicas.com/cms/en/rtsj. Note that it's designed for far stricter requirements than games (cases where even microseconds of unexpected latency may result in actual life-safety concerns).

Manual memory management is a good choice if you have both worst-case latency concerns as well as very restricted RAM and/or severe energy constraints (basically, the kind of applications that Rust was designed to handle).


JEP 189: Shenandoah: An Ultra-Low-Pause-Time Garbage Collector (http://openjdk.java.net/jeps/189) could help with that.


The RTSJ solution is to simply pre-allocate everything you could ever want and reuse it.

That's horrible to use for game development.


> The RTSJ solution is to simply pre-allocate everything you could ever want and reuse that.

That's not the RTSJ solution. RTSJ uses arenas, which are great if you have regular allocation/deallocation cycles, like, say, a frame in a game.


> That's not the RTSJ solution. RTSJ uses arenas, which are great if you have regular allocation/deallocation cycles, like, say, a frame in a game.

So how do I get a user’s Oracle JRE installation to use arenas? The only realistic scenario is to preallocate and reuse.


You don't. You use an RTSJ JVM.


That’s not an option, that was the whole point of the discussion.


If changing the JVM spec is an option, then surely so is using an existing spec. IBM has an RTSJ JVM[1], which, I believe, is based on J9.

[1]: https://www.ibm.com/support/knowledgecenter/en/SSSTCZ_3.0.0/...


Changing the JVM spec has a chance at being applied in future normal JVMs. Using an entirely separate solution has not.


It's not an entirely separate solution, but a Java standard. In any event, it's probably overkill if all you need are acceptable pause-times for games (RTSJ is for hard-realtime, safety-critical systems, where even 2us jitter may kill someone). There are "pauseless" GCs for the JVM (with 1-10ms max pause, depending on the GC) without hard realtime guarantees. We just need more free implementations.


Eclipse OpenJ9's -Xgcpolicy:metronome is one of those GCs. Depending on how well you configure your operating system, it defaults to 3 ms pauses along with an application-utilization contract.


I'm not a game developer, so why is that horrible? Suppose I keep an array/map of my game objects and the rest works only with methods, so most state doesn't go on the heap. That means basically no strings, since they would add heap memory as well; only basic data types and arrays.

Why would that kind of approach be problematic? Graphics programming is new to me, but considering you could offload that to a C/C++ engine via JNI (which is another kind of horrible), the game code itself doesn't seem too bad, does it?


> why would some kind of that be problematic? I mean graphics programmic is kind of new to me, but considering that you could offload that to a c/c++ engine via jni (that's another horrible thing of some kind), but just the game code itself doesn't seems to be to bad, does it?

The whole point is not to offload anything. If you need to offload anything, even the render loop, the language has failed.


Isn't that what almost every game does during loading screens?


You could also check out Eclipse OpenJ9's -Xgcpolicy:metronome option. It only works on Linux x86-64 at the moment, but it is designed to better regulate GC pauses. If you want to learn more, you can open an issue to ask for details at https://github.com/eclipse/openj9/issues .


Even in game development in C/C++, you wouldn't want to be allocating and deleting heap objects every frame; you'd probably end up pooling there too. Unless you're referring to the problem that the JVM has no value types and forces heap allocation for all objects: this will be fixed in JDK 10 (see Project Valhalla).


That doesn't solve the fundamental problem of stop-the-world pauses. If you had 10 threads sharing the same GC heap, allocating as much garbage as they want, and 1 thread with its own heap that doesn't allocate anything, only the 10 threads would be stopped by the garbage collector, and you would have reached your goal.

If those 11 threads shared the same heap, and 1 thread still avoided the GC by doing manual memory management, you'd still suffer from stop-the-world pauses.

What you want is isolated heaps, like Erlang has, not manual memory management. I'm still wondering why we have no other programming languages with multiple heaps. You could probably achieve something similar with D, using memory-mapped files to share memory efficiently without copying between processes. Alternatively, you could call into C to spawn OS threads that don't suffer from stop-the-world pauses. But those two options are hacks; you're not supposed to use D like that.

I chose D as an example because it's a programming language with both a garbage collector and manual memory management that still suffers from stop-the-world pauses.


What would you expect to be the behavior of any existing references to an object that was deleted?

(Keep in mind that Java is a memory-safe language.)



