
Looking at it, I don't see anything dealing with sparse matrices or factorization (e.g., LU, QR, SVD). All the Java libraries for SVD are pretty bad. Plus, none of your examples mention double precision. Does the library support it?

I find it interesting that the NumPy comparison doesn't mention which BLAS NumPy is linked against, though it does for Nd4j. NumPy is highly dependent on a good BLAS, and the basic Netlib one isn't that great.




Those are implemented by LAPACK as part of an Nd4j backend.

Yes, we have double precision - there's a default data type associated with the data buffer.

If you're curious how we do storage: https://github.com/deeplearning4j/nd4j/blob/master/nd4j-buff...

We have allocation types and data types.

Data types are double/float/int (int is mainly for storage).

Allocation types are the storage medium, which can be arrays, byte buffers, or what have you.
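
To make the split concrete, here is a minimal, hypothetical sketch (my names, not Nd4j's actual DataBuffer API; the real storage is at the link above) of a double-precision buffer whose backing store is chosen by an allocation type:

    import java.nio.ByteBuffer;
    import java.nio.ByteOrder;

    // Hypothetical sketch: separates what is stored (data type) from where
    // the bytes live (allocation type).
    public class BufferSketch {

        enum DataType { DOUBLE, FLOAT, INT }
        enum AllocationType { HEAP_ARRAY, DIRECT_BUFFER }

        // A tiny double-precision buffer backed either by a double[] on the
        // JVM heap or by an off-heap direct ByteBuffer.
        static final class DoubleDataBuffer {
            final DataType dataType = DataType.DOUBLE;
            final AllocationType allocationType;
            private final double[] array;    // used when HEAP_ARRAY
            private final ByteBuffer buffer; // used when DIRECT_BUFFER

            DoubleDataBuffer(int length, AllocationType allocationType) {
                this.allocationType = allocationType;
                if (allocationType == AllocationType.HEAP_ARRAY) {
                    this.array = new double[length];
                    this.buffer = null;
                } else {
                    this.array = null;
                    this.buffer = ByteBuffer.allocateDirect(length * Double.BYTES)
                                            .order(ByteOrder.nativeOrder());
                }
            }

            void put(int i, double value) {
                if (allocationType == AllocationType.HEAP_ARRAY) array[i] = value;
                else buffer.putDouble(i * Double.BYTES, value);
            }

            double get(int i) {
                return allocationType == AllocationType.HEAP_ARRAY
                        ? array[i]
                        : buffer.getDouble(i * Double.BYTES);
            }
        }

        public static void main(String[] args) {
            DoubleDataBuffer onHeap  = new DoubleDataBuffer(4, AllocationType.HEAP_ARRAY);
            DoubleDataBuffer offHeap = new DoubleDataBuffer(4, AllocationType.DIRECT_BUFFER);
            onHeap.put(0, 3.14);
            offHeap.put(0, 3.14);
            System.out.println(onHeap.get(0) + " " + offHeap.get(0)); // 3.14 3.14
        }
    }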

If you have a problem with the docs, I highly suggest filing an issue on our tracker: https://github.com/deeplearning4j/nd4j/issues

We actually appreciate feedback like this, thank you.

As for netlib-java, it links against any BLAS implementation you give it. It has the idea of a JNILoader which can dynamically link against the fallback BLAS (which you mentioned), or, more typically, OpenBLAS or MKL. The problem there can actually be licensing, though. The Spark project runs into this: https://issues.apache.org/jira/browse/SPARK-4816
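
Purely as an illustration of what that looks like from the caller's side (assuming netlib-java, i.e. com.github.fommil.netlib, is on the classpath), the same code runs against whichever BLAS the loader binds at runtime, whether the F2J fallback, OpenBLAS, or MKL:

    import com.github.fommil.netlib.BLAS;

    // 2x2 matrix multiply via netlib-java; the native backend is chosen at runtime.
    public class GemmExample {
        public static void main(String[] args) {
            int m = 2, n = 2, k = 2;
            // Column-major (Fortran-style) storage, as BLAS expects.
            double[] a = {1, 3, 2, 4};   // [[1, 2], [3, 4]]
            double[] b = {5, 7, 6, 8};   // [[5, 6], [7, 8]]
            double[] c = new double[m * n];

            // C = 1.0 * A * B + 0.0 * C
            BLAS.getInstance().dgemm("N", "N", m, n, k, 1.0, a, m, b, k, 0.0, c, m);

            // Expect [[19, 22], [43, 50]], printed column-major: 19 43 22 50
            for (double x : c) System.out.print(x + " ");
        }
    }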

If we don't mention something on the site, it's probably because we haven't thought about it yet or haven't gotten enough feedback on it.

Unfortunately, we're still in heavy development mode.

FWIW, we have one of the most active gitter channels out there. You can come find me anytime if you're interested in getting involved.


LAPACK doesn't implement any sparse linear algebra. If you think the landscape of "Java matrix libraries" is fragmented (when really they're all just different takes on wrapping BLAS and LAPACK, or writing equivalent functionality in pure Java), wait until you look into sparse linear algebra libraries. There's no standard API, there are three-ish common storage formats and a dozen less common ones, only one or two of these libraries have any public version control or issue tracker whatsoever, and licenses are all over the map. The whole field is a software engineering disaster, and yet it's functionality you just can't get anywhere else.
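
For anyone who hasn't run into those storage formats, here is a toy sketch of one of the common ones, compressed sparse row (CSR), with a small matrix-vector product; the names are mine, not any particular library's API:

    // Toy CSR (compressed sparse row) illustration, not tied to any library.
    public class CsrExample {
        // 3x4 matrix:
        // [ 5 0 0 1 ]
        // [ 0 8 0 0 ]
        // [ 0 0 3 6 ]
        static final double[] values   = {5, 1, 8, 3, 6}; // nonzeros, row by row
        static final int[]    colIndex = {0, 3, 1, 2, 3}; // column of each nonzero
        static final int[]    rowPtr   = {0, 2, 3, 5};    // start of each row in values

        // Sparse matrix-vector product y = A * x over the CSR arrays.
        static double[] multiply(double[] x) {
            double[] y = new double[rowPtr.length - 1];
            for (int row = 0; row < y.length; row++) {
                for (int idx = rowPtr[row]; idx < rowPtr[row + 1]; idx++) {
                    y[row] += values[idx] * x[colIndex[idx]];
                }
            }
            return y;
        }

        public static void main(String[] args) {
            double[] y = multiply(new double[]{1, 2, 3, 4});
            for (double v : y) System.out.println(v); // 9.0, 16.0, 33.0
        }
    }

The other common formats (COO and CSC) carry the same information with different index layouts.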


I'm aware of the different storage formats. However, there are quite a few sparse BLAS and LAPACK implementations now.

I'm aware of the software engineering logistics that go into doing sparse right, which is why I held off.

We are mainly targeting deep learning with this, but sparse is becoming important enough for us to add it.

As for disparate standards, I managed to work past that for cuBLAS/BLAS.

I'm not going to let it stop me from doing it right. If you want to help us fix it, we are hiring ;).


> However, there are quite a few sparse BLAS and LAPACK implementations now.

There's the NIST Sparse BLAS, and MKL has a similar but not exactly compatible version. These never really took off in adoption (MKL is widely used, of course, but I'd wager these particular functions are not). What sparse LAPACK are you talking about?

> If you want to help us fix it, we are hiring ;).

We were at the same dinner a couple of weeks ago, actually. I'm enjoying where I am, using Julia and LLVM; not sure you could pay me enough to make me want to work on the JVM.



