I’m quite happy to see random projections getting some love, but I hope more people start using Choromanski et al.’s 2016 Structured Orthogonal Random Features, which have provably higher accuracy while cutting runtime from quadratic to linearithmic and memory from quadratic to linear (or even constant). I’ve verified this experimentally in my implementation here [0]. As a shameless plug: it’s quite fast, written in C++, and comes with Python bindings for both kernel projections and orthogonal JL transforms.

[0]: https://github.com/dnbaker/frp
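
To make the "structured" part concrete, here is a rough numpy sketch of the SORF idea (just the trick itself, with toy dimensions and bandwidth of my own choosing, not the library's API): the dense Gaussian matrix of standard random Fourier features is replaced by a product of Hadamard matrices and random sign diagonals.

    import numpy as np
    from scipy.linalg import hadamard

    rng = np.random.default_rng(0)
    d = 256                            # feature dimension; must be a power of two
    sigma = np.sqrt(d)                 # Gaussian kernel bandwidth, picked just for the demo

    # Normalized Hadamard matrix. In practice you never materialize H; a fast
    # Walsh-Hadamard transform applies it in O(d log d) time and O(1) extra memory.
    H = hadamard(d) / np.sqrt(d)
    D1, D2, D3 = (rng.choice([-1.0, 1.0], size=d) for _ in range(3))
    # H * Di broadcasts the signs across columns, i.e. H @ diag(Di).
    W = (np.sqrt(d) / sigma) * (H * D1) @ (H * D2) @ (H * D3)

    def features(x):
        # Random Fourier features: features(x) @ features(y) approximates
        # the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2)).
        wx = W @ x
        return np.concatenate([np.cos(wx), np.sin(wx)]) / np.sqrt(d)

    x, y = rng.standard_normal(d), rng.standard_normal(d)
    exact = np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))
    print(exact, features(x) @ features(y))   # the two values should be close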




The reason I'm using random projections in the latest test is that I'm testing an algorithm which iteratively calculates the inverse Cholesky factor of the covariance matrix, and I'm running it on MNIST images. The covariance matrices built from raw MNIST images are non-invertible, but projecting the images down to a much smaller dimension lets me actually test the algorithm on non-synthetic data.
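
To make the rank problem concrete, here's roughly the picture in numpy, with a toy stand-in for MNIST and a plain dense Gaussian projection (not necessarily the exact setup in my commit):

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, k = 2000, 784, 64

    # Toy stand-in for raw MNIST: many pixels (e.g. the border) are always zero,
    # so the 784x784 sample covariance is rank-deficient and has no Cholesky factor.
    X = np.zeros((n, d))
    active = rng.choice(d, size=300, replace=False)
    X[:, active] = rng.random((n, active.size))
    print(np.linalg.matrix_rank(np.cov(X, rowvar=False)))   # well below 784

    # Project down to k dimensions with a dense Gaussian random matrix (JL-style).
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    Y = X @ R
    L = np.linalg.cholesky(np.cov(Y, rowvar=False))   # full rank now, so this succeeds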

I do not actually need more than I have, but I'll keep your link in mind if I ever need random projections.


That makes sense. Thank you for explaining! I hadn’t fully pieced it together from the commit.

For dimensionality reduction, if one were to use my library, the way to go would be the orthogonal JL transform. The key to all these methods is that multiplying by a random diagonal sign matrix and then applying a fast Hadamard (or Fourier) transform is equivalent to multiplying by a structured random matrix, which gives you what dense random matrix multiplication provides without ever instantiating or storing the full matrix.
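
For example, here is a minimal numpy sketch of that trick as I'd write it outside the library (random sign flips, a fast Walsh-Hadamard transform, then subsampling; the real implementation is in C++):

    import numpy as np

    def fwht(x):
        # Unnormalized fast Walsh-Hadamard transform along the last axis,
        # O(d log d) per row; the length d must be a power of two.
        shape = x.shape
        d = shape[-1]
        x = x.reshape(-1, d).astype(float)
        h = 1
        while h < d:
            y = x.reshape(x.shape[0], d // (2 * h), 2, h)
            a = y[:, :, 0, :] + y[:, :, 1, :]
            b = y[:, :, 0, :] - y[:, :, 1, :]
            x = np.stack([a, b], axis=2).reshape(x.shape[0], d)
            h *= 2
        return x.reshape(shape)

    def orthogonal_jl(X, k, rng):
        # Project the rows of X (n x d) down to k dimensions without ever
        # forming a k x d matrix: flip signs, transform, subsample, rescale.
        d = X.shape[1]
        signs = rng.choice([-1.0, 1.0], size=d)   # the diagonal matrix, stored as a vector
        Z = fwht(X * signs)
        idx = rng.choice(d, size=k, replace=False)
        return Z[:, idx] / np.sqrt(k)             # scaling preserves norms in expectation

    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 1024))
    Y = orthogonal_jl(X, 64, rng)
    print(np.linalg.norm(X, axis=1))   # ~32 for each row
    print(np.linalg.norm(Y, axis=1))   # roughly the same, despite 16x fewer dimensions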

As an aside, I admire your work and think it’s both very exciting and highly valuable.


How is this relevant to the upstream comment?




