
I consider GPs useful in two ways: as a tool for hyperparameter optimization (where GPs are a specific instantiation of Bayesian optimization) and as a powerful non-linear regression method that supports uncertainty estimates via the rich posterior distributions GPs give you.
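
To make the second of these concrete, here is a minimal sketch of GP regression with per-point uncertainty using scikit-learn (the library, kernel, and toy data here are my choices for illustration, not anything from the thread):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)
    X = rng.uniform(0, 5, size=(20, 1))                     # training inputs
    y = np.sin(X).ravel() + 0.1 * rng.standard_normal(20)   # noisy targets

    # alpha is the assumed observation-noise variance
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=0.1**2)
    gp.fit(X, y)

    X_test = np.linspace(0, 5, 100).reshape(-1, 1)
    mean, std = gp.predict(X_test, return_std=True)  # posterior mean and std per test point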

However, on the hyperparameter optimization side, an experiment comparing Bayesian optimization (BO) to random search given merely double the computational budget [0] suggests that BO is not even 2x better than random search, which, to me, means that in practice BO isn't my first go-to for hyperparameter optimization.
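
Part of random search's appeal is that it is trivial to implement and parallelize. A minimal sketch (the search space and the stand-in objective below are hypothetical, purely for illustration):

    import random

    def evaluate(lr, batch_size):
        # Stand-in for training a model and returning validation loss.
        return (lr - 0.01) ** 2 + 0.001 * abs(batch_size - 64)

    best = None
    for _ in range(50):  # trial budget; [0] effectively grants random search 2x this
        lr = 10 ** random.uniform(-4, -1)              # log-uniform learning rate
        batch_size = random.choice([16, 32, 64, 128])
        loss = evaluate(lr, batch_size)
        if best is None or loss < best[0]:
            best = (loss, lr, batch_size)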

On the regression side, its poor time complexity (you need to invert the n x n kernel matrix, which is O(n^3) and slow in practice beyond maybe 1000 points) makes it less of a first go-to for me than neural networks, which perhaps irrationally benefit from their current "hotness", but for which there are also well-explored strategies for generating predictions with uncertainty [1]. Of course, GPs compare significantly more favorably on the axis of interpretability, since you get a closed-form posterior distribution at any point of interest, and the manner in which your dataset influences this posterior is transparent.
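
To make the cost concrete: the closed-form posterior needs only linear algebra on the n x n kernel matrix, but that is exactly where the O(n^3) bottleneck lives. A minimal NumPy sketch of the standard GP regression equations (the RBF kernel and noise level are illustrative choices):

    import numpy as np

    def rbf(A, B, length_scale=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)

    def gp_posterior(X, y, X_test, noise=0.1):
        K = rbf(X, X) + noise**2 * np.eye(len(X))  # n x n kernel matrix
        L = np.linalg.cholesky(K)                  # the O(n^3) step
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        K_s = rbf(X, X_test)
        mean = K_s.T @ alpha                       # closed-form posterior mean
        v = np.linalg.solve(L, K_s)
        cov = rbf(X_test, X_test) - v.T @ v        # closed-form posterior covariance
        return mean, cov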

There have been, however, interesting developments in improving the efficiency of GP regression on large datasets, such as approximating the GP with a reduced, weighted set of data points [2]. The recent DeepMind paper on neural processes [3] is also interesting: it combines GPs with neural networks and boasts strong scalability.
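
As a sketch of the general idea behind such approximations (this is the classic subset-of-regressors construction; the methods in [2] and [3] differ in their details): choose m << n inducing points and solve an m x m system instead of an n x n one, cutting the cost from O(n^3) to O(n m^2):

    import numpy as np

    def rbf(A, B, length_scale=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length_scale**2)

    def sor_posterior_mean(X, y, X_test, Z, noise=0.1):
        # Subset-of-regressors predictive mean with m inducing points Z.
        K_mn = rbf(Z, X)                      # m x n cross-covariance
        K_mm = rbf(Z, Z)                      # m x m
        A = noise**2 * K_mm + K_mn @ K_mn.T   # m x m system instead of n x n
        w = np.linalg.solve(A, K_mn @ y)
        return rbf(X_test, Z) @ w             # approximate posterior mean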

[0]: http://www.argmin.net/2016/06/23/hyperband/

[1]: https://eng.uber.com/neural-networks-uncertainty-estimation/

[2]: https://arxiv.org/abs/1106.5779

[3]: https://arxiv.org/abs/1807.01622



The hyperband (HB) paper you refer to was discussed (briefly) by Frank Hutter in this talk [1]. He observes that this interpretation is not correct: had the experiment run longer, you would've seen different behavior. In fact, their group has combined Bayesian optimization (BO) with HB, producing some interesting results. Initial results were reported at a NIPS'17 workshop [2], and a detailed version was part of the recently concluded ICML'18 [3].

[1] https://youtu.be/OR-IKyP4ZpI?t=3m30s

[2] See accepted papers here - https://bayesopt.github.io/accepted.html

[3] https://icml.cc/Conferences/2018/Schedule?showEvent=2387



