Working on a library that helps benchmark Active Learning (AL) techniques [1]. This is a form of machine learning where you want to learn a supervised predictor but don't have labels to start with, and labeling comes at a cost you need to account for (the term AL can have broader connotations, but this is the popular one). We feel the area suffers quite a bit from poor benchmarks, which my colleague and I wrote about in a paper [2]. To run the many experiments in the paper we had to write a fairly comprehensive codebase that makes it convenient to swap out different bits and pieces of an AL pipeline, and we'll be polishing that up now.
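
For anyone unfamiliar with the setting: a minimal sketch of one pool-based AL loop with uncertainty sampling is below. This is only an illustration of the setup described above, written against scikit-learn on synthetic data; it is not the library's API, and the seed size, batch size, model, and labeling budget are arbitrary choices made for the example.

    # Generic pool-based active learning loop with uncertainty sampling.
    # Illustration only; not the linked library's API.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

    rng = np.random.default_rng(0)
    labeled = list(rng.choice(len(X), size=20, replace=False))  # small seed set
    unlabeled = [i for i in range(len(X)) if i not in labeled]
    batch_size, budget = 20, 200  # labeling-cost cap

    while len(labeled) < budget:
        clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
        # Uncertainty sampling: query pool points whose top-class probability is lowest.
        probs = clf.predict_proba(X[unlabeled])
        uncertainty = 1.0 - probs.max(axis=1)
        picked = np.argsort(-uncertainty)[:batch_size]
        newly_labeled = [unlabeled[i] for i in picked]
        labeled.extend(newly_labeled)  # "ask the oracle" for these labels
        unlabeled = [i for i in unlabeled if i not in newly_labeled]

    print(f"Labeled {len(labeled)} points under a budget of {budget}.")

The reason a benchmark library helps is that every piece of this loop (query strategy, model, batch size, seed set, budget) is a choice that interacts with the others, so being able to swap each one out cheaply matters for fair comparisons.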

PS: If the work is of interest and you want to avoid reading the paper, I have a blog post too [3].

[1] https://github.com/ThuongTNguyen/active_learning_comparisons

[2] https://arxiv.org/abs/2403.15744 (accepted at EMNLP 2024)

[3] https://blog.quipu-strands.com/inactive_learning



