At Roboflow, we've seen users fine-tune hundreds of thousands of computer vision models on custom datasets.
We observed that there's a huge disconnect between the types of tasks people are actually trying to perform in the wild and the types of datasets researchers are benchmarking their models on.
Datasets like MS COCO (with hundreds of thousands of images of common objects) are often used in research to compare models' performance, but then those models are used to find galaxies, look at microscope images, or detect manufacturing defects in the wild (often trained on small datasets containing only a few hundred examples). This leads to big discrepancies in models' stated and real-world performance.
We set out to tackle this problem by creating a new set of datasets that mirror many of the same types of challenges that models will face in the real world. We compiled 100 datasets from our community spanning a wide range of domains, subjects, and sizes.
We've benchmarked a few models (YOLOv5, YOLOv7, and GLIP) to start, but could use your help measuring the performance of others on this benchmark (check the GitHub repo for starter scripts showing how to pull the datasets, fine-tune models, and evaluate). We're very interested to learn which models do best in which real-world scenarios & to give researchers a new tool to make their models more useful for solving real-world problems.
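To give a flavor of the workflow, here's a minimal sketch of pulling a dataset with the roboflow Python package. The API key, workspace, project, and version here are placeholders rather than the actual benchmark identifiers; the scripts in the repo are the canonical way to download RF100.

    # Minimal sketch (placeholder names, not the official RF100 download script):
    # pull a dataset from Roboflow in YOLOv5 format using the Python package.
    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")                      # your API key
    project = rf.workspace("your-workspace").project("your-dataset")
    dataset = project.version(1).download("yolov5")            # export format

    print(dataset.location)  # local folder containing train/valid/test splits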
It might be good not only to show that YOLO and other models do not fully generalize to the presented image domains, but also that some specialized SOTA models (e.g., for aerial images) perform well in the subcategories, to actually calibrate the benchmark. Otherwise I would wonder about the quality of the labels (e.g., being contradictory).
Good idea. I haven’t looked too closely yet at the “hard” datasets.
We originally considered “fixing” the labels on these datasets by hand, but ultimately decided that label error is one of the challenges “real world” datasets present, and one that models should work to become more robust against. There is some selection bias in that we did make sure the datasets we chose passed the eye test (in other words, it looked like the user spent a considerable amount of time annotating, and a sample of the images showed some object of interest being labeled).
For aerial images in particular, my guess would be that these models suffer from the “small object problem”[1], where the subjects are tiny compared to the size of the image. Trying a sliding-window approach like SAHI[2] on them would probably produce much better results (at the expense of much lower inference speed).
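For anyone curious, here's a rough sketch of what that looks like with the sahi package (model path, tile size, and overlap values are placeholders, and the exact API may vary by version): the image is cut into overlapping tiles, the detector runs on each tile, and the predictions are merged back, which tends to help with tiny objects at the cost of extra inference time.

    # Rough sketch of sliced inference with SAHI over a fine-tuned YOLOv5 model.
    # Paths and tile parameters below are placeholder values.
    from sahi import AutoDetectionModel
    from sahi.predict import get_sliced_prediction

    detection_model = AutoDetectionModel.from_pretrained(
        model_type="yolov5",
        model_path="weights/best.pt",        # your fine-tuned checkpoint
        confidence_threshold=0.25,
    )

    result = get_sliced_prediction(
        "aerial_image.jpg",                  # large aerial image
        detection_model,
        slice_height=512,                    # tile size
        slice_width=512,
        overlap_height_ratio=0.2,            # overlap between adjacent tiles
        overlap_width_ratio=0.2,
    )
    print(len(result.object_prediction_list))  # merged detections across tiles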
I have experimented with the platform and am a huge fan. Thanks for all you are doing - have you thought about, or are you integrated with, tools like Synthesis AI or Sundial.ai? They don't cover the full range of data I want to gather, but they make it so easy for fixed objects.
Haven't heard of those two, but would be really awesome to see an integration. We have an open API[1] for just this reason: we really want to make it easy to use (and source) your data across all the different tools out there. We've recently launched integrations with other labeling[2] and AutoML[3] tools (and have integrations with the big-cloud AutoML tools as well[4]). We're hoping to have a bunch more integrations with other developer tools, labeling services, and MLOps tools & platforms in 2023.
Re synthetic data specifically, we've written a few how-to guides for creating data from context augmentation[5], Unity Perception[6], and Stable Diffusion[7] & are talking to some others as well; it seems like a natural integration point (and someplace where we don't need to reinvent the wheel).
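As a toy illustration of the cut-and-paste idea behind context augmentation (not the code from the linked guide; the file paths are placeholders), you can composite a cropped object onto a new background and record the resulting box as a synthetic label:

    # Toy sketch of context augmentation: paste a cropped object (with alpha)
    # onto a background at a random position and record the new bounding box.
    import random
    from PIL import Image

    background = Image.open("background.jpg").convert("RGBA")  # placeholder paths
    obj = Image.open("object_crop.png").convert("RGBA")        # cropped object

    x = random.randint(0, background.width - obj.width)
    y = random.randint(0, background.height - obj.height)
    background.paste(obj, (x, y), obj)        # alpha channel used as paste mask

    bbox = (x, y, x + obj.width, y + obj.height)               # synthetic label
    background.convert("RGB").save("synthetic_sample.jpg")
    print(bbox)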
* Blog Post: https://blog.roboflow.com/roboflow-100/
* Paper: https://arxiv.org/abs/2211.13523
* Github: https://github.com/roboflow-ai/roboflow-100-benchmark