At Roboflow, we've seen users fine-tune hundreds of thousands of computer vision models on custom datasets.
We observed that there's a huge disconnect between the types of tasks people are actually trying to perform in the wild and the types of datasets researchers are benchmarking their models on.
Datasets like MS COCO (with hundreds of thousands of images of common objects) are often used in research to compare models' performance, but then those models are used to find galaxies, look at microscope images, or detect manufacturing defects in the wild (often trained on small datasets containing only a few hundred examples). This leads to big discrepancies in models' stated and real-world performance.
We set out to tackle this problem by creating a new set of datasets that mirror many of the same types of challenges that models will face in the real world. We compiled 100 datasets from our community spanning a wide range of domains, subjects, and sizes.
We've benchmarked a few models (YOLOv5, YOLOv7, and GLIP) to start, but could use your help measuring the performance of others on this benchmark (check the GitHub repo for starter scripts showing how to pull the datasets, fine-tune models, and evaluate). We're very interested to learn which models do best in which real-world scenarios & to give researchers a new tool to make their models more useful for solving real-world problems.
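If you just want to pull one of the datasets and poke around, a minimal sketch with our roboflow pip package looks like this (the API key and project slug are placeholders; grab the real workspace/project/version identifiers from the dataset's page on Universe):

    # pip install roboflow
    from roboflow import Roboflow

    # Placeholder key and dataset identifiers -- replace with your own.
    rf = Roboflow(api_key="YOUR_API_KEY")
    project = rf.workspace("roboflow-100").project("some-rf100-dataset")

    # Download a specific version in YOLOv5 format (other formats like "coco" also work).
    dataset = project.version(1).download("yolov5")
    print(dataset.location)  # local folder with train/valid/test splits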
It might be good not only to show that YOLO and other models don't fully generalize to the presented image domains, but also that some specialized SOTA models (e.g. for aerial images) perform well in their subcategories, to actually calibrate the benchmark. Otherwise I would wonder about the quality of the labels (e.g. being contradictory).
Good idea. I haven’t looked too closely yet at the “hard” datasets.
We originally considered “fixing” the labels on these datasets by hand, but ultimately decided that label error is one of the challenges “real world” datasets have that models should work to become more robust against. There is some selection bias in that we did make sure that the datasets we chose passed the eye test (in other words, it looked like the user spent a considerable amount of time annotating & a sample of the images looked like they labeled some object of interest).
For aerial images in particular my guess would be that these models suffer from the “small object problem”[1] where the subjects are tiny compared to the size of the image. Trying a sliding window based approach like SAHI[2] on them would probably produce much better results (at the expense of much lower inference speed).
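For anyone who wants to try that, a rough sketch with SAHI's sliced-prediction API might look like the following (assuming a YOLOv5 checkpoint; the weights path and slice sizes are placeholders to tune per dataset):

    # pip install sahi yolov5
    from sahi import AutoDetectionModel
    from sahi.predict import get_sliced_prediction

    # Placeholder weights path -- point this at a model fine-tuned on the aerial dataset.
    detection_model = AutoDetectionModel.from_pretrained(
        model_type="yolov5",
        model_path="runs/train/exp/weights/best.pt",
        confidence_threshold=0.3,
        device="cuda:0",
    )

    # Slice the large image into overlapping tiles, run inference on each, and merge the predictions.
    result = get_sliced_prediction(
        "aerial_example.jpg",
        detection_model,
        slice_height=640,
        slice_width=640,
        overlap_height_ratio=0.2,
        overlap_width_ratio=0.2,
    )
    result.export_visuals(export_dir="sahi_out/")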
I have experimented with the platform and am a huge fan. Thanks for all you are doing. Have you thought about integrating with tools like Synthesis AI or Sundial.ai, or are you already? They don't cover the full range of data I want to gather, but they make it so easy for fixed objects.
Haven't heard of those two, but would be really awesome to see an integration. We have an open API[1] for just this reason: we really want to make it easy to use (and source) your data across all the different tools out there. We've recently launched integrations with other labeling[2] and AutoML[3] tools (and have integrations with the big-cloud AutoML tools as well[4]). We're hoping to have a bunch more integrations with other developer tools, labeling services, and MLOps tools & platforms in 2023.
Re synthetic data specifically, we've written a couple of how-to guides for creating data from context augmentation[5], Unity Perception[6], and Stable Diffusion[7] & are talking to some others as well; it seems like a natural integration point (and someplace where we don't need to reinvent the wheel).
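To give a rough idea of what an integration looks like from the data side, pushing externally generated images into a project is a couple of calls with the same pip package; the workspace/project names and file paths below are placeholders:

    from roboflow import Roboflow

    rf = Roboflow(api_key="YOUR_API_KEY")
    # Placeholder workspace/project -- use whatever project the synthetic data should land in.
    project = rf.workspace("my-workspace").project("my-synthetic-data")

    # Upload an image (and, optionally, its annotation file) produced by an external generator.
    project.upload(
        image_path="renders/scene_0001.png",
        annotation_path="renders/scene_0001.xml",  # whatever format your generator exports
    )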
The popup "Login or create a free account" took something like 2 minutes to appear when using Firefox. It showed up immediately in Chrome.
The link in the line "You will need an API key. RF100 can be accessed with any key from Roboflow, head over our doc to learn how to get one." is incorrectly quoted and does not work.
When downloading the data, a request to read the terms of service pops up. The terms of service explicitly disallow downloading https://roboflow.com/terms (section 3).
The download script at https://github.com/roboflow/roboflow-100-benchmark/blob/main... got stuck at "Generating version still in progress. Progress: 98.75%". When restarting the script, it got stuck at "Generating version still in progress. Progress: 148.15%".
The websites for the individual classes (e.g. https://universe.roboflow.com/roboflow-100/solar-panels-taxv... ) state that the data is licensed under "CC BY 4.0", but what are the exact attribution instructions? There is some BibTeX which should be cited if the data is used in a research paper, but what about other cases?
Thanks for the call-outs; it looks like our terms of service need to be made clearer (and I'm flagging that internally). They're supposed to distinguish "The Website" from "the content on the website", and the governing license for public datasets should be whatever the users chose when they shared them, but I agree it's not well worded.
~~Could you file an issue on the repo for the download issue you encountered? We just pushed an update to our python package earlier today that might be related.~~
~~Update: I went ahead and filed this issue[0] so the relevant engineer will see it when they wake up.~~
Update 2: I pushed a fix to our backend API & verified it's working by running that download script in a Colab notebook.
> What are the exact attribution instructions?
Creative Commons has some guidance[1] on how they recommend citing attribution.
Regarding the license, I want to know specifically about the part "If You Share the Licensed Material [...], You must: retain [...] identification of the creator(s) of the Licensed Material and any others designated to receive attribution, in any reasonable manner requested by the Licensor (including by pseudonym if designated);"
I could of course click through 100 individual pages and search whether one of the authors left different attribution instructions, but I would rather not.
The download script generates some "README.dataset.txt" file which states "Provided by Roboflow License: CC BY 4.0", even if the dataset has been created by a different entity, so those files are misleading at best.
Thanks, do you have a suggestion for how we should handle this better?
We’d like to make this as easy as possible for folks to use while giving proper credit to the users who did the hard work labeling and sharing these datasets.
I think the correct way would be to include a LICENSE.txt file in each subdirectory of the individual datasets with '"<dataset title>" <link to dataset> by <author> <link to author> licensed under <license> <link to license text>' and any additional attribution instructions mentioned by the author(s), e.g. BibTeX citation instructions or additional links to institutions, web profiles, etc. The authors should be referred to by their preferred name instead of their full name if they wish so. This assumes that the entire subdirectories belong to the same author. If individual images are by different authors, things might be more complicated.
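Concretely, I'm imagining something like the sketch below; the metadata.json layout is entirely hypothetical, it's just meant to show the kind of LICENSE.txt the download script could emit into each dataset directory:

    import json
    from pathlib import Path

    # Hypothetical: assumes each dataset folder ships a metadata.json with these fields.
    TEMPLATE = (
        '"{title}" ({dataset_url}) by {author} ({author_url}), '
        "licensed under {license} ({license_url})\n"
    )

    for meta_path in Path("rf100").glob("*/metadata.json"):
        meta = json.loads(meta_path.read_text())
        license_txt = TEMPLATE.format(**meta)
        # Append any author-specified extras (BibTeX, institution links, preferred name, ...).
        license_txt += meta.get("extra_attribution", "")
        (meta_path.parent / "LICENSE.txt").write_text(license_txt)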
All these little details might sound petty, but there have been many lawsuits in Germany where photographers uploaded images under some creative commons attribution license and then later sued people for using their images if they forgot to include the title in their attribution. Thankfully, those lawsuits have mostly died down, but it is probably better to be safe than sorry.
* Blog Post: https://blog.roboflow.com/roboflow-100/
* Paper: https://arxiv.org/abs/2211.13523
* Github: https://github.com/roboflow-ai/roboflow-100-benchmark