It might be that people want to learn to use deep learning, but they find MNIST and the other standard datasets boring; they'd rather learn on a problem they find interesting and still produce something that seems to work.
Plus, learning how to learn from less data can only help the field. That's the goal of one-shot learning, right?
Obviously, you do whatever you like when you're playing around, but when you find yourself in that situation (i.e. "I want to learn tool X, but all of my problems are inappropriate for X"), it's an indication that you're misusing/misunderstanding X.
Again with the forced metaphors: if I buy a new welder, I'm probably suddenly very keen on welding things. That doesn't mean I'm learning how to weld. A big (the biggest?) part of learning a tool is learning when to use it.
In my very limited experience, I hear about cool technology X and have crazy fantasies about how it actually works. Picking up X and applying it to a problem, no matter how poorly informed, has always taught me more about X than anything else.
When you get your welder, you kind of have to weld a lot of things to see when it's effective and when it's not. Everybody gets a free pass for the first few months with a new toy. The only way to learn when it's appropriate is to screw up a few times.
Here is an approach that worked well for us at our startup and may help other teams reading this thread. We learned it by making a few mistakes in our decision-making process.
Whenever we have a new feature that can't be implemented with existing frameworks, tools, in-house technology, or existing expertise, we ask the managers to add an extra two weeks to our schedule so we can evaluate as many options as possible. It's hard to fit many tools into that time, so the whole team picks up the work, even the people with no past experience or theoretical knowledge of the topic. It actually helps to have them in the research group: they're often the ones who can say "since I have no expertise here, I instead searched and found this company, who apparently ditched this tool because they suffered from this and that." The people who go after the theory, on the other hand, can argue things like "X seems to be better than Y." Once we have enough Xs, we also have real use cases where some X is proven to be useless. Usually only one X remains, or even none. We either pick that remaining X, or an X that is less scary but a little more boring.
Boring is good because a team can argue about a boring thing more easily. Those arguments produce quality code that stays in use for more than a year. A year or two is enough time to allocate more research to the topic, which eventually helps us find or implement a tool-set that can live much longer.
Here are some examples we used for more than a year and have ditched or will soon ditch:
Stock server-side Tesseract OCR -> Properly trained server-side Tesseract + image preprocessing on the mobile device
RethinkDB change feeds -> Postgres LISTEN/NOTIFY
Bluebird.js -> ReactiveX
Ubuntu -> CentOS
Forever -> Docker
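For context on the RethinkDB swap above: change feeds can be approximated with Postgres's built-in LISTEN/NOTIFY plus a trigger. A minimal sketch, assuming a hypothetical `orders` table and channel name (not from the original post):

```sql
-- Fire a notification with the new row as JSON whenever a row is inserted or updated.
CREATE OR REPLACE FUNCTION notify_orders_changed() RETURNS trigger AS $$
BEGIN
  PERFORM pg_notify('orders_changed', row_to_json(NEW)::text);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER orders_changed
AFTER INSERT OR UPDATE ON orders
FOR EACH ROW EXECUTE PROCEDURE notify_orders_changed();

-- Any connected client can then subscribe on its session:
LISTEN orders_changed;
```

Unlike a change feed, a NOTIFY payload is size-limited (about 8 kB by default) and is dropped if no listener is connected, so it's safer to treat the notification as a wake-up signal and re-query for the actual data.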
Edit: The reason I'm posting under your comment is that in some cases, mistakes can hurt a company in a way that is unrecoverable. My advice may not apply to pet projects.
Yes, sure. Playing with something is a good way to learn that thing.
But if you find yourself saying "I really like using this backhoe, but I'm finding that most of my hole-digging problems are too small for a backhoe. Is there a blog post on using backhoes to make sandcastles?", you have perhaps wandered off the path to enlightenment.