I've used this probably at least 100 times over the past couple of years, and it's pretty bad. It finds a match maybe 20% of the time. Usually I just get a truly bizarre set of random songs from across the globe with something like a "5% match". And none of the stuff I'm looking for is obscure -- they're all top-40 songs from some decade or other whose artist I just can't remember.
And the crazy part is that I have a strong musical background, so when I'm humming, the melody and rhythm are exact. I mean, my input is accurate.
I wish I knew how the algorithm worked, so I'd have some idea how to get better matches. Does it ignore rhythm entirely and just use the sequence of melodic pitches? Or is rhythm super-important? Is it better to hum the chorus, a verse, or the end of a verse going into a chorus? Does it only want you to hum the vocal part, or the main instrumental line during the vocal breaks? I wish I knew precisely what intermediate information it derives from the humming and from the songs, and how it tries to match them up.
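For what it's worth, the classic approach in the query-by-humming literature (I have no idea whether Google actually does this) is to reduce both the hummed query and each song's melody to a sequence of semitone intervals between successive notes, which throws away the absolute key, and then score similarity with dynamic time warping, which tolerates tempo drift. A toy sketch, with entirely made-up data and names:

```python
def intervals(midi_notes):
    """Reduce absolute MIDI pitches to successive semitone intervals (key-invariant)."""
    return [b - a for a, b in zip(midi_notes, midi_notes[1:])]

def dtw_distance(q, r):
    """Classic dynamic time warping distance between two interval sequences."""
    INF = float("inf")
    n, m = len(q), len(r)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(q[i - 1] - r[j - 1])
            # Allow stretching either sequence: insertion, deletion, or match.
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# Toy "database": opening of Twinkle Twinkle vs. an unrelated ascending scale.
twinkle = [60, 60, 67, 67, 69, 69, 67]   # C C G G A A G
scale   = [60, 62, 64, 65, 67, 69, 71]   # C major scale
hummed  = [65, 65, 72, 72, 74, 74, 72]   # same tune, but sung in F

q = intervals(hummed)
scores = {name: dtw_distance(q, intervals(ref))
          for name, ref in [("twinkle", twinkle), ("scale", scale)]}
best = min(scores, key=scores.get)
print(best)  # "twinkle" wins despite the transposition
```

If something like this is the pipeline, it would explain why rhythm and timbre matter less than a clean pitch contour, and why one wrong interval in the middle costs relatively little.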
Most of Google's "smart" services work pretty well. Of all of them, I think hum-to-search is the worst-performing by a huge margin. On the whole, it's probably wasted more of my time than the value I've gotten out of it when it was helpful. It feels like a half-baked feature that they just forgot about rather than trying to improve. Hopefully they rebuild it from scratch with a newer AI model at some point. Or it's a ripe opportunity for a startup to build it and sell it in a bidding war to Apple/Google/Spotify/Bing.
I think even timbre matters. I got a 95% match singing the first verse of Smells Like Teen Spirit using vocal fry to add some rasp, but couldn’t get over 80% singing in an unaffected soft head voice, though I got a 79% match for a cover by Malia J which wasn’t even originally in the results. Maybe I unconsciously matched the melody more closely when trying to do my best Kurt impression?
I just tried it belting from my chest voice like Michael McDonald and only matched covers; the best match only 33%. None of them sounded like Michael McDonald.