I suspect there is no available dataset that can teach models to learn this. The enjoyment of music is entirely internalized. Skipping a track is an incredibly low-fidelity data point.
I've often wondered if Spotify could capture volume-control data during a track to see if that produces better training data. But again, it's still too low fidelity.