Monophonic pitch estimation isn't too hard these days (see e.g. the YIN frequency estimator). Polyphonic pitch detection of even a single instrument is a hard problem. Real music, with multiple instruments, becomes a very challenging unsolved AI problem with current technology.



Translating pitch to a playable guitar tab would be even more difficult than the pitch detection. Unlike on the piano, the same note can be played in multiple places on the guitar neck. Auto-generating guitar tablature would require not only pitch detection, but also some kind of logic that picks the most appropriate string/fret depending on the surrounding notes/chords.


In the example Django Reinhardt song, there's a bit where he plays the same pitch a few times fretted, then a few times on the neighboring open string.

I can imagine software that would analyze fret noise, vibrato, timbre, hand positions, etc. to get a decent shot at reproducing a performance -- basically, what a real player listens for when transcribing -- but you're right, this is a really hard problem.



