By my understanding, yes, in the sense that certain parts of each algorithm are trying to minimize something. The former model used principal component analysis, which is linear: you apply a linear transform that picks out the uncorrelated directions capturing the most variance in a huge chunk of data. A neural network, by contrast, uses a stack of linear and non-linear layers chosen by the user and tunes them to minimize "errors" (some loss function).
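To make that concrete, here's a rough toy sketch (nothing to do with the actual models being discussed; the data, sizes, and learning rate are made up for illustration): both approaches end up minimizing a reconstruction error, but PCA is constrained to an orthogonal linear map with a closed-form solution, while the network stacks a nonlinearity in the middle and descends the gradient of the error.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 32))          # pretend these are audio feature frames
X = X - X.mean(axis=0)                  # PCA assumes centered data
k = 8                                   # number of components / bottleneck size

# --- PCA: best rank-k *linear* reconstruction; components are uncorrelated ---
U, S, Vt = np.linalg.svd(X, full_matrices=False)
W_pca = Vt[:k].T                        # top-k principal directions
X_pca = (X @ W_pca) @ W_pca.T           # project down, then back up
print("PCA reconstruction MSE:", np.mean((X - X_pca) ** 2))

# --- Tiny nonlinear "autoencoder": minimize the same MSE by gradient descent ---
W1 = rng.normal(scale=0.1, size=(32, k))    # encoder weights
W2 = rng.normal(scale=0.1, size=(k, 32))    # decoder weights
lr = 1e-2
for _ in range(2000):
    H = np.tanh(X @ W1)                     # nonlinear hidden layer
    X_hat = H @ W2                          # linear output layer
    err = X_hat - X                         # the "errors" being minimized
    gW2 = H.T @ err / len(X)                # backprop the mean-squared error
    gH = err @ W2.T * (1 - H ** 2)          # tanh derivative
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2
print("Autoencoder reconstruction MSE:", np.mean((X_hat - X) ** 2))
```

The point of the toy is the contrast in *how* the minimum is found: PCA gets it in one shot from the SVD, under the constraint that everything stays linear and orthogonal, while the network wanders toward it by gradient descent with no such constraint.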
What's interesting is that the former model sounds so much "better." I wonder if anyone could chime in on how our ears, auditory nerves, or auditory cognition work, and whether they are somehow more "principal-component-analysis-y" than "error-minimization-y" (or something else about the actual math) that might explain why this new neural-network Christmas song sounds like absolute crap to us, whereas the older version sounds pretty amazing. I'd also welcome corrections if my understanding of the underlying math is off.