I've found that understanding the lambda calculus goes a long way toward understanding functional programming—in particular, it does a wonderful job of demonstrating just how simple the core ideas underlying languages like Haskell are, contrary to their reputation.
It's also a great way to get a deeper understanding of computation and how it relates to logic. The lambda calculus is special for being an incredibly minimal model of computation that's still expressive enough to write programs (albeit a little awkwardly). Expressing something you care about in lambda calculus is way easier than using a Turing machine directly or even programming in assembly!
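To make "minimal but expressive" concrete, here's a sketch (mine, not from the original post) of the classic Church encoding of natural numbers, using Python lambdas to stand in for λ-terms. A number n is just "apply f n times", and arithmetic falls out of function composition:

```python
# Church numerals: the number n is the function that applies f n times.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))

# Addition applies f m times, then n more times; multiplication
# composes "apply f m times" n times.
add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
mul = lambda m: lambda n: lambda f: n(m(f))

# Escape hatch back to ordinary integers, for inspection only.
to_int = lambda n: n(lambda k: k + 1)(0)

two = succ(succ(zero))
three = succ(two)
assert to_int(add(two)(three)) == 5
assert to_int(mul(two)(three)) == 6
```

Everything here is built from nothing but single-argument functions, which is the whole point: that's all the untyped lambda calculus gives you, and it's enough.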
I think the lambda calculus captures computable functions but not algorithms: changing the evaluation order can turn an O(n) program into an O(n²) one. To define time and space complexity, you have to fix a concrete evaluation strategy—something like the graph reduction Haskell implementations use—and that's more complicated than starting with assembly and counting steps.