When I saw the paper's example of building a list by copying the entire list for every new entry and throwing away the old one, I knew performance would be a problem. A smart compiler can optimize that pattern away if it recognizes what you are doing, but they were writing a research compiler.
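A minimal sketch of the pattern I mean (my own illustration, not the paper's code): each insert clones the whole existing list, so building n entries does O(n²) copying and leaves every old version as garbage.

```rust
// "Copy the whole list per insert": the old list is never mutated,
// a full copy with the new entry is returned instead.
fn push_immutable(list: &Vec<u32>, value: u32) -> Vec<u32> {
    let mut copy = list.clone(); // O(n) copy of the old list
    copy.push(value);            // old list left untouched, soon garbage
    copy
}

fn main() {
    let mut list = Vec::new();
    for i in 0..5 {
        // The previous version of `list` is dropped on every iteration.
        list = push_immutable(&list, i);
    }
    assert_eq!(list, vec![0, 1, 2, 3, 4]);
}
```

A persistent list (sharing the tail instead of copying it) avoids the O(n²) blowup, which is exactly the kind of rewrite you'd hope a smart compiler or library would do for you.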
The tradeoff with this kind of immutability-oriented programming is that you gain relatively easy parallelism at the cost of heinous memory thrashing--and cache misses are already a huge bottleneck on modern architectures. Rust seems to have struck a good balance here; I'm keeping a close eye on it as it continues to mature.
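The "relatively easy parallelism" half of the tradeoff, sketched in Rust (my example, assuming nothing beyond the standard library): because the data is never mutated, two threads can read it concurrently with no locks at all.

```rust
use std::sync::Arc;
use std::thread;

// Immutable data can be shared across threads freely: no locks,
// no data races, because nobody ever writes to it.
fn parallel_sum(data: Arc<Vec<u64>>) -> u64 {
    let mid = data.len() / 2;
    let left = Arc::clone(&data);
    // One thread sums the first half...
    let handle = thread::spawn(move || left[..mid].iter().sum::<u64>());
    // ...while this thread sums the second half concurrently.
    let right: u64 = data[mid..].iter().sum();
    handle.join().unwrap() + right
}

fn main() {
    let data = Arc::new((1..=100).collect::<Vec<u64>>());
    assert_eq!(parallel_sum(data), 5050);
}
```

The thrashing half of the tradeoff shows up when you "update" such a structure: each logical write allocates a fresh copy, walking memory the cache has no reason to have warm.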
Another thing to keep in mind: with everything immutable you end up with a pure language, and once that happens the compiler can perform some really nice rewrites of your code very cheaply. The catch is that this is all-or-nothing - unless your language is actually pure, those rewrites aren't sound.
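To make the all-or-nothing point concrete, here's a sketch (my own, not from any compiler) of one such rewrite, common subexpression elimination: `f(x) + f(x)` becomes `let y = f(x); y + y`. It's always valid for a pure function, and a single hidden side effect breaks it.

```rust
use std::cell::Cell;

// Pure: result depends only on the argument, so merging calls is safe.
fn pure_f(x: u32) -> u32 {
    x * 2
}

// Impure: the observable call counter makes each call distinguishable.
fn impure_f(x: u32, calls: &Cell<u32>) -> u32 {
    calls.set(calls.get() + 1); // side effect
    x * 2 + calls.get()
}

fn main() {
    // Pure case: the CSE rewrite preserves the result.
    let original = pure_f(3) + pure_f(3);
    let y = pure_f(3);
    assert_eq!(original, y + y);

    // Impure case: the same rewrite changes the result,
    // so the compiler must not apply it.
    let calls = Cell::new(0);
    let original = impure_f(3, &calls) + impure_f(3, &calls); // 7 + 8
    let calls = Cell::new(0);
    let z = impure_f(3, &calls);
    assert_ne!(original, z + z); // 15 != 14
}
```

That's why one impure function anywhere poisons the well: the compiler can no longer assume any call is mergeable without proving it pure first.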