The interesting thing about compiler writing is how drastically its complexity has been reduced. There are essentially off-the-shelf tools that can handle everything before semantic analysis and everything after conversion to a reasonably high-level IR. This has dramatically reduced the cost of language experimentation; it's now feasible to bang out a new language in a day or two (assuming you're familiar with yacc, LLVM, etc.) and generate efficient code for a number of target architectures.
These tools mean that "learning to write a compiler" can take several different paths; on the one hand, we could learn about how each of these tools work (e.g. "how to write a parser generator", "machine-independent optimizations", "code generation", etc.). This is the approach taken by the dragon book. This gives the background theory and mathematics behind compiler-writing (and will likely be something people need to know if they want to write for instance an LLVM optimization pass). The problem with this is that, because of the trends mentioned above, it has little to do with the day-to-day of actual compiler writing and language experimentation (this is only somewhat true, since most "real" compilers will have a custom-written frontend and at least a little bit of knowledge about the target architecture).
The other approach is to use pre-written tools and focus on language design decisions and applications. This is the approach taken by most of the "we're going to build a compiler!" websites mentioned in the Stack Overflow post, though to be honest I'm not familiar with any resources that do it particularly well. This approach is probably more useful for most people who want to "write a compiler", but it leaves them with a very shallow knowledge of what's happening beneath the surface of the APIs they use.
I don't mean to imply that one of these approaches is "better" than the other (and indeed, the second requires a little of the first - it's difficult to debug an automatically generated parser without knowing at least a little of the theory), but it's important to know which approach you're aiming for and to pick resources based on that goal. Going to the Dragon book as a "how-to" is a disaster waiting to happen.
I think that combining this short compiler tutorial with LLVM would be a very effective way for someone to get comfortable with basic compiler writing:
Afterward, learning how to do lexing and parsing would be a good addition. The tutorial covers a subset of Scheme, which is trivially simple to parse, especially if you're writing the compiler in Scheme as well.
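To make that concrete, a hand-rolled s-expression reader fits in a dozen lines or so. Here's a minimal sketch in Python (not from the tutorial; the function names are my own) just to show how little machinery the surface syntax needs: tokenizing is a string split, and parsing is a single recursive function.

```python
# Minimal s-expression parser, illustrating why a subset of Scheme is
# so easy to parse. Names and structure are illustrative only.

def tokenize(source):
    # Pad parentheses with spaces so a plain split() yields tokens.
    return source.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    # Parse one expression from the front of the token list.
    token = tokens.pop(0)
    if token == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    try:
        return int(token)   # numeric literal
    except ValueError:
        return token        # symbols stay as strings

# Example: nested arithmetic parses into nested Python lists.
print(parse(tokenize("(+ 1 (* 2 3))")))   # ['+', 1, ['*', 2, 3]]
```

The result is a nested list, which is already a perfectly usable AST for a toy frontend; in Scheme you'd get this structure for free from `read`.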
These tools mean that "learning to write a compiler" can take several different paths; on the one hand, we could learn about how each of these tools work (e.g. "how to write a parser generator", "machine-independent optimizations", "code generation", etc.). This is the approach taken by the dragon book. This gives the background theory and mathematics behind compiler-writing (and will likely be something people need to know if they want to write for instance an LLVM optimization pass). The problem with this is that, because of the trends mentioned above, it has little to do with the day-to-day of actual compiler writing and language experimentation (this is only somewhat true, since most "real" compilers will have a custom-written frontend and at least a little bit of knowledge about the target architecture).
The other approach is to use pre-written tools, to focus on language design decisions and applications. To be honest I'm not familiar with any resources that do this particularly well. This is the approach taken by most of the "we're going to build a compiler!" websites mentioned on the Stack Overflow post. This approach is probably more useful for most of the people who want to "write a compiler", but it leaves people with a very shallow knowledge of what's happening beneath the surface of the APIs they use.
I don't mean to imply that one of these approaches is "better" than the other (and indeed, the second requires a little bit of the first - it's difficult to debug an automatically generated parser without knowing at least a little bit of the theory), but it's important to know which approach you're aiming for, and trying to optimize based on that goal. Going to the Dragon book as a "how-to" is a disaster waiting to happen.