The only thing better than unification is semi-unification. The fact that semi-unification is a lot harder than unification means that imperative OO languages get the shaft with poorer type inference support: both assignment and subtyping are inequalities, not equations! What Scala has done with least upper bounds is absolutely heroic in that context, even if it does work better for the language's functional parts.
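To make the contrast concrete: plain unification only ever solves *equations* between types. Here is a minimal Robinson-style unifier sketch in Haskell (the `Ty`/`Subst` names and the tiny type language are my own simplifications, not anyone's production inferencer); assignment and subtyping constraints of the form `t1 <= t2` don't fit this shape, which is where semi-unification enters.

```haskell
-- A minimal first-order unifier: the equation-solving core of
-- Hindley-Milner inference. Types are variables, constants, or arrows.
data Ty = TVar String | TCon String | TArr Ty Ty
  deriving (Eq, Show)

-- A substitution maps type variables to types.
type Subst = [(String, Ty)]

-- Apply a substitution to a type.
apply :: Subst -> Ty -> Ty
apply s (TVar v)   = maybe (TVar v) id (lookup v s)
apply s (TArr a b) = TArr (apply s a) (apply s b)
apply _ t          = t

-- Does variable v occur in type t? (Guards against infinite types.)
occurs :: String -> Ty -> Bool
occurs v (TVar w)   = v == w
occurs v (TArr a b) = occurs v a || occurs v b
occurs _ _          = False

-- Solve the equation t1 = t2, returning a unifying substitution.
unify :: Ty -> Ty -> Maybe Subst
unify (TVar v) t = bindVar v t
unify t (TVar v) = bindVar v t
unify (TCon a) (TCon b) | a == b = Just []
unify (TArr a1 b1) (TArr a2 b2) = do
  s1 <- unify a1 a2
  s2 <- unify (apply s1 b1) (apply s1 b2)
  Just (compose s2 s1)
unify _ _ = Nothing

-- Bind a variable, with the occurs check.
bindVar :: String -> Ty -> Maybe Subst
bindVar v t
  | t == TVar v = Just []
  | occurs v t  = Nothing          -- occurs check: reject infinite types
  | otherwise   = Just [(v, t)]

-- Compose two substitutions (apply s2 after s1).
compose :: Subst -> Subst -> Subst
compose s2 s1 = [(v, apply s2 t) | (v, t) <- s1] ++ s2
```

Semi-unification generalizes this: instead of making two terms equal, it must make one term an *instance* of another on each side of an inequality, which is exactly what polymorphic assignment and subtype constraints demand.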
I'm not sure "better" is appropriate in this context; in general, semi-unification is undecidable. Type checking in the presence of subtyping is decidable for simple cases (where the types form a lattice), but adding polymorphism/generics makes it undecidable (Java [1], OCaml [2]). Type inference algorithms for subtyping are also not that performant (one such algorithm is described in [3]).
The Programming Languages Zoo really helped me in learning type inference implementation. It has a series of increasingly complex languages, all implemented in ML.
> you can almost translate that with an emacs macro
If I knew Haskell, I expect I could. However, I am currently learning Haskell, which is why I'd be interested in a repository of example language implementations in it.
> some OCaml specific parser generator
Which looks quite nice, and is evidently modelled on yacc/lex. People rave about Haskell's Parsec but TBH I'm finding it a bit unintuitive.
There is Happy and Alex for Haskell, and also some older tools like Frown and BNFC.
But the coolest thing about Parsec is that you can construct the parser at run time, for instance computing it after scanning a file for infix declarations. If you want custom infix operators in a yacc-style grammar, you have to walk the AST afterwards to fix the application precedence.
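To sketch why that works: the operator table is just ordinary data, so anything you learn at run time (say, by scanning the file for infix declarations) can feed straight into the parser. This is a dependency-free illustration of the idea using precedence climbing rather than Parsec itself (in Parsec you'd pass a computed table to `buildExpressionParser`); the `OpTable` type, single-digit literals, and left-associativity-only rule are my own simplifications.

```haskell
import Data.Char (isDigit, digitToInt)

-- Hypothetical run-time operator table: (symbol, precedence),
-- all operators left-associative. Because this is plain data, it can
-- be computed after reading infix declarations from a source file.
type OpTable = [(Char, Int)]

-- Fixed semantics for the demo: integer arithmetic only.
opFun :: Char -> Int -> Int -> Int
opFun '+' = (+)
opFun '-' = (-)
opFun '*' = (*)
opFun c   = error ("no semantics for " ++ [c])

-- Precedence climbing over single-digit literals, driven entirely by
-- the table passed in at run time; parses and evaluates in one pass.
parseExpr :: OpTable -> String -> Maybe Int
parseExpr table input =
  case go 0 input of
    Just (v, "") -> Just v
    _            -> Nothing
  where
    atom (c:rest) | isDigit c = Just (digitToInt c, rest)
    atom _                    = Nothing

    go minPrec s = atom s >>= climb minPrec

    climb minPrec (lhs, c:rest)
      | Just p <- lookup c table, p >= minPrec = do
          (rhs, rest') <- go (p + 1) rest
          climb minPrec (opFun c lhs rhs, rest')
    climb _ acc = Just acc
```

With the usual table `[('+',1),('*',2)]`, `parseExpr` reads `"1+2*3"` as `1+(2*3)`; hand it the swapped table `[('+',2),('*',1)]` and the very same input parses as `(1+2)*3` — no post-hoc AST rewriting needed, which is the contrast with a fixed yacc-style grammar.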