One of the defining features of Lisps is heterogeneous lists, upon which other features, such as code generation and hence macros, have been built.
All the cleverness of static typing falls apart with heterogeneous lists. That's why we don't have them in Standard ML or Haskell. So, in Lisps we trade static typing for the power and flexibility of layered DSLs based on special forms (advanced macros). The typecase macro, CLOS, and structure-based pattern-matching modules are examples of such DSLs. Strong typing is good enough.
This is just my opinion, of course. Lisp has evolved in its own way. As the famous quote from evolutionary psychology (or biology?) goes: everything is the way it is because it got that way.
I don't think this is really accurate anymore. Using the list data type as the compiler's internal representation causes problems, as there is nowhere to store information the compiler needs, such as locations for error reporting, value representation for optimisation, and so on. You either end up using some awful hack (e.g. the second element of each list is an a-list of compiler-specific info) or you use a different data type. The latter is the approach that Racket, the Lisp I'm most familiar with, takes. Programs are represented using a "syntax" data structure that is not a list, though it can be converted into one.
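A minimal sketch of that idea in Haskell (hypothetical types, not Racket's actual representation): keep the s-expression shape, but let every node carry the metadata the compiler needs:

-- Hypothetical sketch: an s-expression annotated with compiler metadata.
data SrcLoc = SrcLoc { srcFile :: String, srcLine :: Int, srcCol :: Int }

data Datum = Sym String | Num Integer | List [Syntax]

-- Every node carries its source location, so errors can point back
-- to the original code even after macro expansion.
data Syntax = Syntax { loc :: SrcLoc, datum :: Datum }

-- Dropping the annotations recovers the plain list view.
strip :: Syntax -> Datum
strip (Syntax _ d) = d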
See HLists (e.g. https://hackage.haskell.org/package/HList) for heterogeneous lists in Haskell. The techniques for implementing these are relatively recent (from the last 10 years or so).
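The core trick (a sketch of the underlying technique, not the HList package's actual API) is to record the type of every element in a type-level list:

{-# LANGUAGE DataKinds, GADTs, TypeOperators #-}

-- The type of the list tracks the type of each element.
data HList ts where
  HNil  :: HList '[]
  HCons :: t -> HList ts -> HList (t ': ts)

-- A heterogeneous list whose full shape is statically known:
example :: HList '[Int, String, Bool]
example = HCons 1 (HCons "two" (HCons True HNil))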
The problem is very simple - as long as a list is homogeneous, or its elements belong to a finite either-of algebraic data type, you can infer the type of a function over it - the result is another homogeneous list or a value of some other type.
As long as a list can contain anything, it can only be mapped to another list of anything (a list of symbols, numbers, etc.) or to a single value. Basically, we are dealing with a pointer, or a list of pointers, which by definition can point to any kind of type-tagged value.
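For example, in Haskell the type of a function over a homogeneous list follows mechanically, while a mixed list literal is rejected outright (incAll is a made-up name):

-- The element type is uniform, so the function's type follows:
incAll :: [Int] -> [Int]
incAll = map (+ 1)

-- A heterogeneous literal is rejected at compile time:
-- mixed = [1, "two", True]   -- type error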
Types à la

(* -> *) -> [*] -> [*]

aren't very useful, but this is what we have in Lisps.
Moreover, as long as there is no difference between a pointer to a list and a pointer to a value (these are just bindings of symbols to type-tagged values), we can have procedures of type
* -> * -> *
Lisp stands for LISt Processing, after all - it is for performing so-called symbolic manipulations (like our minds do) - and this is a pretty good "type" for a procedure doing a symbolic transformation.
So, all values carry type-tags. Add NIL and T, plus structural types such as arrays or structs, and you get the crude type hierarchy of a Lisp.
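A hedged sketch of that crude hierarchy in Haskell terms (hypothetical constructors): one tagged union covers every value, and a "symbolic" procedure is just tagged-value in, tagged-value out:

-- Hypothetical sketch of a Lisp's tagged-value universe.
data LispVal
  = Nil
  | T
  | Symbol String
  | Number Double
  | Cons LispVal LispVal
  | Array [LispVal]

-- The Lisp "* -> * -> *": any two tagged values in, one tagged value out.
cons :: LispVal -> LispVal -> LispVal
cons = Cons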
Our lists aren't sets (homogeneous), so all the typing based on set theory is of no use.
My first paragraph was referring to lists not being the internal representation of programs within (modern) Lisps. Nothing about heterogeneous lists or type theory there.
I am sorry, I am not a type theorist. Someone added a link to an attempt to define a statically typed Lisp. I am suggesting that we look deeper into what a Lisp is and why it is the way it is.
> All the cleverness of static typing falls apart with heterogeneous lists.
You can have heterogeneous lists in Haskell. What you don't have is open heterogeneity as in Scala (where List[Any] is a valid type). You have to decide how much heterogeneity you'll allow a priori, like so:
data MyLang = Number Double | Str String | Sexp [MyLang] | Fn (MyLang -> MyLang)
expr = Sexp [Str "+", Number 5, Number 7]
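Pattern matching then handles every case of that closed union; for instance, a small hypothetical printer over MyLang:

-- A made-up printer over the closed union above:
render :: MyLang -> String
render (Number n) = show n
render (Str s)    = show s
render (Sexp xs)  = "(" ++ unwords (map render xs) ++ ")"
render (Fn _)     = "#<function>"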
What Haskell doesn't give you (unless you use Data.Dynamic) is open heterogeneity. If you want to add ByteString to the above, you have to recompile it with a new case in the union. While you could add an extensible object-oriented system to Haskell (all major languages being Turing-complete, you can write a Ruby or Lisp in Haskell), the types in such an embedded language would probably not be recognized by Haskell. For most cases, this defeats the purpose of using Haskell, because we want the expressive types (at the expense of slightly reduced language expressivity). We'd rather have a type signature of (a -> b) -> [a] -> [b] than HRubyObject -> HRubyObject -> HRubyObject, with a bunch of runtime error checks (that take time to specify, reduce performance, and clutter source code).
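For comparison, the Data.Dynamic escape hatch (a real module in base) looks roughly like this; getting a value back out is a runtime-checked cast that may fail:

import Data.Dynamic

-- An openly heterogeneous list: anything Typeable can go in.
bag :: [Dynamic]
bag = [toDyn (5 :: Int), toDyn "hello", toDyn True]

-- Recovering a value is a runtime check that can fail.
firstAsInt :: Maybe Int
firstAsInt = fromDynamic (head bag)   -- Just 5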
With advanced uses of typeclasses, you can buy back a lot of the heterogeneity and code-level expressiveness of dynlangs (at the expense of dealing with complicated type signatures, language extensions, and scary-sounding concepts like "monad transformers"). There's little that you "can't" do in Haskell, insofar as you can build an arbitrary dynamic language within it. Haskell just requires you to decide a priori how much heterogeneity and reflection you want, and makes it hard to change that on the fly.
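One such typeclass trick is existential quantification: the list is heterogeneous, but every element is guaranteed to support some agreed interface (a sketch; Showable is a made-up type):

{-# LANGUAGE ExistentialQuantification #-}

-- Any value may go in, as long as it can be shown.
data Showable = forall a. Show a => MkShowable a

items :: [Showable]
items = [MkShowable (1 :: Int), MkShowable "two", MkShowable True]

printAll :: IO ()
printAll = mapM_ (\(MkShowable a) -> print a) items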
What I like about Haskell is how much discipline it demands. I don't expect to see the static vs. dynamic debate resolved within my lifetime, because those families of languages excel in different ways, but I like the intellectual exercise of having to make sure my design is sound before it even compiles. Static typing isn't "a silver bullet" and only catches ~30-50% of errors if used naively, but if you know how to use it and have a good design sense, you can bring that number way up (90%?). If your goal is to write robust code, you spend as much time thinking about testing in Haskell as in Python. The advantage of "the Haskell Way" is that most of those unit tests don't exist in code; they're implicit in the type system, or very tersely specified (with superior coverage) in QuickCheck.
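A small illustration with the real QuickCheck library (the property name is made up): one line of specification replaces a pile of hand-written tests for reverse:

import Test.QuickCheck

-- One property, checked against 100 random lists by default.
prop_reverseInvolution :: [Int] -> Bool
prop_reverseInvolution xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseInvolution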
What we want in a language, loosely speaking, is O(log n) added work ("accidental complexity") in terms of the size of the problem/functionality achieved. We can never get rid of that, but we'd like it to be sublinear as a function of what we're accomplishing. (Since our goals are most often subjective, the big-O notation is used loosely.) Inexpressive languages throw O(n) accidental complexity at you, but poor use of expressive languages throws O(n^2) or worse (in fact, potentially incomputably worse, since arbitrary code comprehension is formally undecidable) maintenance complexity at you. I don't know if Haskell is at the O(log n) level, for a priori accidental complexity, yet. But it's definitely sublinear (maybe O(n^(1/3))) and, in the real world, that's good enough. (If you program for the business, there's plenty of "other" complexity that is O(n) or worse, and more infuriating than any technical challenge.)
> The advantage of "the Haskell Way" is that most of those unit tests don't exist in code; they're implicit in the type system...
This is a very powerful "mantra" indeed, and millions "believe" it. I am not trying to say that The Haskell Way is worse or better (the Java way is worse indeed); all I am trying to say is that, in my opinion, everything "interesting" in Lisps begins with the ability to define a new special form as a macro to extend the language.
The Arc language and this site championed this approach of implementing complex systems as layers of DSLs embedded in Lisp - the code of arc.arc and news.arc is indeed layers of macros built out of other macros, in accordance with the principles described in the book On Lisp.
This is in some sense the opposite of The Haskell Way you have described above. It is not better or worse; it is very different and has its own beauty and elegance (and Haskell has its own, especially those either-of and one-of types).
You see, as long as a tiny, doing-a-single-thing Arc procedure or macro (the basic building block) runs correctly in the REPL, it will run correctly later - as long as it is free of side effects - no unit test required (it is not Python or Ruby).
The real difference is that in Haskell you cannot pass wrong arguments to a procedure - you will get a compile-time error - while in Arc you can, and will get a run-time error.
"Lisp people" it seems, "believe" in really quick prototyping, exploratory programming (on-the-go), like PG named it, while "Haskell people" believe in careful planning in advance and "safety" of double-checks.
I would say that in reality - in the mountains, for example - planning in advance almost never works: you will never cover all the possible scenarios. But making decisions quickly, on the go, adapting to the constantly changing conditions of high altitude, saves lives and leads to summits. This is why, perhaps, I am a "Lisp person". For me, "safety" comes from flexibility (the ability to change and respond quickly), not from any "guarantees".
I would really recommend you learn more Haskell, because your generalizations about it seem wrong - Haskell programming is often exploratory. What the types give you (used right) is another view into the ramifications of your changes. Haskell also has a REPL (ghci), and it's quite useful - it will not only tell you the result of running some code, but will also tell you the shape of code you can't run yet! Meanwhile, those double-checks help tremendously when planning fails, because they show you quickly what needs to change when you change your assumptions.
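For instance, with GHC's typed holes, the compiler reports the type of the piece you haven't written yet (f here is hypothetical):

-- Leaving a typed hole (_) in a definition:
f :: [Int] -> [String]
f xs = map _ xs
-- GHC reports the hole's type, "Found hole: _ :: Int -> String",
-- i.e. the shape of the code you can't run yet.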
Deep thought and planning can help you in either language - but they're more required when you can't see what breaks with changes.
You're taking the metaphor farther than I would, especially since the N (delivered functionality) is subjective.
In business, you tend to have three relevant scaling cases: better-than-linear ("synergy"), linear, and worse-than-linear ("congestion"). "Bad" (inexpressive) languages tend to be non-synergistic in capability/delivery of features and congested in terms of accidental complexity. "Good" languages, if used competently, go the other way.
http://karma-engineering.com/lab/wiki/Languages
http://karma-engineering.com/lab/wiki/Bootstrapping2