Implementing a new language was never as exciting a deal as many think. New languages happen more out of need than because of the properties of the language itself. For example, C was created because a language with those features was necessary for the UNIX project. Similarly for Smalltalk and the environment at PARC. Lisp is a little different because it started as theory that was eventually implemented.
The modern language that comes to mind is, of course, Haskell. And "in between" we have Standard ML. I think one could argue that the Dylan revival project and Rust are variations on the idea of invent (academically), then implement (pragmatically). And the Racket ball is still rolling, of course.
So, I don't think it's right to say that things are all different now. It might be that the maturation of computer science into a "science" field feels a bit like it's taking the joy out of things. But if you look at what was published earlier, there's a divide between rather conservative work founded in logic and discrete mathematics, and more exploratory work in what would now be considered "computer science". I'm not convinced all of that would really count as "published research", though (as in qualifying for a Ph.D., etc.).
It's not like you'd be able to publish a medical study today on the benefit of washing your hands between performing an autopsy and delivering a baby -- we've already figured out a lot of the elegantly simple stuff.
I do think there is a big difference between academic computer science now and then. Rob Pike has written about it from an operating-systems perspective [1], and Richard Gabriel from a programming-language perspective [2]. It seems that until the early 1990s there were lots of projects focused on building "systems", i.e. big pieces of software that are in themselves practically useful; then there was a sharp shift, and academic research now focuses on "theory" (in PL research, e.g. a type-safety proof for a small core calculus, or a particular algorithm for program analysis).
People have always pursued "small" ideas. But I think the fact that we stopped writing "big" systems is a real change. (I guess the reason is that off-the-shelf operating systems and programming languages gradually got better, until it became impossible to "compete" with them. Cf. the Lua language, which got written basically because "western" languages were not easily available at the time.) My impression is that there is a lot less diversity of ideas now than there used to be, because everyone is incrementally improving the same set of OSes/languages.
I'm not sure it's quite so clear-cut -- don't forget about http://vpri.org/ for example. Or OLPC (One Laptop per Child) with its assorted projects. Or the work on various unikernels on top of Xen (like MirageOS). Or Minix 3. Or the Lively Kernel (http://www.lively-kernel.org/).
Or perhaps even Dragonfly BSD.
I'm not necessarily disagreeing with you, and some of the links above might even support your point -- I'm just not sure we've "stopped" building big, complete systems -- but the field as a whole has gotten much bigger, and there's only so much hype to go around...