Any and all abstractions. The cost is usually in all of the above.
Examples that I commonly encounter include OO, closures, dependency injection frameworks, complex configuration systems, various code generation systems, and on and on and on.
In general, the tradeoff is this: those who have internalized the abstraction can think about more complex things. Those who have not find it hard to figure out how the system works at all until they do. So when you work on code that has a lot of abstractions under the hood, it becomes either a black box (that you occasionally dig into) or (very often) a requirement that you understand X before you can even start to work on the code.
A rule of thumb that I use is how long the stack backtrace is. If every bug creates a stack backtrace that is dozens of frames, there are a lot of abstraction layers in place. And when all of the layers are actively being worked on, it adds up - quickly.
Deciding whether given abstractions are worthwhile for a given problem involves a judgment call. Unfortunately the people who are in the best position to make those calls tend to be the most senior, and tend to be the least aware of the costs of the abstractions that they add.
I would note that if you start your list with OO then you're including abstractions that almost all professional programmers would consider not abstractions but basic tools. It's impossible to work without such abstractions, except by working in purely procedural code, and even then...
I would add that if "a requirement that you understand X before you can even start to work on the code" is the case, then your abstraction has - arguably, I may be wrong! - failed. My DOM iterator abstraction would take some effort to understand and maintain, which was my concern, but it was a very simple black box to use.
A simple example of an abstraction for which this is not true is the MVC design. If you don't understand how it works, you don't know where to begin looking to make a change to the system.
And yes, some abstractions truly are basic tools. And the more experience you have, the more basic they seem, and the more such tools you have.
I am not arguing against abstraction per se. What I am arguing is that abstractions bring a cost, and that cost adds up. A project needs to find an appropriate balance, and there is a tendency for experienced programmers to draw that balance at a point that may or may not be appropriate for the people who have to maintain the system after them.
Which is why what really matters in programming is "communities of shared abstractions."
This is part of what you get with a language, but it depends on the language. (We've seen JavaScript users split into several such communities, I'd say.)
Or it can be what you get with a framework -- one of the main values of Rails is that people who have learned its abstractions can look at each other's code and understand it. This applies to third-party extensions to Rails that get especially popular too, like, say, Devise. (The downside is when those abstractions aren't good and you want to use something else instead... now everybody finds your code difficult to understand.)
When we talk about a language having a good stdlib, a lot of what we're talking about is it providing a good set of abstractions that everyone will learn, making their code understandable to, as well as interoperable with, other developers'. JS's lack of much of a stdlib may not be unrelated to its schism into several communities of shared abstractions...
I don't think it's really about "minimizing abstractions" -- that isn't even possible -- it's about which abstractions, and how. The important point you make is that abstractions understood by a community of programmers, one that those who work on your code are likely to come from, have practically less cost than abstractions that will be unfamiliar to them.
Replying to myself as I thought of an example from my own past which is relevant.
After discovering the eye-opening expressivity of functional programming years ago with Dylan and Haskell, I had a substantial project in JavaScript. I found that JS supported first-class closures and lambdas, which allowed me to build iterators.
The novelty was they were iterators not over lists but over trees (DOM trees in this case but of course any tree could be made iterable).
I understood there would be a learning cost for anyone who took over from me, so I left plenty of docs and pointers. It would not be a small cost either: my successors would have to learn to think differently, at a higher level and in a possibly (to them) rather alien way.
But was it worth the cost? Bloody hell, yes! Not having that iterable and rather declarative abstraction over DOM trees would have greatly bloated the code and consequently brought in bugs by the bagful. I could do in a couple of lines what would otherwise have taken half a page, in many locations.
So it had a human cost but if you could overcome that, a seriously huge human benefit.
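The kind of tree iterator described above can be sketched with a modern generator (the original predates generators and would have used closures, and real DOM nodes rather than this hypothetical `children` node shape):

```javascript
// Depth-first iterator over any tree whose nodes expose a `children`
// array -- a hypothetical node shape standing in for DOM nodes.
function* walk(node) {
  yield node;
  for (const child of node.children || []) {
    yield* walk(child);
  }
}

// With the iterator in place, whole-tree queries become one-liners:
const tree = {
  tag: 'div',
  children: [
    { tag: 'p', children: [{ tag: 'em', children: [] }] },
    { tag: 'span', children: [] },
  ],
};

const tags = [...walk(tree)].map(n => n.tag);
console.log(tags); // ['div', 'p', 'em', 'span']
```

The payoff is exactly the "couple of lines instead of half a page" effect: every traversal, filter, or map over the tree reuses `walk` instead of re-implementing the recursion.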
What happened next was interesting and unexpected. I was building on top of an application (call it H) which had the DOM trees to manipulate, and JS to manipulate them with. I was reporting new bugs in H continuously, 1 or 2 every day.
Because of the abstractions I'd built (it wasn't only tree iterators), I could largely work around the bugs invisibly - I pushed the bug-special-case-handling code down so that it disappeared when using my abstractions.
That made it all too viable to continue using a crappy, flaky product far longer than would have been possible - or sensible - without those abstractions. I'd turned the abstractions' value into a liability!
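The pattern of absorbing a host app's bugs inside the abstraction layer might look like this minimal sketch (the `hostApp` API and its bug are hypothetical stand-ins for H):

```javascript
// Hypothetical stand-in for the host app's buggy API: it returns
// null for leaf nodes where callers would expect an empty array.
const hostApp = {
  getChildren(node) {
    return node.kids && node.kids.length ? node.kids : null; // the "bug"
  },
};

// The abstraction layer absorbs the workaround exactly once, so code
// written against it never sees the bug at all.
function childrenOf(node) {
  return hostApp.getChildren(node) || []; // bug-special-case handling lives here
}

const leaf = { kids: [] };
console.log(childrenOf(leaf)); // [] -- callers get a sane value
```

This is precisely the double edge described above: callers stay clean, but the bug's real cost becomes invisible, so there is no pressure on the host product to get fixed or replaced.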
I finally told them it wasn't worth continuing with H, they junked it AFAIK and I walked away.
What abstractions?
What are the costs of each? In human comprehension, in runtime, in reliability, in mem/cpu?
Can you give examples I can usefully learn from, so as to avoid them?
TIA