NULL in databases has many properties that save a shitload of coding time and help you write more secure code. To cite only one of these useful properties: NULL automatically propagates through all operations and aggregations.
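For example (a minimal sketch in PostgreSQL-flavoured SQL; other engines differ in how they type bare NULL literals and in concatenation syntax), ordinary arithmetic, comparison, and concatenation expressions collapse to NULL as soon as one input is NULL:

    -- Any of these operations with a NULL input yields NULL ("unknown").
    SELECT 1 + NULL;        -- NULL
    SELECT 'abc' || NULL;   -- NULL (string concatenation)
    SELECT NULL = NULL;     -- NULL, not TRUE: two unknowns are not known to be equal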
Is that the behavior you actually want, though? In many cases "this value is explicitly unknown" has dramatically different semantics from "the computation that produced this value had an unexpected NULL input", and if you interpret the latter as the former, you've likely just corrupted your data.
Monadic Maybe (in higher-level languages like Haskell or Rust) has the semantics you describe, but with the advantage that you only get it when you explicitly ask for it. If you care about data integrity you usually want to be particularly precise about the results of your computation; it's helpful if your type system can sanity-check them as they propagate through every operation.
Yes, of course, when I deal with data, if I compute an average over 20 records and one has a NULL value, I want to know that something is wrong with that specific operation. Silently returning something wrong would be really bad. Crashing the whole query would be annoying as well, because if you compute 3 billion aggregates at the same time and only 0.1% return NULL, you might want to filter them out instead of doing nothing at all or correcting the input.
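Something like the sketch below (the table and column names are made up) does exactly that: compute all the per-group aggregates in one pass and drop the groups whose result came out NULL, rather than aborting the whole query or touching the input.

    -- Hypothetical table: readings(sensor_id, value).
    SELECT sensor_id, AVG(value) AS avg_value
    FROM readings
    GROUP BY sensor_id
    -- Keep only the groups that produced a usable result.
    HAVING AVG(value) IS NOT NULL;
    -- Stricter variant: reject any group that contained even one NULL,
    -- since AVG itself silently skips NULL inputs:
    --   HAVING COUNT(value) = COUNT(*)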
Of course all of this would be feasible with an arbitrary no-data value, but you'd basically have to rewrite all the propagation functions that come built in with NULL. As a DB admin I would consider a database without NULL handling utterly flawed.
In fact, with modern SQL databases you often get twice the fun, because there is a second propagating special value for numeric types called NaN (not a number), to further distinguish lack of data from invalid data if need be!
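PostgreSQL, for example, accepts the special value 'NaN' in floating-point columns. A small sketch (the measurements table is made up) of how the two special values behave differently:

    -- NULL = no data at all, NaN = a value exists but is invalid.
    CREATE TABLE measurements (
        id    integer PRIMARY KEY,
        value double precision
    );
    INSERT INTO measurements VALUES
        (1, 42.0),
        (2, NULL),    -- measurement missing
        (3, 'NaN');   -- measurement taken, but invalid
    SELECT AVG(value) FROM measurements;
    -- => NaN: the NULL row is skipped by the aggregate, the NaN row propagates.
    SELECT AVG(value) FROM measurements WHERE value <> 'NaN';
    -- => 42: PostgreSQL treats NaN as equal to itself (unlike IEEE 754), so the
    --    comparison filters out the NaN row, and the NULL comparison yields NULL,
    --    which WHERE filters out as well.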
I think you can say something similar about programming languages that don't have automatic memory management.
It can be perfectly valid to have a pointer reserved in memory without having given it a value to point to yet. I can see why this isn't much of an issue in 2015, but it used to be.
I don't think NULL is a problem at all, really. It sure isn't fun in languages like C# where you can have nullable types, but I think that's a problem with C# and not with the concept of NULL.