
Have you ever dealt with the Maybe (Haskell) / Option (F#) types? If not, then you don't understand what's wrong with NULL and how to avoid it without much work.



I find Maybe a bad idea. It forces me to write denormalized code when I know that something is not NULL. It's not possible to specify this knowledge as a data structure since data structures are static but context is dynamic. I much prefer the simple NULL sentinel that blows up like an assertion when I've made a mistake. That said, there's rarely a need for NULL at all if you structure the code correctly.


If you know something can't be null, then don't use an option. Simple as that. For example, a SQL library can return a non-nullable column of String as just a String, not an Option[String]. Thus, you actually get a solid distinction that you don't get with null pointers.

There's no reason to include sentinels that will randomly blow up your program.
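
To make that concrete, here's a rough Haskell sketch (the row type and column names are invented for illustration): a NOT NULL column maps to a plain String, a nullable one to Maybe String, so only the latter ever needs a check.

    -- Hypothetical row type: nullability is visible in the field types.
    data UserRow = UserRow
      { userName  :: String        -- NOT NULL column: always present
      , userEmail :: Maybe String  -- nullable column: absence is explicit
      }

    greet :: UserRow -> String
    greet row = "Hello, " ++ userName row  -- no null check needed here

    main :: IO ()
    main = putStrLn (greet (UserRow "Ada" Nothing))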


No. The point is that the data structure can't know if there's a NULL since the data structure is static. Context is dynamic. Code is dynamic as well, and it can know that some things must exist based on other dynamic conditions.

So this "solid" distinction is often just noise that blurs the programmer's intention: an explicit unwrap is required syntactically even though it shouldn't be required semantically, because in certain contexts the optional data is not optional but required.


If it is a requirement for something not to be null, unwrap the option before you send it to the part of the program that can't accept nulls, and deal with the None case in a sane way and in a predetermined place. Then you don't have to worry about unwrapping in the rest of the code. You can escape from Option; it's not like IO. You just have to check for None if you want to get something out, as you should.

In this fashion, you have type safety everywhere, and you deal with the case of a missing value in a predictable way, in a single spot.
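
A minimal Haskell sketch of that pattern (findConfig and the path are made up): the Nothing case is handled once at the boundary, and everything downstream takes a plain value.

    import System.Exit (die)

    -- Hypothetical stand-in for whatever produces the optional value.
    findConfig :: IO (Maybe FilePath)
    findConfig = pure (Just "/etc/app.conf")

    -- The rest of the program takes a plain FilePath; no Maybe in sight.
    run :: FilePath -> IO ()
    run path = putStrLn ("using config at " ++ path)

    main :: IO ()
    main = do
      mpath <- findConfig
      case mpath of
        Nothing   -> die "no config found"  -- the single, predetermined spot
        Just path -> run path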


> I much prefer the simple NULL sentinel that blows up like an assertion when I've made a mistake.

Haskell, for instance, has the 'fromJust :: Maybe a -> a' function that allows you to do just that. It unwraps the Maybe-typed value and throws a runtime error if the value is Nothing.
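
A quick illustration of both behaviours:

    import Data.Maybe (fromJust)

    main :: IO ()
    main = do
      print (fromJust (Just 42))               -- prints 42
      print (fromJust (Nothing :: Maybe Int))  -- throws "Maybe.fromJust: Nothing"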


Yes. Most Haskellers will sneer at it, while personally I think it's the right thing to do because it conveys the programmer's ideas about invariants. But syntactically an explicit unwrapping function is still a lot of noise. Simple null pointers as we have in C, with an unmapped segment at address zero so that dereferencing NULL raises a segmentation fault, are much better.


Replying to your lower comment (the coffee has kicked in):

The situation you describe is one where a null really is an unrecoverable error, and the program should terminate. That is the one case where it makes sense to just let an NPE happen.

However, the vast majority of time, a null is just an absence of value, and does not signify an unrecoverable error. Those are the kind of situations that an Option/Maybe helps with, since it doesn't let you forget to handle the null case.

Even if a null value returned from a function is abnormal, and the program shouldn't continue, an Option is still going to be better most of the time. After all, you probably have connections and stuff you want to cleanly terminate before shutting the program down.


I haven't drunk all my coffee yet this morning, but are you saying that throwing a segfault can be a good thing?

Either you unwrap the Option, or you have to remember to do a manual null check. The second option is more verbose.


> but are you saying that throwing a segfault can be a good thing?

Sure, what's bad about it? A logic bug was detected, so the program should be terminated. Or how do you intend to continue?

A segfault is not so different from what happens if you do "fromJust Nothing" in Haskell or get a NullPointerException in Java. You can even write a handler for the segfault, but I guess that's rarely a good idea.


> Sure, what's bad about it? A logic bug was detected, so the program should be terminated. Or how do you intend to continue?

I intend to not have the logic bug in the first place, by encoding my invariants in the type system.

If you "know" that the value is present rather than absent, you must have a reason for knowing it, so explain that reason to the compiler. E.g. maybe you took the first element of a list value that you know is non-empty - so maybe you need to change the type of that value to a non-empty list type. That way the compiler can check your reasoning for you, and will catch the cases where you thought you "knew" but were actually wrong.
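
In Haskell that reasoning can be written down directly with Data.List.NonEmpty (firstScore is just an invented example):

    import Data.List.NonEmpty (NonEmpty (..))
    import qualified Data.List.NonEmpty as NE

    -- NE.head is total: the type already guarantees at least one element,
    -- so there is no empty case left to "know" about.
    firstScore :: NonEmpty Int -> Int
    firstScore = NE.head

    main :: IO ()
    main = print (firstScore (3 :| [1, 4]))  -- prints 3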


> by encoding my invariants in the type system

The way I program, that is nothing but a pipe dream.

> If you "know" that the value is present rather than absent, you must have a reason for knowing it, so explain that reason to the compiler.

I might know that it exists for example because it is computed in a post-processing step after a first stage but before a second stage. So it exists in the second stage but not in the first. Relying on global data (which I won't give up) makes it practically impossible to encode that the data is not there in the first stage.

And that's not a problem at all. I simply don't access that data table in the first stage... Trying to explain my processing strategy to a compiler would amount to headaches and no benefits.


> I might know that it exists for example because it is computed in a post-processing step after a first stage but before a second stage. So it exists in the second stage but not in the first.

So the first stage could create a handle to it, or even just a phantom "witness" that you treat as proof that the value is present.
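
A minimal sketch of that witness idea in Haskell (all names invented, and the shared table itself left out):

    -- In a real codebase the StageOneDone constructor would live in its own
    -- module and not be exported, so the only way to obtain the token is to
    -- actually run stageOne.
    data StageOneDone = StageOneDone  -- opaque proof token; carries no data

    stageOne :: IO StageOneDone
    stageOne = do
      putStrLn "stage 1: computing the shared table..."
      pure StageOneDone

    -- stageTwo demands compile-time proof that stage 1 has run.
    stageTwo :: StageOneDone -> IO ()
    stageTwo _ = putStrLn "stage 2: safe to read the table"

    main :: IO ()
    main = do
      done <- stageOne
      stageTwo done  -- calling stageTwo without the token is a type error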

> And that's not a problem at all. I simply don't access that data table in the first stage... Trying to explain my processing strategy to a compiler would amount to headaches and no benefits.

Shrug. I found that errors would make it into production, because human vigilance is always fallible. And the level of testing that I needed to adopt to catch errors was a lot more effort than using a type system.


Accessing unallocated global data is the type of error that you typically hit on the first test run. Another example would be function pointers loaded from DLLs.

I don't think type systems help all that much. Type + instead of -, and you're out of luck.


> Accesses to unallocated global data is the type of errors that you typically hit on the first test run.

Depends what conditions cause it; the hard part is being sure that every possible code path through the first stage will initialise the data, even the rare ones like cases where some things time out but not others.

> I don't think type systems help all that much. Type + instead of -, and you're out of luck.

Not my experience at all - what do you mean? If you declare a type as covariant instead of contravariant or vice versa, you'll almost certainly get errors when you come to use it.


1) Pretty easy to guarantee if main looks like stage1(); stage2(); stage3(); etc.

2) Change a plus for a minus and it is still an int.


> 1) Pretty easy to guarantee if main looks like stage1(); stage2(); stage3(); etc.

You can only use the global program order once; I'd rather save it to spend on something more important. val result1 = stage1(); val result2 = stage2(result1); ... means my code dependencies reflect my data dependencies, and I'll never get confused about what needs what or what comes before or after what (or the compiler will catch me if I do), so I can refactor fearlessly.
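
In Haskell terms (Stage1Result and Stage2Result are placeholders):

    data Stage1Result = Stage1Result
    data Stage2Result = Stage2Result

    stage1 :: IO Stage1Result
    stage1 = pure Stage1Result

    -- stage2's input type names exactly what it depends on, so the ordering
    -- constraint lives in the types rather than in main's layout.
    stage2 :: Stage1Result -> IO Stage2Result
    stage2 _ = pure Stage2Result

    main :: IO ()
    main = do
      r1 <- stage1
      _  <- stage2 r1  -- swapping these two lines no longer compiles
      pure ()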

> 2) Change a plus for a minus and it is still an int.

True. If you get your core business logic wrong then types won't help you with that (though FWIW I'd argue that it's worth having a distinct type for natural numbers, in which case - and + return different types). But I found that at least 80% of production bugs weren't errors in core business logic but rather "silly little things": nulls, uncaught exceptions, transposed fields... and types catch those more cheaply than any alternative I've seen.
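
Sketching that naturals aside in Haskell (monus is a made-up name for a subtraction that admits it can underflow):

    import Numeric.Natural (Natural)

    -- (+) on Natural stays total, but subtraction gets an honest type.
    -- Mistyping one operator for the other now changes the result type,
    -- so the compiler catches it.
    monus :: Natural -> Natural -> Maybe Natural
    monus a b
      | a >= b    = Just (a - b)
      | otherwise = Nothing

    main :: IO ()
    main = do
      print (2 + 3 :: Natural)  -- 5
      print (monus 2 3)         -- Nothing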


> I much prefer the simple NULL sentinel that blows up like an assertion when I've made a mistake

Are you nuts? I prefer that the compiler give me an error instead of blowing up in production.


I see these as a more sophisticated way of dealing with NULL. They allow me to define alternative default behaviour beyond just throwing an undeclared exception.

Under the hood, however, they are still NULLs, and I still need to do the work of defining what I want to happen when they occur. It's just neater.


They're not null. You might use them to represent the same thing that you use null to represent, but the type system won't let you use them in expressions that aren't explicitly built to accept them.


You can conceptualize Maybe in terms of NULL, but there is no point during compilation or at runtime at which it actually becomes a NULL; it's just a regular container value.


Which is the same as if I go and ensure a default "noop" value is assigned ... it's logically a NULL. I don't know anything about it except that it hasn't been assigned.


It's not a NULL. A NULL is a value that is considered by the type system to be a valid instance of a given type, except it doesn't actually fulfill the type's contract. A Maybe is a completely different type, much like a list or a map or a tree.
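
Concretely, the Prelude defines it as an ordinary algebraic data type, 'data Maybe a = Nothing | Just a', and it behaves like any other container:

    import Data.Maybe (maybeToList)

    main :: IO ()
    main = do
      print (fmap (+ 1) (Just 41))    -- Just 42: mapped like any container
      print (maybeToList (Just 'x'))  -- "x": a list of zero or one elements
      print (length (Just 'x'))       -- 1: Maybe is Foldable, like a list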


Well, at this stage you're just being narrow-minded.

I think if you read back over this thread it should be clear what I mean.

I proposed that NULL has a purpose. You proposed that Option obviates this. I stress that it's just a neater way to manage the conditions you don't need to model. You point out that in terms of implementation it's different, and I explain that, logically, it's still the same thing.

Of course NULL has a very specific meaning in the structure of the language, and when you start using things like Options it makes managing NULL easier, but it's a rose by another name, gift-wrapped, and bundled with some plant food.

Logically however, at the point where you're modelling your problem it's the same.



