
> The issue with safety is that nothing is really safe.

There is a trade-off between safety and expressiveness. Clearly you can always shoot yourself in the foot if your language is expressive enough (like any Turing-complete language).

But I think that is beside the point here. This is about eliminating whole classes of errors.

A good type system (e.g. Rust's, Haskell's...) can eliminate all type errors from your programs.

A good memory model can eliminate all unsafe memory problems.

There are also languages that can eliminate all data races from your programs.

All these advances in PL theory make it easier and safer to deal with hard problems like concurrency, memory management etc. and thus allow us to focus on what our programs can actually do.




> A good type system (e.g. Rust's, Haskell's...) can eliminate all type errors from your programs.

It will eliminate errors related to the use of a given programming language. It will not necessarily avoid systemic errors. The programming language is only one part of the problem. Safety is a wider issue than just the use of a programming language.

Especially since the systems we use are often dynamic with changing requirements.


> It will eliminate errors related to the use of a given programming language. It will not necessarily avoid systemic errors.

Like I said: It will eliminate a specific class of errors, namely all type errors. Your program will literally not compile if there are any type errors.

> The programming language is only one part of the problem. Safety is a wider issue than just the use of a programming language.

Sure, I don't disagree with that statement. But it's important to recognize that eliminating whole classes of errors is extremely valuable and allows us to focus on the important things.


Every type system eliminates all its own type errors by definition. Even the trivial system with one type eliminates all its own type errors (vacuously, since there are zero of them).

There is no universal set of errors called type errors. What counts as a type error depends on your type system. A good type system allows more errors to be encoded as type errors so you can catch them at compile time, but it doesn't mean anything to say that a language like Rust or Haskell eliminates all type errors. There are certainly type systems which could catch more errors.


Sure, not all type systems are created equal. And there are indeed type systems that can catch more errors than Haskell's (although that usually comes at the price of losing type inference).

But I read OP's point as "Well, you can never catch all programming errors with PL_feature_X, so why even bother." And my point is simply that a lot of PL features make formerly hard things easy and thus allow you to go faster and focus on more interesting things.


You completely misunderstood what he was trying to tell you.


There was nothing interesting in what he was trying to say.

Yes, no programming language perfectly eliminates all classes of unsafety. But that's no reason to let the perfect be the enemy of the good! The "issue with safety" is no issue at all. Being safe in a bunch of problem domains (Rust) is still strictly better than being safe in very few, if any, of them (C).


So you did it on purpose. That just makes you a bad actor in the conversation.


I am not whoever you imagine you're responding to (rkrzr, I guess)


No, that's you.


More to the point, Rust and other statically typed languages* eliminate type errors at compile-time.

In e.g. Python, the following code:

    foo = Bar()
    foo.baz()
will compile without complaint but, supposing that baz() is not a member of class Bar, will cause errors when the code is actually run. In statically typed languages this will be caught by the compiler and treated as an error; type errors are simply not allowed in compiled programs.
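
For comparison, a rough Rust equivalent (the `Bar`/`baz` names just mirror the Python snippet) is rejected before the program can run at all:

    struct Bar;

    fn main() {
        let foo = Bar;
        foo.baz();   // error[E0599]: no method named `baz` found for struct `Bar`
    }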

This distinction is significant, as Python and other dynamically-typed languages require comprehensive test suites for any non-trivial software written in them, shifting the burden of ensuring type safety to the programmer. Testing systems for statically typed languages don't need to concern themselves with type safety. Dynamic typing also carries performance penalties at run-time (checking type safety for e.g. every member access).

* Some statically-typed languages (e.g. C++) allow for very specific subversions of type safety at run-time, but it's usually clear to the programmer when they are doing something dangerous.


"but its usually clear to the programmer when they are doing something dangerous"

Um, no. Every pointer is fundamentally unsafe, and a lot of C++ code is written with pointers through and through.


"Every type system eliminates all its own type errors by definition."

Nonsense. C/C++ compilers will happily produce a warning (if you specified -Wall) that you violated a type constraint and then go ahead and compile your program anyway. To suggest that such violations are part of C's type system is to pedantically and willfully miss the point.


> A good type system (e.g. Rust's, Haskell's...) can eliminate all type errors from your programs.

Do you consider silently truncating a value during a mandatory explicit type conversion to be a type error?


No, because if you explicitly converted to another type, that's what you wanted.


Sometimes you want it to be lossy, but not most of the time, and yet there is no choice. I had a bug caused by that; that's why I remember it. Silent explicit type conversions are essentially unsafe.


What do you mean by "silent explicit type conversion"? If you said "silent type conversion" I'd read that as "implicit type conversion". But you said "explicit", which means you've got it very clearly in your code that you're doing a type conversion (to a smaller type), so what's silent about that?


Silent in the sense that the compiler doesn't complain.


Why would the compiler complain? Doing a narrowing type conversion is a perfectly legitimate thing to do. So when you ask the compiler to do it, it should.


It forces you to explicitly say that you know you're forcing a value into a narrower type; I think the fact that this might mean loss of information is understood, by definition. What would you like it to do?


I would like it to tell me if I accidentally converted to a narrower type. This is a problem because I don't necessarily see the type I'm converting from, due to type inference, or I'm simply too far from the context where that type is declared and have to make assumptions to keep going. These assumptions of course fail sometimes and cause bugs. Same problem with precision, by the way. I'm not sure exactly how compilers should fix this; the easiest fix seems to be having different operators to explicitly allow lossy conversions when necessary. But the bigger deal would be to treat numbers as sets of possible values, with solvers or whatever, to warn about places where unhandled values are used in the code.


Rust actually has this. This works:

    fn main() {
        let x = 5i32;
        let y: i64 = x.into();
        println!("{}", y);
    }
this doesn't work:

    fn main() {
        let x = 5i32;
        let y: i16 = x.into();
        println!("{}", y);
    }
you have to write:

    fn main() {
        let x = 5i32;
        let y = x as i16;
        println!("{}", y);
    }
where `as` has the potential of being lossy!
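
And if you want the narrowing to be checked instead of silently lossy, current Rust also offers `TryFrom`/`TryInto`, which puts the possible failure into the return type (a minimal sketch):

    use std::convert::TryFrom;

    fn main() {
        let x = 5i32;
        // try_from returns a Result, so the out-of-range case must be handled:
        match i16::try_from(x) {
            Ok(y) => println!("{}", y),
            Err(e) => println!("doesn't fit in i16: {}", e),
        }
    }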


I agree that this is where we should be headed; it seems to me that Liquid Haskell, which was submitted to HN recently[1], could actually do what you need, since it uses an SMT solver to check for preconditions.

The casting function could specify the valid input values, and force the programmer to handle the rest of the cases when the input type is wider.

[1] https://news.ycombinator.com/item?id=13125328


> Silent explicit type conversions are essentially unsafe.

This is a contradiction; something cannot be both silent and explicit.


Sounds like an implicit conversion.


In Rust, lossy conversions only occur if you explicitly write `var as type`, and even that syntax is limited to certain types, e.g. you can't coerce an integer to a function. In order to do something crazy like that, you'd need to call the unsafe `mem::transmute` function. The language cannot be much safer in this regard short of disallowing any sort of type conversions.
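
A minimal sketch of where that boundary sits (the wrap-around value is just for illustration):

    fn main() {
        let big: i32 = 70_000;

        // let y: i16 = big;     // error[E0308]: mismatched types -- no implicit narrowing
        let y = big as i16;      // compiles, but wraps around to 4464
        println!("{}", y);

        // let f = big as fn();  // does not compile: `as` can't cast an integer to a function;
        // getting there at all would require the unsafe mem::transmute
    }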


"Clearly you can always shoot yourself in the foot if your language is expressive enough (like any Turing-complete language)."

This is a common misunderstanding of being Turing complete. A program running in an interpreter isn't unsafe just because the programming language and the interpreter are Turing complete. Being Turing complete doesn't mean being able to gain root access, overwrite the boot block, scribble on the hard drive, etc.


> A good type system (e.g. Rust's, Haskell's...) can eliminate all type errors from your programs.

It depends what you call a "type error". Is calling `car` on a `nil` instead of a `cons` a type error?


In Common Lisp, 'nil is of type 'null, which is a subtype of 'list, which is a union of the types 'null and 'cons, so it wouldn't be an error. Other Lisps might choose to do it differently.


It is in Rust, although I'm sure you can come up with something where the type system won't save you.
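
To make that concrete, a small sketch of the Rust side, using `Vec` and `first()` as stand-ins for a Lisp list and `car`:

    fn main() {
        let empty: Vec<i32> = vec![];

        // first() returns Option<&i32>, so the "car of nil" case has to be
        // handled before the value can be used:
        match empty.first() {
            Some(head) => println!("head = {}", head),
            None => println!("the list is empty"),
        }

        // Treating the Option as if it were an i32 is a compile-time type error:
        // let head: i32 = empty.first();   // error[E0308]: mismatched types
    }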



