
Anyone know about motherboards? I was going to build a new AMD PC and nearly everything was out of stock. I look now, and I see just 7 under $200.

https://www.newegg.com/p/pl?N=100007625%208000%204131%204814...


Yeah there's a shortage. They end the production run just before Chinese New Year and spin up a new one after. But then covid hit. And many people upgraded their gaming PCs during the lockdown.


Not sure the rules on this... but I am moving from ITX to mATX probably next week. If you're interested in miniITX b450 I don't really have a plan for mine, email is in profile.


I like generics for collections but that is about it. I've seen "gifted" programmers turn everything into generic classes and functions which then makes it very difficult for anyone else to figure out what is going on.

One reason I like go is it doesn't have all the bells and whistles that give too many choices.


Here's some generic code I wrote in Rust recently. I had two unrelated iterators/collections and I needed to flatten and sort them into a single buffer.

    struct Data {
        idx: usize,
        // ...
    }
    
    struct Foo {
        stuff: Vec<Vec<Data>>
    }

    
    struct Bar {
        stuff: Vec<Data>
    }


    fn flatten_and_sort(foo: Foo, bar: Bar) -> Vec<Data> {
        let mut output = 
            foo.stuff.into_iter()
               .flat_map(|v| v.into_iter())
               .chain(bar.stuff.into_iter())
               .collect::<Vec<_>>();

        output.sort_by(|lhs, rhs| lhs.idx.cmp(&rhs.idx));
        output
    }
Now you could argue that this is just "generics for collections", but the way those iterator combinators are written makes heavy use of generics/traits that aren't immediately applicable to collections. Those same combinator techniques can be applied to a whole host of other abstractions that allow for that kind of user code, but it's only possible if the type system empowers library authors to do it.
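
For illustration, here's the kind of user-defined combinator that same machinery enables (a sketch; `every_other` is a made-up example, not a std API):

    // Works on any iterator at all, yielding every other element.
    fn every_other<I: Iterator>(iter: I) -> impl Iterator<Item = I::Item> {
        iter.enumerate()
            .filter(|&(i, _)| i % 2 == 0)
            .map(|(_, x)| x)
    }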


You can actually use the power of generics to get rid of the two inner calls to `into_iter()`:

    let mut output: Vec<_> =  
        foo.stuff.into_iter()
            .flat_map(|v| v)
            .chain(bar.stuff)
            .collect();


I believe you can just use `.flatten()` instead of `.flat_map(|v| v)`:

    let mut output: Vec<_> =  
        foo.stuff.into_iter()
            .flatten()
            .chain(bar.stuff)
            .collect();
I might make it slightly more clear what the intent is by doing this:

    let mut output: Vec<_> =
        Iterator::chain(foo.stuff.into_iter().flatten(), bar.stuff)
            .collect();
But still basically the same.


Yes, functional programming patterns are nice. It's possible to write that even more concisely in JavaScript.


I've certainly encountered codebases that abused inheritance and templates/generics to the point of obfuscation, but you can abuse anything really. Besides, in my experience the worst offenders were in C++, where the meta-programming is effectively duck-typed. Trait-based generics like in Rust go a long way toward making generic code readable, since you're always aware of what meta-type you're working with exactly.

I definitely don't use generics if they can be avoided, and I think preemptive use of genericity "just in case" can lead to the situation you describe. If I'm not sure I'll really need generics I just start writing my code without them and refactor later on if I find that I actually need them.

But even if you only really care about generics for collections, that's still a massive use case. There's a wealth of custom and optimized collections implemented in third party crates in Rust. Generics make these third-party collections as easy and clean to use as first party ones (which are usually themselves implemented in pure Rust, with hardly any compiler magic). Being easily able to implement a generic Atomic type, a generic Mutex type etc... without compiler magic is pretty damn useful IMO.


What's wrong with this?

  class Result<T>
  {
    bool IsSuccess {get; set;}
    string Message {get; set;}
    T Data {get; set;}
  }
On many occasions, I like using result types for defining a standard response for calls. It's typed and success / fail can be handled as a cross-cutting concern.


That's a generic container of 0 or 1 elements ;)

It's also incredibly unsafe and why generics aren't enough. C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common. Nothing prevents the user from attempting to retrieve the data without first retrieving the success status.

On the other hand, this improves on it dramatically:

    enum Result<T, E> {
      Success(T),
      Failure(E)
    }


I'm convinced that the lack of Sum Types like this in languages like Java/C#/Go is one of the key reasons that people prefer dynamic languages. It's incredibly freeing to be able to express "or". I do it all the time in JavaScript (variables in dynamic languages are basically one giant enum of every possible value), and I feel incredibly restricted when using a language that requires a class hierarchy to express this basic concept.


I completely agree. Every passing day I become more convinced that a statically typed language without sum types or more broadly ADTs is fundamentally incomplete.

The good news is that many languages seem to be cozying up to them, and both the JVM (through Kotlin, Scala, et al.) and .NET (through F# or C# w/ language-ext) support them.

Even better news is that the C# team has stated that they want to implement Sum Types and ADTs into the language, and are actively looking into it.


I just don't see, in properly designed code, that there would be that much use for sum types if you have generics. When are you creating functions that take or return radically different types that need to be expressed this way?

I dislike dynamic languages where parameters and variables can take on any type -- it's rarely the case that same variable/parameter would ever need to contain a string, a number, or a Widget in the same block of code.

I find it much more freeing to have the compiler be in charge of exactness so I can make whatever changes I need knowing that entire classes of mistakes are now impossible.


> When are you creating functions that take or return radically different types that need to be expressed this way

Let's say you're opening a file that you think is a CSV. There can be several outcomes:

- the file doesn't exist

- the file can't be read

- the file can be read but isn't a valid CSV

- the file can be read and is valid, and you get some data

All of these are different types of results. You can get away with treating the first 3 as the same, but not the last. Without a tagged union, you'll probably resort to one of a few tricks:

- You'll have some sort of type with an error code, and a nullable data field. In reality, this is a tagged union, it's just that your compiler doesn't know about it and can't catch your errors.

- you'll return an error value and have some sort of "out" value with the data: this is basically the same as the previous example.

- you'll throw exceptions, which usually ends up with people writing code that forgets about the exception because the compiler doesn't care about it, and the code works 99% of the time until it completely blows up.
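
For contrast, a Rust sketch of the tagged-union version (all the names here are hypothetical); the compiler won't let callers touch the data without deciding what to do about each failure:

    enum CsvError {
        NotFound,
        Unreadable,
        InvalidFormat,
    }

    fn read_csv(_path: &str) -> Result<Vec<Vec<String>>, CsvError> {
        // ... real parsing elided for the sketch
        unimplemented!()
    }

    fn main() {
        match read_csv("data.csv") {
            Ok(rows) => println!("{} rows", rows.len()),
            Err(CsvError::NotFound) => eprintln!("no such file"),
            Err(CsvError::Unreadable) => eprintln!("couldn't read the file"),
            Err(CsvError::InvalidFormat) => eprintln!("not valid CSV"),
        }
    }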


If you want to force people to handle the above cases, couldn't you just throw separate checked exceptions (e.g. in Java)? In that case the compiler does care about it. You can still catch and ignore, but that IMO is not a limitation of the language's expressiveness.


Checked exceptions would have been an ok idea if it weren't for the fact that at least when I was writing Java last (almost 10 years ago) they were expressly discouraged in most code bases. Partially because people just get in the lazy habit of catch and rethrow RuntimeException, or catch and log, etc. when confronted with them. Partially because the JDK itself abused them in the early days for things people had no hope of handling properly.

They also tend to defer handling out into places where the context isn't always there.

The trend in language design does seem to be more broadly away from exceptions for this kind of thing and into generic pattern matching and status result types.


> Checked exceptions would have been an ok idea

> Partially because people just get in the lazy habit of catch and rethrow RuntimeException, or catch and log, etc. when confronted with them.

After quite a while of thinking this way, I came to the conclusion that:

95% of the time, there's no way to 'handle' an error in a 'make it right' sense. Disk write failed? REST request failed? DNS lookup? There usually isn't an alternative to logging/rethrowing.

When there is a way to handle an error (usually by retrying?), it's top level anyway.

Furthermore, IO is the stuff that can just 'go wrong' regardless of how good the programmer is, and IO tends to sit at the bottom in most Java programs. This means every method call is prone to IOExceptions.


Yes, after a few years of Java we all end up there. Frankly it's a good argument for the Erlang "Let It Crash" philosophy.

https://verraes.net/2014/12/erlang-let-it-crash/

If IOException on a read is truly happening, and it isn't just a case of a missing file, there are serious issues that aren't going to be fixed with a catch-and-log, or be able to be handled further up the call stack.


One benefit I've found with error-enums is just being aware of all the possible errors that can occur. You're right: 95% of the time you can't do anything except log/retry. But that 5% of the time become runtime bugs which are a massive pain. It's really nice when that is automatically surfaced for you at development time.


Note that checked exceptions are essentially the same thing as returning a tagged union, from a theoretical perspective at least.

They're not popular in Java though, because the ergonomics is a lot worse than working with a Result type.
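
For comparison, the Result-based version in Rust, where the `?` operator does the propagation a checked exception would make you spell out:

    use std::fs;
    use std::io;

    fn load(path: &str) -> Result<String, io::Error> {
        // `?` early-returns the Err variant to the caller; no try/catch ceremony.
        let contents = fs::read_to_string(path)?;
        Ok(contents)
    }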


Honest question: do you think this kind of stuff is going to be adopted by the majority in the next decade or two? Because I'm looking at it and adding even more language features like that seems to make it even harder to read someone else's code.


um... you realize the parent post is talking about having sum types in statically typed languages (e.g. Rust), when you already do this all the time in dynamic languages like JavaScript and Python, right?

So, I mean, forget 'the next decade or two'; the majority of people are doing this right now; python and js are probably the two most popular languages in use right now.

Will it end up in all statically typed languages? Dunno; I guess probably not in Java or C# any time soon, but Swift and Kotlin support them already.

...ie. if your excuse for not wanting to learn it is that it's probably an edge case that most people don't have to care about now, and probably never will, you're mistaken I'm afraid.

It's a style of code that is very much currently in use.


Are the majority actually writing code like this though? In the case of dynamic languages, this property seems more like an additional consequence of how the language behaves. It's not additional syntax.


> Are the majority actually writing code like this though?

Yes.

For example, some use cases: https://www.typescriptlang.org/v2/docs/handbook/unions-and-i...

This sort of code is very common.

I really don't know what more to say about this; if you don't want to use them, don't. ...but if your excuse for not using them is that other people don't, it's wrong.


Because even with generics, you are not able to express "or"; two different choices of types that have _different_ APIs. With generics, you can express n different choices of types that have all the _same_ API.

It's a good software engineering principle to make control and data flow as streamlined as possible, for similar data. Minimize branches and special cases. Generics help with this, they hide the "irrelevant" differences, surfacing only the relevant.

On the other hand, if there are _actually_ different cases, that need to be handled differently, you want to branch and you want to express that there are multiple choices. Sum types make this a compiler-checked type system feature.
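
A tiny Rust illustration of that split (types made up for the example):

    // Generics: many types, one shared API, a single code path.
    fn largest<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
        items.iter().copied().reduce(|a, b| if a > b { a } else { b })
    }

    // Sum types: genuinely different cases, compiler-checked branching.
    enum Event {
        Click { x: i32, y: i32 },
        KeyPress(char),
    }

    fn describe(e: &Event) -> String {
        match e {
            Event::Click { x, y } => format!("click at ({}, {})", x, y),
            Event::KeyPress(c) => format!("key '{}'", c),
        }
    }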


Let's take Rust's hash map entry api[0], for example. How would you represent the return type of `.entry()` using only a class hierarchy?

    let v = match map.entry(key) {
        Entry::Occupied(mut o) => {
            *o.get_mut() += 1;
            o.into_mut()
        }
        Entry::Vacant(v) => {
            update_vacant_count(v.key());
            v.insert(0)
        }
    };
I view sum types as enabling exactly the kind of exactness you describe in your last line, especially since you can easily switch/match on a specific subtype if you realize you need that, without adding another method to the base class and copying it into the X subclasses you have for implementing the different behavior.

[0]: https://doc.rust-lang.org/std/collections/hash_map/enum.Entr...


Rust has both generics and sum types, and benefits enormously from both.

And sum types aren't for "radically different types". You can define an error type to be one of several options (i.e. a more principled error code), or to represent nullability in the type system, or to indicate fallibility without relying on exceptions, etc.

Rust uses all of these to great effect, and does so because these sum types are generic.


> I'm convinced that the lack of Sum Types like this in languages like Java/C#/Go is one of the key reasons that people prefer dynamic languages.

It doesn't hurt that static languages (TypeScript) or tools (mypy) that lightly lay on top of dynamic languages often do support sum types.


> It's also incredibly unsafe and why generics aren't enough. C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common.

uh, you'd never get a null-pointer exception in C++ given the type that OP mentioned. Value types in C++ cannot be null (and most things are value types by a large margin).


> Value types in C++ cannot be null

They can just not exist. And C++ being C++, dereferencing an empty std::optional is UB. In practice this particular UB often leads to way worse consequences than more "conventional" null-pointer derefs.


Then write your own optional that always checks on dereference or toggle whatever compilation flag enables checking in the standard library you are using.


Instead you can have undefined behaviour in C++.

I don't think `get; set;` is C++, though; it also breaks encapsulation.


You can also constrain a generic type only to value types in C#:

  class Result<T> where T: struct
  {
  ...
  }
In that case it can't be null with C# either.


Then you can't construct it unless it's successful, no?

A Result<T> that can only contain successful values doesn't seem very useful


You can, it's possible to address "missing values" with a default construct. Example:

  int x = default; // x becomes zero
  T x = default; // x becomes whatever the default value for struct is


Then we're back to accessing that value being an enormous footgun, yes?


No, you just are forced to use methods like foo.UnwrapOr(default_value) to get the value out of the Result. Or depending on the language, you get a compile error if you don't handle both possible values of the Result enum in a switch statement or if/else clause.

See for example https://doc.rust-lang.org/std/result/enum.Result.html#method... in rust, https://docs.oracle.com/javase/8/docs/api/java/util/Optional... in Java, and https://en.cppreference.com/w/cpp/utility/optional/value_or in C++.
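
A minimal illustration of the Rust method linked above:

    let ok: Result<i32, &str> = Ok(2);
    let err: Result<i32, &str> = Err("boom");

    assert_eq!(ok.unwrap_or(0), 2);  // success value passes through
    assert_eq!(err.unwrap_or(0), 0); // failure falls back to the default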


Who are you replying to? Is any of your elaboration related to this result type?

    class Result<T>
    {
      bool IsSuccess {get; set;}
      string Message {get; set;}
      T Data {get; set;}
    }


Ah you're quite correct.


Yes you can? The equivalent type in C++ is std::expected[1] which doesn't even contain a pointer that could be dereferenced (unless T is a pointer obviously).

[1] unfortunately not standardized yet https://github.com/TartanLlama/expected


Who are you replying to? Is it in any way related to the original comment I replied to and this type?

    class Result<T>
    {
      bool IsSuccess {get; set;}
      string Message {get; set;}
      T Data {get; set;}
    }


I am replying to you and its pretty obviously related to your comment.

You: "C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common."

jcelerier: "you'd never get a null-pointer exception in C++ given the type that OP mentioned."

You: "Then you can't construct it unless it's successful, no?"

Me: "The equivalent type in C++ [to what the OP mentioned] is std::expected". It is not possible to get a null-pointer exception with this type and yet you can construct it.


It sounds quite a lot like you took the type the OP posted and changed it in your reply to a different type that isn't standardized yet, do I have that right?


The code the OP posted is not C++. If you translate it to C++ there is no way to get a null pointer exception.

It sounds quite a lot like you're looking to be pointlessly argumentative, do I have that right?


There are two things being discussed in this thread.

1. The first, my original point was that a high quality type system enforces correctness by more than just having generics. There's no proper way in C++ to create this class and make a sum type - there's no pattern matching or type narrowing like can be done in languages with more interesting type systems and language facilities. Generics is just a first step to a much more interesting, safer way of writing code.

2. The second, my replies to folks who have corrected me, and I'll borrow your little paraphrase here:

> [Me]: "C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common."

>

> jcelerier: "you'd never get a null-pointer exception in C++ given the type that OP mentioned."

>

> [Me]: "Then you can't construct it unless it's successful, no?"

I think this is exactly correct still. If it's impossible to create an instance of Result<T> without... well, a successful result, you may as well just typedef Result<T> to T, right? If it can't store the "failure" case, it's totally uninteresting.

If it _can_ store the failure case, making it safe in C++ is fairly challenging and I dare say it will be a little longer and a little less safe than a Result I can write in a few lines of TypeScript, Scala, Rust, an ML or Haskell derivative, and so on.

Now, I'd love to be proven wrong, I haven't written C++ for a while so the standard may have changed, but is there a way to write a proper enum and pattern match on the value?

It looks like this std::expected thing is neat, but can a _regular person_ write it in their own code and expect it to be safe? I can say with certainty that I can do that for the languages I listed above and in fewer than 10 lines of code.

The C++ answer to that is, well, this:

https://github.com/TartanLlama/expected/blob/master/include/...

I don't think it's a comparison.


> The C++ answer to that is, well, this:

the linked code has a ton of "quality-of-life" things. For instance, comparing two Result values efficiently (you don't want to compare two Result<T> bitwise, and you don't want the "is_valid" flag to be first in the structure layout just to fall back on the automatic default of lexical order, as that would sometimes waste a few bytes; but you do want the "is_valid" flag to be the first thing compared. Do you know of a language that would do that automatically?).

It also supports back to C++11 and GCC 4.9, with various fixes for specific compiler versions' bugs, and supports being used with -fno-exceptions (so a separate language from ISO C++) - sure, today's languages can do better in terms of prettiness, but so would a pure-C++20 solution that only needs to work with a single implementation.

If you are ready to forfeit some amount of performance, for instance because you don't care that the value of your Result will be copied instead of moved when used in a temporary chain (e.g. `int x = operation_that_gets_a_result().or_else([] (auto&& error) { return whatever; });`), 3/4 of the code can go away (and things will still likely be faster than in most other languages).


Well, T can be a pointer / reference here.


That wouldn't change anything to Result<T>'s implicit safety properties. "safe + unsafe == unsafe" - to have a meaningful discussion we should focus on the safe part, else it's always possible to bring up the argument of "but you can do ((char*)&whatever)[123] = 0x66;"


With C# 8 you have nullable references, and you can use the compiler to guard against null pointer exceptions.


> That's a generic container of 0 or 1 elements ;)

Then chances are so are most if not all of the uses of generics OP criticises. The only "non-container" generics I can think of are session types, where the generic parameter represents a statically checked state.


Result types are much better than multiple return values. But now the entire Go ecosystem has to migrate, if we want those benefits (and we want consistent behavior across APIs). It'd be like the Node.js move to promises, only worse...


I'm not sure why you'd use a class like this in Go when you have multiple returns and an error interface that already handles this exact use case.


Because multiple return values for handling errors are a strictly inferior and error-prone way of dealing with the matter.


    func foo() (*SomeType, error) {
        ...
        return nil, someErr
    }

    ...
    result, err := foo()
    if err != nil {
        // handle err
    }
    // handle result
vs

    type Result struct {
        Err error
        Data SomeType
    }

    func (r *Result) HasError() bool {
        return r.Err != nil
    }

    func bar() *Result {
        ...
        return &Result { ... }
    }

    ...
    result := bar()
    if result.HasError() {
       // handle result.Err
    }
    // handle result

I'm not really sure I see the benefit to the latter. In a language with special operators and built-in types it may be easier (e.g. foo()?.bar()?.commit()), but without these language features I don't see how the Result<T> approach is better.


Go can't really express the Result<T> approach. In Go, it's up to you to remember to check result.HasError(), just like it's up to you to check if err != nil. If you forget that check, you'll try to access the Data and get a nil pointer exception.

The Result<T> approach prevents you from accessing Data if you haven't handled the error, and it does so with a compile-time error.

Even with Go's draconian unused variable rules, I and my colleagues have been burned more than once by forgotten error checks.
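
A small Rust sketch of that guarantee (`fetch` is hypothetical); the payload is unreachable except through the match:

    fn fetch() -> Result<String, String> {
        Err("network down".to_string()) // some fallible operation
    }

    fn main() {
        match fetch() {
            Ok(data) => println!("got {}", data),
            Err(e) => eprintln!("fetch failed: {}", e),
        }
    }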


there are linters that will help you with that.

https://github.com/kisielk/errcheck

https://golangci-lint.run/usage/linters/ has a solid set of options.


I just wish the linter was integrated into the compiler, and that code that didn't check would simply not compile.


> without these language features I don't see how the Result<T> approach is better.

That's the point! I want language features!

I don't want to wait 6 years for the designers to bake some new operator into the language. I want rich enough expression so that if '?.' is missing I just throw it in as a one-liner.

Generics is one such source of richness.


A language with sum types will express Result as Success XOR Failure. And then to access the Success, the compiler will force you to go through a switch statement that handles each case.


The alternative is not the Result type you defined, but something along the lines of what languages like Rust or Haskell define: https://doc.rust-lang.org/std/result/


It's interesting that you say this, because I've had the opposite experience. I wouldn't say it's strictly inferior, because there are definitely upsides. If it was strictly inferior, why would a modern language be designed that way? There must be some debate, right?

I love multiple returns/errors. I find that I never mistakenly forget to handle an error when the program won't compile because I forgot about the second return value.

I don't use go at work though, I use a language with lots of throw'ing exceptions, and I regularly miss handling exceptions that are hidden in dependencies. This isn't the end of the world in our case, but I prefer to be more explicit.


> If it was strictly inferior, why would a modern language be designed that way

golang is not a modern language (how old it is is irrelevant), and the people who designed it did not have a proper language design background (their other accomplishments are a different matter).

Having worked on larger golang code bases, I've seen several times where errors are either ignored or overwritten accidentally. It's just bad language design.


I cannot think of a language where errors cannot be ignored. In go it is easy to ignore them, but they stick out and can be marked by static analysis. The problems you describe are not solved at the language level, but by giving programmers enough time and incentives to write durable code.


The following line in golang ignores the error:

    fmt.Println("foo")
Compare to a language with exception handling where an exception will get thrown and bubbles up the stack until it either hits a handler, or crashes the program with a stack trace.

And I was referring to accidental ignoring. I've seen variations of the following several times now:

    res, err := foo("foo")
    if err != nil { ... }
    if res != nil { ... }
    res, err = foo("bar")
    if res != nil { ... }


Usage of linters fixes this:

>The following line in golang ignores the error:

   fmt.Println("foo")
fmt.Println() is blacklisted for obvious reasons, but this:

    a := func() error {
        return nil 
    }
    a()
results in:

    go-lint: Error return value of 'a' is not checked (errcheck)
>And I was referring to accidental ignoring. I've seen variations of the following several times now:

    res, err := foo("foo")
    if err != nil { ... }
    if res != nil { ... }
    res, err = foo("bar")
    if res != nil { ... }
results in:

    go-lint: ineffectual assignment to 'err' (ineffassign)


> fmt.Println() is blacklisted for obvious reasons

That's the issue with the language, there are so many special cases for convenience sake, not for correctness sake. It's obvious why it's excluded, but it doesn't make it correct. Do you want critical software written in such a language?

Furthermore, does that linter work with something like gorm (https://gorm.io/) and its way of handling errors? It's extremely easy to mis-handle errors with it. It's even a widely used library.


Huh, I have seen enough catch blocks in Java code at work which are totally empty. How is that better than ignoring errors?


Because it's an explicit opt-in, as opposed to an accidental opt-out. And static checking can warn you about empty catch blocks.


In rust, errors are difficult to ignore (you need to either allow compiler warnings, which AFAICT nobody sane does, or write something like `let _ = my_fallible_function();` which makes the intent to ignore the error explicit).

Perhaps more fundamental: it’s impossible to accidentally use an uninitialized “success” return value when the function actually failed, which is easy to do in C, C++, Go, etc.


Or .unwrap(), which I see relatively often.


That’s not ignoring errors, it’s explicitly choosing what to do in case of one (crash).


Error handling is hard, period. Error handling in go is no worse than in any other language, and in most ways it is better, being explicit and non-magic.

> people who designed it did not have a proper language design background

Irrelevant.

> It's just bad language design.

try { ... } catch(Exception ex) { ... }


Exceptions don't lead to silent but dangerous and hard-to-debug errors. The program fails if the exception is not handled.


> try { ... } catch(Exception ex) { ... }

The error here is explicitly handled, and cannot be accidentally ignored. Unlike golang where it's quite easy for errors to go ignored accidentally.


Nevertheless, this is how it is mostly done in Java. I haven't used Eclipse in eons, but last time I did, it even generated this code.

If you care about this in Go, use errcheck.


Does errcheck work well with gorm (https://gorm.io/) and its way of returning errors? This is not an obscure library, it's quite widely used.


Does any language save you from explicitly screwing up error handling? Gorm is doing the Go equivalent of:

     class Query {
         class QueryResult {
             Exception error;
             Value result;
             QueryResult(Exception error, Value result) {
                 this.error = error;
                 this.result = result;
             }
         }
         public QueryResult query() {
             try {
                 return doThing();
             } catch (Exception e) {
                 return new QueryResult(e, null);
             }
         }
     }
Gorm is going out of its way to make error handling suck.


> Does any language save you from explicitly screwing up error handling?

It's about the default error handling method being sane. In exception based languages, an unhandled error bubbles up until it reaches a handler, or it crashes the program with a stacktrace.

Compare to what golang does, it's somewhat easy to accidentally ignore or overwrite errors. This leads to silent corruption of state, much worse than crashing the program outright.


> It's about the default error handling method being sane.

Gorm isn't using the default error handling.


That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.


> That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.

I've seen a bunch of code that does the equivalent of the Java I posted above. Mostly when sending errors across the network.


because it has try/catch. Without that (which would be similar to not checking the err in go) it explodes or throws to a layer up that may not expect it.

Each language has its quirks.


> Without that (which would be similar to not checking the err in go) it explodes or throws to a layer up that may not expect it.

It's not similar to that at all. Without it, the exception bubbles up until it gets caught somewhere, or crashes the program with a useful stacktrace.

With golang, it just goes undetected, and the code keeps running with corrupt state, without anyone knowing any better.


I would say it is a very ergonomic way of doing this. It allows for writing in a more exploratory way until you know what your error handling story is. Then, even if you choose to propagate it later, you just add it to your signature. Also it is very easy to grok and clear. Definitely not strictly inferior.


It's a lot cleaner to pass a Result<T> through a channel or a slice than to create two channels or slices and confirm everyone's following the same convention when using them.


I concede that there are probably scenarios where this design makes sense within that context. I typically find that either I care about a single error and terminating the computation, or I don't care about errors at all. In the former case, the primitives in the sync package (or just an error channel which we send to once and close) are adequate. The latter case presents no issues, of course.

At $work we definitely have examples where we care about preserving errors, and if that tool were implemented in Go a solution like a Result struct containing an error instance and a data type instance could make sense.


It has a bunch of invalid states (message and data both set, neither set, message set but IsSuccess is true, etc.). So you have to either check it every time, or you'll get inconsistent behaviour miles away from where the actual problem is. It's like null but even more so.


Well, for one thing, it doesn't actually work like a proper Optional<T> or Either<T, string> type. It works more like Either<(T, string),(T, string)>, which might have some uses, but isn't typically a thing someone would often reach for if they had a type system that readily supported the other two options.


> What's wrong with this?

That it's mutable, at the very least!


I feel like such a class should either be part of the language, and part of language idioms etc, or it shouldn't be used.


Can you articulate why? It seems to me that 'feel' should not be part of the discussion.


Not GP, but I've sometimes found that libraries implementing similar concepts differently can cause issues.

E.g.

    // package librarya
    type Result struct {
        Err  error
        Data SomeDataType
    }

    // package libraryb
    type Result struct {
        err  string
        Data SomeDataType
    }

    func (r Result) Error() string {
        return r.err
    }
Now you have two different implementations of the same fundamental idea, but they each require different handling. In Go, where many things simply return an error type in addition to whatever value(s), you would now have three different approaches to error handling to deal with as opposed to just whatever the language specified as the best practice.


This is what interfaces are for.

Let your caller bring their own error type and instantiate your library code over that.


Not GP but:

It may frustrate coworkers who need to edit the code.

It adds another dependency into your workflow.


> which then makes it very difficult for anyone else to figure out what is going on

Or we can learn to read them. Just treat types like first-class values. You either assign names to types like you do to values, or you can assign a name to a function that returns a type, this being generics.
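
In Rust syntax, for instance, that reading looks like this:

    type UserId = u64;     // naming a type, just like naming a value
    type Pair<T> = (T, T); // a "function" from types to types

    fn main() {
        let p: Pair<UserId> = (1, 2);
        println!("{:?}", p);
    }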


> or we can learn to read them

That's an awful way to think about hard to read code. I could produce the most unreadable one liners you've ever seen in your life. We should condemn that and not blame it on others to "learn how to read".


> That's an awful way to think about hard to read code

Most of the time I hear about "hard to read code", it's "a pattern I don't currently have a mental model for". We didn't move on from COBOL by letting that be a deterrent.


Fair, I've actually seen both types of situations. I only complain after having some domain knowledge of the project and the language/tools. After sufficient understanding, I will make sure that the code that gets merged into master is highly readable. Simple > complicated. Always. Don't be ashamed to write simple code.


You write code for an audience. In that audience sit yourself in your current state, yourself a year+ from now, your colleagues (you know their level) and the compiler. With bad luck, it's your current self in a state of pulling your hair out trying to debug.

Think about the audience when you code.


I assume you only program in readable languages like COBOL and AppleScript.


Ah, blub. It will never leave us.


I expect after a flurry of initial excitement, the community will settle on some standards about what it is and is not good for that will tend to resemble "Go 1.0 + a few things" moreso than "A complete rewrite of everything ever done for Go to be in some new 'generic' style".


> I like generics for collections but that is about it.

What about algorithms (sorts, folds, etc) on those containers? I write a lot of numerical code. It sucks to do multiple maintenance for functions that work on arrays of floats, doubles, complex floats, and complex doubles. Templates/Generics are a huge win for me here. Some functions work nicely on integer types too.
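
As a sketch of what that buys you, in Rust (assuming the complex type has a suitable Add impl, as e.g. num_complex's Complex does):

    use std::ops::Add;

    // One definition covers f32, f64, every integer width, and any
    // type implementing Add + Copy.
    fn sum<T: Copy + Add<Output = T>>(xs: &[T], zero: T) -> T {
        xs.iter().fold(zero, |acc, &x| acc + x)
    }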


I think this is probably the single best use case for Generics for Go - numerical operations on the range of number types.


At this point I'd like to summon, in go-generics' defense, all the PHP and JavaScript developers who assert unwaveringly that "Bad language design doesn't cause bad code; bad programmers cause bad code."


Counterpoint: languages (and libraries, and frameworks, and platforms) so well-designed that they introduce a "pit of success"[1] such that bad programmers naturally write better code than they would have done otherwise.

For example, what if PHP could somehow detect string-concatenation in SQL queries and instantly `die()` with a beginner-friendly error message explaining query parameterisation? Tens of billions of dollars of PHP SQL injection vulnerabilities simply never would have happened. And people who were already writing database queries with string-concatenation in VB and Java who gave PHP a try would have been forced to learn about the benefits of parameterisation, and they'd have taken that improved practice back to their VB and Java projects: a significant net worldwide improvement in code-quality!

[1]: https://blog.codinghorror.com/falling-into-the-pit-of-succes...

I've been writing in TypeScript for about 5 years now - and I'm in-love with its algebraic type system and whenever I switch back to C#/.NET projects it's made me push the limits of what we can do with .NET's type system just so I can have (or at least emulate as closely as possible) the features of TypeScript's type system.

(As for generics: I've wondered what if every method/function was "generic", insofar as any method's call-site could redefine that method's parameter types and return types? Of course then it comes down to the "structural vs. nominative typing" war... but I'd rather be fighting for a hybrid of the two than trying to work around a poorly-expressive type system.)


And that's among the reasons it's been left out of Go. Go design was guided by experience working on large software systems; the risk with making a language too flexible is that developers begin building domain-specific metalanguages inside the language, and before you know it your monolingual codebase becomes a sliced-up fiefdom of various pieces with mutually-incompatible metasyntax that defeats the primary advantage of using one language: developers being able to transition from one piece of the software to another without in-depth retraining.

For enterprise-level programming (which is the environment Go grew up in), a language that's too flexible is a hindrance, because you can always pay for more eng-hours, but you can't necessarily afford smarter programmers.


What about generics for phantom types?

Ex.

    class ID<T> {
      int id;
    }
The idea is that an ID is just an int under the hood, but ID<User> and ID<Post> are different types, so you can't accidentally pass a user id where a post id is expected.

Now, this is just a simple example that probably won’t catch too many bugs, but you can do more useful things like have a phantom parameter to represent if the data is sanitized, and then make sure that only sanitized strings are displayed.
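
For what it's worth, in Rust the same trick needs a zero-sized marker field, since unused type parameters are rejected; a sketch:

    use std::marker::PhantomData;

    struct User;
    struct Post;

    // The marker is zero-sized; it exists only for the type checker.
    struct Id<T> {
        id: u64,
        _marker: PhantomData<T>,
    }

    fn delete_post(_id: Id<Post>) { /* ... */ }
    // Passing an Id<User> to delete_post is a compile error.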


Just to note, for this specific example, Go supports this with type definitions:

  // UserID and PostID are distinct types
  type UserID int
  
  type PostID int


This isn't quite the same, because it's just an alias - you can pass a UserID to a function accepting a PostID: https://play.golang.org/p/nSOgcJs_66y

It still provides a documentation benefit of course.

EDIT: Whoops, yes, as lentil points out, they are indeed distinct types not aliases. So it does provide the benefit of the Rust solution.


No, it's not an alias, they are distinct types. You can't use the types interchangeably (unless you cast them).

Your playground example didn't try to pass a UserID to a function accepting a PostID, but if you do that, you'll see the error:

https://play.golang.org/p/vyiJ_sLzy4O


Oh neat! Most languages make it a little bit verbose to create these kinds of wrapper types for type safety (with zero overhead), so it's nice that Go has that.

I think the generic approach is a little bit better because of the flexibility, but this approach is still better than not having it at all.


The go team's attempt at involving everyone in the priorities of the language has meant they lost focus on the wisdom of the original design. I spent 10 years writing go and I'm now expecting to have to maintain garbage go2 code as punishment for my experience. I wish they focused on making the language better at what it does, instead of making it look like other languages.


That said the go team is incredibly talented and deserve a lot of kudos for moving much of the web programming discussion into a simpler understanding of concurrency and type safety. Nodejs and go came out at the same time and node is still a concurrency strategy salad.


Considering the vast majority of programming involves loops, I don't see "just for collections" as some minor thing; it's most of what I do.


If you don't understand someone else's code, you can either tell them their stuff is too complicated, or learn and understand it better. There can be a middle ground of course.


Most of the time, if code is hard to understand it's bad code. Just because someone writes complex code that uses all the abstractions doesn't mean it's good. Usually it means the opposite.


I'd like generics for concurrency constructs. Obvious ones like Mutex<T> but generics are necessary for a bunch of other constructs like QueueConsumer<T> where I just provide a function from T -> error and it will handle all the concurrent consumption implementation for me. And yes, that's almost just a chan T except for the timeouts and error handling and concurrency level, etc.
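
A rough Rust sketch of that QueueConsumer<T> idea (names and the error type are made up):

    use std::sync::mpsc::Receiver;
    use std::thread;

    // Each received T goes through a user-supplied fallible handler
    // on a dedicated thread; errors are reported instead of dropped.
    fn consume<T, F>(rx: Receiver<T>, handle: F) -> thread::JoinHandle<()>
    where
        T: Send + 'static,
        F: Fn(T) -> Result<(), String> + Send + 'static,
    {
        thread::spawn(move || {
            for item in rx {
                if let Err(e) = handle(item) {
                    eprintln!("consumer error: {}", e);
                }
            }
        })
    }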


There is an underappreciated advantage to using generics in function signatures: they inform you about exactly which properties of your type a function is going to ignore (this is called parametricity: https://en.wikipedia.org/wiki/Parametricity)

For instance, if you have a function `f : Vec<a> -> SomeType`, the fact that `a` is a type variable and not a concrete type gives you a lot of information about `f` for free: namely that it will not use any properties of the type `a`, it cannot inspect values of that type at all. So essentially you already know, without even glancing at the implementation, that `f` can only inspect the structure of the vector, not its contents.
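
A small demonstration of that free information, in Rust:

    // The signature alone guarantees `f` never inspects the T values;
    // essentially all it can do is look at the vector's structure.
    fn f<T>(v: &Vec<T>) -> usize {
        v.len()
    }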


Not all generics are parametric.


Agreed. From a quick skim of the Go generics proposal I get the impression that they are in fact aiming for parametric generics though (in fact they use the term "parametric polymorphism" in the background section).


I like generics, but I find that it is often best to start out writing a version which is not generic (i.e. explicitly only supports usize or something), then make it generic after that version is written. As a side benefit, I find that this forces me to really think about whether it should actually be generic or not. One time I was writing a small Datalog engine in Rust and was initially going to make it take generic atoms. However, after going through the above process I ended up deciding that I could just use u64 identities and store a one-to-one map from the actual atoms to u64, keeping the implementation simpler.

I agree with the sentiment that it is very easy to overuse generics, though there are scenarios where they are very useful.


I can think of a few other potential use cases in Go. Some ideas:

- Promises

- Mutexes like Rust's Mutex<T> (would be much nicer than sync.Mutex)

- swap functions, like swap(pointer to T, pointer to T)

- combinators and other higher-order functions


For Java / C#, in my experience, I've made that mistake because in both languages the class declarations are very verbose. Generics were then the only way to solve a problem which would otherwise require dynamic typing / variables.

In TypeScript I don't need generics as much, or as complex ones, because the type definitions are more lax, and we can use dynamic typing in the very complex scenarios.

I don't know which approach go is taking.


Honestly, as long as you learn when to use generics and when not to, there are a lot of very useful ways to encode state/invariants in the type system.

But I also have seen the problem with overuse of generics and other "advanced" type system features first hand (in libraries but also done by myself before I knew better).


I've done this to one of my pet projects (thankfully unreleased). It just makes debugging/editing on the fly more difficult. I'd love to unwind the mess. But that'll take days to fix what I caused in minutes! It's a big foot gun.


Mathematically, almost everything generic can be viewed as a collection.


Functions ;)

s -> (s, a) is generic and a Functor (mappable - often conflated with a collection) but it's no collection!
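
A Rust sketch of mapping over exactly that shape, no collection in sight:

    fn map_state<S, A, B>(
        run: impl Fn(S) -> (S, A),
        f: impl Fn(A) -> B,
    ) -> impl Fn(S) -> (S, B) {
        move |s| {
            let (s2, a) = run(s);
            (s2, f(a))
        }
    }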


Yeah, I actually think just having a built-in generic linked list, tree, and a few other abstract data types would solve 90% of everyone's problems. Part of the good thing about go is you solve problems more than you create them.


I agree, those grapes are probably sour anyway...


What I don't understand is that so many of the world's great intellectuals lived in beautiful cities and towns. University towns, both in Europe and on the East Coast, are often really pretty, with quaint buildings.

Then how did SV manage to do well? Butt ugly warehouses and suburban office buildings.


Does the SV attract "many of the world's great intellectuals"?


I would argue that, today, the definition of great intellectual involves ability to make money for investors and shareholders.

This is not meant to be flippant, judgmental, or snide. Greatness and intellectualism is tied to net worth in society today.


While I can agree that people do tend to think that good equates to rich, I don't think being able to make money makes you somehow intellectual or great. Sure, people conflate these things in general, but the fact that people think it doesn't make it the fact of the matter.

This is similar to how some people view the law as being a moral compass for right or wrong. While a lot of people do tend to view doing something illegal as doing something wrong, it's not actually true that just because you're doing something illegal you are doing something wrong.

So what you said is true in that people see it that way, not in that it actually is that way, or ought to be that way. I think your comment sort of conflates this distinction, whether intentionally or unintentionally, hence the downvotes.


It's 100% unintentional.

I meant that is the perception, not the reality. I should've worded that better, apparently.

The perception in pop culture is that wealth equates to intellectual greatness.


As a Brit who's been to SV a few times, I don't get the typical American disdain for the scenery. Stanford University is absolutely beautiful and so many of the neighborhoods I saw in Palo Alto were lined with trees and full of beautiful houses. You have a great hilly backdrop and even the 280 through Los Altos is gorgeous. There are endless blocks of offices and more modest housing too, of course, but I think the majority of British cities look a lot worse than what the Bay Area has to offer – they're not all as gorgeous as Cambridge, Oxford or Bath. Forgetting the horrific cost of living, I'd rather live amongst the scenery of SV than on the outskirts of Reading, Guildford, or Manchester, say.


Other than Stanford, I don't think that SV really has educational institutions as its driving force.


A lot of amazing research and startups come out of UC Berkeley, and while they are on the fringe of the Bay Area (and arguably outside of it), UC Santa Cruz and UC Davis do very interesting research in biochemistry and materials.


I wouldn't consider Santa Cruz to be part of SV and definitely not Davis. There are plenty of significant educational institutions in the Bay Area and even more if you expand the net to Northern California, but SV's prominence is driven more by the location there of early tech companies than any educational drivers outside of Stanford (although Stanford is very important).


I disagree, I think education is one of the foundational pieces of what created SV and helps maintain its tech ecosystem. There was something that got those early tech companies to be here in the first place, and I don't think it was just luck.

My opinion is that there are 3 main drivers and that top tier education is one of the most important ones.

They go like this:

Higher education that has enough gravity to aggregate diverse 'cutting edge' people from a variety of pursuits.

A reason for those smart people to stay that isn't just money. I think that's lifestyle and access to nature. You have the ocean, mountains, wine country, and relatively good weather.

The last is capital; I think saying companies is putting the cart before the horse. When I say capital I don't just mean startup capital. The ecosystem has to also have liquidity. There are lots of areas that have one of the 3, but very few that have all of them.


Western Civilization stopped building beautiful buildings more or less after WWII, which is when SV was created.


I think perception of novelty and 'quaintness' depends on where you grew up. As someone from the UK: Oxford and Cambridge are, yes, absolutely beautiful, but they were built for the education of, and sponsored by, kings and society's elite. They're pretty good places to live, but they're not amazing, at least by modern standards. They're also not the source of the UK's industries, which have historically been based in cities that are perhaps still quaint by new world standards but much less idyllic, like Manchester and of course London, where labor and markets are strong.

The most beautiful cities in the UK are beautiful externally, but very hard to live in unless you want to live a very specific kind of life out in the countryside or are very rich. They're either very expensive, or very quiet and usually both.

From my perspective having worked between the UK and the US for 6 years: California has amazing amenities, and housing tends to be cheaper than London, especially outside of San Francisco. The cities are much dirtier and scarier, but the weather is incredible and natural beauty when you leave the cities is amazing.

I think it's easy to have a positively skewed picture of Europe as a non-european because of how drastically different 'bad' or 'common' places look. It's similar for me when I visit the US. We have endless Victorian housing estates that were built for the very poor that look like fairytale cottages to those from across the ocean.

Ultimately, though, I think industry in Northern California is a product of the money and workers that live there, rather than its attractiveness as a place to live, as with other industrial booms.


Haha. It's ugly for sure, but it's the people that matter.


The area around SV is quite beautiful, but certainly the cities are boring as hell.


I think you're conflating doing well in "status games" (academia) with doing well in wealth creation (SV).


The entirety of Western academic thought is a “status game”?


SV did well because it was one of the first places that didn't enforce non-competes.


SV during its initial rise was mostly orchards.


There are two conflicting attitudes toward prestige universities:

A) Ivies/Stanford are better than regular universities because they have resources, powerful alumni and well connected students

B) They should let XYZ in because they have great test scores

If B was really the most important, A would no longer be true.


I have a Surface which I found the best tablet as it was bigger than most.

In the end I gave up buying ebooks and went back to regular books. If there is a pdf I have to read I'll do on my workstation.


"printing money" yes today, but if GCP/Azure or something else comes in and competes heavily the margins could easily shrink to something negative.


There are so few clouds that it doesn't work like this. When I saw it first hand, the way it was done is that prices were set to match the competition. Occasionally somebody would reduce prices, and others would match. Since offerings are not exactly the same, there are variations, but overall, for basic services like VMs and storage, neither cloud will give you a significant advantage in price.


I guess, but these products have such strong lock in.


Even if Google’s prices were half that of Amazon’s, it’s really hard to quantify the savings since the platform offerings are not identical, plus engineering switching costs could easily outstrip a company’s yearly cloud costs.


Not necessarily negative margins, but low enough that EBITDA less Capex looks thinner and thinner


Switching costs.


The problem is that software no-one wants often has the best careers.

I worked on a system used by many people and it was hard work: the system was getting 10-15 years old, held together with tape, with lots of users who all had their gripes and requests.

I moved to a system hardly used by anyone and life is so much better. I get to play with interesting technology and no one minds, because there is no risk of annoying our users, who don't really care. The best thing is I get paid more here too.


What area? I've mostly quit software because of how much I hate working on complete garbage and also don't have the schooling/engineering skills to work on truly cool stuff. Pivoted to wood working.


Funny, I know a guy who did woodworking (and construction) that pivoted to software.

I worked with him on a consulting project used by nearly 30 million people, great dude and great experience.

On the other hand, I didn't have an engineering degree, and worked on software for a major bank, and then at a consulting company where I worked with the ex-construction worker.

I found a gig on a team at another company where I make 40% more. The team I work on has zero real customers at this point, and we're doing real interesting stuff. We plan on converting our customer base over to the work my team has done, but that's still in the works.

Anyway, I think schooling isn't as important as people may think. You can learn some really cool and cutting edge things if you put your mind to it. And if those skills are marketable (not necessarily valuable, although companies might think so), you can get jobs utilizing those skills.

Tinkering with the tech you're interested in can be a good way to get into a position working on that piece of tech. It might not always work out, but it's worth a shot if you really want a fulfilling and interesting job in that area.


Been in the industry for over a decade, most of which was in fintech, some in crypto. I don't count any of that as "cool". I've still got some years left because it's good money, but eventually I want to pivot to wood working. I started working on my own apps this year and it's much more enjoyable.


This feels so true! I had a similar experience... I spent 3 years working for big US companies, completely focused on whatever trend would impress investors the most, with no one giving any thought to what might actually be _useful_ for people who, you know, actually give us money.

Now I work for a small company doing very niche work and I don't think I could love my work more. I mean people do use our software, but there's no VC funding so no pretense of needing to hop on the latest bandwagon. It's just so much better.


That might look good in the short term, but there are many companies and roles which require you to show the actual number of users, or the load of the service that you worked on. Also many technically challenging issues only come out under load, and actually working on challenging things are very different from reading about them. Just my 2 cents.


    there are many companies and roles which require you 
    to show the actual number of users, or the load of the
    service that you worked on.
I got my first job as a programmer in 2001 and not once was I asked that. I'm sure they exist but I wouldn't count on that being so common as to significantly impact the OP's career prospects.

Two things I've most often noticed people care about when hiring:

1. experience with the exact tech / field that they're hiring for

2. having brand-name job experience (google, amazon, etc).

It's sad, but you'll probably get better mileage from having worked on a useless prestige/pet project at google using fashionable tech than from a critical system written with JavaEE serving a lot of high-value customers at Alliance Generic Insurance Services Corp.


That last sentence sums up the sad state of hiring in the industry well.


Probably depends if you are applying to another FAANG or if you want to work on mission critical insurance technology.

People who really need skills are probably good at identifying said skills.


What happens when one is working on something that is neither of the two things in your last sentence?


> there are many companies and roles which require you to show the actual number of users, or the load of the service that you worked on

Really? What do you base that statement on?


I do agree that there is a risk of it all crashing down. I don't think they ask us for load or users, but they will notice an area where money isn't coming in.


Yeah I was enthusiastic but my kids really aren't interested. I'm not sure if I should force them to do it or just give up.


Tell them it's yours and they're not to mess with it. Give in after much begging.


^ I can tell you have kids.


Mine is onto that already, sadly. The only thing that works now is to start on something when it's really supposed to be bedtime. Then anything becomes extremely interesting and has to be explored immediately.


Hahaha that's genius, I'll give that a shot someday!


Buy it for yourself and enjoy using it, then they will pay more attention to it, especially when they see it making you happier.


ICE does support Black and Brown people by keeping illegal immigrants from flooding the market. Wages in many restaurants/building sites in NYC, LA, Florida, Texas are lower than other parts of the country because illegal labor gives an alternative to hiring black employees that rightfully require at least a minimum wage.


Looks like a good product. It's an interesting test of how much people value privacy; I'd imagine 99.9% of people would rather be tracked and have ads than pay for email.

