Yeah there's a shortage. They end the production run just before Chinese New Year and spin up a new one after. But then covid hit. And many people upgraded their gaming PCs during the lockdown.
Not sure the rules on this... but I am moving from ITX to mATX probably next week. If you're interested in miniITX b450 I don't really have a plan for mine, email is in profile.
I like generics for collections but that is about it. I've seen "gifted" programmers turn everything into generic classes and functions which then makes it very difficult for anyone else to figure out what is going on.
One reason I like go is it doesn't have all the bells and whistles that give too many choices.
Here's some generic code I wrote in Rust recently. I had two unrelated iterators/collections and I needed to flatten and sort them into a single buffer.
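The real thing is longer, but the shape was roughly this (a simplified sketch; names invented for illustration):

    fn merge_sorted<T: Ord>(
        a: impl IntoIterator<Item = T>,
        b: impl IntoIterator<Item = T>,
    ) -> Vec<T> {
        // chain() glues the two sources together, collect() flattens them
        // into one buffer, and sort() orders the result in place
        let mut buf: Vec<T> = a.into_iter().chain(b).collect();
        buf.sort();
        buf
    }

    // merge_sorted(vec![3, 1], [4, 2]) == vec![1, 2, 3, 4]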
Now you could argue that this is just "generics for collections", but the way those iterator combinators are written makes heavy use of generics/traits that aren't immediately applicable to collections. Those same combinator techniques can be applied to a whole host of other abstractions that allow for that kind of user code, but it's only possible if the type system empowers library authors to do it.
I've certainly encountered codebases that abused inheritance and templates/generics to the point of obfuscation, but you can abuse anything really. Besides, in my experience the worst offenders were in C++, where the meta-programming is effectively duck-typed. Trait-based generics like in Rust go a long way toward making generic code readable, since you're always aware of exactly what meta-type you're working with.
I definitely don't use generics if they can be avoided, and I think preemptive use of genericity "just in case" can lead to the situation you describe. If I'm not sure I'll really need generics I just start writing my code without them and refactor later on if I find that I actually need them.
But even if you only really care about generics for collections, that's still a massive use case. There's a wealth of custom and optimized collections implemented in third party crates in Rust. Generics make these third-party collections as easy and clean to use as first party ones (which are usually themselves implemented in pure Rust, with hardly any compiler magic). Being easily able to implement a generic Atomic type, a generic Mutex type etc... without compiler magic is pretty damn useful IMO.
    class Result<T>
    {
        bool IsSuccess { get; set; }
        string Message { get; set; }
        T Data { get; set; }
    }
On many occasions, I like using result types for defining a standard response for calls. It's typed and success / fail can be handled as a cross-cutting concern.
It's also incredibly unsafe and why generics aren't enough. C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common. Nothing prevents the user from attempting to retrieve the data without first retrieving the success status.
On the other hand, this improves on it dramatically:
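Something along these lines (shown in Rust here, since its standard Result is exactly this shape), where success and failure are mutually exclusive by construction:

    // Rust's std Result is just a two-case sum type:
    //     enum Result<T, E> { Ok(T), Err(E) }
    // There is no way to reach the data without answering for the
    // failure case first:
    fn describe(r: Result<u32, String>) -> String {
        match r {
            Ok(data) => format!("got {data}"),
            Err(msg) => format!("failed: {msg}"),
        }
    }

No IsSuccess flag to forget, and no representable state where Message and Data are both set.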
I'm convinced that the lack of Sum Types like this in languages like Java/C#/Go is one of the key reasons that people prefer dynamic languages. It's incredibly freeing to be able to express "or". I do it all the time in JavaScript (variables in dynamic languages are basically one giant enum of every possible value), and I feel incredibly restricted when using a language that requires a class hierarchy to express this basic concept.
I completely agree. Every passing day I become more convinced that a statically typed language without sum types or more broadly ADTs is fundamentally incomplete.
The good news is that many languages seem to be cozying up to them, and both the JVM (through Kotlin, Scala, et al.) and .NET (through F# or C# w/ language-ext) support them.
Even better news is that the C# team has stated that they want to implement Sum Types and ADTs into the language, and are actively looking into it.
I just don't see, in properly designed code, that there would be that much use for sum types if you have generics. When are you writing functions that take or return radically different types that need to be expressed this way?
I dislike dynamic languages where parameters and variables can take on any type -- it's rarely the case that same variable/parameter would ever need to contain a string, a number, or a Widget in the same block of code.
I find it much more freeing to have the compiler be in charge of exactness so I can make whatever changes I need knowing that entire classes of mistakes are now impossible.
> When are you writing functions that take or return radically different types that need to be expressed this way
Let's say you're opening a file that you think is a CSV. There can be several outcomes:
- the file doesn't exist
- the file can't be read
- the file can be read but isn't a valid CSV
- the file can be read and is valid, and you get some data
All of these are different types of results. You can get away with treating the first 3 as the same, but not the last. Without a tagged union, you'll probably resort to one of a few tricks:
- You'll have some sort of type with an error code, and a nullable data field. In reality, this is a tagged union, it's just that your compiler doesn't know about it and can't catch your errors.
- you'll return an error value and have some sort of "out" value with the data: this is basically the same as the previous example.
- you'll throw exceptions, which usually ends up with people writing code that forgets about the exception because the compiler doesn't care about it, and the code works 99% of the time until it completely blows up.
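With a real tagged union, all four outcomes fit into one type and the compiler tracks which one you're holding. A rough sketch (Rust syntax, invented names):

    enum CsvRead {
        NotFound,
        Unreadable(std::io::Error),
        Malformed { line: usize },
        Parsed(Vec<Vec<String>>),
    }

    fn describe(r: CsvRead) -> String {
        // the rows are only reachable inside the Parsed arm
        match r {
            CsvRead::NotFound => "no such file".to_string(),
            CsvRead::Unreadable(e) => format!("read failed: {e}"),
            CsvRead::Malformed { line } => format!("bad CSV at line {line}"),
            CsvRead::Parsed(rows) => format!("{} rows", rows.len()),
        }
    }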
If you want to force people to handle the above 3 cases, couldn't you just throw separate checked exceptions (e.g. in Java)? In that case the compiler does care about it. You can still catch and ignore, but that IMO is not a limitation of the language's expressiveness.
Checked exceptions would have been an ok idea if it weren't for the fact that at least when I was writing Java last (almost 10 years ago) they were expressly discouraged in most code bases. Partially because people just get in the lazy habit of catch and rethrow RuntimeException, or catch and log, etc. when confronted with them. Partially because the JDK itself abused them in the early days for things people had no hope of handling properly.
They also tend to defer handling out into places where the context isn't always there.
The trend in language design does seem to be more broadly away from exceptions for this kind of thing and into generic pattern matching and status result types.
> Partially because people just get in the lazy habit of catch and rethrow RuntimeException, or catch and log, etc. when confronted with them.
After quite a while of thinking this way, I came to the conclusion that:
95% of the time, there's no way to 'handle' an error in a 'make it right' sense. Disk write failed? REST request failed? DNS lookup? There usually isn't an alternative to logging/rethrowing.
When there is a way to handle an error (usually by retrying?), it's top level anyway.
Furthermore, IO is the stuff that can just 'go wrong' regardless of how good the programmer is, and IO tends to sit at the bottom in most Java programs. This means every method call is prone to IOExceptions.
If IOException on a read is truly happening, and it isn't just a case of a missing file, there are serious issues that aren't going to be fixed with a catch-and-log, or be able to be handled further up the call stack.
One benefit I've found with error-enums is just being aware of all the possible errors that can occur. You're right: 95% of the time you can't do anything except log/retry. But that 5% of the time become runtime bugs which are a massive pain. It's really nice when that is automatically surfaced for you at development time.
Honest question: do you think this kind of stuff is going to be adopted by the majority in the next decade or two? Because I'm looking at it and adding even more language features like that seems to make it even harder to read someone else's code.
um... you realize the parent post is talking about having sum types in statically typed languages (e.g. Rust), when you already do this all the time in dynamic languages like JavaScript and Python, right?
So, I mean, forget 'the next decade or two'; the majority of people are doing this right now; Python and JS are probably the two most popular languages in use right now.
Will it end up in all statically typed languages? Dunno; I guess probably not in Java or C# any time soon, but Swift and Kotlin support them already.
...i.e. if your excuse for not wanting to learn it is that it's probably an edge case that most people don't have to care about now and probably never will, you're mistaken, I'm afraid.
It's a style of code that is very much currently in use.
Are the majority actually writing code like this though? In the case of dynamic languages, this property seems more like an additional consequence of how the language behaves. It's not additional syntax.
I really don't know what more to say about this; if you don't want to use them, don't. ...but if your excuse for not using them is that other people don't, it's wrong.
Because even with generics, you are not able to express "or"; two different choices of types that have _different_ APIs. With generics, you can express n different choices of types that have all the _same_ API.
It's a good software engineering principle to make control and data flow as streamlined as possible, for similar data. Minimize branches and special cases. Generics help with this, they hide the "irrelevant" differences, surfacing only the relevant.
On the other hand, if there are _actually_ different cases, that need to be handled differently, you want to branch and you want to express that there are multiple choices. Sum types make this a compiler-checked type system feature.
Let's take Rust's hash map entry api[0], for example. How would you represent the return type of `.entry()` using only a class hierarchy?
    use std::collections::hash_map::Entry;

    let v = match map.entry(key) {
        Entry::Occupied(mut o) => {
            *o.get_mut() += 1;
            o.into_mut()
        }
        Entry::Vacant(v) => {
            update_vacant_count(v.key());
            v.insert(0)
        }
    };
I view sum types as enabling exactly the kind of exactness you describe in your last line, especially since you can easily switch/match on a specific subtype if you realize you need that, without adding another method to the base class and copying it into the x subclasses you have for implementing the different behavior.
Rust has both generics and sum types, and benefits enormously from both.
And sum types aren’t for “radically different types”. You can define an error type to be one of different options (I.e. a more principled error code), or to represent nullability in the type system, or to indicate fallibility without relying on exceptions, etc.
Rust uses all of these to great effect, and does so because these sum types are generic.
> It's also incredibly unsafe and why generics aren't enough. C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common.
uh, you'd never get a null-pointer exception in C++ given the type that OP mentioned. Value types in C++ cannot be null (and most things are value types by a large margin).
They can just not exist. And C++ being C++, dereferencing an empty std::optional is UB. In practice this particular UB often leads to way worse consequences than more "conventional" null-pointer derefs.
Then write your own optional that always checks on dereference or toggle whatever compilation flag enables checking in the standard library you are using.
No, you just are forced to use methods like foo.UnwrapOr(default_value) to get the Result. Or depending on the language, you get a compile error if you don't handle both possible values of the Result enum in a switch statement or if/else clause.
Yes you can? The equivalent type in C++ is std::expected[1] which doesn't even contain a pointer that could be dereferenced (unless T is a pointer obviously).
I am replying to you and it's pretty obviously related to your comment.
You: "C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common."
jcelerier: "you'd never get a null-pointer exception in C++ given the type that OP mentioned."
You: "Then you can't construct it unless it's successful, no?"
Me: "The equivalent type in C++ [to what the OP mentioned] is std::expected". It is not possible to get a null-pointer exception with this type and yet you can construct it.
It sounds quite a lot like you took the type the OP posted and changed it in your reply to a different type that isn't standardized yet, do I have that right?
There are two things being discussed in this thread.
1. The first, my original point was that a high quality type system enforces correctness by more than just having generics. There's no proper way in C++ to create this class and make a sum type - there's no pattern matching or type narrowing like can be done in languages with more interesting type systems and language facilities. Generics is just a first step to a much more interesting, safer way of writing code.
2. The second, my replies to folks who have corrected me, and I'll borrow your little paraphrase here:
> [Me]: "C++, Java, and so on have had generics for ages and with types like the one above, null pointer exceptions are incredibly common."
>
> jcelerier: "you'd never get a null-pointer exception in C++ given the type that OP mentioned."
>
> [Me]: "Then you can't construct it unless it's successful, no?"
I think this is exactly correct still. If it's impossible to create an instance of Result<T> without... well, a successful result, you may as well just typedef Result<T> to T, right? If it can't store the "failure" case, it's totally uninteresting.
If it _can_ store the failure case, making it safe in C++ is fairly challenging and I dare say it will be a little longer and a little less safe than a Result I can write in a few lines of TypeScript, Scala, Rust, an ML or Haskell derivative, and so on.
Now, I'd love to be proven wrong, I haven't written C++ for a while so the standard may have changed, but is there a way to write a proper enum and pattern match on the value?
It looks like this std::expected thing is neat, but can a _regular person_ write it in their own code and expect it to be safe? I can say with certainty that I can do that for the languages I listed above and in fewer than 10 lines of code.
The linked implementation has a ton of "quality-of-life" things. For instance, comparing two Result values efficiently: you don't want to compare two Result<T> bitwise, and you don't want the "is_valid" flag to be first in the structure layout just to fall back on the automatic default of lexical order (as that would sometimes waste a few bytes), but you do want the "is_valid" flag to be the first thing compared. Do you know of a language that would do that automatically?
It also supports back to C++11 and GCC 4.9, with various fixes for specific compiler versions' bugs, and supports being used with -fno-exceptions (effectively a separate language from ISO C++). Sure, today's languages can do better in terms of prettiness, but so would a pure-C++20 solution that only needs to work with a single implementation.
If you are ready to forfeit some amount of performance, for instance because you don't care that the value of your Result will be copied instead of moved when used in a temporary chain (e.g. `int x = operation_that_gets_a_result().or_else([] (auto&& error) { return whatever; });`), 3/4 of the code can go away (and things will still likely be faster than in most other languages).
That wouldn't change anything to Result<T>'s implicit safety properties. "safe + unsafe == unsafe" - to have a meaningful discussion we should focus on the safe part, else it's always possible to bring up the argument of "but you can do ((char*)&whatever)[123] = 0x66;"
> That's a generic container of 0 or 1 elements ;)
Then chances are so are most if not all of the uses of generics OP criticises. The only "non-container" generics I can think of is session types where the generic parameter represents a statically checked state.
Result types are much better than multiple return values. But now the entire Go ecosystem has to migrate, if we want those benefits (and we want consistent behavior across APIs). It'd be like the Node.js move to promises, only worse...
    type Result struct {
        Err  error
        Data SomeType
    }

    func (r *Result) HasError() bool {
        return r.Err != nil
    }

    func bar() *Result {
        ...
        return &Result{ ... }
    }

    ...

    result := bar()
    if result.HasError() {
        // handle result.Err
    }
    // handle result
I'm not really sure I see the benefit to the latter. In a language with special operators and built-in types it may be easier (e.g. foo()?.bar()?.commit()), but without these language features I don't see how the Result<T> approach is better.
Go can't really express the Result<T> approach. In Go, it's up to you to remember to check result.HasError(), just like it's up to you to check if err != nil. If you forget that check, you'll try to access the Data and get a nil pointer exception.
The Result<T> approach prevents you from accessing Data if you haven't handled the error, and it does so with a compile-time error.
Even with Go's draconian unused variable rules, I and my colleagues have been burned more than once by forgotten error checks.
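For contrast, a minimal sketch of the same flow with a real Result type (Rust; `bar` is an invented stand-in): the unchecked access simply doesn't exist.

    fn bar() -> Result<u32, String> {
        Err("boom".into())
    }

    fn main() {
        // there is no `.Data` field to grab prematurely; the only road
        // to the u32 runs through the match, and dropping the Err arm
        // is a compile error rather than a forgotten nil check
        match bar() {
            Ok(data) => println!("got {data}"),
            Err(e) => eprintln!("failed: {e}"),
        }
    }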
> without these language features I don't see how the Result<T> approach is better.
That's the point! I want language features!
I don't want to wait 6 years for the designers to bake some new operator into the language. I want rich enough expression so that if '?.' is missing I just throw it in as a one-liner.
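That's what sum types plus generics buy you: '?.'-style chaining becomes a library one-liner rather than new syntax. This is essentially what std's Option::and_then does:

    // if there's a value, keep chaining; if not, short-circuit
    fn and_then<T, U>(x: Option<T>, f: impl FnOnce(T) -> Option<U>) -> Option<U> {
        match x {
            Some(v) => f(v),
            None => None,
        }
    }

    // and_then(find_foo(), |foo| foo.bar()) -- invented names, but this is
    // the shape of foo()?.bar() spelled as a plain function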
A language with sum types will express Result as Success XOR Failure. And then to access the Success, the compiler will force you to go through a switch statement that handles each case.
The alternative is not the Result type you defined, but something along the lines of what languages like Rust or Haskell define: https://doc.rust-lang.org/std/result/
It's interesting that you say this, because I've had the opposite experience. I wouldn't say it's strictly inferior, because there are definitely upsides. If it was strictly inferior, why would a modern language be designed that way -- there must be some debate right?
I love multiple returns/errors. I find that I never mistakenly forget to handle an error when the program won't compile because I forgot about the second return value.
I don't use go at work though, I use a language with lots of throw'ing exceptions, and I regularly miss handling exceptions that are hidden in dependencies. This isn't the end of the world in our case, but I prefer to be more explicit.
> If it was strictly inferior, why would a modern language be designed that way
golang is not a modern language (how old it is is irrelevant), and the people who designed it did not have a proper language design background (their other accomplishments are a different matter).
Having worked on larger golang code bases, I've seen several cases where errors are either ignored or overwritten accidentally. It's just bad language design.
I cannot think of a language where errors cannot be ignored. In go it is easy to ignore them, but they stick out and can be marked by static analysis. The problems you describe are not solved at the language level, but by giving programmers enough time and incentives to write durable code.
Compare to a language with exception handling where an exception will get thrown and bubbles up the stack until it either hits a handler, or crashes the program with a stack trace.
And I was referring to accidental ignoring. I've seen variations of the following several times now:
    res, err := foo("foo")
    if err != nil { ... }
    if res != nil { ... }

    res, err = foo("bar")
    if res != nil { ... } // err from the second call is never checked
> fmt.Println() is blacklisted for obvious reasons
That's the issue with the language, there are so many special cases for convenience sake, not for correctness sake. It's obvious why it's excluded, but it doesn't make it correct. Do you want critical software written in such a language?
Furthermore, does that linter work with something like gorm (https://gorm.io/) and its way of handling errors? It's extremely easy to mis-handle errors with it. It's even a widely used library.
In rust, errors are difficult to ignore (you need to either allow compiler warnings, which AFAICT nobody sane does, or write something like `let _ = my_fallible_function();` which makes the intent to ignore the error explicit).
Perhaps more fundamental: it’s impossible to accidentally use an uninitialized “success” return value when the function actually failed, which is easy to do in C, C++, Go, etc.
> Does any language save you from explicitly screwing up error handling?
It's about the default error handling method being sane. In exception based languages, an unhandled error bubbles up until it reaches a handler, or it crashes the program with a stacktrace.
Compare to what golang does, it's somewhat easy to accidentally ignore or overwrite errors. This leads to silent corruption of state, much worse than crashing the program outright.
That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.
> That's one point in this discussion. The language allows error handling that way. Compared to a language with proper sum types or exceptions, where one would have to actively work against the language to end up with that mess.
I've seen a bunch of code that does the equivalent of the Java I posted above. Mostly when sending errors across the network.
because it has try/catch. Without that (which would be similar to not checking the err in go) it explodes or throws to a layer up that may not expect it.
I would say it is a very ergonomic way of doing this. It allows for writing in a more exploratory way until you know what your error handling story is. Then, even if you choose to propagate it later, you just add it to your signature. Also it is very easy to grok and clear. Definitely not strictly inferior.
It's a lot cleaner to pass a Result<T> through a channel or a slice than to create two channels or slices and confirm everyone's following the same convention when using them.
I concede that there are probably scenarios where this design makes sense within that context. I typically find that either I care about a single error and terminating the computation, or I don't care about errors at all. In the former case, the primitives in the sync package (or just an error channel which we send to once and close) are adequate. The latter case presents no issues, of course.
At $work we definitely have examples where we care about preserving errors, and if that tool were implemented in Go a solution like a Result struct containing an error instance and a data type instance could make sense.
It has a bunch of invalid states (message and data both set, neither set, message set but IsSuccess is true, etc.). So you have to either check it every time, or you'll get inconsistent behaviour miles away from where the actual problem is. It's like null but even more so.
Well, for one thing, it doesn't actually work like a proper Optional<T> or Either<T, string> type. It works more like Either<(T, string),(T, string)>, which might have some uses, but isn't typically a thing someone would often reach for if they had a type system that readily supported the other two options.
Now you have two different implementations of the same fundamental idea, but they each require different handling. In Go, where many things simply return an error type in addition to whatever value(s), you would now have three different approaches to error handling to deal with as opposed to just whatever the language specified as the best practice.
> which then makes it very difficult for anyone else to figure out what is going on
Or we can learn to read them. Just treat types like a first class value. You either assign names to types like you do to values, or you can assign a name to a function that returns a type, this being generics.
That's an awful way to think about hard to read code. I could produce the most unreadable one liners you've ever seen in your life. We should condemn that and not blame it on others to "learn how to read".
> That's an awful way to think about hard to read code
Most of the time, "hard to read code" means "pattern I don't currently have a mental model for". We didn't move on from COBOL by letting that be a deterrent.
Fair, I've actually seen both types of situations. I only complain after having some domain knowledge of the project and the language/tools. After sufficient understanding, I will make sure that the code that gets merged into master is highly readable. Simple > complicated. Always. Don't be ashamed to write simple code.
You write code for an audience. In that audience sit yourself in your current state, yourself a year+ from now, your colleagues (you know their level), and the compiler. With bad luck, that audience includes your current self in a hair-pulling debugging state.
I expect after a flurry of initial excitement, the community will settle on some standards about what it is and is not good for that will tend to resemble "Go 1.0 + a few things" moreso than "A complete rewrite of everything ever done for Go to be in some new 'generic' style".
> I like generics for collections but that is about it.
What about algorithms (sorts, folds, etc) on those containers? I write a lot of numerical code. It sucks to do multiple maintenance for functions that work on arrays of floats, doubles, complex floats, and complex doubles. Templates/Generics are a huge win for me here. Some functions work nicely on integer types too.
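For example, a single generic kernel can replace the four hand-maintained copies; a sketch assuming the num-traits crate as a dependency (Complex<T> gets these traits via num-complex):

    use num_traits::Zero; // third-party crate, assumed as a dependency

    // works unchanged for f32, f64, Complex<f32>, Complex<f64>, integers...
    // (Zero already implies Add<Output = T>)
    fn sum<T: Zero + Copy>(xs: &[T]) -> T {
        xs.iter().copied().fold(T::zero(), |acc, x| acc + x)
    }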
At this point I'd like to summon to go-generics defense all the PHP and Javascript developers who assert unwaveringly "Bad language design doesn't cause bad code; bad programmers cause bad code."
Counterpoint: languages (and libraries, and frameworks, and platforms) so well-designed that they introduce a "pit of success"[1] such that bad programmers naturally write better code than they would have done otherwise.
For example, what if PHP could somehow detect string concatenation in SQL queries and instantly `die()` with a beginner-friendly error message explaining query parameterisation? Tens of billions of dollars of PHP SQL injection vulnerabilities simply never would have happened. And people already writing database queries with string concatenation in VB and Java who gave PHP a try would have been forced to learn the benefits of parameterisation, then taken that improved practice back to their VB and Java projects: a significant net worldwide improvement in code quality!
I've been writing in TypeScript for about 5 years now, and I'm in love with its algebraic type system. Whenever I switch back to C#/.NET projects, it makes me push the limits of what we can do with .NET's type system just so I can have (or at least emulate as closely as possible) the features of TypeScript's type system.
(As for generics - I've wondered: what if every method/function was "generic", insofar as any method's call-site could redefine that method's parameter types and return types? Of course then it comes down to the "structural vs. nominative typing" war... but I'd rather be fighting for a hybrid of the two than trying to work around a poorly-expressive type system.)
And that's among the reasons it's been left out of Go. Go design was guided by experience working on large software systems; the risk with making a language too flexible is that developers begin building domain-specific metalanguages inside the language, and before you know it your monolingual codebase becomes a sliced-up fiefdom of various pieces with mutually-incompatible metasyntax that defeats the primary advantage of using one language: developers being able to transition from one piece of the software to another without in-depth retraining.
For enterprise-level programming (which is the environment Go grew up in), a language that's too flexible is a hindrance, because you can always pay for more eng-hours, but you can't necessarily afford smarter programmers.
The idea is that an ID is just an int under the hood, but ID<User> and ID<Post> are different types, so you can’t accidentally pass a user id where a post id is expected.
Now, this is just a simple example that probably won’t catch too many bugs, but you can do more useful things like have a phantom parameter to represent if the data is sanitized, and then make sure that only sanitized strings are displayed.
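A sketch of the pattern (Rust; names invented), where the type parameter exists purely at compile time:

    use std::marker::PhantomData;

    struct Id<Of> {
        value: u64,
        _owner: PhantomData<Of>, // zero-sized: no runtime cost
    }

    struct User;
    struct Post;

    fn delete_post(_id: Id<Post>) { /* ... */ }

    // let uid = Id::<User> { value: 7, _owner: PhantomData };
    // delete_post(uid); // compile error: expected `Id<Post>`, found `Id<User>`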
Oh neat! Most languages make it a little bit verbose to create these kinds of wrapper types for type safety (with zero overhead), so it's nice that Go has that.
I think the generic approach is a little bit better because of the flexibility, but this approach is still better than not having it at all.
The go team's attempt at involving everyone in the priorities of the language has meant they lost focus on the wisdom of the original design. I spent 10 years writing go and I'm now expecting to have to maintain garbage go2 code as punishment for my experience. I wish they focused on making the language better at what it does, instead of making it look like other languages.
That said the go team is incredibly talented and deserve a lot of kudos for moving much of the web programming discussion into a simpler understanding of concurrency and type safety. Nodejs and go came out at the same time and node is still a concurrency strategy salad.
If you don't understand someone else's code, you can either tell them their stuff is too complicated, or learn and understand it better.
There can be a middle ground of course.
Most of the time, if code is hard to understand, it's bad code. Just because someone writes complex code that uses all the abstractions doesn't mean it's good. Usually it means the opposite.
I'd like generics for concurrency constructs. Obvious ones like Mutex<T> but generics are necessary for a bunch of other constructs like QueueConsumer<T> where I just provide a function from T -> error and it will handle all the concurrent consumption implementation for me. And yes, that's almost just a chan T except for the timeouts and error handling and concurrency level, etc.
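A minimal single-threaded sketch of that idea (std mpsc; names invented), just to show where the generics earn their keep:

    use std::sync::mpsc::Receiver;

    // the caller supplies only "what to do with one T"; the loop owns the
    // plumbing (and, in a fuller version, the timeouts and worker fan-out)
    fn consume<T>(rx: Receiver<T>, handle: impl Fn(T) -> Result<(), String>) {
        for item in rx {
            if let Err(e) = handle(item) {
                eprintln!("handler error: {e}");
            }
        }
    }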
There is an underappreciated advantage to using generics in function signatures: they inform you about exactly which properties of your type a function is going to ignore (this is called parametricity: https://en.wikipedia.org/wiki/Parametricity)
For instance, if you have a function `f : Vec<a> -> SomeType`, the fact that `a` is a type variable and not a concrete type gives you a lot of information about `f` for free: namely that it will not use any properties of the type `a`, it cannot inspect values of that type at all. So essentially you already know, without even glancing at the implementation, that `f` can only inspect the structure of the vector, not its contents.
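A one-line illustration:

    // the signature alone proves this function never inspects, compares,
    // or fabricates a T; it can only select from the structure it was given
    fn first<T>(v: &[T]) -> Option<&T> {
        v.first()
    }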
Agreed. From a quick skim of the Go generics proposal I get the impression that they are in fact aiming for parametric generics though (in fact they use the term "parametric polymorphism" in the background section).
I like generics but I find that it is often best to start out writing a version which is not generic (i.e. explicitly only support usize or something) then make it generic after that version is written. As a side benefit, I find that this forces me to really think about if it should actually be generic or not. One time I was writing a small Datalog engine in Rust and was initially going to make it take in generic atoms. However, I ended up deciding after going through the above process that I could just use u64 identities and just store a one to one map from the actual atoms to u64 and keep the implementation simpler.
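The interning trick looks something like this (a sketch, not the actual engine):

    use std::collections::HashMap;

    struct Interner {
        ids: HashMap<String, u64>,
        atoms: Vec<String>, // id -> atom, for translating results back out
    }

    impl Interner {
        fn intern(&mut self, atom: &str) -> u64 {
            if let Some(&id) = self.ids.get(atom) {
                return id;
            }
            let id = self.atoms.len() as u64;
            self.atoms.push(atom.to_owned());
            self.ids.insert(atom.to_owned(), id);
            id
        }
    }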
I agree with the sentiment that it is very easy to overuse generics, though there are scenarios where they are very useful.
For Java / C#, in my experience, I've made that mistake because in both languages class declarations are very verbose, and generics end up being the only way to solve problems that would otherwise require dynamic typing / variables.
In TypeScript I don't need generics as much, or as complex ones, because the type definitions are more lax, and we can fall back to dynamic typing in the very complex scenarios.
Honestly as long as you learn when to use generics and when to not use them there are a lot of very useful ways to encode state/invariant into the type system.
But I also have seen the problem with overuse of generics and other "advanced" type system features first hand (in libraries but also done by myself before I knew better).
I've done this to one of my pet projects (thankfully unreleased). It just makes debugging/editing on the fly more difficult.
I'd love to unwind the mess. But that'll take days to fix what I caused in minutes!
It's a big foot gun.
Yeah, I actually think just having a built-in generic linked list, tree, and a few other abstract data types would solve 90% of everyone's problems.
Part of the good thing about Go is you solve more problems than you create.
What I don't understand is that so many of the world's great intellectuals lived in beautiful cities and towns. University towns are often really pretty, with quaint buildings, both in Europe and on the East Coast.
Then how did SV manage to do well? Butt ugly warehouses and suburban office buildings.
While I can agree that people do tend to think that good equates to rich, I don't think being able to make money makes you somehow intellectual or great. Sure, people conflate these things in general but that doesn't make it the fact of the matter because people think it.
This is similar to how some people view the law as being a moral compass for right or wrong. While a lot of people do tend to view doing something illegal as doing something wrong, it's not actually true that just because you're doing something illegal you are doing something wrong.
So what you said is true in that people see it that way, not in that it actually is that way, or ought to be that way. I think your comment sort of conflates this distinction, whether intentionally or unintentionally, hence the downvotes.
As a Brit who's been to SV a few times, I don't get the typical American disdain for the scenery. Stanford University is absolutely beautiful and so many of the neighborhoods I saw in Palo Alto were lined with trees and full of beautiful houses. You have a great hilly backdrop and even the 280 through Los Altos is gorgeous. There are endless blocks of offices and more modest housing too, of course, but I think the majority of British cities look a lot worse than what the Bay Area has to offer – they're not all as gorgeous as Cambridge, Oxford or Bath. Forgetting the horrific cost of living, I'd rather live amongst the scenery of SV than on the outskirts of Reading, Guildford, or Manchester, say.
A lot of amazing research and startups come out of UC Berkeley, and UC Santa Cruz and UC Davis, while on the fringe of the Bay Area and arguably outside of it, do very interesting research in biochemistry and materials.
I wouldn't consider Santa Cruz to be part of SV and definitely not Davis. There are plenty of significant educational institutions in the Bay Area and even more if you expand the net to Northern California, but SV's prominence is driven more by the location there of early tech companies than any educational drivers outside of Stanford (although Stanford is very important).
I disagree, I think education is one of the foundational pieces of what created SV and helps maintain is tech ecosystem. There was something that got those early tech companies to be here in the first place and I don't think it was just luck.
My opinion is that there are 3 main drivers and that top tier education is one of the most important ones.
They go like this:
Higher education that has enough gravity to aggregate diverse 'cutting edge' people from a variety of pursuits.
A reason for those smart people to stay that isn't just money. I think that's lifestyle and access to nature. You have the ocean, mountains, wine country, and relatively good weather.
The last is capital, I think saying companies is putting the cart before the horse. When I say capital I don't just mean startup capital. The ecosystem has to also have liquidity. There are lots of areas that have one of the 3, but very few that have all of them.
I think perception of novelty and 'quaintness' depends on where you grew up. As someone from the UK: Oxford and Cambridge are, yes, absolutely beautiful, but they were built for the education of, and sponsored by, kings and society's elite. They're pretty good places to live, but they're not amazing, at least by modern standards. They're also not the source of the UK's industries, which have historically been based in cities that are perhaps still quaint by new-world standards but much less idyllic, like Manchester and of course London, where labour and markets are strong.
The most beautiful cities in the UK are beautiful externally, but very hard to live in unless you want to live a very specific kind of life out in the countryside or are very rich. They're either very expensive, or very quiet and usually both.
From my perspective having worked between the UK and the US for 6 years: California has amazing amenities, and housing tends to be cheaper than London, especially outside of San Francisco. The cities are much dirtier and scarier, but the weather is incredible and natural beauty when you leave the cities is amazing.
I think it's easy to have a positively skewed picture of Europe as a non-european because of how drastically different 'bad' or 'common' places look. It's similar for me when I visit the US. We have endless Victorian housing estates that were built for the very poor that look like fairytale cottages to those from across the ocean.
Ultimately, though, I think industry in Northern California is a product of the money and workers that live there, rather than its attractiveness as a place to live, as with other industrial booms.
There are two conflicting attitudes for Prestige universities
A) Ivies/Stanford are better than regular universities because they have resources, powerful alumni and well connected students
B) They should let XYZ in because they have great test scores
If B were really the most important, A would no longer be true.
There are so few clouds that it doesn't work like this. When I saw it first-hand, the way it was done was that prices were set to match the competition. Occasionally somebody would reduce prices, and others would match. Since offerings are not exactly the same, there are variations, but overall, for basic services like VMs and storage, neither cloud will give you a significant advantage in price.
Even if Google’s prices were half that of Amazon’s, it’s really hard to quantify the savings since the platform offerings are not identical, plus engineering switching costs could easily outstrip a company’s yearly cloud costs.
The problem is that software no-one wants often has the best careers.
I worked on a system used by many people and it was hard work: the system was 10-15 years old, held together with tape, with lots of users who all had their gripes and requests.
I moved to a system hardly used by anyone and life is so much better. I get to play with interesting technology and no one minds, because there is no risk of annoying our users, who don't really care. The best thing is I get paid more here too.
What area? I've mostly quit software because of how much I hate working on complete garbage and also don't have the schooling/engineering skills to work on truly cool stuff. Pivoted to wood working.
Funny, I know a guy who did woodworking (and construction) and then pivoted to software.
I worked with him on a consulting project used by nearly 30 million people, great dude and great experience.
On the other hand, I didn't have an engineering degree, and worked on software for a major bank, and then a consulting company where I worked with ex-construction worker.
I found a gig on a team at another company where I make 40% more. The team I work on has zero real customers at this point, and we're doing real interesting stuff. We plan on converting our customer base over to the work my team has done, but that's still in the works.
Anyway, I think schooling isn't as important as people may think. You can learn some really cool and cutting edge things if you put your mind to it. And if those skills are marketable (not necessarily valuable, although companies might think so), you can get jobs utilizing those skills.
Tinkering with the tech you're interested in can be a good way to get into a position working on that piece of tech. It might not always work out, but it's worth a shot if you really want a fulfilling and interesting job in that area.
Been in the industry for over a decade most of which was in fintech, some in crypto. I don't count any of that as "cool". I've still got some years left because it's good money but eventually I want to pivot to wood working. I started working on my own apps this year and it's much more enjoyable.
This feels so true! I had a similar experience.. spent 3 years working for big US companies, completely focused on whatever trend would impress investors the most with no one giving any thought to what might actually be _useful_ for people who, you know, actually give us money.
Now I work for a small company doing very niche work and I don't think I could love my work more. I mean people do use our software, but there's no VC funding so no pretense of needing to hop on the latest bandwagon. It's just so much better.
That might look good in the short term, but there are many companies and roles which require you to show the actual number of users, or the load of the service that you worked on. Also many technically challenging issues only come out under load, and actually working on challenging things are very different from reading about them. Just my 2 cents.
> there are many companies and roles which require you to show the actual number of users, or the load of the service that you worked on.
I got my first job as a programmer in 2001 and not once was I asked that. I'm sure they exist but I wouldn't count on that being so common as to significantly impact the OP's career prospects.
Two things I've most often noticed people care about when hiring:
1. experience with the exact tech / field that they're hiring for
2. having brand-name job experience (google, amazon, etc).
It's sad but you'll probably get better mileage from having worked on a useless prestige/pet project at google using fashionable tech than a critical system written with JavaEE & serving a lot of high-value customers at Alliance Generic Insurance Services Corp.
I do agree that there is a risk of it all crashing down. I don't think they ask us for load or users, but they notice an area where money isn't coming in.
Mine is on to that already sadly. Only thing that works now is to start on something when it's really supposed to be bedtime. Then anything becomes extremely interesting and has to be explored immediately.
ICE does support Black and Brown people by keeping illegal immigrants from flooding the market. Wages in many restaurants/building sites in NYC, LA, Florida, Texas are lower than other parts of the country because illegal labor gives an alternative to hiring black employees that rightfully require at least a minimum wage.
Looks like a good product. It's an interesting test of how much people value privacy; I'd imagine 99.9% of people would rather be tracked and have ads than pay for email.
https://www.newegg.com/p/pl?N=100007625%208000%204131%204814...