
Happy to see I'm not alone.

> Go's simplifications often introduce complexities elsewhere

Exactly this.

Basically: have a complex compression algorithm? Yes, it's complex, but the resulting filesize (= program complexity) will be low.

If you use a very basic compression algorithm, it's easier to understand the algorithm, but the filesize will be much bigger.

It's a trade-off. However, as professionals, I think we should really strive to put in the time to properly learn the good, complex compression algorithm once and then benefit from it in all the programs we write.


> I think we should really strive to put time to properly learn

[insert Pike's Google young programmers quote here]

That's just not the philosophy of the language. The convention in Go is to be as obvious as possible, at the cost of more efficient designs. Some people like it, others don't. It bothers me, so I stopped using Go.


Exactly. Look how we are getting downvoted for the truth.

I understand your feeling. There are many magical frameworks, e.g. Spring, that do these things, and it's super hard to figure out what's going on.

The solution is usually to have an even better language: one where the type system is so powerful that such hacks are not necessary. Unfortunately, that also means you have to learn that type system to be productive in the language, and you have to learn it more or less upfront - which is not something Google wanted for golang due to the turnover.


I don't think it's so much a question of a better language as of a different language. There are obviously tradeoffs one makes with all language designs.

What might be interesting is a language ecosystem, where one can write parts of a system in one language and other parts in another. The BEAM and JVM runtimes allow for this but I don't think I've seen any good examples of different languages commingling and playing to their strengths.


Well, what I meant is that there is some language feature missing. So either the language gets "better" in the sense that this feature is added, or a different language is chosen that is already "better" because the feature already exists. So yeah.

> The BEAM and JVM runtimes allow for this but I don't think I've seen any good examples of different languages commingling and playing to their strengths.

Probably because the runtime is always the lowest common denominator. That being said, there are lots of tools written in Scala but then used from Java, such as Akka or Spark. And the other way around, of course.


How does AWS do that though? Do they re-implement all the code in every region? Because even the slightest re-use of code could trigger simultaneous (possibly delayed) downtime across all regions.

Reusing code doesn't trigger region dependencies.

> Do they re-implement all the code in every region?

Everyone does.

The difference is that AWS very strongly ensures that regions are independent failure domains. The GCP architecture is global, with all the pros and cons that implies. E.g. GCP has a truly global load balancer, while AWS cannot, since everything is at its core regional.


They definitely roll out code (at least for some services) one region at a time. That doesn't prevent old bugs/issues from coming up but it definitely helps prevent new ones from becoming global outages.

Right, that makes sense. But if it's an evil bug that only triggers, e.g., at a year change, then that might not help.

So I suppose that theoretically AWS too can go down all at once, even if it's less likely.


Regions (and even availability zones) in AWS are independent. The regions all have overlapping IPv4 addresses, so direct cross-region connectivity is impossible.

So it's actually really hard to accidentally make cross-region calls, if you're working inside the AWS infrastructure. The call has to happen over the public Internet, and you need a special approval for that.

Deployments also happen gradually, typically only a few regions at a time. There's an internal tool that allows things to be gradually rolled out and automatically rolled back if monitoring detects that something is off.


No, Apple should have three options: Yes, No, and Never ask again. Like any other software with good UX.

I think "No" means "Never ask again" in Apple land.

But that doesn't prevent the app from having custom in-app dialogs about it. What are they supposed to do, use "AI" to censor them?


> But that doesn't prevent the app from having custom in-app dialogs about it. What are they supposed to do, use "AI" to censor them?

Can't Apple enforce it by store policy?


> Can't Apple enforce it by store policy?

Not sure. For one they're already famous for their reviews being ... random.

For two, a spammer could just give Apple an account to test with for which the custom popups don't show.


This is spot on, and it reflects a general misunderstanding of security in practice. Availability is often missed/ignored (but it is part of security), and attention is an important currency that needs to be treated carefully - or you end up with the mentioned MFA fatigue attacks or people writing down their passwords.

I have tried to point out that poorly implemented or non-constructive security controls reduce system availability, as employees are not able to get to the information they need in a timely manner.

But it's been a dead end in many an argument. For some, the underlying issue is a refusal to accept that product usability and security are not mutually exclusive, and that a difficult-to-use system just leads to grey IT in the org.

The oddest reply I have received was pedantry about the definition of security availability, i.e.,

"Ensuring data and network resources are accessible to authorized users when needed"

Because it contains the word "authorized", any controls for authorization can therefore never affect availability, as they have to be authorized before we can consider it an impediment to availability...

If anyone has a reply better than "that's ridiculous", please help me out here.


The most secure thing would be to unplug the servers.

edit: I'm agreeing with parent. Availability is part of security. If it weren't, you could unplug the server and call it a day.


"Du" and "du" are generally 100% equivalent. Regular casing-rules apply, e.g. in the beginning of a sentence it's "Du" but inside it's "du". "Kannst du mir helfen?". "Du kannst dir doch selbst helfen!"

Sometimes it's written "Du" even in the middle of a sentence when addressing someone directly. It's technically incorrect, but it's used for emphasis and hence politeness, and that's probably where your feeling comes from.

The same can happen with other words that are getting capitalized for similar reasons, but when going strictly by the book it's grammatically incorrect. An example would be "das Große Ganze" where it should be "große" but it is capitalized to emphasize the connection/phrase.


>It's technically incorrect, but it's used for emphasis and hence politeness, and that's probably where your feeling comes from.

That's wrong, it's not technically incorrect. In fact before 2006 the only correct way to address someone personally in written form was to capitalize the Du / Sie / Ihr. Since then you are allowed to write it either way. I still use the capitalized form because I'm old and that's what I learned back in school.


Fair enough.

> Since then you are allowed to write it either way

Okay, my interpretation is that it doesn't really make sense within the language rules, so they changed it but still allow the old style to make the transition easier. ;-)

> I still use the capitalized form because I'm old and that's what I learned back in school.

Impossible to keep up with all the Rechtschreibreformen anyways.


Thank you and nosebear for the clarification! Now I understand better why some of my colleagues (like my boss, who is older) use "Du" and some don't. I'll stick to not using it; there are enough grammatical pitfalls elsewhere in the German language (not that French is any easier for foreigners, I'm sure).

I love and hate German for this; it's a language whose formal pitfalls and vagaries seem almost designed to sort people into highly-refined strata of education.

It must be so cool to see all of them "from the top" (i.e. as someone who has been natively and highly educated, immersed in the language for their whole life); but from the outside it's like a fancy club that you just can't seem to get into :)


For a single developer? What do you mean?


Not the parent but I imagine something like "the more software caters to big orgs, the worse fit it becomes for individuals"


Sorry, what I meant was that the features on BugSnag’s free plan are enough for me. I feel like Sentry caters more to its business customers.


Don't get your hopes up.


> lets say it can process a request normally in 20us.

Then what if the OS/thread hangs? Or maybe even a hardware issue. Seems a bit weird to have the critical path be blocked by a single mutex. That's a recipe for problems, or am I missing something?


Hardware issues happen, but if you're lucky it's a simple failure and the box stops dead. Not much fun, but recovery can be quick and automated.

What's real trouble is when the hardware fault is something like one of the 16 NIC queues stopping, so most connections work, but not all (it depends on the hash of the 4-tuple); or some bit in the RAM failed and now you're hitting thousands of ECC-correctable errors per second and your effective CPU capacity is down to 10% ... the system is now too slow to work properly, but manages to stay connected to dist and still attracts traffic it can't reasonably serve.

But OS/thread hangs are avoidable in my experience. If you run your BEAM system with very few OS processes, there's no reason for the OS to cause trouble.

But on the topic of a 15ms pause... it's likely that that pause is causally related to cascading pauses; it might be the beginning or the end or the middle. When one thing slows down, others do too, and some processes can't recover once the backlog gets over a critical threshold, which is kind of unknowable without experiencing it. WhatsApp had a couple of hacks to deal with this.

A) Our gen_server aggregation framework used our hacky version of priority messages to let the worker determine the age of requests and drop them if they're too old.

B) We had a hack to drop all messages in a process's mailbox through the introspection facilities, and sometimes we automated that with cron... Very few processes can work through a mailbox with 1 million messages; dropping them all gets to recovery faster.

C) We tweaked garbage collection to run less often when the mailbox was very large --- I think this is addressed by off-heap mailboxes now, but when GC looks through the mailbox every so many iterations and the mailbox is very large, it can drive an unrecoverable cycle, as eventually GC time limits throughput below the accumulation rate and you'll never catch up.

D) We added process stats so we could see accumulation and drain rates and estimate time to drain (or whether the process won't drain) and built monitoring around that.


> we had a hack to drop all messages in a process's mailbox through the introspection facilities and sometimes we automated that with cron...

What happens to the messages? Do they get processed at a slower rate, or on a subsystem that works in the background without more messages constantly being added? Or do you just nuke them out of orbit and not care? That doesn't seem like a good idea to me, since it means loss of information. Would love to know more about this!


Nuked; it's the only way to be sure. It's not that we didn't care about the messages in the queue, it's just that there are too many of them, they can't be processed, and so into the bin they go. This strategy is more viable for reads and less viable for writes, and you shouldn't nuke the mnesia processes' queues, even when they're very backlogged ... you've got to find a way to put backpressure on those things --- maybe a flag to error out on writes before they're sent into the overlarge queue.

Mostly this is happening in the context of request/response. If you're a client and connect to the frontend, you send an auth blob, and the frontend sends it to the auth daemon to check it out. If the auth daemon can't respond to the frontend in a reasonable time, the frontend will drop the client; so there's no point in the auth daemon looking at old messages. If it has developed a backlog so high it can't work it back down, we've failed and clients are having trouble connecting, but the fastest path to recovery is dropping all the current requests in progress and starting fresh.

In some scenarios, even if the process knew it was backlogged and wanted to just accept messages one at a time and drop them, that's not fast enough to catch up to the backlog. The longer you're in unrecoverable backlog, the worse the backlog gets, because in addition to the regular load from clients waking up, you've also got all those clients that tried and failed coming back to retry. If the outage is long enough, you do get a bit of a drop-off, because clients that can't connect don't send messages that require waking up other clients, but that effect isn't so big when you've only got a large backlog on a few shards.


If the user client is well implemented, either it or the user notices that an action didn't take effect and tries again - similar to what you would do if a phone call was disconnected unexpectedly, or what most people do when a clicked button doesn't have the desired effect, i.e. click it repeatedly.

In many cases it's not a big problem if some traffic is wasted, compared to desperately trying to process exactly all of it in the correct order, which at times might degrade service for every user or bring the system down entirely.


Depending on what you want to do, there are ways to change where the blocking occurs, like https://blog.sequinstream.com/genserver-reply-dont-call-us-w...


Part of the problem with BEAM is that it doesn't have great ways of dealing with concurrency beyond gen_server (effectively a mutex) and ETS tables (https://www.erlang.org/doc/apps/stdlib/ets). So I think usually the solution would be to use ETS if possible - which is kind of like a ConcurrentHashMap in other languages - or to shard or replicate the shared state so it can be accessed in parallel. For read-only data that does not change very often, the BEAM also has persistent_term (https://www.erlang.org/doc/apps/erts/persistent_term.html).


> you no longer know when code is executed - it's magic

Kind of. That is what makes React a framework and not a library, in my opinion. That being said, it's still learnable and manageable (in pure React).

> state can be managed in ~5 different ways, with a lot of ceremony and idiosyncrasies

Not a valid argument. Just use React state and be done with it. If you go for anything on top, well, yeah. But to be honest, other frameworks have the same problem, even in the backend.

> It is, absolutely, an eDSL (`useState` & `useEffect` introduce new semantics using JS syntax), and a proper DSL would be a lot easier to learn.

That is absolutely true and a valid point. It's mostly a shortcoming of JavaScript/TypeScript, where it would be too annoying to pass down dependencies/props in a long chain of elements. So in an insufficient programming language you have the choice between type safety + verbosity (React without hooks) or conciseness + unsafety (hooks are not typesafe, they are not regular functions and hence can't be refactored like them, etc.). Basically, they break composability.
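
To make the "not regular functions" point concrete, here's a minimal sketch (the Counter component and its prop are made up for illustration):

    import { useState } from "react";

    function Counter({ enabled }: { enabled: boolean }) {
      // Fine: an unconditional hook call at the top level of the component.
      const [count, setCount] = useState(0);

      // Not allowed: the rules of hooks forbid conditional calls, even though
      // the type checker is perfectly happy with this line.
      // if (enabled) { const [extra, setExtra] = useState(0); }

      return (
        <button disabled={!enabled} onClick={() => setCount(count + 1)}>
          {count}
        </button>
      );
    }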

Honestly, the frontend world circles around that problem, and every blue moon someone thinks they have found enlightenment, and then they give up the previous benefits for new benefits. And the circle repeats.

I wonder if maybe effect.website might be able to change that in the future. It has the potential.


> It's mostly a shortcoming of JavaScript/TypeScript, where it would be too annoying to pass down dependencies/props in a long chain of elements

Do you know a language that has a solution to this at the language level?

The only similar thing I can think of is dynamic vars in Lisp. At a quick glance they remind me of React Context.


I personally use that style in Scala with ZIO, because the same problem exists in basically every language. I think more and more languages are capturing something like that (and more) under the term "capabilities". In a sense, the Rust borrow checker is a very specialized case.

https://effect.website/ offers something very similar in their Effect type: the "environment" parameter.

Basically, imagine you have many foos like `fun foo1(props: Foo1Props): string {...}` and so on, and they all call each other. Now you have one foo999 at the very bottom that is called by a foo1 at the very top. But not directly. foo1 calls foo5 which calls foo27 and so on, which then calls foo999.

Now if foo999 needs a new dependency (let's say it needs access to user-profile info that it didn't need before) you have to change the type signature and update Foo999Props. To provide it, you need to update the caller, foo567, and also add it there. And so on, until you find the first fooX that already has it. That is annoying, noisy (think of PR reviews) and so on.
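
To make that concrete, here's a minimal sketch of the plain prop-drilling version (the interfaces and the user-profile dependency are made up, following the fooN naming above):

    // foo999, at the bottom, suddenly needs user-profile info.
    interface UserProfile { displayName: string }

    interface Foo999Props { profile: UserProfile }   // <- new field
    function foo999(props: Foo999Props): string {
      return `hello ${props.profile.displayName}`;
    }

    // Every caller up the chain now has to accept and forward the profile,
    // even though it never uses it itself.
    interface Foo5Props { profile: UserProfile }
    function foo5(props: Foo5Props): string {
      return foo999({ profile: props.profile });
    }

    interface Foo1Props { profile: UserProfile }     // <- and so on, up to the top
    function foo1(props: Foo1Props): string {
      return foo5({ profile: props.profile });
    }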

Using the effect type of effect.website, you can basically move the dependency into the return type. So instead of returning `string`, `fun foo1` will now return `Effect<string, Error, Dependencies>`, where Dependencies would be Foo1Props - which means it will not have the props parameter anymore. (Or, the way I do it, I only use the Dependencies for long-lived services, not for one-off parameters.)

Inside of foo1 (or fooX) you can now access the "Dependencies". It's basically a kind of "inversion of dependencies", if you will.

It seems like this just moves the required dependencies from the parameters into some weird, complex result type. But there is a big difference: you have to annotate the parameter, but you can have the compiler infer the return type.

So now if you have a huge call chain / call graph, the compiler will automatically infer all dependencies of all functions for you (thank god TypeScript has good union types that make that work - many other languages fail at being able to do so).

So you write:

    def foo5(nonRelevantProps: Props) = 
      (props) => { const x1 = foo10(); const x2 = foo20(); return x1+x2; }
Where foo10 is of type `Props => ServiceA` and foo20 is `Props => ServiceB` and the compiler will infer that foo5 returns an effect that needs both ServiceA and ServiceB - and so on. (note that you have to combine the results in a different style, I used just `const x = ...` to keep it simple)

So if you make a change very far down the call graph, it will automatically propagate up (as long as you don't explicitly annotate types). But you still have full type safety and can see for each function (from the inferred type) which dependencies it needs, so you know during a test what you have to pass and what not.
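
For the curious, here's a rough, runnable sketch of that pattern with the effect library (written from memory, so the exact API may differ slightly between versions; the UserProfile service is made up):

    import { Effect, Context } from "effect"

    // A made-up long-lived service/dependency.
    class UserProfile extends Context.Tag("UserProfile")<
      UserProfile,
      { readonly displayName: string }
    >() {}

    // foo999 asks for the dependency; the requirement shows up in the
    // inferred type: Effect.Effect<string, never, UserProfile>.
    const foo999 = Effect.gen(function* () {
      const profile = yield* UserProfile
      return `hello ${profile.displayName}`
    })

    // foo1 just calls foo999 and inherits the requirement automatically,
    // without mentioning UserProfile in its own code at all.
    const foo1 = Effect.gen(function* () {
      const greeting = yield* foo999
      return greeting.toUpperCase()
    })

    // Only at the very top is the dependency actually provided.
    const program = foo1.pipe(
      Effect.provideService(UserProfile, { displayName: "Ada" })
    )

    Effect.runSync(program) // "HELLO ADA"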

Lisp is a completely different world because it is dynamically typed. In a sense it doesn't have the problem in the first place, but it sacrifices type safety for that.

> At a quick glance they remind me of React Context.

Yes, the intent is basically the same, but React Context is... well, an inferior version, because if you use a context where it has not been provided, it just blows up at runtime and no one protects you from that.
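
For illustration, this is the failure mode I mean (UserContext and useUser are made up; the usual workaround is a wrapper hook that throws):

    import { createContext, useContext } from "react";

    // null stands for "no provider above us" in the component tree.
    const UserContext = createContext<{ name: string } | null>(null);

    function useUser() {
      const user = useContext(UserContext);
      // Nothing in the types forces a <UserContext.Provider> to exist up the
      // tree, so the best we can do is fail at runtime.
      if (user === null) throw new Error("UserContext was not provided");
      return user;
    }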

