
It's surprisingly complex.

Connections are dropped all the time, and then your code, on both client and server, needs to account for retries (will the reconnection use a cached DNS entry? how will load balancing affect long-term connections?), potentially missed events (now you need a delta between pings), DDoS protections (is this the same client connecting from 7 IPs in a row, or is this a botnet?), and so on.

Regular polling greatly reduces complexity on some of these points.
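
To give a sense of what the websocket side alone involves, here's a rough browser-side sketch in TypeScript (the endpoint, the since parameter and the message shape are all invented for illustration):

    // Hypothetical reconnect loop using the standard browser WebSocket API.
    // lastSeenId tracks the newest event we processed, so the server can
    // replay anything missed while we were disconnected.
    let lastSeenId = 0;
    let attempt = 0;

    function handle(msg: { id: number; payload: unknown }): void {
      // application-specific processing goes here
    }

    function connect(): void {
      const ws = new WebSocket(`wss://example.com/events?since=${lastSeenId}`);

      ws.onopen = () => { attempt = 0; };

      ws.onmessage = (ev: MessageEvent) => {
        const msg = JSON.parse(ev.data as string);
        lastSeenId = msg.id;   // remember progress for the next reconnect
        handle(msg);
      };

      ws.onclose = () => {
        // Exponential backoff with a cap, so a flaky network or a load
        // balancer draining connections doesn't turn into a retry storm.
        const delay = Math.min(30_000, 1000 * 2 ** attempt++);
        setTimeout(connect, delay);
      };
    }

    connect();

None of it is hard individually, but every decision here (backoff, resume point, when to give up) has to be made, tested, and mirrored on the server.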


Long polling has nearly all the same disadvantages. Disconnections are harder to track, DNS works exactly the same for both techniques, as does load balancing, and DDoS is specifically about different IPs trying to DoS your system, not the same IP creating multiple connections, so it's irrelevant to this discussion.

Yes, WS is complex. Long polling is not much better.

I can’t help but think that if front-end connections are destroying your database, then your code is not structured correctly. You can accept both WS and long polls without touching your DB, with a single dispatcher then sending the jobs to the waiting connections.
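
For what it's worth, the dispatcher can be tiny. A minimal in-memory sketch in TypeScript (the names and shapes are made up; a real system would add per-client buffering and timeouts):

    // Hypothetical in-memory dispatcher: the HTTP/WS layer registers
    // "waiters" (an open websocket, or a parked long-poll response) and
    // the rest of the app just calls publish(). The database is never touched.
    type Waiter = (event: unknown) => void;

    class Dispatcher {
      private sockets = new Set<Waiter>();  // long-lived websocket subscribers
      private polls: Waiter[] = [];         // one-shot long-poll waiters

      addSocket(send: Waiter): () => void {
        this.sockets.add(send);
        return () => this.sockets.delete(send);  // call this on disconnect
      }

      addPoll(respond: Waiter): void {
        this.polls.push(respond);  // completed by the next event (or a timeout)
      }

      publish(event: unknown): void {
        for (const send of this.sockets) send(event);
        for (const respond of this.polls.splice(0)) respond(event);
      }
    }

The websocket handler registers ws.send bound to its connection, the long-poll handler registers whatever completes its pending HTTP response, and publish() doesn't care which is which - and never touches the database.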


My understanding is that long polling handles these issues by assuming the connection will be regularly dropped.

Clients using mobile phones tend to have their IP addresses change in rapid succession.

I didn't mention databases, so I can't comment on that point.


Well, it’s the same in both cases. You need to handle disconnection and reconnection. You need a way to transmit missed messages, if that’s important to you.

But websockets also guarantee in-order delivery, which is never guaranteed by long polling. And websockets play way better with intermediate proxies - since nothing in the middle will buffer the whole response before delivering it. So you get better latency and better wire efficiency. (No http header per message).


That very in-order guarantee is the issue. It can't know exactly where the connection died, which means that the client must report the last update it received, and the server must then crawl back through a log to find the pending messages and redispatch them.

At this point, long polling seems to carry more benefits, IMHO. WebSockets seem to be excellent for stable conditions, but not quite what we need for mobile.


> It can't know exactly where the connection died, which means that the client must report the last update it received, and the server must then crawl back through a log to find the pending messages and redispatch them.

I don't see how this is meaningfully different for long polling. The client could have received some updates but never ack'd them successfully over a long poll, so either way you need to keep a log and resync on reconnection.
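
Either way, the resync mechanics look roughly the same. A minimal sketch, assuming a single monotonically increasing sequence number per channel (purely illustrative, not from any of the linked material):

    // Hypothetical replay buffer: every published message gets a sequence
    // number; on reconnect (websocket) or on the next poll (long polling)
    // the client sends the last seq it saw and the server replays the rest.
    interface Stored { seq: number; payload: unknown }

    class ReplayLog {
      private seq = 0;
      private buffer: Stored[] = [];  // bounded; a real system would also cap by age

      append(payload: unknown): Stored {
        const msg = { seq: ++this.seq, payload };
        this.buffer.push(msg);
        if (this.buffer.length > 10_000) this.buffer.shift();  // drop oldest
        return msg;
      }

      // Everything the client missed since the last seq it acknowledged.
      since(afterSeq: number): Stored[] {
        return this.buffer.filter(m => m.seq > afterSeq);
      }
    }

If a client falls further behind than the buffer covers, both approaches degrade to a full resync.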


If a connection is closed, isn't it the browser's responsibility to resolve DNS when you open it again?

Mercosur, Australia-New Zealand, and the Gulf Cooperation Council all have a similar agreement on movement of labour. It is not particularly rare.

My understanding was that System 1/System 2 thinking is unproven conjecture[1] that can't even be replicated[2]. It would be unwise to analyse behaviour using this framework.

1: https://www.psychologytoday.com/intl/blog/a-hovercraft-full-... 2: https://replicationindex.com/2016/01/31/a-revised-introducti...


I don't want to argue the basis of system 1/system 2 as described in [1], because the point I'm taking away is more about whether they interoperate at times of decision making. The point I'm making is that system 2 is a far more costly ("effortful" in the article) mechanism of decision making.

The point I'm making is, as an organism we avoid utilizing higher-effort or higher-cost actions when unnecessary. An untrained lower-cost (IR1 in the article or System 1 in my definition) decision will result in not caring about quality. A trained lower-cost decision will utilize heuristics to bias for higher quality.


The point of the links I've shared is that there is no such thing as System 1/2, and decision effort/cost is not a factor.

Respectfully, I don't think you took away the correct implications. Specifically in the implications section of [1]:

"The key to effective intuitive decision making, though, is to learn to better calibrate one’s confidence in the intuitive response (i.e., to develop more refined meta-thinking skills) and to be willing to expand search strategies in lower confidence situations or based on novel information."

and

"Relatedly, it also means we should stop assuming that more conscious and effortful decision-making is necessarily better than more heuristically-driven intuitive decision-making."

I would say that while the article makes very interesting objections to the S1/S2 thinking framework, its objections are that, as measured, the two are far more intertwined. However, the article still very clearly agrees that S1 is lower cost than S2.


> most notably that many of the properties attributed to System 1 and System 2 don’t actually line up with the evidence, that dual-process theories are largely unfalsifiable, and that most of the claimed support for them is “confirmation bias at work”

The article absolutely does not agree that S1 is lower cost than S2, as the article does not agree that S2 exists at all.


I see, so this may be semantics then, as the article agrees with intuitive decision making. I think we're largely saying the same things. I will consider replacing my terminology in the future, thank you!

My personal theory (which is also baseless speculation) is that we use intuition to consider the decision pipeline closed and the matter settled. We keep at it until it feels right.

In this representation, "system 1" is simply an early pipeline decision, where one intuitively feels that it is the correct decision immediately. And if a satisfying decision doesn't come up, we keep looping over the decision, adding more factors, until we finally find the factors that make our intuition agree with it and close the matter. The longer we try to find a satisfactory decision, the more factors we try out, and therefore, someone came up with "system 2", but I see "system 2" as a particularly bad misrepresentation: it is still the same system looping, we are just staying in it longer.

The source of my theory is the interesting effect of a broken intuition: OCD sufferers are unable to break from this cycle, and even when intellectually satisfied with a conclusion, they perceive their brains as "stuck" in the question.

So fundamentally, I agree with your general idea: intuition plays a major role in this system, and when it breaks, people get paralyzed in it, no matter how good the decision is intellectually. My only point is that there is no division of systems. It's one single subsystem, integrated with many others, forming one single blackbox entity. The fast/slow thinking framework is a misrepresentation that doesn't really help one understand people's behaviors. It's a bad map.


We tend to be pushed towards immigration by a lack of safety, a lack of growth opportunities, and no hope that things will get any better.

With that in mind, if Latin America had safety, I suspect at least half of the immigrants wouldn't leave, especially the ones who are able to hold a middle class job.

Most of us would accept a lower standard of living if it allowed us to stay close to friends and family. But not being able to safely walk down the street weighs heavily on our anxieties.


And the impoverished areas of America are also where gun crime and drug overdoses are the most common. Oh, and don't forget losing healthcare and education services as the area continues to decline. These things go together just like in Latin America.

Moving in response to this reality is not an American values problem. I find the instinct to blame Americans for their discontent, while framing others in the same situation as victims, quite odd.


More $$$ is a strong motivator.

Latin America tends to be unsafe (physically), but the money is probably the bigger motivating factor. Remittances and ‘doing it for the family back home’ are common themes.


> Latin America tends to be unsafe (physically)

Depending on which country and which city, Latin American cities are not more dangerous than risky US cities. Many of our cities are reasonably safe. There are burglaries, muggings and robberies, like in most big cities all over the world -- no more, and no less.

There are some "trouble" hot spots that are particularly dangerous, of course. The same can be said of the US.


Risky US cities are pretty risky though. Which is why I said that.

Let me rephrase then: average Latin American cities in many countries are comparable to average US cities.

There are trouble hotspots (and countries) just as there are trouble hotspots in the US.

It's not true that Latin America as a whole is "unsafe". It's not Ciudad Juárez everywhere. I live in Buenos Aires and there's crime comparable to any big city (with better and worse periods, of course).


And large portions of the population (much more than in the US) live in areas that have violent crime and murder rates higher than the worst parts of Oakland. And that isn’t even counting Guatemala, as it’s just ‘adjacent’.

Southern South America isn’t bad, but also doesn’t have many people in it.


The children of those immigrants are growing up and seem to have less concern about the cousins back in the old country - their home is the US, as are all their friends. The people back in the old country are interesting but not really relevant.

Same as it ever was. Not a lot of Irish, Norwegian, English, French, Germans still keep in contact with ‘home’ after 2-3 generations either.

Not parent, but I have a similar impression. Design patterns, clean code, and several of these well-known tools were particularly useful during the C++ and early Java eras, when footguns were abundant and we had very little discussion about them - the Internet was a much smaller place back then. Most of the developer work was around building and maintaining huge code bases, be it desktop or server; monoliths were mostly the only game in town. And many initiatives grew out of trying to tame the inherent hazard.

I think that microservices (or at least, smaller services) and modern languages allow the code to stay more manageable, to the point where Java devs now are able to dismiss Spring and go for a much simpler Quarkus.


> Like with everything else, using your brain cells can quickly make you realize it's a lot more than "gambling"

Which sounds similar to how gambling addicts report their thoughts on their addiction - that they are smart enough to learn poker, or they know enough basketball to be able to predict outcomes.

Like with everything else, some research and data interpretation shows that with the remarkable exception of two or three highly specialized companies that employ some of the best mathematicians alive, most active investors underperform.


This comment should not be greyed out. See the Buffett Bet [1] for the most famous example of this.

Warren Buffett bet a million against Ted Seides (head of Protege at the time) that a simple index fund investment would outperform a handpicked selection of hedge funds over a decade. And critically, this bet was made in 2008, just before the market crashed! That's when hedge funds should disproportionately shine, as per their name, by hedging. In the end it wasn't even close.

[1] - https://www.investopedia.com/articles/investing/030916/buffe...
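
To get a feel for how a gap like that compounds over a decade, here's a toy calculation in TypeScript with made-up annualized returns (illustrative rates, not the actual figures from the bet):

    // Illustrative only: hypothetical 7% annualized for an index fund vs.
    // 2% net of fees for a basket of hedge funds, starting from $1,000,000.
    const grow = (principal: number, rate: number, years: number): number =>
      principal * (1 + rate) ** years;

    console.log(grow(1_000_000, 0.07, 10).toFixed(0));  // ~1,967,000
    console.log(grow(1_000_000, 0.02, 10).toFixed(0));  // ~1,219,000

A few points of annual difference, compounded over ten years, becomes a very large absolute gap.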


I think it would be much better if the comments here focused on the benefits of passive, long term investments in index funds. As it is, I worry people will “correct” by leaving money in a savings account instead.

>Which sounds similar to how gambling addicts report their thoughts on their addiction - that they are smart enough to learn poker, or they know enough basketball to be able to predict outcomes.

Some people are smart enough to learn poker, and build careers out of it. And there are many books on poker. You picked a poor example.


A gambling addict thinks they can learn to play poker because there is a small elite of professional poker players.

Which is exactly what an amateur trader thinks - I will learn to beat the market, because Ray Dalio did.

I used the poker example precisely because of this parallel between a very small elite and an overwhelming majority of loss makers.


You're right in the sense that people think they can learn to be the best, and fail at it, as only a few succeed. The game wouldn't have tournaments if it were true that no one could consistently win financially.

You're also right that it triggers the addiction itch in some vulnerable individuals!

I play poker and would never be stupid enough to put real money on it. It's great as a social game for me where 25 dollars is the risk.

Don't hate the game...


> I play poker and would never be stupid enough to put real money on it

This whole thread is casinos arguing against regulation because casual poker players don’t get addicted. We have billions of dollars targeting young people (mostly single young men) with gambling, from zero-day options trading to crypto and sports betting.


Because it is a fantastic movie, with fascinating characters, a thrilling storyline, and full of interesting concepts.

The popcorn-and-soda public loved it - it was the highest-grossing movie of 2017.

There is no mystery in its rave reviews. Both critics and general public enjoyed it. The vicious hate this movie gets is from Star Wars fans only. Nobody hates Star Wars as much as its fans.


I'm not sure if those still count as fans or as former fans suffering from nostalgia. Consider Call of Duty. I really liked 3 and 4. After that, not so much, and I have not even bothered with recent ones. They could've stopped there, as far as I'm concerned. I did not really like The Phantom Menace either, but I get the appeal for _others_ (and I liked the next one a lot more). Do I include that in my review? If it is a personal review, no. If it were a professional review for a magazine or website (i.e. as a critic)? Sure. I believe (cannot prove) the main reason the latest Star Wars movies get a lot of flak is Star Wars = Disney, and Disney is too 'woke' for some people. They don't like a female POC lead.


> I believe (cannot prove) the main reason the latest Star Wars movies get a lot of flak is Star Wars = Disney, and Disney is too 'woke' for some people. They don't like a female POC lead.

If you earnestly believe this, you really need to seek out more diverse news. Everyone I know absolutely hated both TLJ and TRoS, and I can guarantee it's not because they're racist or misogynist. They are just really bad movies (TRoS in particular is a series of MacGuffin quests held together by a nonsensical plot). TFA is pretty universally agreed, in my experience, to have been fine, but too safe.

It also feels like a big stretch to call Kelly Marie Tran the lead, even if the harassment she experienced was abhorrent. It's an ensemble cast and she has 12th billing.


I think the issue is that bringing the application down might mean cutting short concurrent ongoing requests, especially requests that will result in data mutation of some sort.

Otherwise, some situations simply don't warrant a full shutdown, and it might be okay to run the application in degraded mode.


"I think the issue is that bringing the application down might mean cutting short concurrent ongoing requests, especially requests that will result in data mutation of some sort."

Yes, but what is worse is silently corrupting the data or the state because you kept running in a buggy state.


This is a false choice.


If you don't know why a thing that's supposed to never be null ended up being null, you don't know what the state of your app is.

If you don't know what the state of your app is, how do you prevent data corruption or logical errors in further execution?


> If you don't know what the state of your app is, how do you prevent data corruption or logical errors in further execution?

There are a lot of patterns for this. It's perfectly fine, and often desirable, to scope the blast radius of an error short of crashing everything.

OSes shouldn't crash because a process had an error. Servers shouldn't crash because a request had an error. Missing textures shouldn't crash your game. Cars shouldn't crash because the infotainment system had an error.
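
As a sketch of what scoping the blast radius looks like in an ordinary request handler (TypeScript with Fetch-style Request/Response types; runBusinessLogic and reportError are invented stand-ins):

    // Invented stand-ins for illustration.
    declare function runBusinessLogic(req: Request): Promise<Response>;
    declare function reportError(err: unknown): void;

    // Hypothetical per-request boundary: a failure aborts this request only.
    // The process keeps serving everyone else, and the error still gets reported.
    async function handleRequest(req: Request): Promise<Response> {
      try {
        return await runBusinessLogic(req);  // may throw on a "can't happen" state
      } catch (err) {
        reportError(err);  // metrics/alerting, not silence
        return new Response("internal error", { status: 500 });
      }
    }

The error isn't ignored; it's contained and reported, which is different from pretending it didn't happen.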


If you can actually isolate state well enough, and code every isolated component in a way that assumes that all state external to it is untrusted, sure.

How often do you see code written this way?


This is basically all code I've worked on. You have a parsing/validation layer that passes data to your logic layer. I could imagine it working less well for something like a game where your state lives longer than 2 ms and an external database is too slow, but for application servers that manipulate database entries or whatever it's completely normal.

In most real-world application programming languages (i.e. not C and C++), you don't really have the ability to access arbitrary memory, so if you know you never gave task B a reference to task A or its resources, then you know task B couldn't possibly interfere with task A. It's not dissimilar to two processes being unable to interfere with each other when they have different logical address spaces. If B does something odd, you just abort it and continue with A. In something like an application server, it is completely normal for requests to have minimal shared state internal to the application (e.g. a connection pool might be the only shared object, and has a relatively small boundary that doesn't allow its clients to directly manipulate its own internals).
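
A minimal illustration of that boundary (the Transfer type and its fields are invented; the point is only that the logic layer never sees data that hasn't passed validation, and a bad payload aborts just the one request):

    // Hypothetical validation edge: raw input is checked and converted into a
    // narrow type before it reaches the logic layer, so "impossible" nulls are
    // rejected where aborting a single request is cheap.
    interface Transfer { from: string; to: string; amountCents: number }

    function parseTransfer(raw: unknown): Transfer {
      if (typeof raw !== "object" || raw === null) {
        throw new Error("invalid payload");  // aborts this request only
      }
      const r = raw as Record<string, unknown>;
      if (typeof r.from !== "string" || typeof r.to !== "string"
          || typeof r.amountCents !== "number" || r.amountCents <= 0) {
        throw new Error("invalid transfer payload");
      }
      return { from: r.from, to: r.to, amountCents: r.amountCents };
    }

Everything downstream of parseTransfer can treat the value as trustworthy, and everything upstream is cheap to abort.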


You can "drop" that request which fails instead of crashing the whole app (and dropping all other requests too).


Sure. You wouldn't want a webserver to crash if someone sends a malformed request.

I'd have to think long and hard about each individual case of running in degraded mode though. Sometimes that's appropriate: an OS kernel should keep going if someone unplugs a keyboard. Other times it's not: it may be better for a database to fail than to return the wrong set of rows because of a storage error.


That's exactly what the attacker wants you to do after their exploit runs: ignore the warning signs.


You don't ignore it. You track errors. What you don't do is crash the server for all users, giving an attacker an easy way to DoS you.


A DoS might be the better option vs. say, data exfiltration.


Most bugs aren't going to create any risk for data exfiltration. In most real application servers (which are very rarely written in C or C++ these days), requests are almost completely isolated from each other except to the extent that they interact with a database. If you detect a bug in one request, you just abort the one request, and there's likely no way it could affect others.

This is part of why something like Rust is usable at all; in the real world a lot of logic has straightforward, linear lifecycles. To the extent that it doesn't, you can push the long-lived state into something like an external database, and now your application has straightforward lifecycles again where the goal of a task is to produce commands to manipulate the database and then exit.


Sure, but I was talking about an individual process. If you don't know what state it's in, you simply can't trust it to run anymore. That's all.


Except you usually can because the state isn't completely unknown. You might not expect some field in a structure to be null, but you still know for example that there's no way for one request to have a reference to another, so you just abort the one request and continue.


No, if you have been compromised, you cannot make these assumptions.


And what does a DoS attacker want you to do? Crash the whole service and deny it to everyone else?


That is a valid tradeoff in many situations, yes.


> If you don't know what the state of your app is, how do you prevent data corruption or logical errors in further execution?

Even worse, you might be in an unknown state because someone is trying to exploit a vulnerability.


If you crash then you've handed them a denial of service vulnerability.


That's an issue handled higher up the stack with process isolation etc. It's still not ok to continue running a process that is in an unknown state.


It reminds me of the pre-symbolic mathematical notations where equations would be described in long paragraphs.


Taxi doors already automatically open and close in major Japanese metro areas.

There is a certain romance around good service, but the good service is not the reason why people use taxis here.

One could make a similar argument that self-service restaurants serving revolving sushi, or tablet-ordered sushi, miss the good service of a great restaurant. Yet these places are wildly popular, because one goes there to eat.

