Every social media platform is manipulated by its owners and elites. There's no way to get around it, not when your KPIs are user engagement and advertising dollars.
Twitter has become a particularly nasty version of it. In the before times, Google, Twitter, Reddit, etc. usually spent their efforts trying to manipulate things in a mostly benign way.
If you like free markets, then you must be opposed to Twitter. This is a market controlled by a few. Competition is rigorously hunted down. Lies and fake social proof are packaged into "free speech." Only the chosen ones are allowed an audience.
This is the opposite of capitalism. This is the worst of cronyism.
Force-switching all accounts to unfollow Democrats and follow Republicans and Elon, signal-boosting right-wing conspiracy theorists, blocking or suspending left or liberal accounts: it's just naked power centralization all the way down...
This is the only comment you made so far that made sense, with clear assertions and references. Everything else was unfounded or inflammatory without any concrete assertion, which is why it vibed like "Fox News talking points."
While I do think what you describe under the label of "liberal universalism" mostly makes sense, I do challenge its consistency. By all measures, some countries are trending towards becoming liberal democracies. Why shouldn't we help them?
Ukraine, being a viable liberal democracy, a useful geopolitical ally, and a counterweight to a destabilizing and dehumanizing autocracy, makes for a perfect candidate for support beyond naive global liberalism. Supporting it is in our interests in many practical terms, separate from ideology.
Ukraine I think is a pretty good example of where we've learned our lesson. We're working with allies, we aren't involving US or NATO troops, we're broadening our coalition and isolating our adversaries, etc. I think we could do better (Russia is super winning the propaganda war inside the US), but it's a positive trend from Iraq/Afghanistan.
Fraud sucks and is ever evolving. Everyone gets hit by increasingly elaborate scams, and companies with degrading services make it easier.
Some things I'm surprised weren't in the article, given that the author describes an extensive background in security:
1. Suspiciously well-timed fraud attempts happen when you are vulnerable, because the attacker is tipped off. Travelling and visiting unfamiliar locations kicks up a lot of smoke, information-wise. Relying on secrets doesn't work, because information leaks in an uncountable number of ways. You should no longer be thinking "did my card number, phone number, PID, or other secret get stolen?" It should instead be "given that my info was stolen, did anything bad happen, and who do I need to securely talk to?"
2. Always blow off incoming calls; you can always call back or fix things later. Check email, text, or other comms to see if something important is going on. Saying anything is information: as little as a few seconds of recorded voice can be used to generate a usable AI voice clone, and at worst it only takes a few minutes. Even the act of answering a phone call is information, confirming that your phone number is active and belongs to you.
Ironically, the reliance on a local CU (credit union) also seems to be a miss. IME, big evil banks are more reliable in this area. They get scammed way more often, and as a result are much more resistant to these attacks via pure attrition.
This fits what I've read. Data driven decision making is the way to go, but good data is hard to come by in these fields. We have no baseline for many parts of physical health, let alone mental health. Every time a pop culture scientist tries to justify their world perspective without taking the source of data seriously, it does the field as a whole serious harm.
Not a big fan of Haidt. Way too much political, unsubstantiated, and handwavy rhetoric from him. And his track record has been entirely unconvincing.
The only reasonable way I can interpret his work is as that of an ideologically driven puritan trying to make the data fit his perception. In doing so, he often misses the forest for the trees.
In particular, this interpretation of teenage mental illness is predicated on data. But the data sucks. Why should we assume we have accurate data on mental illness? It's ridiculous when you consider our society's terrible history in dealing with this stigma: sexist notions like female hysteria, gross misuse of sedatives and psychoactive drugs by medical charlatans, repression of discussion about mental illness, bias, inexperience, and abuse within governmental and health care institutions, horrible conditions at sanatoriums; the list goes on. Give it another 100 years and maybe we can draw some better patterns.
In no particular order, here are some highlights on medical data and institutions in history...
1. The consensus on normal heart rate is still erroneously given as 60-100 bpm. For over 100 years, this myth has been predicated on poor data and grossly misinterpreted literature. Our current best consensus, based on better data, is 50-90 bpm, though virtually all textbooks and educational programs have failed to update themselves. If we can't even get statistics on heart rates right, how are we supposed to get statistics on mental illness correct?
2. There is no consensus on average breaths per minute. Textbooks often contradict themselves, quoting different fabricated ranges for respiratory rates within the same page. Data in this area is sparse and lacks adjustment for demographics. The most comprehensive data is from a study done in 1846... We just don't know, because we haven't put in the rigorous effort to definitively identify this range. Again, this basic measurement is much easier than measuring mental illness, and we haven't done it yet.
3. The most established field of medicine is considered to be obstetrics. That is to say, it is the medical field with the oldest depth of knowledge that we consider to be accurate or useful, which makes sense given the significance of managing childbirth for societies. And yet there are still severe deficiencies in this field. We have a long way to go in the most mature field of medical science, and an even longer way to go in every other field. In comparison, mental health was not even mentioned in writing until 1946.
4. Prior to the gross misuse of opioids perpetuated by the Sackler family was the gross misuse of benzodiazepines (Xanax, Valium). Before that, barbiturates. Before that, alcohol. And countless more medications in between those broad categories. Humanity has abused substances to numb itself from crippling anxiety for centuries, and in the modern era has almost always sought to mass-market these drugs before carefully considering their appropriate use. Each medication was so abused that each generation has its own idioms and memes about the misuse.
I'm certain that we can do better. It's just going to take time, and Haidt's work seems like reading tea leaves and putting people into arbitrary pigeonholes.
I think the answer is that ECS isn't supposed to solve those problems directly. It's just a framework that makes a certain class of problems very easy to solve. The big-picture complexity is still up to the developer's skill and wisdom: knowing which parts of the game should go into ECS and which parts go somewhere else; which systems flow from which other systems; which systems are allowed to manipulate the data directly and which may only read it, not write it; and how your design of the game may need to change to suit the limitations of your computer and of your ability to program.
ECS is like any other framework: a tool or system for organizing your efforts. Be very liberal with using it in its intended scope. Be judicious when it's at the edge of its scope. Be very skeptical when it's outside of its scope.
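To make "intended scope" concrete, here's a minimal sketch of the core ECS shape (names and structure invented for illustration, not any particular library): entities are just IDs, components are plain data attached to those IDs, and systems are functions that run over every entity holding a given set of components.

```python
# Minimal ECS sketch (illustrative only, not a real library).
# Entities are integer IDs; components are plain data attached to those IDs;
# systems are functions that run over entities holding a given component set.

from dataclasses import dataclass

@dataclass
class Position:
    x: float
    y: float

@dataclass
class Velocity:
    dx: float
    dy: float

class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}  # component type -> {entity_id: instance}

    def spawn(self, *comps):
        eid = self.next_id
        self.next_id += 1
        for c in comps:
            self.components.setdefault(type(c), {})[eid] = c
        return eid

    def query(self, *types):
        # Yield (entity_id, components...) for entities with all requested types.
        stores = [self.components.get(t, {}) for t in types]
        for eid in set.intersection(*(set(s) for s in stores)):
            yield (eid, *(s[eid] for s in stores))

def movement_system(world, dt):
    # A system only touches the components it declares; it doesn't care
    # what else an entity is composed of.
    for _, pos, vel in world.query(Position, Velocity):
        pos.x += vel.dx * dt
        pos.y += vel.dy * dt

world = World()
world.spawn(Position(0.0, 0.0), Velocity(1.0, 2.0))
world.spawn(Position(5.0, 5.0))  # no Velocity: movement_system skips it
movement_system(world, dt=0.016)
```

The flexibility-from-composition point falls out directly: an entity's behavior is determined by which components you attach, not by a class hierarchy.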
You seem to be coming from somewhere closer to "like SQL" than the data-oriented end of the spectrum.
On either end of the spectrum, there's much less flexibility. The "SQL side" optimizes for bulk operations[0], with flexibility coming from composition[1]. The "data-oriented side" optimizes for performance, and it just so happens that stuffing data that's processed together into arrays you can scan in a cache-friendly way also yields a component-like division of data.
Both approaches are quite inflexible. They do kind of meet in the middle, as they yield similar data organization, but I'm increasingly convinced this is a surface-level, entirely incidental similarity. Philosophically, the two extremes of "ECS" are entirely unlike each other.
--
[0] - Again, AFAIR, ECS originally came from the MMO world, where they do use relational databases for storing game state.
[1] - Also a reason to use databases if you're making an MMO, as relational tables are a known quantity, while serializing polymorphic object graphs is plain annoying.
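A toy sketch of the relational framing (schema invented for illustration): each component type becomes a table keyed by entity id, and a "system" is just a bulk query over the join of the tables it cares about.

```python
# Sketch of the "components as relational tables" framing (schema invented
# for illustration): each component type is a table keyed by entity id,
# and a system is a bulk query joining the tables it needs.

import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE position (entity INTEGER PRIMARY KEY, x REAL, y REAL);
    CREATE TABLE velocity (entity INTEGER PRIMARY KEY, dx REAL, dy REAL);
""")
db.execute("INSERT INTO position VALUES (1, 0.0, 0.0), (2, 5.0, 5.0)")
db.execute("INSERT INTO velocity VALUES (1, 1.0, 2.0)")  # entity 2 is static

# "Movement system" as one bulk UPDATE over entities that have both components.
db.execute("""
    UPDATE position
    SET x = x + (SELECT dx FROM velocity WHERE velocity.entity = position.entity) * 0.016,
        y = y + (SELECT dy FROM velocity WHERE velocity.entity = position.entity) * 0.016
    WHERE entity IN (SELECT entity FROM velocity)
""")
print(db.execute("SELECT * FROM position").fetchall())
```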
The performance side is a bit overblown: performance strongly depends on access patterns, and the access patterns strongly depend on what you're actually simulating. ECS can help with performance, but it can also hurt. The devs of Factorio, who have a very large simulation, found from their profiling that the game is almost entirely memory-bandwidth limited, so an SoA ECS layout like the one often touted would not work very well for them. In their case it's far better to run every system on each entity than to feed each entity through each system: when different systems inevitably care about the same components, the first approach will almost certainly find the component data in cache, while the latter almost certainly won't, and the same data ends up being loaded from RAM multiple times.
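A toy illustration of the two traversal orders (structure invented for illustration; Python obviously can't demonstrate cache behavior, so the comments mark where it would bite):

```python
# Toy illustration of the two traversal orders from the comment above.
# Python can't demonstrate cache behavior, so the comments mark where it
# matters; the structure is invented for illustration.

class Entity:
    def __init__(self, x, hp):
        self.x = x    # read by both systems below
        self.hp = hp

entities = [Entity(float(i), 100) for i in range(1000)]

def move(e):
    e.x += 1.0                         # system A reads/writes e.x

def damage(e):
    e.hp -= 1 if e.x > 500 else 0      # system B also reads e.x

# Entity-major ("run every system on each entity"): e.x is loaded once
# and is still hot in cache when the second system reads it.
for e in entities:
    move(e)
    damage(e)

# System-major ("feed each entity through each system"): by the time the
# second pass reads e.x, the working set has been evicted, so the same
# data streams in from RAM twice -- bad when memory bandwidth is the limit.
for e in entities:
    move(e)
for e in entities:
    damage(e)
```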
I'm not sure if "confabulate" is exactly what LLMs do (though it seems closer than the implications of "hallucinate").
But neurotypical, neurodiverse, perfectly functional people, etc. all confabulate or do something similar on a regular basis, in verbal and written mediums, and often do so in good faith. It's human instinct to communicate, even if you are uncertain and unaware of the full context of the discussion.
Teachers, customer service reps, executives, shopkeepers, doctors, nurses, domain experts, authors of textbooks: it doesn't matter who it is, they'll probably confabulate or equivocate or produce some other kind of communication that isn't immediately useful. Yet it's still a useful activity to just talk to someone, or to read a less-than-rigorous book, for the purposes of learning (discounting the relationship-forming part, which is also useful). And so is using LLMs, even for casual users, so long as they understand that limitation, whether it's with a chatbot or a real person. Not everything they say will be useful or truthful, but we are already capable of adjusting to that.
I'm not sure whether neurofancy people confabulating should be given an honest bill of truth health based on their category.
Sounds like #believeallconfabulators :)
Honest communication is difficult for some people to assess, and not for others. But I think we should learn from any recent #believeall... that we shouldn't base trust ratios on categorisation.
Honest communication is difficult to do, like weight-lifting, and takes a lot of practice to do well.
It also makes your BS meter more finely attuned, so it is good practice.
With that in mind, you will think this is arrogant to say if you lie for a living but not if you regularly tell the truth:
Liars lie with liars and lie to rid themselves of truth troubles.
Think about that when you next talk to a chatbot/human confabulator :)
I'm not really into Twitter, so I'm guessing there's some drama you are referencing that is topical.
But, I'm not talking about any society wide issue or philosophical treatise about trust and breakdowns in communication. Just talking about day to day interactions, where the stakes are completely different.
Trivial example: if the sign on a mailbox says "Last pickup 5:00pm," what exactly does that mean? Will the mail be picked up, processed, and sent out of town that same day? Or merely picked up, to be processed and sent the next day? This piece of written communication isn't a purely useful truth - it's ambiguous.
Pretend that this is an important behavior to know for your business, say you're mailing huge checks for some obscure financial process. So you call up the local post office and ask. The worker who picks up the phone might know exactly what you are talking about and helpfully tell you the right answer. Or they might not, and tell you honestly that they don't know. Or they might ask their supervisor, transfer you, make something up, tell you it doesn't matter because the mail will get where it's going, or say you shouldn't worry about it, it's just mail.
ChatGPT could give you the same distribution of answers as that worker: helpful truth, meaningless equivocation, reassurance, redirection, confabulation, or an outright lie. ChatGPT can be just as useful as talking to people in spite of those flaws, because people do the same thing.
This relies on the prior assumption that you gracefully handle unreliable communication, and that communication with people is useful despite it. While this might seem a bit farfetched to some people, remember that we have a ready analog in computer science: TCP, which builds reliable delivery on top of an unreliable layer beneath it.
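A toy sketch of that layering (everything here is invented for illustration; real TCP is vastly more involved): an unreliable channel below, and a retransmit-until-delivered loop that recovers reliability on top.

```python
# Toy sketch of recovering reliable communication from an unreliable
# channel, in the spirit of TCP over an unreliable network layer.
# Everything here is invented for illustration.

import random

def unreliable_send(message, loss_rate=0.5):
    """The 'network': drops messages at random, like talking to people."""
    return message if random.random() > loss_rate else None

def reliable_send(message, max_retries=10):
    """The reliability layer: retransmit until the message gets through."""
    for attempt in range(max_retries):
        delivered = unreliable_send(message)
        if delivered is not None:       # stands in for an "ack" in this toy model
            return delivered, attempt + 1
    raise TimeoutError("gave up after %d attempts" % max_retries)

msg, tries = reliable_send("Last pickup 5:00pm -- processed same day?")
print(f"delivered after {tries} attempt(s): {msg}")
```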