I’m certainly not an expert, but just based on my personal experiences, I think “character” is the distillation of a lot of different aspects of self, some of which are binary haves/don’t haves (“people listen when you speak”) and others that are more of a spectrum (a “willingness to speak up” is easier when the consequences are low).
That is to say, it’s really really hard to pinpoint exactly what makes up character and whether someone has it. So when we DO cross paths with those who clearly have character it’s all the more reason to network, communicate, and keep those people in our orbit, so that we might learn from them and maybe have a little bit of their character rub off on us.
This is a fairly defeatist approach to the issue (read that as a statement of fact, not an accusation or argument). The problem with taking this stance, for many people, is that you’re giving a mouse a cookie, except the cookie is ever more control over your life: the ability to dictate what you see, which communities you’re allowed to engage with, and what you’re allowed to do online.
This battle for online privacy and control is just that, a battle, and you are correct that it is not a fair fight. But engaging and pushing back, through advocacy, speaking out, and acts of noncompliance does three things:
First, it slows the progress of these measures and thus limits the amount of control over our lives we give up, hopefully until some more politically friendly people come to power.
Second, it provides a barometer (via its effectiveness) for assessing the state of that fight, and how dire it is becoming.
Finally, people voicing their concerns about these laws helps more powerful and potentially altruistic advocates with greater resources (such as the EFF) decide how those resources should be allocated.
Maybe those aren’t good reasons for you, and that’s okay. Lots of people just want to browse Twitter and see sports scores, and they don’t really care if they have to show ID to do that. For anybody else reading this, though, there are lots of reasons why your involvement and engagement in this issue should not stop at “that’s just how the world works”.
The issue here for me has always been about the difference between treating a symptom and treating the illness.
Excessive surveillance is necessary when you cannot convince people of the merits of your politics or morals on their own, and you instead need the power of the State to intimidate people and control their access.
On the issue of minors: if you have a child (guilty here), you are obligated to actively raise them and educate them about the nature of the world. For online interactions this doesn’t only mean active limits (as one might judge appropriate for the child), but also teaching them that people do not always have positive intent, and that anonymity removes consequences and can therefore encourage antisocial behavior.
A person’s exposure to these issues is not limited to interactions online. We are taught to be suspicious of strangers offering candy from the back of panel vans. We are taught to look both ways when entering a roadway.
The people demanding the right to limit what people can say and who they can talk to do so under the guise of protecting children, but these tools are too prone to abuse. In the marketplace of ideas it’s better (and arguably safer, if significantly more challenging) to simply outcompete with your own.
I read it more as "give the mouse a cookie because it's already getting crumbs"
These types of arguments are quite common because of how beneficial they are for authoritarians. People forget that authoritarians don't need a lot of supporters, but they do need a lot of people to be apathetic or feel defeated. With that in place, even a very small group can exert great power, which also tends to make their power appear larger than it is, reinforcing the feedback loop.
The update thing struck me as slightly out of touch; if I were to make a list of my top 10 most used consumer products that can be updated, probably 8-9 of them have abused updates to make things worse.
We spend so much time training people that if you hit update, it’s going to suck: you’re going to suddenly get ads in your favorite app, or some new feature is going to get paywalled, or the UI is going to completely change with no warning. It seems counterproductive to accept that our industry does this stuff and then publish an open letter finger-wagging people for not updating.
The neat thing about all this is that you don’t get a choice!
Your favorite services are adding “AI” features (and raising prices to boot), your data is being collected and analyzed (probably incorrectly) by AI tools, you are interacting with AI-generated responses on social media, viewing AI-generated images and videos, and reading articles generated by AI. Business leaders are making decisions about your job and your value using AI, and political leaders are making policy and military decisions based on AI output.
I do have a choice: I just stop using the product. When Messenger added AI assistants, I switched to WhatsApp. Now WhatsApp has one too, so now I’m using Signal. Wife brought home a Win11 laptop, didn’t like the cheeky AI integration, now it runs Linux.
Sadly, almost none of my friends care or understand (nor do older family members or non-tech people). If I tried to convince friends to move to Signal because of my disdain for AI profiteering, they'd react as if I were trying to get them to join a church.
Visa hasn't worked for online purchases for me for a few months, seemingly because of a rogue fraud-detection AI their customer service can't override.
Is there any chance that's just a poorly implemented traditional solution rather than feeding all my data into an LLM?
If by "traditional solution" you mean that a bunch of data is fed into creating an ML model, your individual transaction is fed into that, and it spits out a fraud score, then no, they're not using LLMs. But at this high a level, what's the difference? If their ML model uses a transformer-based architecture vs. not, what difference does it make?
Traditional fraud-detection models have quantified Type I/II error rates, and somebody typically chooses parameters such that those errors are within acceptable bounds. If somebody decided to use a transformer-based architecture in roughly the same setup as before, there would be no issue; but if somebody listened to some exec's harebrained idea to "let the AI look for fraud" and just came up with a prompt/API wrapper around a modern LLM, there would be huge issues.
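To make the contrast concrete, here's a minimal sketch of that traditional setup: a model emits a per-transaction fraud score, and the alert threshold is chosen on held-out data so the false-positive rate (legitimate transactions flagged as fraud) stays within an agreed bound. Everything here is an illustrative assumption: synthetic data, a logistic-regression stand-in for whatever model a real processor uses, and a made-up 1% bound.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic transactions: two features, roughly 2% fraud.
X = rng.normal(size=(10_000, 2))
y = (rng.random(10_000) < 0.02).astype(int)
X[y == 1] += 1.5  # fraudulent transactions look slightly different

# Train on the first 8,000; hold out the rest for threshold selection.
model = LogisticRegression().fit(X[:8_000], y[:8_000])
scores = model.predict_proba(X[8_000:])[:, 1]  # fraud score per transaction
legit_scores = scores[y[8_000:] == 0]

# Pick the threshold so at most 1% of legitimate transactions get flagged
# (the Type I error bound somebody is explicitly accountable for).
max_fpr = 0.01
threshold = np.quantile(legit_scores, 1 - max_fpr)
print(f"threshold={threshold:.3f}, "
      f"holdout FPR={np.mean(legit_scores >= threshold):.4f}")
```

The difference is exactly that knob: swap in a transformer and the threshold-setting discipline survives, but a prompt wrapped around an LLM gives you no measured error rate to bound in the first place.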
I run a small online software business and I am continually getting cards refused for blue-chip customers (big companies, universities, etc.). My payment processor (2Checkout/Verifone) says it is 3DS authentication failures and not their fault. The customers tell me that their banks say it isn't the bank's fault either. The problem is particularly acute for UK customers. It is costing me sales. It has happened before as well:
I've recently found myself having to pay for a few things online with bitcoin, not because they have anything to do with bitcoin, but because bitcoin payments actually worked and Visa/MC didn't!
For all the talk in the early days of Bitcoin comparing it to Visa and how it could never reach Visa's scale, I never thought the outcome would be Visa simply deciding to place itself below Bitcoin.
Kind of the same as Windows getting so bad it got worse than Linux, actually...
Even if my favorite service is irreplaceable, I can still use it without touching the AI part of it. If the majority of people who use a popular service never touch the AI features, it will inevitably send a message to the owner one way or another: you are wasting money on AI.
Nah, the owner will get a filtered truth from middle managers who present them with information that everything's going great with AI, and that the lost money is actually because of those greedy low-level employees drinking up all the profit by working from home! The entire software industry has a massive truth-to-power problem that just keeps getting worse. I'd say the software industry in this day and age feels like Lord of the Flies, but honestly that feels too kind.
Exactly this. "AI usage is 20% of our customer base." "AI usage has increased 5% this quarter." "Due to our xyz campaign, AI usage has increased 10%."
It writes a narrative of success even if it's embellished. Managers respond to data and the people collecting the data are incentivised to indicate success.
They wouldn’t do this stuff unless it worked at large scale.
The irony is, at least in my case, I made the impulse decision to just cancel outright instead of accepting the lower price, which lost them what had been a 15-year recurring customer. I’m one person, but I wonder how many others did the same.
> Just project yourself 50 years from now: our current web pages will look archaic. Everything will be conversational, using language, vision, the whole spectrum.
To what end?
We’ve interacted with the internet using the same text-oriented protocols, the same markup languages, and even the same layout elements for 36 years. What profit motive exists to upend that and standardize on a new format like conversational language?
And, based on the development trends of the internet over its entire history, what suggests that if the world were to commit to some radical shift in the foundational technology underpinning the web, it would move towards voice-based, or vision-based (what does that even mean?), interfaces?
I get that AI is cool, and it has legitimate use cases, but is it possible that we as technologists might be falling into that age-old trap of having a solution in search of a problem?
> I find it easy to imagine some disabled, or disfigured, otherwise blocked-from-stardom person using tech like this to transform themselves and be able to express their truth without being unfairly judged by the physical form they were born into.
Outside of a select number of A-list actors, are there situations where the other 85-90% of actors are able to express their truth today?
One of the common problems with creative industries (and the primary reason I switched away from pursuing game development) is that you're not expressing your truth; you're expressing someone else's truth in exchange for money. And unless you have lots of other intangible and often uncontrollable qualities, and are willing to play politics, you will probably never end up in a position to express your truth (with any degree of notoriety) through your own or other people's work.
I am not disabled or disfigured, and while I'm blocked-from-stardom, that's just because I have a fairly uninteresting existence overall that wouldn't warrant it on its own. So I can only guess at this stuff from an outside perspective, but from where I sit, I don't see AI as a sea-change enabler for the people you're referring to.