His examples of “feeling poor” have nothing to do with being poor and are in fact mostly signs of largesse or affluence.
Councils expanded the number of SEND pupils eligible for free school transport, increasing spending from £9m to £20m - how is that being poor? It’s an improvement in quality of life…? OK, it might be a waste of money (or might not be), but it’s not a cut, is it?
Being poor is when your country sends 10 athletes to the Olympics and they all get eliminated in the heats. No Western nation is actually poor. All Western nations are roughly the same; there’s nothing notable about the UK other than that it is towards the richer end and is a cultural superpower.
> Councils expanded the number of SEND pupils eligible for free school transport, increasing spending from £9m to £20m - how is that being poor?
Because the money is sucked out of the council coffers, and then they can't provide a lot of the basics. It's not an improvement in quality of life for people who are not, for instance, SEND pupils. The point is that more and more expenses like this appear each year, and then the budget is gone when it comes to looking after the library, fixing potholes, etc. Small, everyday things that make the country look and feel like it's functioning are ignored until everywhere just feels a bit grim and broken.
I'm interested to see what Labour have in store for local government reform overall. Where I live will be affected since I'm just into Derbyshire and they're consolidating all boroughs into unitary authorities, which I'm in favour of since I have four tiers of local government where I live.
Intuitively, the richer and more productive a country becomes, the better care it can take of its populace: education, transport, welfare, and so on. Yet it feels like the opposite is happening. We're seeing transport costs and tuition fees endlessly rising, cuts to disability payments, and broken promises to lift caps on welfare. All in the name of "stimulating the economy".
I had the complete opposite impression from that thread. It seemed like people were politically motivated to interpret the law in a certain way, so they could act like they were being coerced.
These closures are acts of protest, essentially.
I agree with @teymour's description of the law. It is totally normal legislation.
Not only is this law terrible, there are several other laws like this that have existed for years.
People who say the criticism is politically motivated (this law was drafted by the Tories and passed by Labour, so I am not exactly clear what the imagined motivation might be) ignore the fact that the UK has had this trend in its law for a long time, and the outcome has generally been negative (or, at best, a massive waste of resources).
Legislation has a context: if we lived in a country where police behaved sensibly, I could reasonably see how someone might believe this was sensible. That isn't reality, though. Police take a maximalist interpretation of their powers. Take non-crime hate incidents: there is no legislation governing their use; they are regularly used to "question the thinking" of people who write critical things about politicians (usually local ones) or the police; no appointed authority gave the police this power; their usage has been questioned by ministers; and hundreds of thousands of them are still registered every year.
Btw, if you want to know how the sausage is made: security services/police want these laws, some event happens, and then there is a coordinated campaign with the media (the favour is usually repaid with leaks later) to build up "public support" (not actual support, just the appearance of support), and meetings with ministers are arranged ("look at the headlines"). This Act wasn't some organic act of legislative genius; it was the outcome of a targeted media campaign around an incident that, in factual terms, is unrelated to what the Act eventually became. If this sounds implausible, remember that May gave Nissan £30m on the back of the SMMT organising about a week's worth of negative headlines, and that Johnson brought in about 4m migrants off the back of about two days of briefing against him by a six-month-old lobbying group from hotels and poultry slaughterhouses. This is actually how the government works: no one reads the papers apart from politicians.
Giving Ofcom this power, if you are familiar with their operations, is an act of literal insanity. Their budget has exploded (I believe it is near a quarter of a billion now). If you think tech companies are actually going to enforce our laws for us, you are wrong. And the suggestion that Ofcom, with its new legions of civil servants, is supposed to be the watchdog of online content makes no sense; it cannot be described as "totally normal" in any country other than China.
It's incumbent on the prosecution to prove that you know the key they are claiming you are withholding. It is a defence to say you forgot it, or that the data is random. The prosecution would have to prove that you didn't forget it and that the data is not random.
s3 makes it clear that if you plausibly claim you have forgotten it, then the prosecution must prove this is not the case (i.e. you still know it) beyond reasonable doubt.
> "It’s like imagining that a printer could actually feel pain because it can print bumper stickers with the words ‘Baby don’t hurt me’ on them. It doesn’t matter if the next version of the printer can print out those stickers faster, or if it can format the text in bold red capital letters instead of small black ones. Those are indicators that you have a more capable printer but not indicators that it is any closer to actually feeling anything"
Love TC but I don't think this argument holds water. You need to really get into the weeds of what "actually feeling" means.
To use a TC-style example... suppose it becomes a major political issue in the future whether AIs have rights and whether they "really" think and "really" feel the things they claim. Eventually we invent an fMRI machine and a model of the brain that can conclusively explain the difference between "really" feeling and only pretending. We actually know exactly which gene sequence is responsible for real intelligence. Here's the twist... it turns out 20% of humans don't have it. The fake intelligences have lived among us for millennia...!
I disagree. The reason humans anthropomorphize "AI" is that we apply our own meta-models of intelligence to LLMs, etc., where they simply don't apply. The model can spit out something that seems extremely intelligent and well thought out, something that would truly be shocking if, say, a monkey said it, because of our meta-model of intelligence; and in the monkey's case that reaction might be valid, if we determined it wasn't simply memorized. His argument could certainly be fleshed out more, but the point he's making is correct: we can't treat the output of a machine designed to replicate human input as though it contains the requisite intelligence/"feeling"/etc. to produce that output on its own.
I agree that with current LLMs the error goes the other way: they appear more conscious than they are, compared to, say, crows or octopuses, which appear less conscious than they actually are.
My point is that "appears conscious" is really the only test there is. In what way is a human who says "that hurts" really feeling pain? What about Stephen Hawking "saying" it? What if he could only communicate through printed paper? You can always play this dial-down-the-consciousness game.
People used to say fish don't feel pain, they are "merely responding to stimulus".
The only actual difference, in my view, is that we feel we are so uber-special. Beyond that, there seems to be no reason to believe we are anything more than chemical signals. But this strong "feeling" that we are special keeps us from admitting that. I feel like I'm special, I feel like I exist: that's the only argument for being more than something else.
Interestingly the movie Companion, out this weekend, illustrates this case exactly. It's a thriller, not a philosophical treatise, so don't expect it to go deep into the subject, but the question of what "pain" means to an AI is definitely part of the story.
Well, his argument is that real internal experience cannot be confirmed externally, however convincing the performance. But external behaviour is the only way we know about the internal experience of anything, including the things we typically assign "real" consciousness to (humans, dogs) and the ones we don't (amoebae, zygotes, LLMs).
To be clear I'm not for a moment suggesting current AIs are remotely comparable to animals.
> You need to really get into the weeds of what "actually feeling" means.
We don’t even know what this means when it’s applied to humans. We can explain what it looks like in the brain, but we don’t know what causes the perception itself. Unless you think a perfect digital replica of a brain could have an inner sense of existence.
Since we don’t know what “feeling” actually is, there’s no evidence either way that a computer can do it. I will never believe it’s possible for an LLM to feel.
“Feeling” is disconnected from reality, it’s whatever you perceive it as. Like morality, you can’t disprove someone’s definition of feeling, you can only disagree with it.
If scientists invent a way to measure “feeling” that states 20% of people don’t feel, including those otherwise indistinguishable from feeling ones, most people would disagree with the measurement. Similarly, most people would disagree that a printer that prints “baby don’t hurt me” is truly in pain.
These discussions still seem to me to get hung up on the classic sci-fi view of an AI (even the mention of Companion here): some single, identifiable, discrete entity that could even potentially be the locus of things like rights and feelings.
What is ChatGPT? Ollama? DeepSeek-R1? They're software. Software is a file. It's a sequence of bytes that can be loaded into memory, with the code portion pulled into a processor to tell it what to do. Between instructions, the operating system it runs on context-switches it back out to memory, possibly to disk. It may even crash in the middle of an instruction, but if the prior state was stored off somewhere, it can be recovered.
When you interact through a web API, what are you actually interacting with? There may be thousands of servers striped across the planet, constantly being brought offline and online for maintenance, upgrades, A/B tests, hardware decommissioning. The fact that the context window and chat history are stored out of band from the software itself provides the illusion that you're talking to some continually existing individual thing, but you're not. Every individual request may be served by a separate ephemeral process that exists just long enough to serve that request and then never exists again.
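To make that concrete, here's a minimal sketch (Python, hypothetical names, assuming the usual stateless web setup): the full history is re-sent with every request, and a fresh, ephemeral worker serves each one and vanishes.

    from dataclasses import dataclass

    @dataclass
    class Request:
        history: list[str]  # full chat history, stored out of band (client/DB)
        prompt: str

    def run_model(context: str) -> str:
        # Stand-in for one forward pass; any pure function of the context works.
        return f"reply({len(context)} chars of context)"

    def handle(req: Request) -> str:
        # An ephemeral worker: build the context, run one pass, return, vanish.
        context = "\n".join(req.history + [req.prompt])
        return run_model(context)  # no state survives this call

    # Two "turns" of a conversation are just two independent invocations:
    print(handle(Request(history=[], prompt="hello")))
    print(handle(Request(history=["hello", "reply(5 chars of context)"],
                         prompt="hi again")))

Nothing long-lived anywhere in that loop corresponds to the "individual" you seem to be talking to.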
What is doing the "feeling" here? The processor? The whole server? The collection of servers? The entire Internet? And when is it feeling? In the 3 out of 30000 time slices per microsecond when the executing instruction is one pulled from ChatGPT rather than from the 190 other processes running at the same time, processes that weren't created by machine learning and don't produce output a human might mistake for human work?
I'll admit that humans are also pretty mysterious if you reduce us to the unit of computation, and most of what goes on in the body and brain has nothing to do with either feeling or cognition, but we know at least that there is some qualitative, categorical difference at the structural level between us and sponges. We didn't just get a software upgrade. A GPU running ChatGPT, on the other hand, is exactly the same as a GPU running Minecraft. Why would an fMRI looking at one versus the other see a difference? It's executing the same instructions, possibly even acting on virtually if not totally identical byte streams, and it's only at a higher-level step of encoding that an output device interprets one as rasters and the other as characters. You could obfuscate the code the way malware does to hide itself, totally changing the magnetic signature, but produce exactly the same output.
Consider where that leads as a thought experiment. Remove the text encodings from all of the computers involved, or just remove all input validation and feed ChatGPT a stream of random bytes. It'll still do the same thing, but it will produce garbage that means nothing. Would you still recognize it as an intelligent, thinking, feeling thing? If a human suffers some injury to eyes and ears, or is sent to a sensory deprivation chamber, we would say yes, they are still a thinking, feeling, intelligent creature. Our ability to produce sound waves that encode information intelligible to others is an important characteristic, but it's not a necessary characteristic. It doesn't define us. In a vacuum as the last person alive with no way to speak and no one to speak to, we'd still be human. In a vacuum as the last server alive with no humans left, ChatGPT would be dirty memory pages never getting used and eventually being written out to disk by its operating system as the server it had been running on performs automated maintenance functions until it hits a scheduled shutdown, runs out of power, or gets thermally throttled by its BIOS because the data center is no longer being actively cooled.
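To put the encoding point in code form, a toy sketch (numpy, made-up sizes, nothing to do with any real model): the arithmetic is identical whether the token ids decode to English or to noise; meaning only exists at the encoding step outside the computation.

    import numpy as np

    rng = np.random.default_rng(0)
    embed = rng.standard_normal((1000, 64))    # toy embedding table
    weights = rng.standard_normal((64, 1000))  # toy output projection

    def forward(token_ids):
        # The same instructions run regardless of what the ids "mean".
        return (embed[token_ids] @ weights).argmax(axis=-1)

    english_ids = np.array([101, 205, 37, 999])  # ids a tokenizer might emit
    noise_ids = rng.integers(0, 1000, size=4)    # random bytes reinterpreted as ids
    print(forward(english_ids))  # same computation...
    print(forward(noise_ids))    # ...whether or not anything decodes to words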
I think Ted Chiang is doing us a service here. Behavioral equivalence with respect to the production of digitally-encoded information is not equivalence. These things are not like us.
It seems for a lot of people that's all that matters: "if it quacks like a duck it must be a duck!". I find that short-sighted at best, but it's always difficult to present arguments that would "resonate" with the other side...
It isn’t just an idea for a science fiction story, though. It is also a philosophical argument, just predicated on something unexpected: something which is probably not true, but which presents an interesting scenario, and which isn’t technically ruled out by existing evidence (although it seems unlikely, of course).
Well, I guess that’s what the best science fiction stories are. But, the best science fiction stories aren’t just science fiction stories!
It's also perfect for RL, because the system can compile its output and check it against the input. It's a translation exercise where there's already a perfect machine translator in one direction.
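A rough sketch of that reward loop (hypothetical names; a real pipeline would compare behaviour or normalized IR rather than raw bytes, since recompilation is rarely byte-identical):

    import os, subprocess, tempfile

    def reward(candidate_c: str, original_obj: bytes) -> float:
        # Round trip: compile the model's decompilation, compare to the input.
        with tempfile.TemporaryDirectory() as d:
            src = os.path.join(d, "cand.c")
            obj = os.path.join(d, "cand.o")
            with open(src, "w") as f:
                f.write(candidate_c)
            cc = subprocess.run(["cc", "-c", src, "-o", obj],
                                capture_output=True)
            if cc.returncode != 0:
                return 0.0  # doesn't even compile: no reward
            with open(obj, "rb") as f:
                recompiled = f.read()
            # Exact match is the easy (rare) case; partial credit otherwise.
            return 1.0 if recompiled == original_obj else 0.5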
It probably just hasn't happened because decompilation is not a particularly useful thing for the vast majority of people.
Does any money laundering offence not have a mens rea, though? You typically have to have at least suspected that you were dealing with the proceeds of crime.
Structuring (splitting up cash deposits to a bank so that they don't trigger the bank to file a CTR: depositing, say, $9,000 twice rather than $18,000 once) comes to mind. It does have a mens rea component, but the money being entirely clean doesn't make the act not-structuring, IIUC.