Never thought I'd even consider this, but is this a case where those involved in producing and developing this software should be tried for murder/crimes against humanity?
My understanding is that AI in its current form is not an applicable technology for anything near this type of use.
Again, my understanding: inference models are by their very nature largely non-deterministic when it comes to evaluating accurately against specific desired outcomes. They need large-scale training data to provide even low levels of accuracy, and that type of training data just isn't available here; it's all likely to be based on one big hallucination, is my take. I'd be surprised if this AI model was even 10% accurate. It wouldn't surprise me if it was less than 1% accurate. Not that accuracy appears to be critical, from what I've read.
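To put a rough number on why I'm sceptical, here's a minimal sketch (Python, with entirely made-up figures) of the base-rate problem: even a classifier that sounds strong on paper mostly flags non-targets when actual targets are rare in the scanned population.

    # Hypothetical illustration of the base-rate problem for a classifier
    # that flags rare targets. Every number below is invented for the example.
    def flagged_precision(prevalence, sensitivity, false_positive_rate):
        """Probability that a flagged person really is a target (Bayes' rule)."""
        true_hits = prevalence * sensitivity
        false_hits = (1 - prevalence) * false_positive_rate
        return true_hits / (true_hits + false_hits)

    # 90% sensitivity and a 5% false-positive rate sound impressive, but if
    # only 0.5% of the scanned population are genuine targets, fewer than
    # 1 in 10 of the people flagged actually are.
    print(flagged_precision(prevalence=0.005, sensitivity=0.90,
                            false_positive_rate=0.05))  # ~0.083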
This specific application and the claimed rationale is as close as I have come to seeing what I consider true and deliberate "Evil application" of technology out in the open.
Depends on what "AI" means here. There is a spectrum from "we have a bunch of data in a database and some folks hand-tuning queries" to "we built a deep learning network to predict XYZ." In the middle of that spectrum lie things like decision trees, which provide explainable results.
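For what it's worth, the "explainable" middle of that spectrum can be shown with a toy decision tree: the fitted rules can be printed and audited line by line, which a deep network's weights cannot. The features and data below are invented purely for the sketch.

    # Toy sketch: a decision tree's learned rules can be dumped as readable
    # if/else thresholds, which is what "explainable results" means here.
    # Features and labels are made up for illustration only.
    from sklearn.tree import DecisionTreeClassifier, export_text

    X = [[0, 1], [1, 0], [1, 1], [0, 0]]  # two invented binary features
    y = [0, 1, 1, 0]                      # invented labels
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    print(export_text(tree, feature_names=["feature_a", "feature_b"]))
    # Prints a small, human-auditable rule set; a deep network offers no
    # comparable trail from input to decision.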
It's the former, though from what I read in the article it's a shady version of that.
The computer basically appeared to randomly select a huge number of people to kill, based on a very shallow dataset, weak data linking, and a strong desire to kill people who are Palestinian.
It's hard to read the details in the Guardian article and think of it as anything other than a randomised Israeli state murder machine. I can't envisage it being accurate to a point where any reasonable person would accept its use against someone they know, in any circumstances.
That it targeted tens of thousands is utterly horrifying. That it was involved in any actual battleground scenario makes me think those involved in its creation and sale are culpable.
It is not a naive take. Not by a long shot. Knowingly working on the development or upkeep of such a system, knowing full well its limitations and knowing its aftermath, obliterates any level of clean hands in my eyes.
It’s amply clear from reporting that the IDF has no formal RoE on the ground - low-level commanders have full autonomy to kill whomever, whenever, with zero oversight.
The “AI” exists to retcon the justification for any particular genocidal act, but this is really just old-school mindless slaughter driven by anger and racism.
Sorry. Guns do kill people. That's their whole point.
I know roughly 1,000 people. Maybe 10 of them have the physical capability of killing someone; in case you don't know, it's not actually that easy to do yourself.
Of those, not all could mentally do it under anything but the most extreme of circumstances. Two, maybe three, might actually be capable of ending a life in such circumstances.
With a gun, at a guess, somewhere between 400 and 700 could kill someone if they got anxious or scared enough, is my bet. Even if I'm way off, it's a lot more than without a gun. A couple of hundred at least. Not two, or three!
So yes, I'm sorry, guns definitely, 100% kill people.
And more people will absolutely kill someone if they possess a gun than if they didn't. And by extension the same is true of AI.
I'm interested in how you even came up with that response. It's obviously factually and logically wrong. What makes you think it's a reasonable argument to anyone?
Also worth pointing out that AI in this case is insanely unfit for its purpose (unlike a gun) and will have randomly killed lots of innocent people, even if the AI algorithm says otherwise.
> I know roughly 1,000 people. Maybe 10 of them have the physical capability of killing someone; in case you don't know, it's not actually that easy to do yourself.
Do you primarily work with invalids or children? Heck, even children can kill, but it usually requires working together. I was reading the other day about a group of under-10s in a village who buried another kid alive because he looked weird.
Of every person I've ever met between the ages of 16 and 60, I'd say 99% are physically capable of killing somebody - you only need to push someone at the right time to have them fall to their death. Frail old women have killed babies by covering their faces. There are poisonings.
Do guns make it easier or more accessible? Absolutely. Can a 95 lbs woman physically take on a 250 lbs man? Not likely in a 1:1 fight, but I met one who killed her husband with a knife.
I primarily work with people who have an issue with killing other people.
That, and that it is non-trivial without a gun, or more powerful weapon, to kill someone.
Which is why, in a lot of places, it's extremely difficult to own or have a gun. And sane people consider a gun's use very carefully. Most refuse to own one, or even consider holding one, never mind using one.
The AI discussed here is similar, to my mind. It shouldn't be available or in use, ever. It even strips away the one benefit a gun has: that the user must contemplate the end result.
I am agreeing that guns enable killing and make it easier and more available. I also agree that almost everyone I know has an issue with killing.
You claimed the vast majority of people you know are physically unable to kill. I think that is laughably naive.
If you mean that it is harder than you'd imagine to kill someone barehanded, I also agree. But humans are tool makers and users. A big stick or rock to the back of the head was a common way to die in our distant past. And if you want to rule out any mechanical leverage in the killing, most people are _physically_ capable of pushing someone. That could be off a cliff, down the stairs, or on level ground where someone trips and hits their head.
This isn't a question of morality: it is a matter of physics.
It does nothing to those barriers. They are still absolutely the same. Unless you're trying to argue that guns somehow magically imbue people with the intent to kill.
I assure you, they do not. In point of fact, the hobby can get rather onerous to keep up, due to maintenance costs and the burden of the magical thinking that individuals like yourself employ, necessitating constant vigilance and correction.
People kill people.
AI, gun, explosive, makes no difference. As long as there are two blokes around with irreconcilable opinions/worldviews, somebody's gonna want someone else dead. And that is the problem. The tools do not move until the mind employs them.
We agree that guns are equalizers - they allow a small woman to fend off a large man. That is the point. They make it physically easier to kill. Like, that is their entire point outside of sport.
As for being more mentally available: I was just reading about some asshole who shot at a car that pulled into his driveway. Yes, he is mentally unhinged. I don't feel it is a stretch to say that owning a gun enabled him to feel safe and shoot at the people from a distance; had he needed to get into a physical altercation, it very likely would not have ended with dead kids in the driveway.
I'm a gun rights supporter. I own guns. I take my kids shooting. People need to be held responsible. People can kill without guns, of course. But there is no way to argue that guns don't make killing more accessible.
You dramatically underestimate the physical capability of the people you know. Humans are strong and humans are fragile. Every single one of them could kill another human in a pre-technological society.
Apologies for my earlier reply; it's been pointed out to me that it was rude. It was, and I'd like to apologise.
On your point, I'm not sure where you get the assertion that any human could kill another in a pre-technological society. That appears evidently false to me. How did you come to that assertion?
I would say it is evidently true to me. As stated, humans are fragile. A punch or a fall can easily cause a brain injury leading to death. Get into an advantageous position on a person and they are going to have a real hard time preventing you strangling them unless they're trained or experienced in hand-to-hand fighting. On a purely physical level it is not hard to kill a person. This isn't even considering assistance from tools or infection, where a direct kill from fighting isn't required.
The number of people capable of this isn't 100%, sure, but it's far closer to 100% than your posited 10 in 1,000, i.e. 1%.
I give you a gun and say "shoot whoever you choose with this gun, the choice is yours".
I give you an AI powered gun and say "use this however you choose, I have programmed it to automatically shoot in certain circumstances".
In the latter case, I have some responsibility, because I shared in the decision making by programming the gun. Through my code I have put my proverbial finger on the trigger, right next to your finger.
Exactly - the safest thing to do with the AI gun is to not point it at anybody and to destroy it. Perhaps the AI gun shooting somebody wouldn't be intentional murder, but it's certainly manslaughter, since you know it could shoot whoever it's pointing at; simply pointing it at anybody is the active decision that frames the crime.
Said people are trusting the intel from the AI. Those who provide that intel should possibly shoulder responsibility for its effects, or at least its efficacy.
This is such a BS argument. Before guns you could still kill people, but it'd take a lot more effort and organization to kill en masse. Same deal if you apply this logic to nukes or missiles. Yes, those people should be held responsible, but there should also be systematic regulation of AI robot killing machines, just as we have international conventions for cluster munitions and unusually cruel weapons. This is just common sense 101.
Sorry, but from an AI system that targets individuals to a system that kills them (they already have autonomous drones with computer vision) is a path of at most one or two web services. So AI kills people.
> This specific application and the claimed rationale is as close as I have come to seeing what I consider true and deliberate "Evil application" of technology out in the open.
Someone will double down and include AI into the execution phase via AI controlled drones, tanks, etc. Then they will claim no responsibility and blame the ghost-in-the-shell.
The Guardian article: https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai..., makes me wonder whether AI development should be allowed at all. Didn't even have that thought before today.
Is this a naive take?