
In many versions of road rules (I don't know about California), four vehicles stopped at an intersection where none of the four lanes has priority creates a dining philosophers deadlock, where all four vehicles are giving way to each other.

Human drivers can use hand signals to resolve it, but self-driven vehicles may struggle, especially if all four lanes happen to have a self-driven vehicle arrive. If all the vehicles are coordinated by the same company, they could potentially coordinate out-of-band to avoid the deadlock; it becomes even more complex with a mix of cars coordinated by different companies.
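A minimal sketch of one deadlock-avoidance approach, analogous to the resource-ordering fix for dining philosophers: impose an arbitrary but globally agreed total order on the waiting vehicles. The vehicle IDs, and the assumption that every car can learn them (e.g. via an out-of-band broadcast), are hypothetical:

    # Hypothetical sketch: break a four-way-stop deadlock by imposing a
    # total order (here, sorting by vehicle ID) - the same resource-ordering
    # idea used to fix dining philosophers. Assumes every vehicle can see a
    # unique, comparable ID for each waiting vehicle.
    def proceed_order(waiting_vehicle_ids):
        # With no lane having priority, all vehicles yield forever; a
        # globally agreed order guarantees exactly one vehicle moves first.
        return sorted(waiting_vehicle_ids)

    # Four self-driven cars arrive simultaneously and all yield to each other.
    deadlocked = ["waymo-42", "cruise-07", "waymo-13", "zoox-99"]
    print(proceed_order(deadlocked))  # cruise-07 goes first, breaking the cycle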


I think it's Jon Postel who was the original source of the principle (it's often called Postel's Law). https://www.rfc-editor.org/rfc/rfc761#section-2.10 is an example dating back to 1980.

I believe ethanol is not actually given as an antidote for methanol poisoning very often in modern times. It does work as a competitive inhibitor of alcohol dehydrogenase (i.e. it keeps the enzyme occupied converting ethanol to acetaldehyde, slowing the conversion of methanol to formaldehyde and on to formic acid, which is not eliminated quickly and causes metabolic acidosis) - allowing the methanol time to leave the body through excretion, and limiting formic acid levels. However, other drugs like fomepizole also inhibit alcohol dehydrogenase, with lower toxicity than ethanol.

If the intent is to stop it being used for a business, that's inherently at odds with part of the OSI's definition: "The license must not restrict anyone from making use of the program in a specific field of endeavor. For example, it may not restrict the program from being used in a business, or from being used for genetic research".

Now technically it could perhaps meet the OSD if it required a royalty for hosting the software as a SaaS product, instead of banning that outright - since it would still allow "free redistribution" and pass on the same rights to anyone receiving it (the OSD is defined in terms of prohibitions on what the licence can restrict, and there is no prohibition on charging a set amount for use, unless that requires executing a separate licence agreement).

Now arguably this is a deficiency in the OSD. But I imagine if you tried to exploit that, they might just update the definition and/or decline to list your licence.


It might be easier to block by ASN rather than hard-coding IP ranges. Something as simple as this in cron every 24 hours will help (adjust the ASNs in the bzgrep to your taste, and couple it with occasional persistence, e.g. iptables-save, so you don't get hit again on every reboot):

    TEMPDIR=$(mktemp -d)
    trap 'rm -r "$TEMPDIR"' EXIT

    # Fetch the latest full BGP routing table snapshot from RouteViews.
    curl https://archive.routeviews.org/oix-route-views/oix-full-snap... -Lo "$TEMPDIR/snapshot.bz2"

    # Extract the prefixes originated by the listed ASNs (-E is needed so
    # grep treats (a|b) as alternation rather than literal characters).
    bzgrep -E -e " (15828|213035|400377|399471|210654|46573|211252|62904|135542|132372|36352|209641|7552|12876|53667|138608|150393|60781|138607) i" "$TEMPDIR/snapshot.bz2" | cut -d" " -f 3 | sort -u > "$TEMPDIR/badranges"

    # (Re)create a dedicated chain so the rules can be refreshed in place.
    iptables -N BAD_AS || true
    iptables -D INPUT -j BAD_AS || true
    iptables -A INPUT -j BAD_AS
    iptables -F BAD_AS

    for ROUTE in $(cat "$TEMPDIR/badranges"); do
        iptables -A BAD_AS -s "$ROUTE" -j DROP
    done


One thing here doesn't seem right. I thought the whole thread was about them negotiating down how much the executor of a deceased estate would pay to one hospital making claims against it. But the thread included things like: "She had been afraid of being sent to collections and asked why we wouldn’t just take their counter-offer", which suggests a (mis)understanding that it is a personal debt of the sister's.

This suggests an 'AI can't see gorillas' problem: in an AI-human interaction, the AI also misses relevant big-picture context that a human advisor could have helped identify.


Depending on the state and its laws, spouses can be responsible for debt. Along with that, the hospital could perhaps not sue her but sue the husband's estate, and those liabilities would trickle down onto shared assets - so if they had a house, it now has a lien attached to it.


For only 100 GB, that's quite expensive storage. Compare, for example, Backblaze B2 at $7.20/year for 100 GB ($0.006/GB/month × 100 GB × 12 months). That is just the storage, so lots of I/O would add to it - but if you aren't using the full 100 GB and don't do much I/O, it might also come to less than $7.20.


I've personally been unable to pass AI 'liveness' detection (a high-stress situation, since it related to something my new employer asked me to do after I had already resigned from my previous role) despite repeated attempts, and all I have is alopecia areata affecting my eyelashes / eyebrows (a relatively common condition).

These models are discriminatory for a lot of people, I'd say, and shouldn't be allowed.


I think these models are fine for the people they do work on, but it's idiotic to assume facial recognition works for everyone. I should be able to use a website even if my webcam is broken.

The practical problems are all caused by AI companies lying through their teeth and making bold claims and their customers being dumb enough to believe them.

The actual problem that needs solving is that you need to validate your age without any solid form of proof being available in the first place. In places where everyone already has a digital ID, there are technical solutions to that problem, and until those are available for free, it's idiotic to require the use of such technology in the first place. The UK doesn't have a common, accessible digital ID, yet it expects digital identification of some kind to just happen.


Not even close. Israel apparently has AI bombing target intel & selection systems called Gospel and Lavender - https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai.... The claims are that these systems have a selectivity of 90% per bombing, and that they were willing to bomb up to 20 civilians per person classified by the system as a Hamas member. So, assuming that is true: 90% of the time they kill one Hamas member and up to 20 innocents; 10% of the time they kill up to 21 innocents and no Hamas members.

Killing 20 innocents and one Hamas member is not a bug - it is callous, but that's a policy decision and the software working as intended. But when it is a false positive (10% of the time), due to inadequate / outdated data and inadequate models, that could reasonably be classified as a bug - so all 21 deaths for each of those bombings would count as deaths caused by a bug. Apparently at least earlier versions of Gospel were trained on positive examples that mean someone is a member of Hamas, but not on negative examples; other problems could be due to, for example, insufficient data, and extrapolation outside the valid range (e.g. using pre-war data about how quickly cell phones are traded, or people's movements, when behaviour is different post-war).
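To make the arithmetic explicit, here is a back-of-envelope calculation under the claims above (the 90% figure and the 20-civilian ceiling are reported claims, not verified data):

    # Back-of-envelope under the claimed figures (not verified data).
    precision = 0.9          # claimed fraction of strikes hitting a real member
    collateral_ceiling = 20  # claimed civilian deaths accepted per strike

    # In a false positive, everyone killed (up to 21 people) is innocent.
    deaths_per_false_positive = collateral_ceiling + 1

    # Deaths attributable purely to misclassification (the "bug"):
    bug_deaths_per_strike = (1 - precision) * deaths_per_false_positive
    print(f"~{bug_deaths_per_strike:.1f} bug-attributable deaths per strike")
    print(f"1,000 strikes -> ~{1000 * bug_deaths_per_strike:,.0f} deaths")

At roughly 2.1 such deaths per strike, a few thousand strikes puts the total well into the thousands.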

I'd therefore estimate that deaths due to classification errors from those systems are likely in the thousands (out of the 60k+ Palestinian deaths in the conflict). For comparison, Therac-25's bugs caused 6 deaths.


It does work as long as the attesting authority doesn't allow issuing a new identity (before it expires) if the old one is lost.

You (Y) generate a keypair and send your public key to the attesting authority A, keeping your private key. You get a certificate.

You visit site b.com, and it asks for your identity, so you hash b.com|yourprivatekey. You submit the hash to b.com, along with a ZKP that you possess a private key that makes the hash work out, and that the private key corresponds to the public key in the certificate, and that the certificate has a valid signature from A.
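A minimal sketch of that hash construction (the names are hypothetical, and the ZKP itself would need a real proving system, so it is only noted in comments):

    import hashlib
    import secrets

    # Hypothetical sketch of the per-site pseudonym above. The accompanying
    # zero-knowledge proof (that the hash was computed from a key certified
    # by A) needs a real ZKP system and is not shown here.
    private_key = secrets.token_bytes(32)  # stand-in for the certified key

    def site_pseudonym(domain: str, key: bytes) -> str:
        # hash of "domain|privatekey": stable per site, unlinkable across sites
        return hashlib.sha256(domain.encode() + b"|" + key).hexdigest()

    # b.com always sees the same hash for this user (so it can ban or
    # rate-limit it), but b.com and c.com cannot link their two hashes.
    assert site_pseudonym("b.com", private_key) == site_pseudonym("b.com", private_key)
    assert site_pseudonym("b.com", private_key) != site_pseudonym("c.com", private_key)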

If you break the rules of b.com, b.com bans your hash. Also, they set a hard rate limit on how many requests per hash are allowed. You could technically sell your hash and proof, but a scraper would need to buy up lots of them to do scraping.

Now the downside is that if you go to A and say your private key was compromised, or you lost control of it - the answer has to be tough luck. In reality, the certificates would expire after a while, so you could get a new hash every 6 months or something (and circumvent the bans), and if you lost the key, you'd need to wait out the expiry. The alternative is a scheme where you and A share a secret key - but then they can calculate your hash and conspire with b.com to unmask you.


Isn't the whole point of a privacy-preserving scheme that you can ask the attesting authority for many "certificates" and it won't care (because you may need as many as the number of websites you visit), and the website b.com won't be able to link you to them, and therefore if it bans certificate C1, you can just start using certificate C2?

And then of course, if you need millions of certificates because b.com keeps banning you, it means they are banning you based on your activity, not based on your lack of a certificate. And in that case, it feels like the certificate is useless in the first place: b.com already has to monitor and ban you.

Or am I missing something?

