
Sorry, I have to chime in, because there is also my very own NNTP server, with a web interface for people who don't want to download a client: https://in.memory.of.e.tern.al/comp.misc/

There are still some people there, talking about stuff. It's very much like Hacker News, but with far fewer people :)


> They fail because of severe checklist fatigue

> checklists often leaves a lot to be desired.

You say yourself that checklists are great, and it is proven that they save lives. So if there is fatigue in using them over and over, and they are not perfect, well... get over it? Sure, I can understand that it's boring to go through the same checklist 5 times a day, but come on, there are lives at stake here. If one of your wife's patients gets an infection and dies because she forgot some important, simple step because of "checklist fatigue", how would she feel?


I don't think that's what checklist fatigue means -- it's not about boredom or tedium.

It means that the more you keep having to skip over irrelevant items and the more you still depend on remembering other things that aren't on the checklist, the less likely any human being is to reliably follow the checklist -- because they accidentally skip over an item thinking it was the irrelevant one, or jump back to the wrong item (skipping others) because they got "off" the checklist to do steps that weren't on it.

The point of a checklist is that it's supposed to be a single idiot-proof source of truth in a specified area, reducing mental complexity and therefore reducing errors.

Once it stops being that because it isn't perfect, it can easily increase mental complexity which requires more brain use and increases errors. That's the fatigue.

So it's not a question of just "getting over it".


Keeping a checklist up to date is incredibly important.

If you have to skip items on a list, then human error pops up again.


The Messiah/God complex is a very real thing with MDs. It's dependent upon the environment and training, but a lot of them really do think that they are 'special' and that they have 'proved' themselves via the rigours of school and residency. Their sense of self is tied up in their job performance. As in other professions, if you 'attack' their job and their work, you are attacking them personally.


> The Messiah/God complex is a very real thing with MDs.

That's true with pilots, too, especially fighter pilots. But they rigorously use the checklists.

The pilots also know that many crashes have been traced to skipping an item on the checklist. For example, John Denver's fatal crash was due in part to failing to fuel the airplane before flight.


Fighter pilots aren’t going to get paid more if they cram in an extra mission by skipping steps.


It's also probably partially attributable to the fact that it's their own life on the line if they skip a checklist step like lowering landing gear.

That, in combination with training. It's just the cultural thing to do: everyone uses checklists, and from the moment pilots start training they are always running checklists.

It also helps that a checklist for a plane is always relevant, whereas a checklist for a medical procedure is more variable.


Their career is also on the line, even if they live through the incident. Nobody wants to keep on a pilot who is careless with a $100m machine.


Yeah, there's definitely a difference in culture: checklists are the norm in aerospace, whereas they're a new and developing thing in medicine.


Fighter pilots aren't in it for the pay. They often remark that they're amazed that people actually pay them to fly.

I'd forgive them for skipping the checklist if their airfield is under attack and they have to get their crates in the air or die on the field.


> aren't in it for the pay.

Nobody ever is. But if their contract negotiator ever opens with that line...


You're not going to cure the human condition. If checklist fatigue is a real problem, telling people to suck it up and do their jobs is not going to work. The process needs to be improved to account for checklist fatigue; it's that simple. Be it reading each item aloud before checking it off, having a secondary scribe confirm and check items off, or any other improvement that minimizes the effect of the fatigue.


No matter how right you might be, "get over it" is not an effective way to change human behavior.


> Certificates which are perfectly good for communication and do not pose any significant security risk.

Is it so? I remember that in 2008 someone was able to create a rogue CA certificate because of the predictability of serial numbers [1]. It was a different time: we still used MD5. But are you sure the limited entropy used to generate serial numbers does not pose any security risk?

[1] https://www.win.tue.nl/hashclash/rogue-ca/


The difference here is one bit. The BRs say you must use at least 64 bits of entropy; EJBCA out of the box used 63 bits. A bad guy might need to spend, say, $40 trillion to make a bogus cert instead of $80 trillion. No bad guys have $40 trillion, so it's irrelevant. And that would be if we were still using SHA-1 (which is broken, and so the entropy is all that would keep you safe against collision attacks), but in fact Actalis and other CAs are only issuing with SHA-256, which isn't broken.
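To make the one-bit difference concrete, a minimal Python sketch (the function names and the 72-bit width are my own illustrative choices, not EJBCA's actual code) of a serial with at least 64 bits of entropy versus one that quietly loses a bit by being forced positive:

    import secrets

    def br_compliant_serial() -> int:
        # Draw more than the required 64 bits, so that any later
        # adjustment (DER serials must be positive) still leaves
        # at least 64 bits of CSPRNG entropy in the value.
        return secrets.randbits(72)

    def one_bit_short_serial() -> int:
        # One way to end up with 63 bits: draw exactly 64 bits,
        # then clear the top bit to keep the integer positive.
        return secrets.randbits(64) & 0x7FFF_FFFF_FFFF_FFFF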

This is a Brown M&M ‡. It doesn't actually matter in terms of security; 63 bits, 65 bits, it's never going to make a real difference. But we wrote 64 bits into those rules, and if we can't trust you to obey that rule, who says you got the really important parts right?

‡ https://www.snopes.com/fact-check/brown-out/


It's not that Actalis has not tried to obey, or purposefully withheld information or tried to mislead the community. The disagreement is on how strict the interpretation of the BR should be.

Would Van Halen abort a concert if there was a single brown M&M in a bowl of 1000? Probably not, because even though it's a violation of their contract, they got their point across: it still means the organisers had read through the full contract and tried to obey.

Reading through the discussion, I wish I could be as strict as Ryan Sleevi is in demanding that browsers fix their incompatibilities with the web's BRs (ehm.. standards). Chrome, there's this bug where an element is placed one pixel off from where it should be (it's by no means critical and does not impact users of any website in any meaningful way, but according to the CSS Box Model Module Level 3 spec, paragraph suchandsuch, it's wrong). How about you fix it by next week, or I'll uninstall you from all systems in the world.


Further to tialaramex's great answer:

In the 2008 attack, the CA was using sequential serial numbers. They weren't randomized at all.

The attackers had to do a large amount of computation to produce colliding certificates even when they knew exactly what the rest of the content of the certificate would be.

In the aftermath of that, we got MD5 deprecation and also a requirement that certificates include randomness that wouldn't be predictable to the subscriber, so that the subscriber doesn't know what the collision target is.

It's a little complicated to foresee the exact size of the benefit from this in different threat models, but in the model where the attacker has the capability to produce two related texts with the same SHA-256 hash, the current precaution means that the attacker has only a 1/2⁶⁴ probability that using that capability in conjunction with a certificate issuance will yield a matching certificate.

In 2008, certificate issuance usually cost money for the subscriber, where now it needn't, but there are still issuance rate limits and there's now Certificate Transparency, so all of the attempts will become public.

A bigger risk is presumably an n-way collision capability where an attacker can produce, not just 2, but n related plaintexts that all have the same SHA-256. In that case the attacker has an (n-1)/2⁶⁴ probability per certificate issuance that the issued certificate has the desired hash, assuming nothing unexpected or uncontrollable happens during the certificate issuance. (Another tricky problem, for example, is the time of issuance, which can be specified accurate to the second by the CA and appears in the certificate.)
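As back-of-the-envelope arithmetic, here is that model in Python (the issuance rate is a made-up number, just to show the scale):

    # Each issuance is an independent trial that succeeds with
    # probability p = (n - 1) / 2**64.
    n = 2                       # attacker can produce n colliding plaintexts
    p = (n - 1) / 2 ** 64
    per_day = 1_000             # hypothetical issuance attempts per day
    expected_days = 1 / (p * per_day)
    print(f"~{expected_days:.1e} days on average")  # ~1.8e16 days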

Especially when nobody has demonstrated a SHA-256 collision or research that's close to producing one, and all attempts would be public in Certificate Transparency, and all CA issuance is rate-limited in some way, it doesn't seem like even 1/2⁶³ is that bad. Just five or ten bits of entropy in the certificate would probably have been enough to stop the 2008 researchers' attack from succeeding at all.

The attack would also have to have been carried out while the existing certificates were being issued (if there was no successful attack during certificate issuance, there won't be an attack after-the-fact).

I like tialaramex's brown M&M analogy: browser vendors are concerned with ensuring that CAs take rules and policies very seriously, even if there's no conceivable way that a particular problem could be related to an attack or vulnerability.


Actalis is a major Italian CA that works mainly with big banks and the public sector, such as (from the bug report):

- the Tuscany Region (e.g. O=Rete Telematica Regionale Toscana, etc.)

- the Piedmont Region (e.g. O=CSI Piemonte, etc.)

- central public government (e.g. O=Bank of Italy, Ministry of Transports, etc.)

- major banks (e.g. O=Unicredit S.p.A., FinecoBank, etc.)

- large private companies (e.g. O=SNAM, Terna, Wind, etc.)

- chambers of commerce


There is a Name Constraints extension in X.509[1] that does exactly that, but to my knowledge no browser implements it.

[1] https://tools.ietf.org/html/rfc5280#section-4.2.1.10
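For anyone curious what that looks like in practice, here's a minimal sketch with Python's `cryptography` package (the CA name, validity window, and the `example.com` constraint are illustrative choices of mine):

    import datetime
    from cryptography import x509
    from cryptography.x509.oid import NameOID
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    key = ec.generate_private_key(ec.SECP256R1())
    name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Example Constrained CA")])

    cert = (
        x509.CertificateBuilder()
        .subject_name(name)
        .issuer_name(name)  # self-signed, for brevity
        .public_key(key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(datetime.datetime.utcnow())
        .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=True, path_length=0), critical=True)
        # RFC 5280 section 4.2.1.10: this CA may only issue for
        # example.com and its subdomains; relying parties that honor
        # the extension reject anything else it signs.
        .add_extension(
            x509.NameConstraints(
                permitted_subtrees=[x509.DNSName("example.com")],
                excluded_subtrees=None,
            ),
            critical=True,
        )
        .sign(key, hashes.SHA256())
    )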


And that would have to be baked into the CA certificate, not specified when the CA is trusted. I don't want my browser to ask the CA what it's allowed to do, I want to tell it what it's allowed to do.


It has to be baked into the CA, so that a browser vendor can check it before inclusion. If the CA specifies domains it is not allowed to sign certificates for, it will not be included.


> but to my knowledge no browser implements it

Firefox does, though I don't know how much they check beyond just the dNSName constraints. Here's the unit test making sure it stays working: https://github.com/mozilla/gecko-dev/blob/b8157dfaafc42deb3b...


The referrer header is in no way a tool to differentiate real users from a DDoS attack.


I disagree. A fake referer is easily checked: is my link really on the front page? If so, all good. If not, it's getting suspicious.
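A minimal sketch of that check, assuming a Python backend and the third-party `requests` package (the function name is mine; a real version would need caching, rate limits, and SSRF protection):

    import requests

    def referer_really_links_here(referer: str, my_url: str) -> bool:
        # Fetch the page the Referer header claims sent the visitor,
        # and see whether it actually contains a link to us.
        # Naive on purpose: misses links rendered by JavaScript.
        try:
            page = requests.get(referer, timeout=5)
        except requests.RequestException:
            return False
        return my_url in page.text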


A similar line of argumentation has historically been used to push every outrageous thing on innocent people. You sell the "abuse" as defense against a shocking crime. Ok, you only said DDoS, where the usual is terrorism and child abuse. But the bottom line is the same: I need to take something private from you to defend myself.

What would you think if all stores took every measurement they could about you without disclosing it and eventually justified it by saying "how else would I know you're not a thief"?


A referrer header is not an outrageous amount of information. It's the store-equivalent of asking "Where did you learn about us?" Taking it away would hurt smaller sites and do nothing against large companies and ad networks.


> A referrer header is not an outrageous amount of information.

But it does reveal information that is none of the website's business.

> It's the store-equivalent of asking "Where did you learn about us?"

No, it's not. Actually asking that question would be the equivalent. What this is is surveillance.


The store is asking; the site is not. And 99% of people are trained to click "Accept" after years of dark-pattern abuse, and they have very little understanding of what happens in the background. I hope you understand that my point isn't to bash a webmaster but rather to bring into the discussion the principle of the whole thing. It seems that everybody draws the line for what is acceptable in such a way that it perfectly covers their own needs.

I've seen people insist that facial recognition is no different from what humans do naturally, just done with electronics. We can agree the implications are different.


  You sell the "abuse" as defense for a shocking crime.
This works the other way around too. You use the potential abuse of non-personally identifiable information (combining it with other data points, which is illegal without consent in the EU) to take useful data away from innocent webmasters.


> to take useful data away from innocent webmasters.

Webmasters who are collecting data about me or my machines (excluding the data about my direct use of their site) without my permission are not "innocent webmasters".


I'm surprised that in 2019 people (especially on HN) still believe/claim that users trying to hang on to their personal data "abuse" this to "take useful data away from innocent webmasters".

There are dozens of real-life situations where covertly collecting such data would be considered completely unacceptable, and yet my comment arguing this was still substantially downvoted.

But I guess my point is being in a technically literate community makes no difference when it comes to making a buck. Once one agrees to take a "not an outrageous amount" of private data for a bit of money, they'll agree to take an outrageous amount for outrageous money. And I think this is a perfectly accurate explanation for what FB, Google, [you name it] are doing.


Doesn't your argument work against encryption just the same? With such an argument aren't you actually punishing 99.9% of the internet population for what the 0.1% is doing?


But in general it's the only way to understand who's linking to you. Sure, not essential, but useful to see, especially back when search engines sent it and you could see what keywords people used to find your site. If it were gone, as it already is in many cases due to https, people would adjust.


  "as it is in many cases now due to https"
That's not exactly true. The referrer is only hidden if that is explicitly requested, either with a meta tag:

  <meta name="referrer" content="no-referrer" />
Or by using Referrer-Policy:

  Referrer-Policy: no-referrer
The default behavior is no-referrer-when-downgrade. This means that referrers are hidden when going from https to http, but https > https is still visible. And with https adoption reaching saturation, Referer headers are usually still sent.
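A toy model of that default, just to pin the rule down (my own sketch, not browser code):

    def referer_is_sent(from_url: str, to_url: str) -> bool:
        # no-referrer-when-downgrade: the header is sent except when
        # navigating from an HTTPS page to a plain-HTTP one.
        downgrade = from_url.startswith("https://") and to_url.startswith("http://")
        return not downgrade

    assert referer_is_sent("https://a.example/", "https://b.example/")      # sent
    assert not referer_is_sent("https://a.example/", "http://b.example/")   # hidden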


Google has hidden search terms from referrals for years now (encrypted search).


> but is this a common sentiment

It is. One more data point: I started reading, saw that phrase, closed the tab, and came here for the comments.


It's not scanning only port 3000:

    const portsToTry = [
      80, 81, 88,
      3000, 3001, 3030, 3031, 3333,
      4000, 4001, 4040, 4041, 4444,
      5000, 5001, 5050, 5051, 5555,
      6000, 6001, 6060, 6061, 6666,
      7000, 7001, 7070, 7071, 7777,
      8000, 8001, 8080, 8081, 8888,
      9000, 9001, 9090, 9091, 9999,
    ];
view-source:http://http.jameshfisher.com/2019/05/26/i-can-see-your-local... (line 125)
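For comparison, a rough sketch (mine, not the article's code) of probing the same kind of list from the shell side with plain sockets:

    import socket

    PORTS = [80, 81, 88, 3000, 3001, 8080, 8888, 9999]  # abbreviated list

    def open_ports(host: str) -> list[int]:
        found = []
        for port in PORTS:
            with socket.socket() as s:  # TCP by default
                s.settimeout(0.2)
                if s.connect_ex((host, port)) == 0:  # 0 means connected
                    found.append(port)
        return found

    print(open_ports("127.0.0.1"))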


That's awesome. I hope that all Asian phone manufacturers, like Xiaomi, Sony, Samsung, and others, get together and make an alternative to iOS and Android. They are big, and they can pull it off. Android app compatibility is a big plus. Finally, some movement in the market.


Yeah, Samsung, Sony, and Huawei getting together to build the OS to crush Android...

Japan, China, and Korea. Three countries that love each other. /s


Korean here. People don't like the countries themselves (we literally hate Japan), but don't have strong feelings about the companies or ordinary people. (We buy Sony products, talk with Japanese people, study in Japan or work at Japanese companies... etc.) Similar with China. It's something different... and I'm pretty sure that if there were a reason for the three companies to make an Android-compatible OS, they would, and we would use it too (though I'm not sure Sony has significant mindshare in the Android market... isn't Japan's phone market Apple + Samsung + <10%?).


The more I think about it, the more feasible it actually seems. Sony doesn't have a market share that's proportional to their technology at the moment, but they could actually occupy the high-end lines in this new ecosystem.


They tried with LiMo/Bada/Tizen and couldn't pull it off...


> @Benjojo12 and I are building an encryption tool that will also support SSH keys as recipients, because everyone effectively already publishes their SSH public keys on GitHub.

Is this not already possible with tools like https://ssh-vault.com/ and (shameless plug) https://sshenc.sh/?

Those tools use the RSA keys to encrypt a symmetric key that is then used to encrypt the data, but the outcome is effectively the same. No?
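Roughly that pattern, as a sketch in Python with the `cryptography` package (function name and parameter choices are mine; this is not how ssh-vault or sshenc.sh are actually implemented):

    import os
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def hybrid_encrypt(ssh_pubkey: bytes, plaintext: bytes):
        # Parse an OpenSSH-format RSA public key, e.g. a line
        # fetched from https://github.com/<user>.keys
        pub = serialization.load_ssh_public_key(ssh_pubkey)
        # A fresh symmetric key encrypts the actual data...
        key = AESGCM.generate_key(bit_length=256)
        nonce = os.urandom(12)
        ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
        # ...and the RSA key only wraps that symmetric key.
        wrapped_key = pub.encrypt(
            key,
            padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )
        return wrapped_key, nonce, ciphertext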


Don’t those run afoul of the linked white paper (under “dangerous” in the 2nd paragraph), which talks about the attack paths made available if an RSA key is used for signing and encryption?


I don't think so: neither of those tools signs the message with the same RSA keypair. sshenc.sh, for example, does not sign the message whatsoever. An attacker could just intercept a ciphertext, drop it, encrypt a different message, and send that.

Those tools are not meant for sender authentication. If you want that, you would have to first share the sender's pubkey with the recipient, and sign your message with the corresponding privkey.


While the tools themselves might not use the same key for both operations, I think the question was whether it is problematic that a user's SSH keys, used in SSH for signing, are also used by these tools for encrypting. In other words, the concern is that the same key is used for two different operations, even if not in the same tool.

As I commented in https://news.ycombinator.com/item?id=19953623, I'd love to see another blog post walking folks through why/how the "dangerous" RSA keys are in fact usable for both operations, because the textbook RSA concerns aren't a concern because of X, Y, and Z.


The point of the tool isn't to use SSH keys; that's just a nice feature of it. The normal usage for the tool doesn't involve SSH keys at all.


I don’t know if that’s really the case, given https://docs.google.com/document/d/11yHom20CrsuX8KQJXBBw04s8...

> Goals

> * The option to encrypt to SSH keys, with built-in GitHub .keys support

And lower on the page, it shows the use of built-in command-line syntax for “github:$recipientname”.

It definitely works for keys that aren't used for SSH, but support for SSH keys seems to be a large part of the rationale for the ed25519 support (otherwise it would just be a tool that operated on X25519 keys directly).


That's true. I'm just saying the rationale for the existence of the tool isn't "build something that encrypts with SSH keys", but rather "build a modern replacement for PGP file encryption".

