
There is a white paper on 1Password's design: https://1passwordstatic.com/files/security/1password-white-p...

They also regularly have audits and pen tests, with the reports published publicly: https://support.1password.com/security-assessments/

Finally, it's been built by people who are respected in the security industry.




Still doesn't explain why it's all not at least source-available. I'm not going to complain if they don't use open-source licenses such as MIT or (A)GPL, but straight up not making the source code publicly readable at all is a big strike against it.


Making it open source is no guarantee at all (if anything could be); it could even be more dangerous: anyone could spot a security hole and take advantage of it without reporting it, and there is no guarantee of responsible disclosure.


Are you advocating for security through obscurity?


When systems and code are audited and pentested by third parties, I don't see any benefit of actually making anything public.


There is good insight into this in a comment from them in 2014: https://1password.community/discussion/comment/114870/#Comme...


That seems to be about transitioning to an open-source model. That's not what I mean. I mean simply having their git repo publicly accessible in a read-only fashion: no external contributions, no license, etc.

I see no reason not to do this, especially for such a security-oriented service. You should be striving for as much transparency as possible.


Because they like being in business vs just giving away their software?

Where is the repo of the software that you've paid an unknown number of developers to work on for multiple years, over multiple versions, that you charge for, and that runs a viable business employing all of those people?


Just because source code is available doesn't mean you can legally use it.

That's why every "the company's source code got stolen!" story is a nothingburger: no competitor can use it, it would be a huge liability to be caught using it, and it "only" matters for people who might find bugs in it.


You should perhaps read up on software licenses and what it means when code is public but unlicensed.


They are in the business of doing business. How much of the codebase that makes up their business model would you like them to expose?


Bitwarden is in the business of doing business, too. But their source code is publicly available. A lot of it is even under GPL:

https://github.com/bitwarden/server/blob/master/LICENSE_FAQ....


Fun fact: Bitwarden also includes some of 1Password's open-source code: https://github.com/bitwarden/clients/blob/c10e93c0d9cf306a59...


I mean, even if their code was publicly available, how could you verify that they are running that code, and not some other code?


Compile it yourself & run the binary?


If you could do that, what value would the business have?


Hosted infrastructure? Support? Publishing signed binaries onto app stores?


Licensing


Thank you for the link.

> It's also been built by people who are respected in the security industry.

This means almost nothing. It is an appeal to authority. Experts can still miss things. Yes, it is better than experts saying a product stinks, but it still is not trustworthy without open source. Maybe I'm making my own fallacy here; I'm just trying out a position.


Most people don't read open-source code and instead trust that the experts will catch any issues...


It's turtles all the way down. Would you rather NOT have the option for other experts to see the code?


Of course not, I'd rather the code be Open and audited.

Being Open Source is in some ways a multiplier on security because it allows more expert review. But the expert review is the important part; if security-critical code is Open Source but hasn't been looked at by anyone other than the main developers, the Open Source part is a multiplier on zero.

It's a little bit more complicated than that, and there are a lot of other factors at play as well, even outside of security. We're not really getting into stuff like future-proofing and what happens if 1Password gets a lot worse in the future. It's complicated.

But the gist is that while it would be a lot better if 1Password were Open Source, in its current state it has probably still got more eyes on it than some Open Source security projects do.


When the tool is open sourced, both honest and rogue security experts are going to go through it.

The question is how much value the first group adds versus how much the second removes. Based on the incentives for each (an honest security researcher maybe getting a small bug bounty, a rogue one potentially gaining access to tens of thousands of accounts), I think it might be a net negative overall.


Even so, the number of experts looking at the code will increase if it's open, right? I know the concept of "many eyes make all bugs shallow" isn't the panacea we once thought it was, and that bugs can still lurk in widely-read sources.

But that number must still be smaller than the number of bugs that exist in code read by a smaller group.


An appeal to authority is not a logical fallacy if the person in question is actually an authority in the domain.


Sure it is: an appeal to authority is not a valid step in a deductive logical argument, unless you have somehow established that the authority in question is literally infallible.

Now, it's grounds for an (extremely) persuasive inference! And we know very little of what we consider known by strict deductive logic: we rely on weaker inferential reasoning the vast majority of the time. Grandparent's "means almost nothing" is much, much too strong.

But when we really want to know for sure that something is true, people are going to want to see proof, not a statement from someone who probably knows of proof.


If you want to go down this route, we know nothing of the real world from strict deductive reasoning because the axioms strict deduction flows from do not apply to the real world, but to mathematical universes where absolute truth is accessible to us. In reality, all statements we could use as premises are only probably true to a certain level of confidence, having themselves been constructed from inductive reasoning. Therefore, appeal to a good authority is as good of a step as any, by which I mean it's only provisionally worthwhile unless and until sufficient evidence comes in to demonstrate it is invalid in a specific case.


Absolutely we can't get very far with logic without relying on some axioms, and those axioms can't themselves be proven. But I don't think it follows that we need to accept every axiom someone proposes, such as what counts as a good enough authority. You and I probably agree on the basic existence and persistence of objects, for example, and we might as well pretend that we agreed to treat that as axiomatic ahead of time. I'm less sure that we'd have the same list of which people count as infallible-enough experts in which areas.

I'm really just here to stand up for the body of knowledge I learned in 9th grade Logic Camp. Good old classical Aristotelian logic is where the concept of a "logical fallacy" comes from; it really is all about deductive reasoning and not about inferences; and there really isn't a PhD-from-a-really-good-school exception to the fallacy of argument from authority. Just the same way that "X is false because George Santos said it" is simultaneously very persuasive, at least to me this week; and also a fallacy ad hominem.


> But I don't think it follows that we need to accept every axiom someone proposes, such as what counts as a good enough authority.

Nobody's saying we do, and I think there's a good middle ground we all actually inhabit where the director of the CDC, for example, is an authority on diseases when speaking in an official capacity, but we don't give a damn what they think about the latest movies. This isn't a difficult concept until we try to formalize it, really, at which point I'm sure we can run off into a bramble of paradox and bizarre conclusions we "must" accept in order to satisfy certain kinds of logical consistency.

> I'm less sure that we'd have the same list of which people count as infallible-enough experts in which areas.

This is a problem and we've seen it be a problem quite severely in recent years. Part of the problem is certain political groups following people with absolutely no recognizable expertise in anything except making money from people who don't recognize logical argumentation as even potentially useful: If it doesn't validate their pre-existing ideas, it's not only wrong, it's a trap laid by the enemy, and anyone who promotes it must be punished. Take that mindset, add some over-simplified and incorrect models of reality, and you have people who refuse to accept reasonable sources of authority for self-contradictory and purely emotional reasons.

Finally, logic has come a long way since Aristotle, and using some kind of consistent attempt at statistical reasoning is no less valid than declaring some probably-true statements to be axioms or postulates and reasoning deductively from there. Like diagnosing a disease: You can't use deductive logic to determine why you're having flu-like symptoms, you must use some kind of reasoning based on relative frequency of diseases and, possibly, incorporate the results of various tests in a more inductive fashion. "George Santos lies a lot" is a perfectly valid piece of evidence to incorporate into a worldview, just like "The seasonal influenza is more common than some horrendous infection which also initially presents with flu-like symptoms" is.


I agree about the problem; I might quibble with the psychology, but close enough. But that's exactly why the distinction I'm trying to draw is so important. If we keep in mind the difference between what we can rigorously establish and what we're fundamentally taking on faith (however well-founded), then at least we can talk to people on the other side: clarify core disagreements, examine evidence, and occasionally, with entirely too much work, change a few minds. That may sound hopelessly optimistic, but shifting one mind in fifty would be an earthquake: for most purposes, enough to declare victory.

Treat arguments from authority as indistinguishable from any other rule of reasoning, on the other hand, and there’s nothing to do but declare people who don’t share our view of authority irrational. And then I don’t know what the plan is.

It’s exhausting when people refuse to recognize well-established expertise, but I think it’s counter-productive that so many of us get defensive about it. Yelling at people to accept our authority figures isn’t working. It won’t ever work.


Here is where I think we part ways, philosophically speaking:

> If we keep in mind the difference between what we can rigorously establish and what we’re fundamentally taking on faith (however well-founded)

From my perspective, we're taking everything in the real world "on faith" (and there is a loaded phrase ripe to be deliberately misinterpreted) to a certain extent, and not just because of brain-in-a-vat arguments. For example, I sit in chairs thinking they're solid objects, but they're made of solid objects and might well collapse under me. In my experience, that doesn't happen to me, so my heuristic is that chairs are safe, but a heuristic isn't rigorous. It's "faith" if you want to phrase things that way.

Moving deeper, I trust that my senses provide me with accurate-enough reflections of reality I can use them to navigate my world safely, but I know enough about neurology to know that that isn't a given. Vision is reconstructed by the visual cortex from messy and incomplete nerve signals from the retinas, our sense of 3D space is reconstructed (based on low-level heuristics) from a pair of 2D images reconstructed from those messy retinal signals, and so on, from the bottom of the neurological hierarchy to the top of the conscious sense of self. The human machine lives off of best-guess reconstructions from incomplete and messy data.

This isn't mere acatalepsy, however: I think humans live in a physical world we're capable of perceiving accurately enough, and comprehending well enough, that we can accurately say we live in a real and comprehensible external Universe, and that some things don't go away even if you don't believe in them. Therefore, it's possible for our heuristic judgements to become more accurate at predicting reality over time, which is what separates knowledge from dogma.

Accepting that an authority is probably more likely to be right than wrong is a heuristic, and that heuristic can and should be improved, but all of our knowledge of reality is heuristic, so trying to treat reality like an axiom system is philosophically wrong-headed and incapable of dealing with the full complexity of reality as well.


I'd agree with almost all of that, but with two additions: I think we can locally approximate reality as an axiomatic system, with the axioms just being the stuff we've decided to treat as true for the moment. When the context changes what counts as an axiom changes, but that only rarely happens to me with chairs.

Second, I think there's at least a rough partial order on the things people propose as axioms: the order of how surprised I am when someone contests one, if you like. Very surprised, if it's the existence of external reality; not too surprised at all, sadly, if it's whether biologists are to be trusted over the pastor at their church on the origin of species.

The nice thing about that machinery is that when you meet the second sort of person, you're not forced to believe that they're fundamentally irrational and immune to all reason.

One more thing: trusting experts is a very convenient heuristic for me individually, and I use it all the time, but it's not epistemologically all that useful. In principle you can always replace an appeal to the authority of a genuine expert with the evidence that they rely on to come to their judgement: if you can't, then they're speaking outside of their expertise. No one of us can win an argument on the internet that way; time is finite and we each only know so much. But if we all chip away at it, by supplying evidence where and when we can, we might get somewhere.


> Sure it is: an appeal to authority is not a valid step in a deductive logical argument

Technically, unless you reinvented logic on your own, this is an argument from authority with extra steps.


If we're talking about fallacies, we're invoking old-fashioned classical logic.

The nice thing about having a notion of strict deductive reasoning is that it gives you an idea of what arguments might persuade someone of basic good faith, but who very much doesn't want to be persuaded. If you and your interlocutor don't both accept modus ponens then you won't get anywhere, but that's pretty unlikely in practice.

(And I'd actually argue that we each invent logic on our own and then notice that the logics are equivalent, but that's straying into metaphysics.)


Appeal to authority is when, instead of giving actual arguments, you throw out a name and say, "well, they surely know their stuff!"

Whether they know their stuff or not is irrelevant to the fact that the person is trying to avoid having a constructive argument.

So it could really be dismissed with "okay, they are experts, but how are you sure they didn't write it all hungover?"


> if the person in question is actually an authority in the domain

I haven't seen a name mentioned in much of the discussion here.

Who are we talking about from this list, and what else have they done? https://1password.com/company/


I wish more people would remember this.


But it's incorrect; it's literally the logical fallacy of argument from authority. If the person is not an authority, then by definition it cannot be argument from authority.

Experts must prove their views with evidence and not rely upon their reputation; that is the meaning of the fallacy.

No wonder so many people can't reason well.


An argument from authority is not a fallacy in and of itself.

An appeal to false authority is always a fallacy, such as considering an authority's opinion on a topic on which they're not authoritative.

If the participants in a debate agree that an authority is legitimate, then an unchallenged appeal to their authority is not fallacious.

If an authority's opinion is contradicted by undisputed evidence, then an appeal to their authority is fallacious.

The whole point of the distinction is to admit authority as a valid source of information, in the absence of direct evidence, because we can't possibly reason from direct evidence in every single case.


The truth of a statement is what matters, not who uttered it.


And, when you are unable to evaluate the truth of a statement for yourself, the expertise of the person making the statement is a helpful datapoint when deciding how much to trust it.


Right, but there is no statement by a competent person made here.

The audits could be classified as that, but not "well, they hire security people, they must know what they are doing!"

Hyundai hires engine developers and their engines explode nonetheless!


Just curious: Imagine two people took the 1Password white paper and created two separate implementations. The only information you have on the two implementations is the background of the people who implemented them. One is a first year CS student; the other is a seasoned security researcher with multiple published vulnerability discoveries. Which would you choose and why?


Appeal to authority is not a fallacy, it's basically a necessity to function in the world.



> if done improperly, this could be a logical fallacy

Maybe I'm wrong but doesn't this put it in a totally different category than other fallacies? Circular arguments, for example. All circular arguments are wrong. If you identify a circular argument you don't even have to fully understand what is being said, you can immediately conclude "this is a bad argument".

But appeal to authority isn't like that. "I'm not going to drive through the mountains today because the roads are iced over, and I know that because the highway department said so." That's an appeal to authority, and it isn't proof, but it's a good argument and there's no need to drive out yourself and confirm that the roads are dangerous. Identifying an appeal to authority is not enough to discard an argument, you need to evaluate the authority. When you see a circular argument you don't have to measure the diameter of the circle or something.

So I don't think appealing to Grammarly was a fallacy, but in this case I think they're wrong.


That's a good clarification. Thanks for the example.

Hmm... perhaps my concern is that expert opinions matter way more than non-experts, but we still can't blindly trust them.

...and I'm sure you're not saying to take everything they say as word of truth.

I withdraw my argument on the grounds it is poorly constructed.


> When you need to support a claim, it can be tempting to support it with a statement from an authority figure. But if done improperly, this could be a logical fallacy—the appeal to authority fallacy.

The key words are: “if done improperly, this could be…” The article goes on to give non-fallacious examples of an appeal to authority. The quality of an implementation will be correlated with experience, so this is an example of a non-fallacious appeal to authority.


Trust is a necessity, not authority. Those with authority are often not trustworthy.


I get it but in this context "authority" means an authority on a particular topic, not like a police officer or something. Appeal to authority becomes a fallacy when you appeal to someone who is not actually an authority on the subject at hand.


But how do you determine if someone is an authority? Our world is filled with epistemological bubbles. One person's expert is another's snake oil peddler.


Right, I don't think we really disagree about anything. "That's an appeal to authority" is not a useful objection, because appealing to authority is often a great idea. "That's not a good authority" is a good objection and often very relevant.

It's sort of like seeing a lot of arguments that rely on false or misleading evidence and deciding that "appeal to evidence" is a fallacy. The choice of authority or evidence is the issue, not the act of appealing.


It's also when you have no argument and, when confronted, defer to "they must know their shit."


> Finally, it's been built by people who are respected in the security industry.

Source for this?

To me, the celebrity endorsement makes me doubt it's a serious business.


Which people?

I've been very reluctant to use their cloud solution as I trust Dropbox more for security. So I still fight 1password to keep the vault stored in Dropbox.

I figure there are maybe four organizations who are active enough to prevent a full download of all their users' data: Google, Dropbox, Amazon, and Facebook. (Maybe Apple, but they seem lethargic.)

Because they store all the passwords to all of our services they are a huge target.


Vaults stored in Dropbox could be brute forced if an attacker ever gains access to the ciphertext. The 1P service mitigates this by adding a random key as salt that’s stored locally (iOS keychain/browser storage). That key never leaves your device (authentication is done via zero knowledge proof), so the master password is virtually impossible to brute force right now, even with a very weak master password.

I can highly recommend reading the white paper, it’s very well written and kinda like a comprehensive guide to E2EE which every SWE should be familiar with anyway.

And what do you mean by Apple being lethargic? They added a secure enclave to all their devices that is probably the most secure storage and crypto processor you can hope to get for that kind of money. They also added the option of completely end to end encrypted backups. Because of the secure enclave, that’s actually a really safe option.
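For anyone curious, the two-secret idea described above (master password mixed with a device-held random key) can be sketched in a few lines of toy Python. This is only an illustration of the principle; the function name and parameters are my own, and the real scheme in the white paper uses HKDF plus PBKDF2 with specific settings:

```python
import hashlib
import secrets

def derive_unlock_key(master_password: str, secret_key: bytes, salt: bytes) -> bytes:
    """Toy two-secret key derivation: stretch the (possibly weak) master
    password, then mix in a high-entropy secret key that never leaves
    the device. Illustrative only, not the real 1Password KDF."""
    stretched = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 100_000)
    return hashlib.sha256(stretched + secret_key).digest()

salt = secrets.token_bytes(16)
secret_key = secrets.token_bytes(16)  # ~128 bits of entropy, stored on-device

k1 = derive_unlock_key("correct horse", secret_key, salt)
k2 = derive_unlock_key("correct horse", secrets.token_bytes(16), salt)
# Same password, different secret key -> different unlock key, so stolen
# ciphertext + salt alone cannot be brute forced by guessing passwords.
assert k1 != k2
```

The point is that an attacker who only has the ciphertext and salt must also guess the 128-bit secret key, which makes offline password guessing pointless.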


> Vaults stored in Dropbox could be brute forced if an attacker ever gains access to the ciphertext.

I’m sorry, but could you expand on this? If I leave my encrypted password vault on Dropbox’s servers, then of course they could attempt to brute force it. I thought the entire point of the encrypted vaults is that brute forcing is computationally infeasible, but technically possible.
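For scale, here is a back-of-the-envelope sketch of that "technically possible but infeasible" distinction. The guessing rate is an assumed round number, not a benchmark:

```python
# Weak 8-character lowercase master password, with and without a
# 128-bit random secret key mixed into the derivation.
guesses_weak = 26 ** 8                     # ~2 * 10^11 candidate passwords
rate = 1_000_000                           # assumed KDF guesses/sec on a GPU rig
seconds_weak = guesses_weak / rate
print(f"weak password alone: ~{seconds_weak / 86400:.0f} days")

guesses_with_secret = 26 ** 8 * 2 ** 128   # secret key multiplies the search space
seconds_with_secret = guesses_with_secret / rate
print(f"with 128-bit secret key: ~{seconds_with_secret / (86400 * 365):.2e} years")
```

So a weak password on its own falls in days, while the same password plus a random 128-bit secret is out of reach for any conceivable hardware.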


My company forced us to use Box and actively blocks Dropbox on all work computers. They did an audit and didn’t like what they saw in Dropbox.


I think "trusting Dropbox more" is not what I would necessarily expect.

Nonetheless, I think the provider of my password manager should not themselves host my password vault.

If anyone from 1Password is reading this: I trust you, but you make it hard to do so if you cannot be flexible about not hosting everything.

fd: I use 1password at home and for work.


How would you run a shared vault for work on a "dumb" file hosting service? With the ability to add/remove team members, recover vaults in case of password loss, etc.? What about the fact that master passwords can be brute forced if they are weak, as LP customers are now finding out?


I mean, cryptographically we've had solutions to those exact problems for 30 years.

PGP might not be very usable but it also had mechanisms to do this.

If you are scared of people copying the vault before they lose access to the storage: you'll be very sad to know that this is already possible with the SaaS solutions.

If you're worried about people breaking the vault if they have access: then it's even more of a reason to control the access.


> I mean, cryptographically we've had solutions to those exact problems for 30 years.

My point exactly - 1password.com addresses those problems (e.g. by adding a random key to the master password, and via Secure Remote Password auth), while using just a master password (sans random key) in a dumbly file-hosted vault does not.


When you talk about a “master key”, are you aware of multi-key cryptography?

My friend and I can have private keys to a “vault” (or file) that are solely our own keys, even if they decrypt the same secret.

No need for a “master password” to unseal the vault.

Here are the GPG docs for it: https://www.gnupg.org/gph/en/manual.html#AEN111

Section 5.1 of RFC 2440 explains how it works: https://www.ietf.org/rfc/rfc2440.txt
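The multi-recipient idea from RFC 2440 §5.1 can be sketched like this: encrypt the vault once with a random session key, then wrap that session key separately for each recipient. Real OpenPGP wraps with each recipient's public key; the Python stdlib has no public-key crypto, so this toy wraps with a per-recipient shared secret instead, purely to show the structure:

```python
import secrets
from hashlib import sha256

def wrap(session_key: bytes, recipient_secret: bytes) -> bytes:
    # XOR-wrap with a hash of the recipient's secret (illustration only,
    # standing in for public-key encryption of the session key).
    pad = sha256(recipient_secret).digest()
    return bytes(a ^ b for a, b in zip(session_key, pad))

unwrap = wrap  # XOR is its own inverse

session_key = secrets.token_bytes(32)   # this key encrypts the vault itself
alice_secret = secrets.token_bytes(32)
bob_secret = secrets.token_bytes(32)

# One ciphertext, one wrapped copy of the session key per recipient:
wrapped = {name: wrap(session_key, s)
           for name, s in [("alice", alice_secret), ("bob", bob_secret)]}

assert unwrap(wrapped["alice"], alice_secret) == session_key
assert unwrap(wrapped["bob"], bob_secret) == session_key
```

Adding or removing a recipient only means adding or deleting one wrapped copy; the vault ciphertext itself never changes.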


If I got you right, then 1P does use this mechanism, and LP most likely does too. The problem is: how do you store the private keys in a way that their loss is not catastrophic? Services like 1P and LP answer that question, with varying levels of sophistication.

With 1P, it works roughly like this:

- Every vault item is encrypted with the vault's key (a randomly generated 256-bit number, AES)

- For every user that has access to the vault, the user's public key is used to encrypt the vault key. I think this is what you meant by multi-key? Adding a team member to a vault means encrypting the vault key with the member's public key.

- The user's private key in turn is encrypted with the "Account unlock key", which is made up of: master password + random secret [1] + salt. Neither of those is ever sent to the servers in plaintext, made possible by zero-knowledge proofs.

If you stored your ciphertext on a dumb file hoster: sure, you can increase entropy in the master password to match 1P's random secret. Or just store your private key directly, as I think you suggested. But this is not memorable, so where do you back this key up in case your hard drive fails? Aren't we entering recursion at this point, requiring yet another round of encryption? There's no end to this. At some point you need to have either a password you can commit to memory, or one that's stored in something like a secure enclave.

You could also print out the private key, like 1P suggests for its secret key. But exposure of the piece of paper is a total breach if it's your private key, but not with 1P's secret key.

1: The random secret is stored on your device, and they ask you to print it out upon signup. It's never shared in plaintext with the 1P server, same as the master password.
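The three-layer hierarchy described in those bullets can be sketched as a chain of wraps. This is a paraphrase of the structure, not real crypto: XOR stands in for AES and public-key encryption, and a single byte string stands in for the user's keypair:

```python
import hashlib
import secrets

def kdf(*parts: bytes) -> bytes:
    # Stand-in for the real key derivation (illustration only).
    return hashlib.sha256(b"".join(parts)).digest()

def wrap(key: bytes, wrapping_key: bytes) -> bytes:
    # XOR-wrap; its own inverse, standing in for real encryption.
    return bytes(a ^ b for a, b in zip(key, wrapping_key))

vault_key = secrets.token_bytes(32)            # layer 1: encrypts vault items

user_keypair = secrets.token_bytes(32)         # stand-in for the user's keypair
wrapped_vault_key = wrap(vault_key, kdf(user_keypair))   # layer 2: per-member copy

master_password = b"hunter2"
secret_key = secrets.token_bytes(16)           # printed at signup, device-held
salt = secrets.token_bytes(16)
account_unlock_key = kdf(master_password, secret_key, salt)  # layer 3
wrapped_private_key = wrap(user_keypair, account_unlock_key)

# Unlocking walks the chain in reverse:
private_key = wrap(wrapped_private_key, account_unlock_key)
recovered_vault_key = wrap(wrapped_vault_key, kdf(private_key))
assert recovered_vault_key == vault_key
```

Losing the master password or the secret key breaks layer 3 only, which is why team recovery schemes can re-wrap the vault key for a member without anyone's secrets ever reaching the server.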


You trust Dropbox? The company that infamously invited to their board a former government official responsible for authorizing warrantless mass surveillance?



