> I'm less thrilled about it being written in a garbage collected language
What are the security problems with garbage-collected languages?
(not being sarcastic, don't have an agenda, I have no previous knowledge on this, and am not a security expert. Just had never heard this suggested before, and am curious what he meant. Legit question!)
Usually security nuts like to overwrite the clear-text string with zeros or random characters before calling free() on it. This way, if this chunk of data stays in memory (which is most likely the case with libc's free()) it cannot be read by exploiting a buffer overflow.
With a garbage-collected language, programmers don't know when their variable is "free()ed", since it could be held by multiple threads, and the last thread dying will release the memory for this variable. Since programmers often don't know when the password variable will be "free()ed", it is very unlikely that they will have scrambled the password before the memory is released. This leads to the password being kept in some region of the program's memory, in clear text, exploitable by a buffer overflow & co. for an indefinite amount of time.
Another thing to note about garbage-collected languages (like Java with the G1 collector) is that blocks of memory will often be *copied* to other parts of the physical memory as part of the compaction phase [1] (see quote), in order to be able to provide large segments of sequential space. Since the GC is designed with performance in mind rather than security, the original copies are not zero'd as part of this compaction; instead their raw data will potentially stay in memory indefinitely until overwritten by another part of the application, and will not be zero'd upon termination of the application. Because of this, even with meticulous tracking of references and "raw" memory access only, GC'd applications can leave traces of secrets in memory. There may be workarounds for this by exclusively using "Unsafe" methods, but that drastically limits your API interoperability with libraries and such.
> G1 reclaims space mostly by using evacuation: live objects found within selected memory areas to collect are copied into new memory areas, compacting them in the process. After an evacuation has been completed, the space previously occupied by live objects is reused for allocation by the application.
> Usually security nuts like to override the clear-text string with zeros or random characters before calling free() on it.
If you are worried, you can store the password in a byte array and zero that out.
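A minimal sketch of that approach in Java (the names are illustrative, not from any particular library): keep the secret in a byte array you control and overwrite it as soon as you're done with it. This only wipes the one buffer you own; copies made elsewhere (GC compaction, conversions to String) are a separate problem.

    import java.util.Arrays;

    class ZeroAfterUse {
        // Placeholder for whatever actually consumes the secret.
        static void use(byte[] secret) { /* authenticate, derive a key, ... */ }

        public static void main(String[] args) {
            byte[] secret = {0x68, 0x75, 0x6e, 0x74, 0x65, 0x72, 0x32}; // "hunter2"
            try {
                use(secret);
            } finally {
                Arrays.fill(secret, (byte) 0); // overwrite the buffer we control
            }
        }
    }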
But further, a buffer overflow is practically impossible with a GCed language (especially a popular one). A programmer using a GCed language cannot write code which has a buffer overflow. That must come from a bug in the runtime itself. Not likely.
As far as I'm aware, every gced language has bounds checks on arrays. Certainly all the popular ones.
> the last thread dying will release the memory for this variable.
Really not how GCed languages work. Memory lifetime is not bound by thread lifetime except in the rare case when memory is bound to a thread (static/global variables).
The majority of GCs on the market are tracing collectors. Memory is periodically collected when it is no longer referenced (mark and sweep). What triggers that collection is a whole host of potential interactions. How frequently and how deep it runs also depends on a lot of application characteristics.
Fair point, but really one of those situations where if an attacker is in the position to look at a password stored in a tombstone, you've got bigger problems (for example, would be trivial in that scenario to simply intercept the password rather than looking for it in VM memory).
> But further, a buffer overflow is practically impossible with a GCed language (especially a popular one)
Because the JVM has had no buffer overflows? [1] Also, I purposefully wrote "buffer overflow & co" because buffer overflows are not the only possibility. Shellcodes could ptrace() and inspect the memory of the program.
> > the last thread dying will release the memory for this variable.
> Really not how GCed languages work. Memory lifetime is not bound by thread lifetime except in the rare case when memory is bound to a thread (static/global variables).
That was bad phrasing on my part. You should read it as "the last thread releasing the variable reference".
Happens extremely rarely and is frequently not in JVM core code but rather something like the 2d renderer or applets. Code not likely to be executed on a server.
Take a deeper look into those CVEs and count how many are for Java 8+ and server code (it's a pretty short list).
You might as well argue the linux kernel is insecure because there's been buffer overflows in the various drivers.
> Shellcodes could ptrace() and inspect the memory of the program.
Certainly, and they can redirect socket traffic and inject a MITM for any process to directly intercept a password. Even if you are zeroing memory, there will be a period of time when a password is present in memory which means the ptrace attack also works with C.
The bad part of a managed language is that passwords stay in memory for longer, but that risk is somewhat moot considering exploiting requires a compromised system. In which case, there's little reason to pull out passwords by sniffing memory.
Checking whether things are referenced or not as the basis for collection is a bad idea.
Take 2 classes that reference each other, but nothing references either.
In your example they won't get cleaned up by naive reference counting. You'll want a tracing collector: if an object is no longer reachable from the roots, it can be cleaned up.
>With garbage collected language, programmers don't know when their variable is "free()ed", since it could be held in multiple thread, and the last thread dying will release the memory for this variable
And there is simply no way to deal with this, technology just isn't there yet.
> it cannot be read by exploiting a buffer overflow
What does the attack that does this actually look like? Lastpass reads text off an html page and decides when to inject auto-fill prompts and/or enter in a password. Is it possible for a buffer overflow to be exploited there, that lets an attacker (who controls the site) gain access to a password for a different site? How does that work?
Is there some other attack possible here? Ie: is it possible another user-space application can read passwords from memory used by the password manager? How does another app know it's a password? How does it trigger a buffer overflow?
(I'm ignoring apps running with kernel or privileged access: that seems like game over)
CPU side channel attacks are a possibility. In the worst case, it could let an attacker cross OS process boundaries, making it exploitable remotely through Javascript executed in the browser. Although such vulnerabilities would be hard to find and even more difficult to exploit on scale, the possibility seems more realistic than it did pre Meltdown & Spectre. It would be nice if something as high stakes as a password manager was prepared for this kind of scenario.
You should be able to know all the references to a variable, if you are careful when writing your program. Also, you could write a destructor that scrambles the memory location before the object is collected. However, you would still not have control over copies that the GC may decide to make, and it is a bit trickier to force a free, since the GC is under no obligation to free an object as soon as it has no more references (though you can probably force that to happen in most GCed languages).
In most GC languages, the String type itself is not a managed handle to a single memory buffer, but a reference into a whole copy-on-write datastore. So you can't even scramble the underlying memory - mutating the string to erase it will not zap the original, it will just create a copy.
Your best bet is to ensure no references to the password string exist - including in library code you may use, which means constant revalidations whenever you update a library or your underlying language runtime. Once you do that, you can force garbage collection, either by some explicit language mechanism to request garbage collection, or by trying to allocate gobs of memory in some way that can't be easily optimized out.
There are strong incentives to use the built-in String class. APIs for UI will use it. The first google result for "c# Clipboard" certainly uses string. It'll be a lot of extra effort that you might not have with a different language. That is, if you actually consider the risk worth it... I probably wouldn't.
To avoid the risk outlined here, knowing all the references to a variable is not sufficient; without an understanding of their lifetimes in every possible permutation of the program's use (and misuse), you cannot act on that information to minimize the exposure posited here.
I won't get into the question of whether a programmer should also know all that, as there is another consideration which renders it moot: Even with a non-GC'd language, a programmer could leave variables with sensitive information in memory for the remaining duration of the program (e.g. local variables left deep in the stack when a function returns or throws an exception - do you know when they will be overwritten?) What matters here is that the programmer understands the risks, knows what constitutes the most sensitive data, and acts accordingly - but, armed with that knowledge, the programmer can just as well ameliorate the risk in a GC'd language as one that is not.
> You should be able to know all the references to a variable, if you are careful when writing your program.
The issue is: what if thread A and thread B hold a reference to the password variable? And you don't know in which order they will execute. In which thread do you scramble the password before releasing the variable reference?
From what I understand, OP's point is: because of the nature of C, you have to know where your variable is "free()ed", because you have to do it yourself. Therefore you can scramble it there. Even if you have two threads doing:
    n = decr_reference(password);  /* drop this thread's reference (hypothetical helper) */
    if (n == 0) {                  /* we were the last holder */
        scramble(password);        /* overwrite the plaintext before releasing it */
        free(password);
    }
That issue seems the same in a GC language as it is in a non-GC one. Either way you need to know when you're done with the data and it's time to scramble.
One possibility would be to reference a password variable always from the same thread. But even if you use multiple threads referencing the same memory location, you could make some of them weak references (which the GC doesn't take into account when deciding whether to delete the object), or you could make sure you stop referencing them in both threads and force the GC to run on that object. The destructor could handle the scrambling.
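A rough sketch of the weak-reference idea in Java (identifiers are illustrative): a secondary holder keeps only a WeakReference, which never keeps the buffer alive on its own, while the owning thread scrambles the buffer and drops the last strong reference. Note that System.gc() is only a hint; the JVM is free to ignore it.

    import java.lang.ref.WeakReference;
    import java.util.Arrays;

    class WeakSecret {
        public static void main(String[] args) {
            char[] secret = {'s', '3', 'c', 'r', '3', 't'};
            // A second consumer would hold only this weak reference,
            // which by itself does not keep the array reachable.
            WeakReference<char[]> weak = new WeakReference<>(secret);

            Arrays.fill(secret, '\0'); // owning thread scrambles the buffer
            secret = null;             // ...and drops the last strong reference

            System.gc();               // request (not force) a collection
            System.out.println(weak.get() == null ? "collected" : "still reachable");
        }
    }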
Python has `gc.collect` which should work for user created types. Besides, it does have a reference counting GC. So, as long as you don't create cycles, you should be able to force the collection of objects similarly to C++'s shared_ptr.
Java has try-with-resources for this. Python has with. Unlike say C++ you do have to indent your code once for every such resource though which can be cumbersome.
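As a sketch of how try-with-resources can serve this purpose in Java (the Secret wrapper is hypothetical, not a standard class): the buffer is wiped when the block exits, whether it exits normally or via an exception.

    import java.util.Arrays;

    // Hypothetical wrapper type: zeroes its buffer when closed.
    class Secret implements AutoCloseable {
        final char[] chars;
        Secret(char[] chars) { this.chars = chars; }
        @Override public void close() { Arrays.fill(chars, '\0'); }
    }

    class TryWithResourcesDemo {
        public static void main(String[] args) {
            try (Secret s = new Secret(new char[] {'p', 'w'})) {
                // use s.chars while inside the block
            }
            // the buffer has been zeroed at this point
            // (copies the GC may have made are another matter)
        }
    }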
No no, a lot of GCs copy all the live objects to a new area, then delete the old area in bulk (“but it consumes twice as much memory!?” yes, but it’s very efficient, since it doesn’t require tracking). Therefore you may have a constant char[] and it could still be in two places in memory.
An interesting point, and I wonder whether the JVM should implement flags that overwrite memory before GC (obviously worse perf, but some cases have a use for it).
Or at the least, let a program implement parts of the gc api and plug it in themselves.
However, the issue isn't so much doing that; rather, it's that the GC will potentially never collect the memory. In the JVM, the GC primarily runs when enough allocations happen. It's not on some timer.
So the concern would then be having a password sitting in memory for hours (or even days) on end.
Typically you would be using something like JCE for these sensitive functions, and a JCE implementation could be written (potentially with VM support) to have such functionality. These also can do things like zero out binary data/keys after use.
This wouldn't help once you create your own objects however, e.g. JSON parse from a decrypted block.
They don't know when it's going to be freed, but they do know when they're not going to need it anymore and can overwrite it exactly like in C. Or am I missing something?
Yes. Except, of course, that you can't be certain whether the buffer you had your secret in was copied at some point. Maybe by the text control you rendered it in.
That's fair, a lot of GC languages use immutable strings. That said, all(?) of them have some form of mutable buffer, though some care would have to be taken to avoid accidentally turning it into a string.
I can’t remember off the top of my head which major Java library I was using a few years ago but that’s exactly how it took sensitive string params - via a char array instead of a String object. I was scratching my head why they would do that for a bit until I learned.
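The JDK itself follows this convention: for example, Console.readPassword and JPasswordField.getPassword hand back a char[] precisely so the caller can wipe it afterwards. A minimal sketch:

    import java.io.Console;
    import java.util.Arrays;

    class ReadPasswordDemo {
        public static void main(String[] args) {
            Console console = System.console(); // null when not attached to a terminal
            if (console == null) return;
            char[] password = console.readPassword("Password: ");
            try {
                // hand the char[] to whatever needs it, without ever
                // converting it to an immutable String
            } finally {
                Arrays.fill(password, ' '); // wipe it when done
            }
        }
    }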
The GC may copy data around, though. So, when you scramble the data there may already be copies of the previous value in memory. I think that's the biggest drawback.
People have described the concern about memory not being zeroed in a predictable way, but not why that's a security concern. And, honestly, for most people I don't think it is.
If an attacker is in a position to read arbitrary memory out of your browser, then the chance that they won't obtain something else about as powerful as a password (e.g. a valid session cookie) seems minimal. Or a crash dump file may include passwords, but then how does someone have access to that and not be able to read your cookie store?
For a niche set of people I think there's greater concern around cold boot attacks: someone with physical possession of a suspended machine powers it off and quickly boots another OS, then dumps RAM (think of a laptop being taken away from you in an airport situation). That can be mitigated through firmware just wiping RAM on unclean boots (TCG has a spec for this; Linux and Windows implement it), and getting around that involves much more invasive attacks.
I don't want to say the concerns are entirely unwarranted, but I really don't think it's a big deal for the vast majority of people.
At the risk of conjuring a strawman, I've heard people worry about the loss of control, like maybe the process has some sensitive data in RAM that it will release and then the OS will alloc() it to another process. And that's true on one level, but nothing prevents a garbage collected language from overwriting all the data in a block of RAM as it free()s it. I think the risk of memory related bugs in a non-managed language is far, far greater than any GC-related hypothetical risks. And then you have languages like Rust which "feels" like it's a GC language because it manages all the alloc()s and free()s for you, but isn't GC at all because the language's semantics make those calls deterministic.
I'd vastly rather see sensitive security software developed in a modern garbage collected language (or in Rust and similar languages) than in C or C++.
> At the risk of conjuring a strawman, I've heard people worry about the loss of control, like maybe the process has some sensitive data in RAM that it will release and then the OS will alloc() it to another process.
I'm sure some people worry about this, but I think you will be hard pressed to find a modern OS that actually would give away your contents to another PID.
Just because a program can malloc(8), fill eight bytes with known content, free() the pointer, malloc(8) again and see their old content doesn't mean the OS hands your data to anyone, just that the libc (or whatever runtime you use) is not actually getting a whole page for that single 8 byte malloc, and it didn't give it back to the OS at free() either, so you "owned" the page where those 8 bytes lives all the time during execution of this simple test.
The threshold at which your allocator starts _actually_ handing memory back to the OS is probably never less than 4k, and can be up to 256k, depending on page size, malloc settings and OS/libc defaults.
So while there are a lot of traps code can fall into as mentioned in other comments in this thread, I think everyone can stop worrying about "the next program to malloc() will get my old data" because that just doesn't happen.
The arguments people are giving about being able to explicitly call `free` make a lot of sense to me for getting rid of some attack vectors, but it still seems like the security risks of using a memory-unsafe language would heavily outweigh the risks of the garbage collector not clearing strings fast enough?
I know that's not an either-or proposition, there are memory safe languages that aren't garbage collected. But if people aren't talking about those languages, if they're just bringing up C or something...
I am not an expert, I might not know what I'm talking about here at all -- but my instinct is that I would rather see security-critical code written in a garbage collected language than in C. Frankly, I don't trust developers not to make memory errors in C.
Maybe I'm underestimating the risk of the garbage collector not zeroing out variables? Or maybe I'm over-complicating it and the answer is just the obligatory "write it in Rust" refrain so you can avoid both problems?
But I'm also a little surprised to see this line, my impression was that security advice was starting to trend towards recommending GC languages, not away from them.
These kinds of concerns are so funnily out of touch.
The real danger always turns out to be crazily bad security practices (the kind obvious to everyone after the fact), not theoretical attacks that probably weren't ever demonstrated in the wild.
I'm not a specialist, but I suspect the rationale is that you have less control over how the memory is cleaned up after use. In particular, the GC can actually copy objects around without the programmer's knowledge, potentially making multiple copies of secrets which the programmer cannot clean up (by setting the bits randomly) later on. Without a GC, the programmer is able to know how many copies there are of any given secret and can dispose of them, if needed. On that note, writing their own "malloc" is probably a good choice.
That said, I'm sure there are workarounds even in GCed languages. For instance, you can usually create C extensions which allocate and manage memory outside of the GC's control. Such an extension could give the programmer control of certain special memory regions where secrets can be stored, while everything else just goes through the normal GC.
I buy the explanations that they mostly want to be able to explicitly overwrite sensitive memory at a time of their choosing.
My question: do the nondeterministic execution pauses that garbage collection injects into a program's runtime aid or prevent timing attacks?
It seems like they would prevent them, since they make it that much harder to predict execution duration, but I have this vague impression that high-security operations are more likely to demand real-time computing. Maybe that's just incidental, and applications that must perform well also tend to need to be secured from attack?
I suppose it's a lot more difficult to wipe memory clean in a garbage collected language.
For example:
    password = "my-secret-password";
    // do stuff, then remove the password from memory:
    password = "";   // or null / delete / unset, depending on the language
We have no guarantee that the first string "my-secret-password" will be collected and removed any time soon whereas in C or C++ we could just memset it before freeing it.
But that feels like a very generous interpretation, I'm sure the author really meant it as a "real programmers don't use GC language"...
To be pedantic, they're not saying the class is making things worse - it just doesn't do anything on .NET Core because it lacks cross-platform encryption APIs comparable to the ones used on .NET Framework. So it's basically just a String.
It's a little frustrating that their only recommended alternative is "just give us your secrets bro" though:
> The general approach of dealing with credentials is to avoid them and instead rely on other means to authenticate, such as certificates or Windows authentication.
There's a pretty important note in the remarks, that you should not use SecureString in new code, as it only shortens the lifespan of the plaintext in memory.
That's only if you use immutable strings. In C# or js (and probably in others as well) you can use some form of byte arrays / buffers, and overwrite their content after use.
Technically C/C++ have an as-if rule (the optimizer can do whatever it wants as long as the program behaves as if it ran the source code provided) that allows the optimizer to create cached copies and/or omit the memset. There is no way to express in C/C++ that all traces of a given variable should be removed. Though I guess there are constructs that work well enough in practice (memset_s and explicit_bzero come to mind) and you can always check the generated assembly.
> What are the security problems with garbage-collected languages?
None that I'm aware of. I think this is a petty swipe at programmers who use garbage-collected languages, implying they're "less" than programmers who use "real" languages that don't have garbage collection.
I have been a LastPass customer for over 10 years, and I think this January, when my yearly subscription ends, I will finally not be renewing.
I’ve shrugged off a lot of strangeness that has been happening with them as a fledgling company’s growing pains. Unfortunately, this incident is the final straw. I think we are going to see a lot more come to light, and their lack of any sort of transparency on this is a cardinal sin in the infosec world. As an aside, it’s interesting to see their fall from grace in the Reception section on Wikipedia: https://en.m.wikipedia.org/wiki/LastPass#Reception
I’m moving to bitwarden and not looking back. I would be interested to see some people write about this transition as I’m not sure if I want to export/import or start anew and move things manually.
I just did the migration (to 1password though, sorry; the lack of tags is very bad for organization), as a 6-year customer.
Key points:
- Refresh the website list from the extension before starting, ideally clear the extension cache first (will sign out)
- export from the extension
- attachments and password history are not exported
- there is a lastpass-cli that will help you export attachments
- there is a hacked together PR from myself that will help you export the password history
The import worked very well in 1password aside from attachments/history.
What I did, though, was tag all my passwords with "lp-breach-aug-2022", and then as I go through them and change them, I remove the tag.
To perform a LastPass migration, there are 4 phases involved:
1. Export passwords
2. Export attachments
3. Export password history
4. Export form fills (THIS IS NOT POSSIBLE FROM MY UNDERSTANDING, form fills also appear to not be encrypted?!)
# 1. Export passwords
In the extension, go to Account Options -> Advanced -> Clear Local Cache, this WILL LOG YOU OUT.
Then, log-in and Account Options -> Advanced -> Refresh Sites, this will update your local cache.
Finally, begin the export process and follow the instructions, make sure to USE THE EXTENSION (not the website): Account Options -> Advanced -> Export -> LastPass CSV file.
When saving the CSV, do not copy-paste the content of the HTML manually, instead use the popup to download the file that LastPass provides. You might need to allow popups for LastPass extension the first time you perform the export, then perform another one to get the popup.
# 2. Export attachments
Use lastpass-cli to export attachments. A script is provided in version 1.3.4: https://github.com/lastpass/lastpass-cli/blob/v1.3.4/contrib...
Keep in mind that the script also works on version 1.3.3, which is the one provided pre-compiled by Ubuntu; you just have to copy-paste the script to your local machine.
# 3. Export password history
This is not possible natively. You can use my modified PR, but it's not trivial; bash knowledge and familiarity with C syntax are expected: https://github.com/lastpass/lastpass-cli/issues/245#issuecom...
Keep in mind that YOU SHOULD AUDIT THE SOURCE CODE. I modified an existing PR and it's hacked together; I took it only as far as I needed to get the password history out for my specific use case.
# 4. Export form fills
Unsupported from my understanding
# Conclusion
Tag the items or mark them in your new password manager with something to remind you that they were breached on lastpass in august 2022 and remove such mark when you change their password.
Awesome work on the password history export, thanks a lot!
I audited the code to the best of my ability and it doesn't look like it's malicious, but I certainly could've missed something, so to anyone who's thinking about using this, it works, but do your due diligence.
I ran into this before, actually. As of about a year ago, Lastpass partially used cached data to generate some portion of exported data, but that cache is not diligently kept up to date.
No, not true. I'm using 1Password and while I like it, there are a few things LastPass got right where it even beats 1Password.
The one on top of my mind is that you can unlock LastPass with a PIN. My wife has a phone with a glass cover (to protect it from the children), which "broke" fingerprint unlock.
She's required to type the full password every time to unlock it, which is particularly hard on phone (long password).
On top of that, lastpass app (phone) had the option to "force" autofill from a notification. For some apps where the popup never really shows up with 1Password, I was able to force it using LastPass and then fill. With 1Password the only option is to go to the app and copy-paste.
Those are not game-breaking though, given the many, many bugs that LastPass (the phone app) had; the most annoying was that opening the autofill and searching would show no results. This made the autofill useless a good chunk of the time.
On top of that, LastPass EXTENSION (chrome) has the option of choosing between sharing states between browser profiles or not sharing states between browser profiles. This is very useful in my case, because my wife has a chrome profile under my OS user, but we can still have 2 different lastpass "logins".
From this perspective, 1Password is actually entirely broken: if you login into the native application (which is basically required for decent functionality), you are not allowed to login into 2 different 1password profiles through the chrome extension unless they are on different URLs (e.g. mycompany.1password.com vs 1password.ca).
Finally, LastPass was consistent: web, extension and app had the same capabilities.
1Password is highly inconsistent, where the native app has more capabilities than all of them, the extension has no edit capabilities but has better read capabilities than the web version and the web version has a mix of edit and read capabilities. For example, the native app can "batch add tag", but the web cannot do that.
TBH it was more of a throw-away sarcastic outburst, an exclamation, an out-breath, than a genuine question. And also based mainly on the security side of things. I didn't make that clear, however, so I apologise for leading you into expending so much effort on your excellent reply.
All good, appreciate the apology, I'm bad at reading sarcasm, sorry!
And I'm very angry at LastPass too.
To be fair, the thing I'm most angry at LastPass about is how completely stale the product felt. I remember signing up 6 years ago, and there has been no change at all across the board. Bugs, issues, improvements, NOTHING.
They could have avoided all this, they just didn't.
>What I did though was tag all my password with "lp-breach-aug-2022" and then as I go through them and change them, I remove the tag
How did you add the tag, or is it obvious in the UI? I've never used 1Password before but think I'm gonna land there instead of Bitwarden, and I like this idea.
So, keep in mind that 1Password works in a "weird way": you are expected to have the native app installed.
The web portion has *less edit capabilities* than the native app.
You should have the extension and the native app installed at the same time, the extension should connect to the native app (and share login).
In the native app you can click on one item and then hold shift or control (Windows) to select multiple items, then you literally drag them on the tag on the left.
It will feel laggy for a few seconds if you tag 1000 items like I did, and it might not work perfectly, so double check that all the items got tagged.
To do that check, you have to: click on the tag, then scroll to the end. It will tell you the count of all the items for that tag.
Repeat the "bulk tagging" until the count is what you expect.
Do notice that if you click on one item and then hold shift and click on the item at the end of the list, it will select all of them, so this process is pretty fast. I had to do only 2 tries before all of the items got the tag.
EDIT: You must have added the tag to at least 1 item manually for the tag to show up in the sidebar. To do that just press "edit" on the item and at the bottom there is a "tags" field, you can add one.
In the web version, you just type tags and separate them by commas. The native version has a way better control.
Secret about tag: if your tag is named `foo/bar` it will represent them in a tree-like structure in the native app, so `foo` -> `bar`.
> I discovered that every time you click "export" in lastpass, the export accumulates a copy of the vault. My second export had 2x of everything in it, 3x for third, etc.
WTF
> I see the features in 1Password and I almost cannot forgive myself for holding onto Lastpass for such a long time.
The export process was the thing that mainly held me.
There was also an issue where 1Password would not work properly with a Work Profile on Android if you ALSO had a non-work-profile version of the app, which translated to "if you use 1password at work and in your personal life and you have a Work Profile, you cannot use 1password inside the work profile (it must be outside)"
In the end, this is just a bag of passwords, so as long as it's keeping things safe, it should be acceptable.
It didn't.
And it also doubled the price without giving any software improvement over the years, which is very bad.
>So, keep in mind that 1Password works in a "weird way": you are expected to have the native app installed.
I like this design a lot. LastPass _used_ to work like this ages ago.
>You must have added the tag to at least 1 item manually for the tag to show up in the sidebar.
I did get stuck here for a minute but was able to get it set up. Thank you!
I discovered that every time you click "export" in lastpass, the export accumulates a copy of the vault. My second export had 2x of everything in it, 3x for third, etc.
I see the features in 1Password and I almost cannot forgive myself for holding onto Lastpass for such a long time.
> I would be interested to see some people write about this transition as I’m not sure if I want to export/import or start anew and move things manually.
Did it about 18 months ago. I was expecting it to be more cumbersome than it was. Export from LastPass, import to BitWarden, manually compare.
Simples. It all worked IIRC, though I only have a few dozen entries as I'm in the habit of clearing old ones down. Left LastPass going for a few weeks just in case, then closed it down and the data was deleted.
Edit: If I was doing it now, I'd do it from scratch and change every LastPass-aware credential as I go. That info is out there now; you don't want to be using it any more.
I also transitioned to bitwarden about 18 months ago, but I haven't deleted my lastpass account yet.
I've used LastPass for password history once, and a couple times for notes (which don't get exported).
Now I want to delete my lastpass account completely but what would be helpful is if I can mark all my bitwarden passwords that are still the same as the ones in LastPass, as I'd like to change all of them. Anyone know a way to do this?
Sorry, no. All I can think of is exporting from them both, stripping each down to just the account name, login, and password, sorting them the same way, then doing a diff. Laborious, but possibly less so than a manual comparison if you have quite a few.
Seeing more articles about export/import woes would be great.
When I moved over to KeePass I was able to export and import all of my passwords but the field labels got pretty mixed up, and it was a little bit of a pain to correct. That might be fixed now, but it would be interesting to see people's experiences importing into other services beyond just "here's how you export from LastPass to CSV".
Edit: as other people mentioned, I also didn't get any password history with my export, which isn't a big deal to me but is worth highlighting.
You can ask them to cancel ahead of time and get a refund, as long as you explain it's because of the loss of trust and the fact that they don't provide the service they advertise. I did it yesterday and I encourage everyone to do it. Even if they stop, the refunds send a STRONG message to management. Also don't be shy to chargeback if your CC company allows you to. Companies like LastPass need to be made an example of, and you have the power in this situation.
I'll be in the same boat when my subscription is up, not sure exactly when it is (I should really check). My discontent had been growing for a while and it's been getting harder to defend it to friends and family, who I had to BEG to get to use a password manager to start, even though now they all swear by it. I'd like them to change along with me, but am worried about how difficult it would be.
I've made the transition to multiple different services over the years, though not on a large professional scale, so I cannot comment on doing that; I reckon it would require completely different advice than what I am suggesting below.
I would highly recommend starting new. Every transition between managers has wound up leaving me having to manually delete fields after the fact anyways, or just keep those fields littering the manager. Sometimes even incorrect fields when moving away from LastPass which is even more of a bother. Starting new also gives you a chance to get more used to the new manager's features, and when transitioning you can add specific fields based on crucial information that might have otherwise been lost in automatic moves.
Use this transition as a justification to change your passwords to the services you use, and also a way to decide whether you want to keep using that service or submit a deletion (most you can do this on your own, other times you have to send a GDPR deletion request). I know it takes more effort, but spend a chill weekend doing so, and you'll be glad you did. Plus you can also review some security settings on your services, like force sign out all other devices and changing your 2FA settings.
That's a good point; especially since LP just had a breach, it's a good time to ensure that none of your passwords are the same as what's in their database anyways.
I used to be a LastPass user and also used to use their GoTo services for work. Shortly before all of this was revealed, I noticed a problem with their API and sent their customer service team a message about how their API is not working correctly and is simply responding with wrong data, and they basically said "too bad" and that they weren't going to fix it due to time constraints. I even tried their forums, but they deleted the thread, marking it as "Spam".
I then stopped using my account on LastPass and literally a few weeks later they revealed the "security incident". Had to change all my passwords but I'll never get near this company ever again.
I moved off of LastPass a while ago, but hadn't actually deleted my account because of laziness/inertia. This breach was finally the impetus to get me to full-on delete my vault and start the process of cleaning up my old accounts.
Luckily I've been off of it long enough that I suspect most of my regularly used accounts are different anyways, but I'm still going through the process now of methodically rotating all of them (I may change some of the email addresses as well).
----
I think a lot of people even casually knew LastPass wasn't at the same quality level as other solutions, but inertia is powerful. Sometimes you need something glaring to be the final straw that breaks the camel's back.
It's been long overdue; I moved off of Gmail as well a while ago but haven't gone through and systematically changed all of my account email addresses; there are still a few older services I have that send emails over. The new year is a good opportunity to clean that stuff up.
The scariest part was it was a backup of theirs that was stolen. So deleting all your info might not have even protected it. It could still be in an old backup and get stolen. :(
I've deleted my stuff now anyway, it's all we can do. :(
> The scariest part was it was a backup of theirs that was stolen.
I knew it was a backup, but I don't recall seeing anywhere how old it was. Do we have any more information?
My data was confirmed deleted by them mid last year when I moved to BitWarden, and I'm hoping the backup was older. If not then I'm in the unfortunate position of being at greater risk than those who signed up more recently. I need to change everything anyway just in case, but knowing more would help with peace of mind. I used them for so many years I probably only had the 5,000 (or maybe even the 500) iteration config they never once mentioned.
I think that keepass with periodic manual backups of the keystore is the right solution for most people (maybe not for orgs.) I've never trusted these "cloud" password companies to do their job right.
After reading this, I acted on a decision I was on the fence about. I already have moved to Bitwarden and like it a lot better, but this post prompted me to go into LastPass and actively delete my account.
The next thing to do will be to start changing passwords. As with most of us, that's a project of serious scope that I do not look forward to.
> As with most of us, that's a project of serious scope that I do not look forward to.
I've spent the better part of the past three days doing just this. Get some good music and some good coffee, and it can actually be pretty cathartic. I enjoyed the hygiene exercise much more than I thought I would.
Yes! I too have had many sites fall into that category.
Also it's been a fascinating exercise in user-interface/-experience competitive research. Some websites just do not give a crap. Here's one gem: https://i.imgur.com/yBLmuHt.png
I can't help but feel pity for the local/city-level websites though. You can tell they've just Frankenstein-d stuff together on meager budgets and under crippling administrative loads.
Same boat. I'm mentally kicking myself having moved off so long ago but not having taken the final action that would have prevented me from being caught up in this breach at all.
When I first moved off I didn't want to close the account just in case something went wrong with the transition. But after it was clear that the transition was fine, then I should have gone back and just finished up the final step.
I probably needed to do a full account cleanup anyway at some point, but I just wish I had been slightly more proactive about deleting my data. It's a good lesson to learn, I'm thinking that as part of the account cleanup I should also take a look at what other accounts I have lying around that are unnecessary.
So I've been doing this and I strongly recommend keeping LastPass (LP) until you've deleted all your credentials from there.
My workflow is to launch the login from LP (using the app), use those credentials, and change the passwords using Bitwarden. Otherwise, you might accidentally set a new password that doesn't actually work and have to go through painful password reset processes.
I migrated maybe 2 years ago and deleted my LP account. But I'm now wondering if that delete REALLY wiped my account, including any long-term backup or something? Has anyone asked LastPass? Anyone here with internal knowledge?
I may never find out for sure, even if I ask LastPass...
The post looks a bit weird on first sight: "I always knew LastPass has a ton of flaws, but promoted it anyway".
This may make sense though. LastPass seemed to be the only one with a good enough UX. And without a good enough UX, you can't make users actually use it. Using an imperfect but usable password manager is still much better than not using one with better security but poor UX.
(Here comes the old adage: make the friction low for the customer, and any shortcomings elsewhere will matter little.)
Security is not a binary thing. It's a spectrum and for most normal end-users having _any_ password manager is better than trying to keep all your passwords in your brain, which leads to password re-use and easy-to-guess passwords, etc. It's only after multiple episodes of what amounts to malicious incompetence that the cure becomes worse than the disease. I think we're collectively agreeing we've reached that point with LastPass.
> for most normal end-users having _any_ password manager is better than trying to keep all your passwords in your brain, which leads to password re-use and easy-to-guess passwords, etc
Indeed. Writing down passwords on a pad of paper next to your computer is a valid practice for most people if we're being realistic, particularly for personal computer use. Even leaving passwords in an unencrypted text file on your desktop is not nearly as bad today as it used to be 20 years ago. Pretty much anything is better than using the same 8 letter password on every single website. That's the status quo which needs to be toppled.
It's awesome, supports team work, and if you have NC it's a no-brainer. It's probably too much work to install NC if you don't already use it, but in a small company setting it's probably a good idea, as you need an online office anyway.
I moved to Bitwarden from KeePass and haven't looked back.
The UI/UX of Bitwarden can be a bit meh at times, but it's also boring, predictable, and solid. I'm just saying this as someone who hates save buttons in the upper-right-hand corner of things... just small nitpicky stuff like that. Sometimes I have to look for a button, or their use of iconography confuses me a bit.
I would do my own hosting for a distributed password database but the older I get the less I trust myself to keep that stuff locked down and patched. Given the number of users I feel Bitwarden has more skin in the game to keep their solutions tight.
If you're not looking to self-host I can't recommend Bitwarden enough!
Needed a solution that I could use on all devices easily - browser, shell, phone... basically anywhere I need a password. KeePass isn't distributed by nature, relying on a filesystem DB - this worked for me for _years_ and is a fine system, but I started getting nervous using Google Drive to tote the database around from device to device.
Also planning for my passing, I think it'll be easier for my SO to get into a simple web-based solution with my TFA scratch code if they need to handle any of my affairs vs. tracking down my KeePass file that would be behind disk encryption etc.
Like I said - used KeePass for years, and I love it to death. I have just moved on in my usage and needs =)
Reading about what password managers people prefer is interesting and a good example of why UI is hard. The fact some people like lastpass UI is fascinating but also not surprising because it does have a less modern look that some people could be into.
I used Google Passwords for a long time before deciding to move to something not OS dependent. My first pick was LastPass. I used it for maybe one month, but found their browser extension and Android app pretty bad. So I decided to move to BitWarden. I am very glad that I did this, otherwise I would have been changing all my passwords like a maniac.
I deleted my LastPass account as soon as I imported all my passwords into BitWarden, and the passwords for the most critical sites have been reset since then. Moreover I use auto-generated passwords, so no password is used on two sites, plus they are super hard to brute-force.
I use a mix of pass(1) and LastPass, but this incident has convinced me to put everything on pass. But I don't really use it the "recommended" way, where you put the password on the first line. It's not a great fit for a consultant when half my customers want to give me my own Gmail/Atlassian/etc account. So I tend to keep big files of free-form text instead. But if I'm going to use it with a browser, the manual copy-paste will get annoying, and I want to switch to the normal pattern. Does anyone have any suggestions? I guess with decent auto-complete I can do something like `google/foo` and `google/bar`. If you have tips let me know!
I consider the flexibility of pass(1) to be one of the best features. In my case, I use a hierarchy to manage secrets across different orgs and classifications. The structure I use is: [ORGANIZATION]/[CLASSIFICATION]/[SITE|APP]/[USER]
The folder structure allows for different keys to be used in .gpg-id files, so secret access can be limited on different devices based on which keys are available. For example, only a subset of keys are available on my android phone via the Password Store app from F-Droid, with all devices using a shared password-store synced using git(1).
Completion with bash works well (on Fedora), and following the convention of having the password on the first line allows the Android app to work. Using 'pass -c ...' also means you don't need to worry about someone looking over your shoulder.
This sounds like a very nice system, and I'll give it a try. I'm already using git to keep things synced between my desktop and my laptop. I've never even attempted syncing to my phone, but if I do that giving access to only a subset of the keys sounds great.
> but I am using Bitwarden free and there's no authenticator app
There is if you pay $10 annually for Bitwarden Pro.
I use it, and it was the game changer I needed in terms of user experience to actually bother using 2FA/MFA for all sites where it was an option (compared to before, where I would only do it if required).
They haven’t had the feature for a huge amount of time but it works alright, with some minor mistakes here and there. As with most managers, it struggles when there are more than two fields to populate
Well, that sounds bad. I mean, I'm not an infosec expert, but I can follow enough of that to see that it's not good. I use Lastpass at work, because we have a site license, but maybe I'll look into whether I can switch over to bitwarden. I don't expect perfect security, but I expect them to at least try.
LastPass user experience has grown terrible over the years. The iOS app regularly freezes for over ten seconds with no response.
Trying to login to LastPass on a second device, with 2FA enabled, regularly takes me over five minutes. Why?
Login to app. Lastpass tries to auth using watch app. That's broken, so ask for a SMS. Enter SMS.
That doesn't work. it wants master password again. Ok we're in! but now whatever flow to login to an app is broken. Open app, try and use the helpful keyboard shortcut that's broken. ok go back to last pass. copy the password. Oops it wants to enter master password again. Ok. Good thing I picked a long master password. Ok now back in. need to search the site I wanted. got it. copy password, switch apps, paste. DONE!
Also, security vulnerabilities?! Definitely going to switch to Bitwarden.
I feel like a shill at this point but just use Bitwarden. open source, cloud sync by default, alternative self-hostable backend if you want to, no device limit, doesn't cost anything which is just about the only thing that concerns me because the free plan seems too good honestly.
Bitwarden is great, and their commercial offering at $10.00 per year is so cheap as to be effectively free. My only issue is that their servers could be wiped and I would no longer have access to my data, but that's where KeepassXC comes in. I keep one Bitwarden and one Keepass DB as a fallback, and keep them updated with the same login entries. I store my KeepassXC database in various cloud storage services, as well as keeping various local copies.
> My only issue is the fact their servers could be wiped and I no longer have access to my data
So is the Bitwarden database not also stored locally and kept up to date (like how IMAP works for e-mail: if the server goes offline you still have all your emails locally)?
> In fact, if password management is done correctly, I should be able to host my vault anywhere, even openly downloadable (open S3 bucket, unauthenticated HTTPS, etc.) without concern
This is the key point. Properly implemented, you should feel very relaxed if your encrypted data is leaked.
I'm using BitWarden, but 1Password's "secret key" concept is smart, it means your data is still very secure even if your password sucks.
> On Monday password manager service LastPass admitted it had been the target of a hack that accessed its users' email addresses, encrypted master passwords, and the reminder words and phrases that the service asks users to create for those master passwords.
(That's from 2015 but could read like the other week!)
Plenty of folks knew LP was always bad. I've been using 1Password for years, not just because it's so good (it is) but also because LastPass, which I previously used, was horrendous. Even canceling my family account with them was a nightmare.
For at least a few years, yes. I dropped them 3-4 years ago, but as others have mentioned, the red flags go back further. The GoTo acquisition was the last straw for me.
Yes. But prior to this breach it was easy to look the other way due to the extremely large amount of inertia associated with changing a manager and all your passwords. I know this was the case for me. In August we thought it was simply another "simple" breach. E.g. they got hold of some information that would be useful to spearphish or whatever but not the vaults themselves. No big deal, just be on the lookout for emails.
Once it came to light they lost control of their vaults the calculus changed. In my memory this is the largest, most prolific breach in history. Every other breach of a major site pales in comparison. The only solution is to change password managers immediately (I went to 1password) and begin the process of changing everything and updating your security posture. Unfortunately, the hackers also have an insane amount of metadata on customers. So if you stored incriminating (either legally or socially) websites in there the hackers now have a lot of leverage to get you to bend the knee.
In summary, LastPass has been on the down slope for a long time. But it was easy to just accept this and work around it. This breach changed everything. It revealed their incompetence in full and woke a lot of people, including myself, up to just how hard it is to trust a company. It's just not enough anymore to have a big company slapped onto your logo (LogMeIn) and hope they provide the correct mitigations through experience. From now on I, and many people I know, will be carefully evaluating our choices of password managers and the like. I don't think their CEO can be trusted, especially with all the weasel words used in the disclosure and the timing of the disclosure. They showed no respect for their customers in either security or disclosure. If you know nothing else about this breach, that should be enough to get you and everyone you know to run.
Use of 3rd party password managers has been a periodic concern for our organization. We do B2B business with banks, so the temperature is increased somewhat. There are kinds of credentials we have access to that genuinely terrify me.
I've been debating building an in-house solution for managing secrets, if for no other reason than to get all of this information off of 3rd party computers. No serious proposals have been put forth, but I don't think this stuff is exactly rocket science either. Our requirements are functionally-equivalent to a copy of passwords.xlsx on a network share.
The system is fairly broken from my perspective. Especially, as the size of the financial institution decreases. We work with smaller customers.
At this small scale, everything is vendored out. Many times, 2+ different vendors will need to directly exchange something like a password for the bank's core system. Email is the preferred technique, typically with some theatrical secure email crap on top - involving yet another 3rd party in the secret exchange mess.
As you get into the scale of an organization like Capital One or BofA, you start to see more of that Hollywood-style credential exchange & control with multiple consenting parties, Iron Mountain trash cans, biometric doors and one-time passwords.
If you are concerned about the IT/security of your financial institution, you may prefer larger ones. These have more employees running just IT compliance than many of our clients have in total.
If you are concerned about bad customer service or losing access to funds, you may prefer smaller ones. Being able to realistically talk to a board member of the bank about a dispute makes a lot of people feel better about where their money is.
> Our requirements are functionally-equivalent to a copy of passwords.xlsx on a network share.
Really?!
That's the least secure option I can think about, including LastPass. No encryption whatsoever, and the whole setup relies on no one ever making copies or otherwise getting unauthorized access to the file. Audits could be tricky, too - Excel may save some temporary copies of this file somewhere (like in %TEMP%) without anyone realizing.
This stuff isn't rocket science (pass is a simple shell script, after all), but your comment sounds quite concerning. I mean, I know banks are notoriously backwards when it comes to technologies and information security, but this is just... wrong.
Why not whichever Keepass client you prefer, and Syncthing (if it's even necessary for your use-case)? Open-Source, old enough to have a few revisions and be looked at pretty closely, completely off-line, and forgiving if updates are needed to passwords (with many clients having the option of merging changes with your open database).
This way, you also don't make the same mistake as Telegram; i.e., don't roll your own crypto if you can help it, especially if there are ready-made tools that already do what you want, doubly so when those tools are battle-tested and looked over by actual experts.
> doubly so when those tools are battle-tested and looked over by actual experts.
The "actual experts" part is where I trip over this. What is the standard in this context?
I've written a lot of software that utilizes cryptographic primitives in a wide variety of business contexts. Does this make me an expert? Or, do I need some special piece of paper that says I have permission to conduct cryptographic implementation business? If so, where do I obtain this?
I thought this whole comment thread was a matter of the "actual experts" not being who they claimed to be.
Switching to Keepass, 1password, et al. is just a continuation of the same madness in my view.
I jumped the LastPass ship a couple of years back. There had been some security incidents at that time, but they could still be downplayed.
What drove me away was the lingering impression that they were more focused in figuring out how to monetize their product, rather than improving it: E.g. one year feature X required a $1/m plan, next year it was free, and the third year it required the $5/m plan. At the same time, their apps/extensions seemed sluggish and stagnant.
That didn't exactly inspire trust in a product that is essentially the gateway to most of my online activities.
I just canceled my yearly family subscription and moved to Bitwarden which to be fair is less polished in some aspects and more in others. Support refunded me my yearly fee 3 months into the year which was nice, but my feedback for them was that the only way I would ever consider them again would be if they fired EVERYONE in management, got acquired by a company whose reputation I trust and completely rebuilt their product. Which is to say I will never be a customer again or recommend them to anyone ever again.
Leaning strongly towards self-hosting. Name brand cloud managers just make too juicy a target for sophisticated attackers regardless of their competence/care of the pass manager co.
I know it's got a bit of security via obscurity vibes, but I've concluded the combo of residential IP, wireguard, firewall and dedicated VM is probably more secure. That would require someone with decent skill targeting me specifically...in which case they're probably better off with a wrench attack anyway.
I was self-hosting bitwarden-rs (alternative implementation in Rust) in a Docker container, and I had set up Watchtower to automatically update it nightly. I thought I would be relatively safe and keep up with updates.
Well, it turns out the project in question had to change its name from bitwarden-rs to vaultwarden due to trademark issues. Updates stopped being pushed to the original Docker image URL, and several months passed until I suddenly noticed this. In the meantime I could easily have missed some important security upgrades.
I understand it's only myself at fault here, but it made me migrate to a hosted account instead. As fun as it is to self host things, something this security critical is probably better left to dedicated people.
Yeah, still considering how exactly. Not exactly thrilled about removing it from one cloud only to stick it onto a different one with arguably even fewer security features (geo blocks etc.)...
Ideally I'd sync it directly against home server, but iphone limits options on sync to own server a bit. Maybe via git...
Reading through these comments I am surprised at how many people were/are still using LastPass. I used it until they had an incident 5 or 7 years ago, and they have faced numerous others since. How many chances does an organization get with highly sensitive information before people move on?
This is why we need good OS-level password managers. Phones and now computers have dedicated security chips which are infinitely more secure than any cloud solution. Such an easy market to grab that it boggles me why Apple and Google aren't aggressively going for it.
Apple and Google both have solid options here, and I'm a happy user of Google's. But I also wouldn't want either of them to push their solutions aggressively, for competition reasons.
Do you consider your passwords to be "disposable" or easily replaceable? I could never trust Google with hundreds of passwords. The thought of their AI going haywire and essentially locking me out of the internet is terrifying.
> The thought of their AI going haywire and essentially locking me out of the internet is terrifying.
I think this is really unlikely; since https://news.ycombinator.com/item?id=34092956 I've been gathering lockout reports on HN and they're mostly things like adding a phone number to your account and then forgetting about that when switching numbers.
iCloud Keychain syncing, strong password suggestions in Safari, and WebAuthn passkeys are all part of Apple's strategy. When they don't buy a third party and deeply integrate it, they tend to operate by insinuating themselves as the platform default. What would you have them add to that?
Their "password manager" on Mac is called Keychain Access. The UX is very bad, the interface is old and clunky and it doesn't sync with iOS (if for example you create a secure note there's no way to access it on iOS) - not to mention that most people don't even know it exists, it's kind of a hidden feature. Meanwhile, on iOS the password manager is hidden in the settings and again it has pretty bad UI/UX. I understand that they want to hide the complexity away from the end user and make these kinds of features "just work", but in practice they feel pretty half-baked.
I agree that Keychain Access kinda sucks, but it's because Apple's UI paradigm for it is different. For them, the password manager isn't a separate entity that's a source for copy-pasting passwords into arbitrary apps; instead it's a core framework of the OS that apps integrate with. As such, it doesn't really have "its own UI", because each app provides the UI.
Of course, that does mean it's less universally convenient than the other commercial apps.
As usual with Apple stuff, I guess they're not interested in making it a better separate app because their value proposition is "use our frameworks and get this feature 'for free' "
I'd say - the ability to use it across platforms or across system accounts. I like to use my personal LastPass when logged into my work laptop with my corp account. Mind you, not to store work passwords, no, just to have access to e.g. my Amazon account.
Additionally, I share my LastPass with my partner. Probably not a setup for most, but we find it convenient.
All that is achievable only when the password manager is not tied to the system login.
If you're a small operation it's not stupid to keep your passwords in a google spreadsheet. Google accounts are seeing attempted cracks 24/7 for the last fifteen years (or more) by nation states and it's been a long time since any of their accounts have been hacked. Remember when someone spliced their optical fiber under the Atlantic Ocean? That's what Google is up against.
If you follow this, keep your passwords in triplicate, because if you fat-finger the mouse over a cell it can evaporate (experience speaking here). Three copies means very little chance of losing a password. And keep a backup, of course.
Share the spreadsheet among your team. It's very easy to add and delete users.
Yes, I know this seems cheesy, but sometimes the simplest way is the best. Good for up to about ten team members in my experience.
I think that is dangerous advice. It's easier to use Bitwarden, or KeePassXC in Google Drive, than a spreadsheet where you have to follow such rules. And the former is much more secure against a lot of mistakes and attacks.
I have multiple folks I know using KeePass with the kdbx synced on Google Drive or similar. The authentication is a combination of a 1KB key file (manually copied onto each device via sneakernet) and a long password.
Exactly. I've used Keepass then KeepassX then KeepassXC, synchronized across devices with Dropbox then now Nextcloud. No problem, the whole database is safely encrypted, and nobody can access it but me anyway.
question for those who know: for those of us in apple ecosystem, is just relying on their keychain an acceptable alternative to a third-party password manager company?
If your passcode(s) on your device are longer than 6 digits, sure. As it stands, you can recover your iCloud Keychain by either[0]:
(A) signing into iCloud (Requires Apple ID Password) + approving the new device on an existing device that has Keychain access
or
(B) signing into iCloud (Requires Apple ID Password) + performing SMS 2FA + entering the device passcode of your primary device
The threat model here is where a nation-state actor enlists the full cooperation of Apple and gets Apple to hand over your encrypted iCloud Keychain, then gets Apple to siphon your Apple ID password next time you sign in. They could then use those two pieces of information to brute force the passcode on your encrypted Keychain data. If you have an 8+ digit passcode, or an alphanumeric passcode, that makes it exponentially harder to brute force.
With the long passcode, your only remaining threat would be Apple shipping malicious hidden code or an RCE in their product that allows them to force your device to approve new devices non-interactively, which would allow them to approve a malicious device the next time you approve your own new device for access to iCloud Keychain.
Or perhaps it's more likely that, when you're setting up a new device, Apple sends over the name of your new device, but with the public key/CSR of their own device, since iOS doesn't show a key fingerprint during device approval or anything.
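To put rough numbers on the passcode-length point above, here's a back-of-the-envelope keyspace comparison in Python. The guess rate is a made-up illustrative figure that ignores key stretching and hardware rate limiting, so it only shows how the search space grows, not real-world attack times:

    # Hypothetical offline guess rate; real attacks against hardware-backed
    # key stretching would be far slower. This only illustrates relative sizes.
    GUESSES_PER_SECOND = 1e9

    def years_to_exhaust(keyspace: int) -> float:
        return keyspace / GUESSES_PER_SECOND / (60 * 60 * 24 * 365)

    for label, keyspace in [
        ("6-digit PIN", 10**6),
        ("8-digit PIN", 10**8),
        ("10-char alphanumeric", 62**10),
    ]:
        print(f"{label:22} {keyspace:.2e} combos, ~{years_to_exhaust(keyspace):.2e} years to exhaust")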
> As it stands, you can recover your iCloud Keychain by either...
I remember, a long time ago when i created an AppleID, there was a yes/no choice whether or not to upload [something related to the password] to Apple, so it would become possible to recover the AppleID password if needed in the future.
Is that still a thing, or has it been replaced by new features now?
Also, how does, say, Firefox's password manager compare? I've tried to look up technical comparisons before but found them unconvincing (making clearly outdated or incorrect claims). You can find the standalone managers compared to each other, but the non-standalones are missing.
I do know the answer to this - Firefox is not an acceptable alternative to a strong password manager. A local admin can dump any passwords from Firefox, and I think a user can even dump their own passwords from Firefox on Windows and Linux.
Relying on your browser/OS password manager is, for the vast majority of people, a better alternative. These days you get pretty decent integration even if you use Chrome on macOS and want your passwords to show up on your iPhone.
All online cloud password managers have vulnerabilities of some form or another. As computer people, why would you think otherwise? I don't understand any of the assumptions that your data would be safe in the cloud. I made this same point to a past employer: LastPass (or any other passwords-in-the-cloud manager) will always be vulnerable. But it falls on deaf ears. I would never, ever trust my passwords to an online storage provider.
Great technical reasoning, but no mention of the business practices. This is a consistent result we see when companies are acquired by private equity. Like Thoma Bravo did with SolarWinds, and is doing with Proofpoint. Vulture capitalists destroy the companies they target to extract short term value. This is why I warned people against using LastPass for several years. Sad to see I was already vindicated.
Hopefully this will stop the replies to comments complaining about stupid password rules: "Just use a password manager". Why would you ever trust someone else with something like this?
The advice should read "Just use a popular password manager that doesn't have a well-documented history of being terrible, from a company that releases its regular security audit reports". The most basic form of "do your own research" should reveal a long list of security incidents: https://en.wikipedia.org/wiki/LastPass#Security_incidents
While source code access is certainly a concern for some, it is worth pointing out that Enpass supports database sync methods such as Dropbox, Google Drive, and a proprietary "wifi" sync. This means that you can have sync like Lastpass or 1Password but retain complete control over your own data.
I can't speak to Enpass's security, but I have been a user for several years. It feels less polished than 1Password but is freer and more open.
That is what I am using. I guess an Enpass attack would require either an attack on the client on my phone/PC or on the storage provider (if they are able to crack my passphrase). I had assumed LastPass and 1Password were using the same approach.
How do we know 1Password doesn't have similar glaring oversights like LP?
We can't audit their code unless it is open source. I'm not going to just take them at face value because some random internet personality says so. Unless some respected authority can publish an audit of their security posture and source code, we're just taking them at their word.
Granted, if I had to choose today, I would instantly pick 1Password based on what I can find on Google, and LP has far, far more leaks than 1P.
But let's not kid ourselves that 1P is somehow more trustworthy without audits. And I'll eat crow if 1P has proof that they are routinely audited by 3rd parties.
EDIT: removed snark.
EDIT#2: If Signal can publish open source, why can't 1Password? If security is done right, the source code should be visible to everyone without jeopardy, or at least that's what I've been led to believe.
EDIT#4: We trust browsers too much. 1P stores the secret key on every device so that you only have to enter your passphrase. I'd really like that code to be public, because that's a great way to lose control of everyone's secret key. Extensions worry me because they are a critical component in any password manager's usability-vs-security tradeoff. But perhaps that is a digression, or worth an Ask HN.
Still doesn't explain why it's all not at least source-available. I'm not going to complain if they don't use open-source licenses such as MIT or (A)GPL, but straight up not making the source code publicly readable at all is a big strike against it.
Making it open source is no guarantee at all (if there could be any); for instance, it could be more dangerous: anyone could spot a security hole and take advantage of it without reporting it, and there is no guarantee there would be a responsible disclosure.
That seems to be about transitioning to an open-source model. I don't mean that. I mean simply having their git repo publicly accessible in a read-only fashion. No external contributions, no license, etc.
I see no reason not to do this, especially for such a security-oriented service. You should be striving for as much transparency as possible.
Because they like being in business vs just giving away their software?
Where is the repo of the software that you've paid an unknown number of developers to work on for multiple years over multiple versions, that you charge for, and that runs a viable business employing all of those people?
Just because source code is available doesn't mean you can legally use it.
That's why every "the company source code got stolen!" news story is a nothingburger: no competitor can use it, it would be a huge liability to be caught using it, and it "only" matters for people who might find bugs in it.
> It's also been built by people who are respected in the security industry.
This means almost nothing. It is an appeal to authority. Experts can still miss things. Yes, it is better than experts saying a product stinks, but still is not trustworthy without open source. Maybe I'm making my own fallacy here, I'm just trying out a position.
Of course not, I'd rather the code be Open and audited.
Being Open Source is in some ways a multiplier on security because it allows more expert review. But the expert review is the important part; if security-critical code is Open Source but hasn't been looked at by anyone other than the main developers, the Open Source part is a multiplier on zero.
It's a little bit more complicated than that, and there are a lot of other factors at play as well, even outside of security. We're not really getting into stuff like future-proofing and what happens if 1Password gets a lot worse in the future. It's complicated.
But the gist is that while it would be a lot better if 1Password were Open Source, in its current state it has probably still had more eyes on it than some Open Source security projects do.
When a tool is open sourced, both honest and rogue security experts are going to go through it.
Now, how much value the first group adds versus how much the second removes is the question. And based on the incentives for each - an honest security researcher maybe getting a small bug bounty, a rogue one potentially gaining access to tens of thousands of accounts - I think it might be a net negative overall.
Even so, the number of experts looking at the code will increase if it's open, right? I know the concept of "many eyes make all bugs shallow" isn't the panacea we once thought it was; there can still be bugs lurking in widely-read sources.
But it must be smaller than the number of bugs that exist in code read by a smaller group.
Sure it is: an appeal to authority is not a valid step in a deductive logical argument, unless you have somehow established that the authority in question is literally infallible.
Now, it's grounds for an (extremely) persuasive inference! And we know very little of what we consider known by strict deductive logic: we rely on weaker inferential reasoning the vast majority of the time. Grandparent's "means almost nothing" is much, much too strong.
But when we really want to know for sure that something is true, people are going to want to see proof, not a statement from someone who probably knows of proof.
If you want to go down this route, we know nothing of the real world from strict deductive reasoning because the axioms strict deduction flows from do not apply to the real world, but to mathematical universes where absolute truth is accessible to us. In reality, all statements we could use as premises are only probably true to a certain level of confidence, having themselves been constructed from inductive reasoning. Therefore, appeal to a good authority is as good of a step as any, by which I mean it's only provisionally worthwhile unless and until sufficient evidence comes in to demonstrate it is invalid in a specific case.
Absolutely we can't get very far with logic without relying on some axioms, and those axioms can't themselves be proven. But I don't think it follows that we need to accept every axiom someone proposes, such as what counts as a good enough authority. You and I probably agree on the basic existence and persistence of objects, for example, and we might as well pretend that we agreed to treat that as axiomatic ahead of time. I'm less sure that we'd have the same list of which people count as infallible-enough experts in which areas.
I'm really just here to stand up for the body of knowledge I learned in 9th grade Logic Camp. Good old classical Aristotelian logic is where the concept of a "logical fallacy" comes from; it really is all about deductive reasoning and not about inferences; and there really isn't a PhD-from-a-really-good-school exception to the fallacy of argument from authority. Just the same way that "X is false because George Santos said it" is simultaneously very persuasive, at least to me this week; and also a fallacy ad hominem.
> But I don't think it follows that we need to accept every axiom someone proposes, such as what counts as a good enough authority.
Nobody's saying we do, and I think there's a good middle ground we all actually inhabit where the director of the CDC, for example, is an authority on diseases when speaking in an official capacity, but we don't give a damn what they think about the latest movies. This isn't a difficult concept until we try to formalize it, really, at which point I'm sure we can run off into a bramble of paradox and bizarre conclusions we "must" accept in order to satisfy certain kinds of logical consistency.
> I'm less sure that we'd have the same list of which people count as infallible-enough experts in which areas.
This is a problem and we've seen it be a problem quite severely in recent years. Part of the problem is certain political groups following people with absolutely no recognizable expertise in anything except making money from people who don't recognize logical argumentation as even potentially useful: If it doesn't validate their pre-existing ideas, it's not only wrong, it's a trap laid by the enemy, and anyone who promotes it must be punished. Take that mindset, add some over-simplified and incorrect models of reality, and you have people who refuse to accept reasonable sources of authority for self-contradictory and purely emotional reasons.
Finally, logic has come a long way since Aristotle, and using some kind of consistent attempt at statistical reasoning is no less valid than declaring some probably-true statements to be axioms or postulates and reasoning deductively from there. Like diagnosing a disease: You can't use deductive logic to determine why you're having flu-like symptoms, you must use some kind of reasoning based on relative frequency of diseases and, possibly, incorporate the results of various tests in a more inductive fashion. "George Santos lies a lot" is a perfectly valid piece of evidence to incorporate into a worldview, just like "The seasonal influenza is more common than some horrendous infection which also initially presents with flu-like symptoms" is.
I agree about the problem - I might quibble with the psychology, but close enough. But that's exactly why the distinction I'm trying to draw is so important. If we keep in mind the difference between what we can rigorously establish and what we're fundamentally taking on faith (however well-founded), then at least we can talk to people on the other side: clarify core disagreements, examine evidence, and occasionally, with entirely too much work, change a few minds. That may sound hopelessly optimistic, but shifting one mind in fifty would be an earthquake: for most purposes, enough to declare victory.
Treat arguments from authority as indistinguishable from any other rule of reasoning, on the other hand, and there’s nothing to do but declare people who don’t share our view of authority irrational. And then I don’t know what the plan is.
It’s exhausting when people refuse to recognize well-established expertise, but I think it’s counter-productive that so many of us get defensive about it. Yelling at people to accept our authority figures isn’t working. It won’t ever work.
Here is where I think we part ways, philosophically speaking:
> If we keep in mind the difference between what we can rigorously establish and what we’re fundamentally taking on faith (however well-founded)
From my perspective, we're taking everything in the real world "on faith" (and there is a loaded phrase ripe to be deliberately misinterpreted) to a certain extent, and not just because of brain-in-a-vat arguments. For example, I sit in chairs thinking they're solid objects, but they're made of solid objects and might well collapse under me. In my experience, that doesn't happen to me, so my heuristic is that chairs are safe, but a heuristic isn't rigorous. It's "faith" if you want to phrase things that way.
Moving deeper, I trust that my senses provide me with accurate-enough reflections of reality I can use them to navigate my world safely, but I know enough about neurology to know that that isn't a given. Vision is reconstructed by the visual cortex from messy and incomplete nerve signals from the retinas, our sense of 3D space is reconstructed (based on low-level heuristics) from a pair of 2D images reconstructed from those messy retinal signals, and so on, from the bottom of the neurological hierarchy to the top of the conscious sense of self. The human machine lives off of best-guess reconstructions from incomplete and messy data.
This isn't mere acatalepsy, however: I think humans live in a physical world we're capable of perceiving accurately enough, and comprehending well enough, that we can accurately say we live in a real and comprehensible external Universe, and that some things don't go away even if you don't believe in them. Therefore, it's possible for our heuristic judgements to become more accurate at predicting reality over time, which is what separates knowledge from dogma.
Accepting that an authority is probably more likely to be right than wrong is a heuristic, and that heuristic can and should be improved, but all of our knowledge of reality is heuristic, so trying to treat reality like an axiom system is philosophically wrong-headed and incapable of dealing with the full complexity of reality as well.
I'd agree with almost all of that, but with two additions: I think we can locally approximate reality as an axiomatic system, with the axioms just being the stuff we've decided to treat as true for the moment. When the context changes, what counts as an axiom changes, but that only rarely happens to me with chairs.
Second, I think there's at least a rough partial order on the things people propose as axioms: the order of how surprised I am when someone contests one, if you like. Very surprised, if it's the existence of external reality; not too surprised at all, sadly, if it's whether biologists are to be trusted over the pastor at their church on the origin of species.
The nice thing about that machinery is that when you meet the second sort of person, you're not forced to believe that they're fundamentally irrational and immune to all reason.
One more thing: trusting experts is a very convenient heuristic for me individually, and I use it all the time, but it's not epistemologically all that useful. In principle you can always replace an appeal to the authority of a genuine expert with the evidence that they rely on to come to their judgement: if you can't, then they're speaking outside of their expertise. No one of us can win an argument on the internet that way; time is finite and we each only know so much. But if we all chip away at it, by supplying evidence where and when we can, we might get somewhere.
If we're talking about fallacies, we're invoking old-fashioned classical logic.
The nice thing about having a notion of strict deductive reasoning is that it gives you an idea of what arguments might persuade someone of basic good faith, but who very much doesn't want to be persuaded. If you and your interlocutor don't both accept modus ponens then you won't get anywhere, but that's pretty unlikely in practice.
(And I'd actually argue that we each invent logic on our own and then notice that the logics are equivalent, but that's straying into metaphysics.)
But it's incorrect; it's literally the logical fallacy of argument from authority. If the person is not an authority, then it cannot by definition be an argument from authority.
Experts must prove their views with evidence and not rely upon their reputation; that is the meaning of the fallacy.
An argument from authority is not a fallacy in and of itself.
An appeal to false authority is always a fallacy, such as considering an authority's opinion on a topic on which they're not authoritative.
If the participants in a debate agree that an authority is legitimate, then an unchallenged appeal to their authority is not fallacious.
If an authority's opinion is contradicted by undisputed evidence, then an appeal to their authority is fallacious.
The whole point of the distinction is to admit authority as a valid source of information, in the absence of direct evidence, because we can't possibly reason from direct evidence in every single case.
And, when you are unable to evaluate the truth of a statement for yourself, the expertise of the person making the statement is a helpful datapoint when deciding how much to trust it.
Just curious: Imagine two people took the 1Password white paper and created two separate implementations. The only information you have on the two implementations is the background of the people who implemented them. One is a first year CS student; the other is a seasoned security researcher with multiple published vulnerability discoveries. Which would you choose and why?
> if done improperly, this could be a logical fallacy
Maybe I'm wrong but doesn't this put it in a totally different category than other fallacies? Circular arguments, for example. All circular arguments are wrong. If you identify a circular argument you don't even have to fully understand what is being said, you can immediately conclude "this is a bad argument".
But appeal to authority isn't like that. "I'm not going to drive through the mountains today because the roads are iced over, and I know that because the highway department said so." That's an appeal to authority, and it isn't proof, but it's a good argument and there's no need to drive out yourself and confirm that the roads are dangerous. Identifying an appeal to authority is not enough to discard an argument, you need to evaluate the authority. When you see a circular argument you don't have to measure the diameter of the circle or something.
So I don't think appealing to Grammarly was a fallacy, but in this case I think they're wrong.
> When you need to support a claim, it can be tempting to support it with a statement from an authority figure. But if done improperly, this could be a logical fallacy—the appeal to authority fallacy.
The key words are: "if done improperly, this could be…" The article goes on to give non-fallacious examples of an appeal to authority. The quality of an implementation will be correlated with experience, so this is an example of a non-fallacious appeal to authority.
I get it but in this context "authority" means an authority on a particular topic, not like a police officer or something. Appeal to authority becomes a fallacy when you appeal to someone who is not actually an authority on the subject at hand.
But how do you determine if someone is an authority? Our world is filled with epistemological bubbles. One person's expert is another's snake oil peddler.
Right, I don't think we really disagree about anything. "That's an appeal to authority" is not a useful objection, because appealing to authority is often a great idea. "That's not a good authority" is a good objection and often very relevant.
It's sort of like seeing a lot of arguments that rely on false or misleading evidence and deciding that "appeal to evidence" is a fallacy. The choice of authority or evidence is the issue, not the act of appealing.
I've been very reluctant to use their cloud solution as I trust Dropbox more for security. So I still fight 1password to keep the vault stored in Dropbox.
I figure there are maybe 4 organizations who are active enough to prevent a full download of all their users' data: Google, Dropbox, Amazon, and Facebook. (Maybe Apple, but they seem lethargic.)
Because they store all the passwords to all of our services they are a huge target.
Vaults stored in Dropbox could be brute forced if an attacker ever gains access to the ciphertext. The 1P service mitigates this by adding a random key as salt that’s stored locally (iOS keychain/browser storage). That key never leaves your device (authentication is done via zero knowledge proof), so the master password is virtually impossible to brute force right now, even with a very weak master password.
I can highly recommend reading the white paper, it’s very well written and kinda like a comprehensive guide to E2EE which every SWE should be familiar with anyway.
And what do you mean by Apple being lethargic? They added a secure enclave to all their devices that is probably the most secure storage and crypto processor you can hope to get for that kind of money. They also added the option of completely end to end encrypted backups. Because of the secure enclave, that’s actually a really safe option.
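For those curious what mixing in that device-held random key buys you, here is a minimal Python sketch of the idea. This is not 1Password's actual construction (the real design also involves HKDF, per-account salts, and SRP); it only shows the core point that a high-entropy secret stored on the device gets combined with the password-derived key, so guessing the master password alone is useless:

    import hashlib, secrets

    def derive_unlock_key(master_password: str, secret_key: bytes, salt: bytes) -> bytes:
        # Slow, salted hash of the (possibly weak) memorized password...
        pw_key = hashlib.pbkdf2_hmac("sha256", master_password.encode(), salt, 650_000)
        # ...mixed with a random device-held secret. Brute forcing the password
        # from a stolen vault now also requires stealing this key from a device.
        return bytes(a ^ b for a, b in zip(pw_key, hashlib.sha256(secret_key).digest()))

    secret_key = secrets.token_bytes(16)  # generated at signup, printed out, never sent to the server
    salt = secrets.token_bytes(16)
    unlock_key = derive_unlock_key("correct horse battery staple", secret_key, salt)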
> Vaults stored in Dropbox could be brute forced if an attacker ever gains access to the ciphertext.
I’m sorry, but could you expand on this? If I leave my encrypted password vault on Dropbox’s servers, then of course they could attempt to brute force it. I thought the entire point of the encrypted vaults is that brute forcing is computationally infeasible, but technically possible.
How would you run a shared vault for work on a „dumb“ file hosting service? With the ability to add/remove team members, recover vaults in case of password loss etc? What about the fact that master passwords can be brute forced if they are weak, just as LP customers are now affected?
I mean, cryptographically we’ve had solutions to those exact problems for 30 years.
PGP might not be very usable but it also had mechanisms to do this.
if you are scared of people copying the vault before they lose access to the storage: you’ll be very sad to know that this is already possible with the SaaS solutions.
if you're worried about people breaking the vault if they have access: then its even more of a reason to control the access.
> I mean, cryptographically we’ve had solutions to those exact problems for 30 years.
My point exactly - 1password.com addresses those problems (e.g. by adding a random key to the master password, and via Secure Remote Password auth), while using just a master password (sans random key) in a dumbly file-hosted vault does not.
If I got you right, then 1P does use this mechanism, and LP most likely too. The problem is - how do you store the private keys in a way that their loss is not catastrophic? Services like 1P and LP answer that question, with varying levels of sophistication.
With 1P, it works roughly like this:
- Every vault item is encrypted with the vault's key (a randomly generated 256-bit AES key)
- For every user that has access to the vault, the user‘s public key is used to encrypt the vault key. I think this is what you meant by multi-key? Adding a team member to a vault means encrypting the vault key with the member‘s public key.
- The user‘s private key in turn is encrypted with the „Account unlock key“, which is made up of: Master password + random secret [1] + salt. Neither of those is ever sent to the servers in plaintext, made possible by zero knowledge proofs.
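A toy Python sketch of that layering, assuming the third-party `cryptography` package; helper names like wrap_vault_key_for are made up for illustration, and this is not 1Password's actual code or on-the-wire format:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    vault_key = AESGCM.generate_key(bit_length=256)      # random per-vault key

    def encrypt_item(item: bytes) -> bytes:
        # Each item is encrypted under the vault key.
        nonce = os.urandom(12)
        return nonce + AESGCM(vault_key).encrypt(nonce, item, None)

    def wrap_vault_key_for(member_public_key) -> bytes:
        # "Adding a member" = encrypting the vault key to that member's public key.
        return member_public_key.encrypt(
            vault_key,
            padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None),
        )

    alice = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    wrapped_for_alice = wrap_vault_key_for(alice.public_key())
    ciphertext = encrypt_item(b"login: hunter2")
    # Alice's private key (itself protected by her account unlock key, derived
    # from master password + secret + salt) recovers the vault key, which then
    # decrypts the item.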
If you stored your ciphertext on a dumb file hoster: sure, you can increase entropy in the master password to match 1P's random secret. Or just store your private key directly, as I think you suggested. But this is not memorable, so where do you back this key up in case your hard drive fails? Aren't we entering recursion at this point, requiring yet another round of encryption? There's no end to this. At some point you need to have either a password you can commit to memory, or one that's stored in something like a secure enclave.
You could also print out the private key, like 1P is suggesting for its secret key. But exposure of the piece of paper is a total breach if it’s your private key, but not with 1P‘s secret key.
1: Random secret is stored on your device, and they ask you to print it out upon signup. It’s never shared in plaintext with the 1P server, same as the master password.
You trust Dropbox? The company that infamously invited to their board a former government official responsible for authorizing warrantless mass surveillance?
I don't buy the "if it's open source we can see/audit/trust the service" argument.
1. Bugs get missed all of the time in OSS. There is no guarantee that the more eyes the better, and in fact there may be a negative correlation due to the bystander effect [citation needed].
2. A software service is a complex interaction between many pieces of software. Two perfectly secure, audited pieces of software could interact in an exploitable way.
3. Just because the service provider tells you "this is the code, this is how we keep you secure, etc." doesn't mean it's true in practice. A bad actor could modify the code in production before the next version of "audited, trusted, OSS" is vetted.
4. Security practices outside of the code also matter (arguably more so), and even an organization with good policies can fail to follow them at times.
Ultimately, we're trusting the people behind the services we use to be honest and do their best. It seems that LastPass has demonstrated they aren't as deserving of that trust lately, but THE SAME THING COULD HAPPEN AT ANY ORGANIZATION.
Footnote: LP/1P could push an update that grabs everything necessary to decrypt your password vault the next time you log in.
It's a nice data point, but it's not necessary to me. Do you have the source code to your mail service provider or your online banking software? [1]
Having the source code available says a few nice things:
1. This company is confident enough to show their work
2. This company is "good" at software engineering (or it could reveal the opposite)
[1] I know some people can and do run their own mail servers. I can respect that, but I trust the Google devs and organization to be properly competent and incentivized to do a good job keeping my email account safe.
The older versions of 1Password are BYOH, bring your own hosting. I use it because I don’t want a single source of failure. My information is encrypted and stored in another cloud service. It doesn’t matter if that cloud service is breached. It doesn’t matter if 1Password is breached.
It's a shame because it's an excellent product, I really wish 1P understood this. Their removal of the cloud sync feature and insistence on moving to a subscription-based model is infuriating and drove me away from them. I want to be in control of where my vault is synced to and the only way of doing that is by staying stuck with the older 1Password 7.
Important piece of context here: if LastPass was the only password manager on the market, that's what I would recommend people use, even after the breach. Having a password manager is a big boost in security even if that manager is LastPass.
Personally, I stick to Open Source solutions (KeepassXC), but I don't typically recommend other people use KeePass unless they're technically inclined -- because the biggest risk with a password manager in my opinion is user error, and so I want to focus on that even if it means that someone isn't using an Open Source program.
All that to say, that this line:
> Granted, if I had to chose today, I would instantly pick 1Password
is a pretty good summation of the situation. I do think 1Password is likely to be a lot more secure than LastPass, but even if I weren't confident about that, the situation many users are in is:
- they should be using a password manager, so they do need one today
- an online password manager is a better fit for most users than an offline solution
- 1Password is (I think) slightly easier to use than Bitwarden and is more often recommended by security professionals.
----
But Bitwarden would still be a fine choice for people who want to use something Open Source, and offline solutions are great for people who feel comfortable with them (I keep my password manager offline because I have the technical skills to do so, and I like the added boost of security from my vault not sitting open on a server someplace). But the important thing is that they use a password manager in the first place, since using actually secure unique passwords across every site is basically impossible for most people without one. It's not so much that I assume 1Password is perfect; I just think it's the best choice for a lot of people right now based on the information we have. I'm not going to recommend KeePassXC to my parents; it's important to me that their solution be online and managed by a professional.
A lot of security is about making the best choice available based on imperfect information and based on individual context.
I used to recommend LastPass to people who I knew weren't willing to pay money, because (again) I wanted them to be using a password manager no matter what, and if they weren't willing to pay for 1Password, they might as well use something free. But even that advantage has dried up a little bit; LastPass has gotten a lot less generous about what they offer for free.
This alone lends credence to their claims. To date there have been no major breaches despite their being a large target (albeit smaller than LP). Moreover, the fact that your vault is both password protected and locked behind a secret key is about as good as you can get in terms of commercially offered security.
That's not to say they couldn't be lying. But after careful evaluation I've gone to them and people much more experienced in security have also moved to them as well.
FWIW a "source code audit" or "making it open source" does not imply intrinsic security. You are putting far too much weight in either a firm to do the right thing with money, or the existence of sufficiently motivated OSS researchers mining what might be millions of lines of code. We still find bugs in the Linux kernel regularly despite it quite literally having tens of millions of eyes on it. What makes you think this would do anything more than assuage your fears through security kabuki? In fact, OSS while sounding nice introduces an entirely new attack vector that a company may simply not have the staff to mitigate. To use the Linux kernel once more vulnerabilities have been deliberately injected into the kernel more than once. There have been game breaking SSL bugs. Huge overflow problems, etc. I love OSS. It is not a panacea. Signal chose this model - it does not imply it is the best, the most practical, or the most secure.
If a document alone lends credence to their claims, the source code would do wonders. It's not about public contributions, it's about transparency and good faith.
Why do you trust that the source code is actually what they deploy to your device, or that what they build isn't linked against extra libraries, maybe even internal ones?
I do. Having source code available doesn't mean anything if you can't verify the source yourself, or if the source can't be community-verified. There's no point in a backend being open source if nobody can verify what is running there. There's no point in an iOS app being open source if the app is distributed through the App Store, as we have no way to verify that it is what it says it is.
Meanwhile if I'm running Debian I trust the maintainers have built the source code to distribute to me, similarly with homebrew and chocolatey.
> Its a matter of trust and good faith
Indeed it is, and you have to trust 1password, and if you don't, it doesn't matter whether there's good faith or not.
> Indeed it is, and you have to trust 1password, and if you don't, it doesn't matter whether there's good faith or not.
Precisely. By not showing their source code, they implicitly state that they don't care about whether you trust them or not when it comes to source code. It is not a meaningless distinction. Bitwarden for example lists being open-source as a plus in security. You may not personally see it that way, but a big part of the industry does.
Intellectual property is important, and making everything open source would allow our competitors to easily copy it or at least get an idea of how to improve their products. It is hard to seriously compare the features, the security design, and the UX of Bitwarden to 1Password - it is not close. Just a few examples: being able to edit your data while offline, the ability to have large notes with Markdown formatting (aka "Moby Dick Workout"), support for large datasets (more than 100,000 items).
1Password has been in business for 17 years, longer than any other password manager. It is very difficult to build a long-term business model completely on open source.
I never said open source was to be the foundation. In fact, I never talked about open source at all. All I'm referring to is source availability.
As I said earlier I'm not going to complain that you won't use a free license such as MIT or AGPL or whatever else. The real issue is just the sources being publicly auditable. Are you worried about your competitors copying your non-copyrightable material? Ideas?
While there would still be an issue, I would be a bit less harsh on the policy if at least the clients were source-available. Transparency is security.
> It is hard to seriously compare the features, the security design, and the UX of Bitwarden to 1Password — it is not close.
How does Bitwarden not come close in security? All I can come up with is the secret key requirement. Is that all? If anything Bitwarden feels more secure because of its transparency. You can see the developers working live, each commit they make.
The client source code is where most of the IP is. The server code is pretty dumb on its own; all it does is the sync and permissions.
One of the issues with Bitwarden's encryption is the fact that every field is encrypted separately, and that could provide more info to the attacker. For example, you could tell how many URLs are in a particular login, or whether an item has a note and how long it is.
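A tiny generic sketch of that metadata point in Python (assuming the third-party `cryptography` package; this is ordinary AES-GCM, not Bitwarden's actual code): even without the key, the shape of the encrypted item reveals how many URL fields it has and roughly how long its note is.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)

    def enc_field(value: bytes) -> bytes:
        # Each field encrypted independently, as described above.
        nonce = os.urandom(12)
        return nonce + AESGCM(key).encrypt(nonce, value, None)

    item = {
        "urls": [enc_field(b"https://a.example"), enc_field(b"https://b.example")],
        "note": enc_field(b"a fairly long secure note " * 10),
    }
    # An attacker holding only the blob still sees: two URL fields, a note
    # present, and the note's approximate length (ciphertext length).
    print(len(item["urls"]), len(item["note"]))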
Noted, thank you. So why not source-available? I assumed you already published the non-copyrightable ideas in your public whitepaper. Is there a concern that even if the sources are made available under a "look but don't touch" basis (essentially all rights reserved) competitors would still gain an advantage by copying the non-copyrightable stuff like processes or ideas? (that are already public through the whitepapers and could reasonably still be obtained via reverse-engineering of the client binaries)
When I see people running to 1Password, I'm really concerned. I don't know whether 1Password has a bit of a cult following here or whether they're doing some astroturfing in this community. But 1Password's claims are the same claims LastPass used to make (zero trust, secure, …), and now we're discovering that LastPass was totally lying. We have no way of knowing whether 1Password is telling the truth.
For me, my password manager is the one absolute thing I want to be auditable. 1Password with its followers who "know the people who designed it" reminds me of "trust me bro" and "funds are SAFU" in cryptocurrency projects. Leaving LastPass in order to sign up for 1Password feels like jumping out of the pan and into the fire.
I've always used and will always recommend FOSS for a password manager. I install Bitwarden for my technology-illiterate friends and family, and I use KeePassXC personally. I've looked briefly at how vaults are encrypted in both pieces of software. The security is sound. Bitwarden was audited by Cure53[1], which is one of my favourite security auditing firms. (I highly recommend reading their audit of dovecot[2])
Anyway, you know what they say: "fool me once…". I'll be eating popcorn when some claim turns out to be false for 1Password…
These are pen-tests and black-box security audits. While they're definitely better than nothing, and they would show that their security is better than LastPass', the code was never audited.
Are we reading the same reports? At least the two latest reports by Cure53 mention a "source code audit". In addition, the audits by ISE and AppSec explicitly mention code review as part of the audit.
I am in no way familiar with these kinds of reports, but does this not mean that (at least parts of) the source code was audited?
On top of that, they’ve made their client software prettier and slower, but not really more usable IMHO. I migrated to BitWarden and don’t think the user experience is any worse.
One of the tests we recently added to 1Password is the "Moby Dick Workout" for secure notes with Markdown rendering. Would love for you to compare 1Password and Bitwarden:
I use 1Password, but the slowly worsening/slowing UI and the move to a subscription model, along with the fact I can't even upgrade to the latest version because I still sync my own vault, are making me look around for an escape hatch.
I guess it’s the Keepass family of apps, or perhaps Bitwarden, but it’s disappointing to even have to think about switching.
My only complaint with Bitwarden is that the 1Password extension seems to have fewer issues automatically recognizing login fields for web sites. In most cases, that's an easy enough fix via custom fields in Bitwarden but the UX around that could be nicer.
> ... Padding oracle vulnerabilities, use of ECB mode (leaks information about password length and which passwords in the vault are similar/the same. recently switched to unauthenticated CBC, which isn't much better, plus old entries will still be encrypted with ECB mode), vault key uses AES256 but key is derived from only 128 bits of entropy, encryption key leaked through webui, silent KDF downgrade, KDF hash leaked in log files, they even roll their own version of AES - they essentially commit every "crypto 101" sin. All of these are trivial to identify (and fix!) by anyone with even basic familiarity with cryptography, and it's frankly appalling that an alleged security company whose product hinges on cryptography would have such glaring errors. ...
Seems bad.
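For anyone unfamiliar with why the ECB point in that quote is so damning, here's a quick illustration (assuming the third-party `cryptography` package): identical plaintext blocks encrypt to identical ciphertext blocks, so repeated passwords and vault structure leak without the key ever being recovered.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)
    plaintext = b"hunter2_hunter2_" * 2   # two identical 16-byte blocks

    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ct = enc.update(plaintext) + enc.finalize()

    print(ct[:16] == ct[16:32])   # True: an observer can tell the blocks matched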
I took the author's advice and did the google search "jeremi gosney" + "lastpass". This 2015 article turned up:
> A longtime LastPass user himself, Gosney says he doesn’t plan to change a thing following the breach – not even his master password. “I am confident in the strength of my master password. As a password cracker, I know it’s impossible to crack,” he says.
For me, one cloud solution, and a local KeepassXC database as a fallback. Works like magic. If the cloud solution (Bitwarden) fails, then I fallback to Keepass.