Molly for Android: A fork of Signal for Android with passphrase lock (github.com/mollyim)
82 points by giuliomagnifico on Jan 12, 2021 | hide | past | favorite | 89 comments



The main problem with projects like these is that I don't know (without manually checking myself) whether they are actually tracking the Signal source code effectively. Building Signal isn't exactly straightforward (just due to Android/iOS crud). We need build attestation from reproducible builds to be extensible to slightly modified source trees.

Other than that, love it. Now to add iOS backups and cross-platform imports... And strip out GIFs (or at least let me disable them). And force unknown callers, by default, to go via a relay...

So many wishlist items which are probably wasted effort because they will bit rot without being accepted by Signal.


does signal not accept pull requests or something? currently there are two signal forks on the frontpage, would be much better to upstream the effort if that's viable


As far as I've seen, Signal doesn't really accept outside contributions. I'm not sure if it's policy, or just a culture thing.

They may have accepted some in the past, but I've never seen it, and I've seen a lot of PRs closed unmerged.


checked the last 10 closed pull requests

- 1 closed by the author realizing it had already been done

- 2 closed saying they were going to fix it differently

- 1 closed saying they took some code from the commit and fixed part of it differently

- 6 merged

seems quite reasonable to me (albeit this is only from the newest 10 closed PRs, obviously)


Your 2nd & 3rd examples I've seen a fair bit of, but I've mostly seen them closed due to the feature not lining up with Signal devs' intent.

Glad to hear this seems to be changing; 6/10 is pretty good.


Signal accepts pull requests theoretically, they just usually reject everything.


i looked at the history and it seems they close pull requests even when they accept them, then merge the changes via their own commits. but you can see them accepting stuff in the comments of the closed pull requests


That's extremely obnoxious, why?


Github still doesn't provide a means to mark a pull request as merged if you've locally rebased and pushed.

https://github.com/isaacs/github/issues/2 https://github.com/isaacs/github/issues/548

This is what's happening with Signal's PRs. Here is an example:

PR: https://github.com/signalapp/Signal-Android/pull/10266/commi...

Commit in PR: https://github.com/signalapp/Signal-Android/pull/10266/commi...

Rebased and pushed commit: https://github.com/signalapp/Signal-Android/commit/6df839612...

Note that they have different commit references. Thus Github does not recognise this as a merge, and there's no way to explicitly tell Github that you did merge it.
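The underlying reason is that a git commit ID is a hash over the tree, the parent, and the author/committer metadata together, so rebasing produces a new ID even when the change itself is byte-identical. A simplified sketch (the real object format is abbreviated here; field contents are illustrative):

```python
import hashlib

def commit_id(tree: str, parent: str, author: str, committer: str, message: str) -> str:
    """Compute a git-style commit hash (simplified: real git hashes the
    full commit object, including timestamps, behind a 'commit <len>\\0' header)."""
    body = f"tree {tree}\nparent {parent}\nauthor {author}\ncommitter {committer}\n\n{message}"
    data = f"commit {len(body)}\0{body}".encode()
    return hashlib.sha1(data).hexdigest()

# The same change (same tree, same author, same message), rebased onto a
# different parent and pushed by a maintainer:
original = commit_id("abc123", "tip-of-pr-branch", "contributor", "contributor", "Fix crash")
rebased  = commit_id("abc123", "tip-of-master",    "contributor", "maintainer",  "Fix crash")

print(original == rebased)  # False: GitHub sees two unrelated commits
```

Since GitHub matches PRs to merges by commit ID, the rebased commit never links back to the PR.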


Hi, there, Signal Android developer here. This is correct. Our process is that we work in a pre-release branch, and then merge that to master when we make a release. So GitHub doesn't usually recognize it as a merge, but we always keep the original authorship and whatnot.


You could include "Closes #123", "Merges #123", or "Fixes #123" in the commit message of the merge commit, and GitHub will link it and close it (it doesn't actually consider the PR merged, but at least it auto-closes and links)


Can’t you just change the merge target to the prerelease branch?


Maintainers can't. Only the PR owner can.

For that to work the PR owner has to know which branch to target (assuming it's public).

Unless the PR creator somehow knows which release their PR is going to be included in, with a guarantee it'll be merged in said release, then it's just not possible for the PR creator to target the correct branch.


It’s a bit undiscoverable, but if you click the edit-title button it will actually allow you to edit the base branch as well. A sibling comment said this might be opt-out-able, but at least in ms/vscode no one seems to opt out.


> Maintainers can't. Only the PR owner can.

Just tried it, seems to work fine for me..?


When filing a PR there's an "allow maintainers to edit the PR" checkbox (I don't recall the exact wording). If the PR owner doesn't check it, the maintainer can't change the target branch.


Hey there, it would be great if you referenced these merges in the pull requests, so it would be visible to everyone (and especially the devs who put in the work) that the code is used.


In one of the projects I was in, we did the same but then made a comment with the merged commit hash so that folks can check it out.


Ah, it turns out Github is the obnoxious one.


I don't know but I can speculate a couple reasons why they might:

- They might not want people who aren't officially affiliated to get "contributor" badges on GitHub, so people don't get confused thinking they speak for the project when they participate in discussions

- GitHub may not be their primary way of managing their source, so pull requests have to be transferred to their internal system first before being released on GitHub

- Better control over what gets into what version without having to micromanage when to accept pull requests


Also helps with ownership. They have to maintain the thing, and it's easier to do when the person who committed it is a co-worker you can get a hold of.


Sounds like they don't use github to manage the codebase.


But Git is a distributed VCS. They could just as well cleanly merge pull requests into the main branch as hosted on GitHub and then merge GitHub's main branch into their canonical system's main branch.


I just randomly picked a contribution from an external party. The commit that made it into mainline has an author set to the person that contributed the code originally and a committer set to one of the 'signal.org' members. I'm confused why people would have a problem with how they chose to manage their code if the original attribution is not lost in the process. I'm fairly certain the Linux codebase is managed similarly.

Edit: adding links I forgot https://github.com/signalapp/Signal-Android/pull/10219 https://github.com/signalapp/Signal-Android/commit/dda51bf36...


Git is a distributed VCS, so if another tool works better for their workflow, then they can use that instead of GitHub


They are also not interested in putting Signal on F-Droid.

Quite off-putting to me.


>The main problem with projects like these is that I don't know (without manually checking myself) whether they are actually tracking the Signal source code effectively.

That's my main complaint with Signal - lots of widgets. More code to audit and keep an eye on.

More users is good, but stickers and stuff.. meh.

Maybe teach zoomers how to use emoticons ;-)


I understand where Moxie is coming from: user-friendliness (and candy) increases the user base in a demonstrable way. At the same time, adding code like this pretty clearly increases the attack surface unnecessarily. So there is a tradeoff they are making for everyone. I would much rather be able to disable that additional state space, even if I can't strip it out of the build entirely.

I also find it a bit crazy that the 'desktop' app is Electron, and they don't hint anywhere what a house of cards Electron is. I wouldn't run it except inside a VM, and even then I would have to accept that all the messages could be extracted remotely. They give no indication of their compliance with best practices (e.g. https://labs.bishopfox.com/tech-blog/reasonably-secure-elect...) which is disturbing.


Yeah, that's one reason I prefer verbal convos. Electron aside, how many people even keep their phone on the latest version? There's all sorts of ways to slip up with Signal, though now that I'm not violating COPPA by posting on the boards, I don't see a need to make a literal list of all of them.


People really do like those stickers, though. You can just not use them. I find them annoying too, but I'd rather Signal have them so people who want them don't have a reason not to use Signal.


Signal has reproducible builds for Android: https://github.com/signalapp/Signal-Android/blob/master/repr...

I didn't try it, but I don't see how it could be more straightforward than just launching a Docker container.
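The verification step after the container build boils down to comparing your locally built APK against the published one, entry by entry, while ignoring the signing metadata that legitimately differs. A sketch of that comparison (my own illustration, loosely in the spirit of such tooling, not Signal's actual script):

```python
import hashlib
import zipfile

def apk_entry_hashes(path: str, ignore: tuple = ("META-INF/",)) -> dict:
    """Map each file inside an APK (a zip archive) to the SHA-256 of its
    contents, skipping signature files, which differ even between
    otherwise identical builds."""
    hashes = {}
    with zipfile.ZipFile(path) as z:
        for name in z.namelist():
            if name.startswith(ignore):
                continue
            hashes[name] = hashlib.sha256(z.read(name)).hexdigest()
    return hashes

def same_build(apk_a: str, apk_b: str) -> bool:
    """True if both archives contain the same files with the same contents."""
    return apk_entry_hashes(apk_a) == apk_entry_hashes(apk_b)
```

For a fork like Molly, the interesting attestation would be the diff itself: build both the fork and upstream reproducibly, then show that the entries that differ correspond exactly to the published patch.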


The derived projects need this, but in a way that attests to the currentness of the source code they are derived from, i.e. as a patch against the latest Signal release.


Signal is my go-to messenger. I use it exclusively, and have advocated others to move to it. There are many sore points with Signal that are well known, but Molly has pointed out a significant one.

I simply do not trust the OS exclusively to keep the messages secure at rest. The fact that I could use a passphrase, swipe pattern, or other mechanism to encrypt the database separately from Android was a boon, and I'm greatly saddened that they have removed that feature.

Perhaps a second layer of encryption doesn't really matter all that much. If Android's security is as bad as I think it is, then a second layer might just amount to security by obscurity. But it's the principle of things.

angry_octet raises an excellent point. I wish this would return as a mainstream feature.


From what I understand, Signal messages should really only be considered encrypted in flight. At either end the message could be picked up by an insecure device (custom keyboard, keylogger, screenshots, etc.), and a determined hacker will be able to compromise any casual user’s phone.

Still, there’s value in encrypting in-flight messages.


Or any app/BT device that has access to notifications (if your settings allow the message content to appear in the notification).

Great example is Android Wear.


I'm super curious if someone can explain the threat model here in more detail. Some of these mitigations seem reasonable, some reasonable-but-very-esoteric, some mostly unreasonable. But maybe I am misunderstanding.

Going through the feature list:

> Protects database with passphrase encryption

Modern Android supports FDE, so this seems like it's no gain for most cases. For users who don't have FDE, I guess it makes sense as long as you aren't concerned about evil maid attacks?

(FWIW, Signal's philosophy—here and on RAM protections—seems to be that this is the province of the host OS. I agree!)

> Locks down the app automatically after you go a certain time without unlocking your device

I don't know what this means—is it the below (i.e. purging decryption keys from RAM on inactivity) or something else?

> Securely shreds sensitive data from RAM

The threat model here seems to be malicious code that can read the device's RAM.

I think it's best to leave these kinds of mitigations to the host OS (i.e., if there is malicious code that can read your RAM, you're _probably_ SOL), but I guess there are probably some attacks (e.g. hardware or local-device attacks) that can dump RAM or something? What are common examples?
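The basic idea behind "shredding" is to overwrite key material in place the moment it is no longer needed, rather than waiting for the allocator or garbage collector to recycle it. A toy illustration of the idea (a garbage-collected runtime may still hold copies elsewhere, which is why doing this robustly needs lower-level support):

```python
def wipe(buf: bytearray) -> None:
    """Overwrite secret bytes in place before the buffer is released.
    Illustrative only: this zeroes this particular buffer, but the
    runtime may have made copies the application never sees."""
    buf[:] = bytes(len(buf))

key = bytearray(b"example-database-key")
# ... use `key` to open the encrypted database ...
wipe(key)  # on lock: the plaintext key no longer lives in this buffer
```

The mitigation only matters against an attacker who can read process memory after the app locks (e.g. a RAM dump of a seized device) but could not read it while the key was in use.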

> Allows you to delete contacts and stop sharing your profile > Clears call notifications together with expiring messages > Disables debug logs


Molly dev here. For security, the app is helpful in case the device is stolen or searched. In short, it tries to defeat smartphone forensics.

The scenario is that someone grabs the device and wants to read the chats, contacts, or access any information stored in the app. And you realize or suspect that. For example, because you lost the phone, or because it was left out of sight in an unsafe place.

Under the big assumption that the device was clean of malware when Molly was locked, there are no technical means to access the data without the password. This is the goal. If there were any security guarantee, it would be this one.

When Molly is locked, the database is closed, and the only way to open it again is by entering the password. While locked, the app cannot display notifications. I guess this is the main reason Signal has not implemented password encryption yet. I have some ideas to fix this in a secure way, but it would require modifying all Signal clients, not just Molly.
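Mechanically, a passphrase lock like this amounts to deriving the database encryption key from the passphrase through a deliberately expensive KDF, and discarding the derived key on lock. A minimal sketch of the idea, assuming scrypt as the KDF (parameters and names here are illustrative, not Molly's actual implementation):

```python
import hashlib
import os

def derive_db_key(passphrase: str, salt: bytes) -> bytes:
    """Stretch a user passphrase into a 32-byte database key with a
    memory-hard KDF, so each offline guess costs real time and RAM."""
    return hashlib.scrypt(passphrase.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)  # stored in the clear next to the encrypted database
key = derive_db_key("correct horse battery staple", salt)
# On lock: close the database and discard `key`. Only re-entering the
# passphrase can recreate it; nothing usable remains on disk.
```

This is why a locked Molly can't show notifications: without the passphrase the app itself has no key with which to read incoming messages into the database.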

To lock the app, there are two options. Either you tap "lock" in the menu, or you set up a timeout. This timer starts counting down with the Android screen lock. Therefore, it should be adjusted by considering the time you expect an attacker would need to bypass the Android lock screen and gain root access to the device, against the period of time you want to be able to receive notifications. There are more lock triggers on the roadmap, like the action of connecting a USB cable... many exploits work through USB.

I agree that in a perfect world with zero vulns, all this should be handled by the OS. But even then, there are use cases for Molly. Should your fingerprints or a weak pattern give access to everything stored in your phone? Let me tell you someone else's analogy: "The keys to my front door shouldn’t also unlock my safe. You can rifle through my cutlery, but not my banking records."


Thanks!

“Stolen or searched” isn’t a very specific description. Does the attacker have the screen lock? Or is the screen unlocked but they don’t have the screen lock factor?

Does the attacker have the ability to inject code into the running OS, or only to read memory contents?

Etc.

I’m not saying this is unreasonable, but it seems like your design philosophy is sort of "belt and suspenders", i.e. layer on any defense you can. This can increase safety, but at a cost in features, usability (as you noted), or complexity.

To your specific case (“you realize or suspect that”)—why wouldn’t I just use the lock screen? :)


> “Stolen or searched” isn’t a very specific description. Does the attacker have the screen lock? Or is the screen unlocked but they don’t have the screen lock factor?

> Does the attacker have the ability to inject code into the running OS, or only to read memory contents?

As long as Molly is locked, it doesn't really matter. It offers protection in the worst case scenario, under the premises I noted before.

> This can increase safety, but at cost of features or usability (as you noted) or complexity.

You are right. Just keep in mind not everyone needs a safe, but the people who do appreciate having the option to buy one.

> why wouldn’t I just use the lock screen?

Because you know there have been working exploits in the past to bypass the lock screen, or to read physical RAM directly from the USB port of a locked phone (1). And thus it is reasonable to believe there are still more vulnerabilities to be discovered and patched in Android.

(1) https://saltaformaggio.ece.gatech.edu/publications/DIIN_17.p...


To be specific, for many phones if they are turned on you can just plug them into a Cellebrite box and get immediate unlock. Unless you follow strict message discipline in keeping the phone powered off it is very difficult to avoid this attack.

Tossing the database encryption key when idle is a form of segmentation in time, and is a considerable constraint on attackers.


It seems like this is a form of layered defence. Android FDE should take care of this use case, but if there is some widespread flaw in Android disk encryption then this would be another layer that attackers would have to penetrate.


When is there going to be legal action? https://news.ycombinator.com/item?id=17723973


No legal action as long as they don't use the word "Signal" anywhere. Molly is a good client.


Actually, if they use the [edit: official backend servers] of Signal, moxie is still going to threaten to sue them into oblivion again.

The objection to libresignal wasn't just based on the name.

That's why signal isn't actually FLOSS.


What is the legal basis for blocking a third-party client which doesn't abuse any trademarks?

The Signal source is GPL 3.0, so definitely libre.


The source code licence has nothing to do with the right to access a particular service. Additionally, the organization behind signal is not bound to the licence, they can do whatever they want with their own code, it's their code after all.

If the server code is licenced as AGPL3 that grants others particular rights in addition to those granted by regular law, under particular constraints, to use and modify said source code, and thus enables e.g. Molly to provide their own servers.


Using the official client also has nothing to do with the right to access a particular service.

I think it would be a losing battle for OWS to go after a popular forked client that talks to the official OWS signal servers.


They are unrelated but the grandparent made two points. They said it wasn't FOSS, but it is; that only depends on the license.

My question was what prevents a third party from connecting to Signal's servers.


That's wrong, the backend is FLOSS and available here: https://github.com/signalapp/Signal-Server

What is potentially problematic is not the usage of the same backend but of the same actual servers.


Fixed the comment, but then again, either Molly users will only ever be able to chat with other Molly users, or Moxie is going to threaten them.


Yeah, this is why I'll never view Signal as a real WhatsApp replacement. It fixes some of WA's issues but not many. There's still no federation, and the ban on third-party apps makes it very hard to connect it to anything else. I want fewer chat apps, not another new one alongside everything else :)

The mautrix-weechat bridge to Matrix uses the actual production Signal desktop client so they don't block it, but it's super wasteful because of this: it uses almost 1GB of memory just to relay some messages :P And it also uses your phone number as ID, doesn't allow integration with bots like Telegram does, etc. It's OK for a small niche of people needing super-secure chats, but it's not the be-all end-all chat app.

So I recommend to everyone around me against Signal as a WhatsApp replacement. We should move to something that solves all the problems, not just some of them. I think it's crucial that the network itself is just as open as the software, otherwise Signal can be sold and we end up with the same mess as WhatsApp once more (remember WhatsApp actually used to be a pretty decent IM app before Facebook took it over). Open protocols have been the building blocks of the internet, and this fragmentation is making it a mess.

I think Matrix is the way to go, I just wish they could simplify their E2E a bit. Right now it's too much in the way in terms of user experience.


The last commit on that branch is from April 2020, so I can't imagine that's kept up to date?


Signal relies on Intel's SGX for certain operations (such as who contacts whom). Does that worry anyone else?


Most chat servers simply know who contacts whom. This is an inherent problem (or strategic advantage) in being a chat server. Signal tries to hide the information from themselves by using Intel SGX. So, while SGX may not be perfect, it's better than nothing.


But how do you validate that they are actually running SGX server side?


On top of that, even if they do run SGX, whoever hosts the SGX instances can just let the NSA into the server room and nobody would be the wiser. It's not like SGX actually works. It actually makes things worse if it allows signal admins to look the other way in good conscience.


why can't the foss signal server be built to be installed like any other sane service on a server, with apps that just support custom urls? it's not rocket science


The main signal developer (Moxie Marlinspike) is strongly against this architecture and won't accept either 3rd-party apps using the Signal servers, nor the main Signal client using 3rd-party servers.


So what's the point of having a FOSS setup? Isn't it for promoting clients and servers? Why don't they instead say it's source-visible, people can point out vulnerabilities and all, because that's what it is at the end of the day.

Pretending to be FOSS and then not allowing third-party clients and servers is not nice imo


You can fork the Signal client and server and set up a competing network. You just won't be allowed on the official Signal network. That's the entire point of FOSS.

Open source vs. open network are fairly orthogonal. AIM and ICQ were never open source, but had several 3rd party clients. Outlook is closed-source, but still federates with other e-mail servers.

I'll use an open-hardware analogy. Let's say there's an open-hardware go-kart. The publishers of the original go-kart also own a race track. They only allow their original design on the race track. You can take their design, modify it, and drive anywhere you want.

A competing race track is closed hardware, with copyrights and patents on their karts. If you base a kart off of their design, they will sue you. If they deign to let you take your home-built kart designed by someone else on their track, they aren't open hardware. If they publish the exact specifications required for karts to run on their track and encourage home-built karts to run on their tracks, they still aren't open hardware.


> Open source vs. open network are fairly orthogonal. AIM and ICQ were never open source, but had several 3rd party clients.

Nit-picking: AIM had an open-source protocol (TOC), that AOL had created for their TiK open-source AIM client. Some 3rd party clients used the protocol too. It was missing lots of features, but the basics generally worked. After ICQ moved to OSCAR protocol, I believe TOC would work off and on for ICQ as well as AIM.


Thanks for that bit of information. As far as I can tell it's the only thing to come out of this thread that wasn't a complete waste of my time :)


If it was 10 years ago, I could remember what sorts of things TOC was missing but OSCAR had, and give you a bit more useless information; but here's the best I've got.

IIRC, it was mostly anything you needed to get beyond the bare minimum. TOC could message, and manage your buddy list, and set your idle time and your status message; but it couldn't do typing notifications, and probably not some other things. I believe they used TOC tunneled over HTTP(s?) for their Java applet client, if you used that.

I seem to recall it went through periods of unavailability from time to time, so I would go back and forth between TOC stuff I wrote and understood and OSCAR libraries that barely half worked.


that is complicated. why don't they allow third-party clients on their servers, or third-party servers in the first-party apps? reddit allows clients; matrix, irc, xmpp, even telegram have floss forks that work independently. why don't they simply use something like the mozilla license, aka copy our code or whatever, just not the name?

> That's the entire point of FOSS.

foss as in free to check code, submit patches, create clients on and on...

i recently read the same question somewhere and the reply was some scarily long answer about how maintaining a fork is mighty difficult because you have to be a few months behind upstream, fix bugs, manage certificates, work on apps... so essentially they make it difficult to set up competition and still pass the OSS test because the source is available. smh


> When we call software “free,” we mean that it respects the users' essential freedoms: the freedom to run it, to study and change it, and to redistribute copies with or without changes. This is a matter of freedom, not price, so think of “free speech,” not “free beer.”[1]

All of these things are allowed by Signal. Look at what is happening to WhatsApp right now because of damage to its brand. Moxie thinks that allowing 3rd party clients or federated servers would damage the Signal brand. If you disagree with him, go ahead and fork it. There are probably 100s of forks of signal that let you connect to unofficial servers. If it weren't open source that option would not be available to you. How many forks of the (closed source) official AIM client are there? Of the server? In terms of freedom offered by the software Signal is positioned pretty well.

1: https://www.gnu.org/philosophy/open-source-misses-the-point....


are you even reading yourself right?

>and to redistribute copies with or without changes.

you are saying free to redistribute copies with or without change, but you are saying connecting to the official server is bad for the brand. what happened to copies without change? how can you justify that? aren't you restricting that line?

if moxie is so worried about the sugarflake brand, why doesn't it copyright the brand name? shouldn't that solve their problems? why pretend to be open source when it's not


> you are saying free to redistribute copies with or without change but you are saying connecting to official server is bad for brand. what happened to copies without change? how can you justify that? arent you restricting that line?

Nobody is stopping you from distributing copies without change. Do it right now. Announce it to the world. It's 100% fine.

> if moxie is so worried about the sugarflake brand, why does it not copyright the brand name? shouldnt that solve their problems? why pretend open source when its not

Signal is actually trademarked. Scroll to the bottom of the Signal website and you will see "Signal is a registered trademark in the United States and other countries." Nothing new about this. Firefox is trademarked as well (hence e.g. Iceweasel).

I'm sure the author of openssh runs an ssh server somewhere. He doesn't let me connect to it. That doesn't make ssh less open-source.


Software freedom requires the freedom to run the program as you wish, for any purpose. That includes the freedom and right to use it to connect to the Signal network. Moxie restricts that freedom, violating the spirit of free software. End of story. Your car analogy is irrelevant.

Maybe Signal is open source but not free software. That just shows how open source misses the point.


Software freedom requires the freedom to run the program as you wish, for any purpose. That includes the freedom and right to connect to Theo de Raadt's private SSH server. Theo de Raadt restricts that freedom, violating the spirit of free software. End of story. My car analogy is irrelevant.

OpenSSH is open source, but not free software. That just shows how open source misses the point.

I can fill in the above template with GNU's CVS servers in the 90s, if you prefer GPL to BSD.


You are not prohibited from connecting to Theo de Raadt's private SSH server using OpenSSH. You are prohibited from connecting to Theo de Raadt's private SSH server. The client has nothing to do with it.


But if Theo were to grant me access but disallow e.g. PuTTY, that would be well within his rights, no?


No, what software you use is none of his business.


What software I use is none of his business right up until it starts affecting computers he owns. This isn't that complicated. If I decide that I don't want people wearing red shirts to enter my house, then that's stupid, but within my rights and does not meaningfully infringe upon your right to wear whatever color shirt you want.

I don't see how restrictions on what is allowed to connect to privately run servers can magically cause software to become non-free. Nor do I see any way in which those restrictions existing (again on computers neither you nor I own) in any way violates the spirit of free software or in any way denies you and I the freedoms that free software is supposed to respect.


Your house, a personal area, is in no way comparable to a public messenger network.

> What software I use is none of his business right up until it starts affecting computers he owns.

Which client implementing the Signal protocol you use does not affect him, therefore it's none of his business.


I don't think that traditional advocates of Free Software (RMS for example) would agree that those running a server must accept connections from just any build (however changed it has become from the originally released sources) of the Free Software client. The freedom of the Signal software consists in anyone being free to fork both client and server sources and run their own server on their own infrastructure.


ever heard of api?


Yes, of course I have, and so has Moxie. In his classic essay on why Signal will not be federated, he clearly explains why he does not believe that this is a good approach.


So?


So: most of the people here, when they think about concepts like “software freedom”, look to the major thinkers in this field. You can operate with a different definition if you like, but you can't be surprised when people react as if your views are odd.


Most people look to authority instead of thinking themselves, you mean.


Yes. Source available but not free software.


The point is you can claim to be FOSS and people will buy it.


That's what I am trying to say. You disallow third-party clients from joining the official servers because "federation bad" and FUD about the "brand".

Saying: here you go, open source. You can fork a client but don't connect; you can fork the server but don't use the official apps.

I do not know if GPL code can have this restriction. It's like saying a Tesla car has open-source or FOSS software: free to fork it, but you need to build your own car to run it. It's free, it's open source, just not usable.


When I installed Signal recently I was asked for a passphrase. What is this passphrase used for if not to encrypt my messages?



Yes exactly. When I look at the settings in the app it says "PINs keep information stored with Signal encrypted so only you can access it."

However the page that you linked does not mention encryption. I can't see anywhere that explains exactly what the PIN does. Does it encrypt your data or not?


Any reasonable-length PIN would not contain enough information to act as a safe encryption key; it would be too easy to brute force.
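To put rough numbers on that: the search space of a numeric PIN is tiny, so even a deliberately slow KDF only delays an offline brute force rather than preventing it (illustrative arithmetic only):

```python
def brute_force_hours(digits: int, seconds_per_guess: float) -> float:
    """Worst-case hours to enumerate every PIN of the given length,
    assuming the attacker can make guesses offline at the given rate."""
    return 10 ** digits * seconds_per_guess / 3600

# A 6-digit PIN with a fast hash (~1 microsecond per guess) falls in
# about a second; a deliberately slow KDF (~1 second per guess) still
# only buys roughly 278 hours, i.e. under two weeks.
fast = brute_force_hours(6, 1e-6)
slow = brute_force_hours(6, 1.0)
```

This is why PIN-based schemes lean on rate-limiting hardware rather than the PIN's own entropy.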


Depends on whether the app itself rate-limits attempts, or destroys the encrypted content after a set number of attempts.


This would only be true as long as the decryption algorithm, whatever it is, cannot be run off the device, and as long as an attacker cannot interrupt the process that increments the counter used to decide when to rate-limit or self-destruct.

Famously, older iPhones were susceptible to an attack that reset the 'Invalid Attempts to Unlock' counter the iPhone stored in the Secure Element to work around the PIN rate limit, by halting the CPU before it had a chance to increment the value, but right after it returned the pass/fail result.

So Signal could be entangling this PIN with a key from the Secure Enclave, and then trying to securely increment a counter inside the Secure Enclave to implement exponential rate-limiting and self-destruct, but it would be tricky to implement correctly.
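The counter scheme described above can be sketched as plain logic (hypothetical, not Signal's implementation; in a real design this state would live inside the Secure Element, not app memory). The crucial ordering is persisting the failure count *before* reporting pass/fail, so halting the device mid-check cannot skip the penalty:

```python
class AttemptCounter:
    """Toy model of hardware-backed PIN rate-limiting with
    exponential backoff and self-destruct."""

    def __init__(self, correct_pin: str, max_attempts: int = 10):
        self.correct_pin = correct_pin
        self.failures = 0
        self.max_attempts = max_attempts
        self.locked_until = 0.0   # timestamp before which guesses are refused
        self.destroyed = False    # key material erased after too many failures

    def try_pin(self, pin: str, now: float) -> bool:
        if self.destroyed or now < self.locked_until:
            return False
        self.failures += 1        # persist the attempt FIRST (the iPhone lesson)
        if self.failures > self.max_attempts:
            self.destroyed = True # self-destruct: wipe the wrapped key
            return False
        if pin == self.correct_pin:
            self.failures = 0
            return True
        self.locked_until = now + 2 ** self.failures  # exponential backoff
        return False
```

Getting exactly this ordering and persistence right, inside an enclave, under power loss, is the "tricky to implement correctly" part.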



