Aren't they just reselling Mullvad? So is everything said here true for Mullvad generally?
Edit: I use an always on VPN on my phone but I can only have one, and that's taken by my local wireguard so I can access the not-cloud services that I run remotely.
I've figured out how to connect Mullvad at the same time on that server, such that all traffic on the server goes through Mullvad. I can't figure out how to chain them. I want to make a request to my local network wireguard (wg0) and have any traffic that isn't local be routed through to the mullvad connection (wg1) so I can both access my local network and use the internet over the VPN. Has anyone done this, or could anyone point me in the right direction? This is on a Linux machine...
Sorry, Web Safe has blocked this site
This site has been blocked by Web Safe. It's listed as
having content that’s inappropriate for children,
involving either pornography, hate, crime, drugs,
violence, hacking, self harm or suicide.
Thank you, Virgin Media. At least they actually tell you they're blocking them now rather than just forcing a DNS failure.
Isn't that a UK thing which you need to opt out of, not a Virgin thing? They don't do that here in Ireland, while I was under the impression most UK ISPs at least had default-on porn filters.
I got Zen (another UK ISP) and the page isn't blocked, and I'm a happy Mullvad customer. In fact, I haven't encountered a site that's blocked on the ISP level with them.
Really? Your ISP treats you like this so blatantly? This is absurd, nonsense, being considered for practical purposes a child who needs to be controlled in their choices of "appropriate" viewing material. Disgusting. If you don't mind my asking, in which country is this?
You could just use plain WireGuard built into the Linux kernel... Download a tiny config from Mullvad (there is a separate config for each server), pop it in `/etc/wireguard`, `chmod 600` and `chown root:root` it, and use the `wg-quick` command to bring it up, e.g. `wg-quick up config-name`. That's it, no appy apps needed. I believe this is all the apps are doing; they just make it easier by retrieving and installing the configs for you, and of course add more attack vectors in the process.
This is how I use WireGuard and it's pretty easy via the wg-quick interface. If using systemd you can also generate a unit for a particular config to bring it up at boot with `systemctl enable wg-quick@config-name`, where config-name is whichever one you want from your /etc/wireguard dir.
If you want to be able to check a file to see if it's up, e.g. for an i3status bar or something, you can use /sys like this: `/sys/devices/virtual/net/mullvad*/dev_id`. I'm using a wildcard, but you can be more specific if you aren't going to be changing configs.
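For reference, the file wg-quick consumes is just a small INI-style config along these lines (the keys, addresses, filename, and endpoint below are placeholders, not real Mullvad values):

```ini
# /etc/wireguard/mullvad-se1.conf -- all values here are illustrative
[Interface]
PrivateKey = <your-private-key>
Address = 10.64.0.2/32
DNS = 10.64.0.1

[Peer]
PublicKey = <server-public-key>
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <server-hostname>:51820
```

`wg-quick up mullvad-se1` reads exactly this file, creates the interface, and installs the routes implied by AllowedIPs.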
Apps do much more. You can change location with a click, force a kill switch, block ads or malware, change to OpenVPN if UDP is blocked, automatically connect and switch between networks, etc.
All of that will come to Linux UIs once there's a network-manager-wireguard plugin, the same way one can do it for OpenVPN and the like now. WireGuard is still new, and network-manager was still finding the correct UX a year ago.
I need no advanced features, and I have no other WG servers to connect to besides Mullvad, so I’m simply using their app which handles everything for me.
Mullvad supports split-tunnel. Sure you can somehow set that up manually with the standard clients, but with Mullvad you can simply run a command with "mullvad-exclude" and the process will be exempt from VPN. Pretty convenient.
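For example (this assumes the Mullvad app and its CLI are installed; the wrapped command is arbitrary):

```shell
# run just this one process outside the tunnel;
# everything else on the system stays on the VPN
mullvad-exclude curl https://am.i.mullvad.net
```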
Either your WireGuard endpoint should be the router/gateway for the local traffic, with ip_forwarding enabled on that gateway, or you have to add routes (with `ip route`) for the different networks you want to reach.
`ip route add <subnet> dev <device name> via <gateway or router>`
Like this:
`ip route add 192.168.1.0/24 dev wg0 via 192.168.1.1` (192.168.1.1 being the router, usually).
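A minimal sketch of the wg0/wg1 split asked about upthread (the subnet and interface names are illustrative, wg-quick may already manage the default route for you when AllowedIPs is 0.0.0.0/0, and all of this needs root):

```shell
# only the home LAN goes through the site-to-site tunnel;
# WireGuard interfaces are point-to-point, so no "via" gateway is needed
ip route add 192.168.1.0/24 dev wg0

# everything else follows the Mullvad tunnel; the more specific
# /24 route above always wins over this catch-all
ip route add default dev wg1
```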
Hey, thanks a lot! I started reading that and then I bought a router that runs OPNsense... I'm just going to run the whole network through the mullvad VPN. Setting up dynamic DNS and poking a little hole in OPNsense so I can connect to my local network wireguard... that's more of my speed.
Actually I got into the OPNsense documentation tonight, and I think what I'm looking to do will be even easier than I imagined with it acting both as my local server and routing traffic to the mullvad interface... e.g. with my current local wireguard server retired. This networking stuff is crazy hard, and I'd rather have a proper solution with good documentation than what I was trying to do on a machine that has its own complicated local networks for libvirt and other stuff that I just kind of use without fully understanding.
From the Introduction: "This report describes the results of a security assessment targeting five Mozilla VPN Qt5 applications and clients, together with their corresponding codebase"
It's only the client-side software in scope, not the VPN service itself.
I don't want to speculate as to the reason for the scope of the audit. Just answering the question "So is everything said here true for Mullvad generally?" with "No, the audit is only looking at the client-side software and is therefore not saying anything about Mullvad in general".
I was just about to comment "how does this compare to Mullvad?", had no idea they were basically the same thing. Mullvad is already great and available in more countries, so I see no reason to move to Mozilla VPN.
There's no problem having several wireguard connections enabled at the same time. Routes are selected per metrics/distance.
E.g:
A private virtual network between you and remote hosts won't be interrupted by the presence of a VPN service. The entry connection to the private network would be routed through the VPN service, though.
"AllowedIPs" determines which target networks are allowed to be routed through the tunnel. Whether it actually gets routed or not depends on the software: wg-quick adds routes for AllowedIPs by default, systemd-networkd does not.
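For example, with two wg-quick interfaces (the addresses and key names here are made up), the AllowedIPs lines alone produce the local-vs-internet split discussed upthread:

```ini
# wg0.conf -- site-to-site tunnel: only the home LAN is routed through it
[Peer]
PublicKey = <home-server-public-key>
AllowedIPs = 192.168.1.0/24

# wg1.conf -- Mullvad: catch-all for everything else
[Peer]
PublicKey = <mullvad-server-public-key>
AllowedIPs = 0.0.0.0/0, ::/0
```

wg-quick turns each AllowedIPs entry into a route, and the more specific /24 beats the catch-all default.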
Mozilla VPN starts at $9.99/month with month to month pricing.
Mullvad is €5/month, period.
Mozilla pricing starts to align once you pay 12 months at a time.
At least on the surface, Mozilla isn’t providing a benefit to the consumer aside from an account / subscription management approach that is slightly more “normal”, and it’s unclear if that’s actually a good (or bad) thing.
But what's the advantage here? Once Mozilla blesses Mullvad, then... customers already know that Mullvad is good. It's understandable why Mullvad could have a partnership agreement with Mozilla, but it's not obvious why customers shouldn't just bypass Mozilla and go straight to Mullvad with cheaper prices.
If you get the 12 month plan, they are the same nominal amount, but mullvad charges 5 euros and Mozilla charges 5 USD. So depending on the exchange rate, one will be cheaper. Right now and likely for the foreseeable future that is Mozilla.
If you're okay associating your account with a payment method, which would necessarily be the case with Mozilla, then you can pay for Mullvad via their mobile apps and pay $5 monthly.
Mozilla isn't actually providing the VPN infrastructure and bandwidth -- Mullvad, a well-respected Swedish operator, is. I think they run their own ISP behind the scenes.
"The provided staging build contains the Mozilla VPN WebSocket Controller, which exposes a WebSocket endpoint on localhost. No additional authentication is required to interact with this port, thus allowing any website to connect and interact with the VPN client. At the beginning of the audit, Mozilla assured that this WebSocket server is only part of the staging build. However, later it was revealed that Mozilla would like to reuse this connection for communication with a browser extension in the future. Thus, Cure53 decided to report this issue."
A classic one.
Also interesting:
"On Linux and macOS, a helper shell script is called by the privileged daemon which sets up WireGuard and network configurations. This script is extremely critical for security and should normally get most of the security attention. However, prior to the test, Mozilla has announced that it will be replaced soon and, as such, does not warrant substantial reviewing efforts. This - in Cure53’s opinion - is rather unfortunate in relation to its criticality. Cure53 therefore recommends that the upcoming changes get comprehensively reviewed in terms of security before they are shipped in production releases."
That's an overly cynical way of looking at this. Security audits of your code are a matter of course best practice and are unrelated to "does the VPN live up to its marketing claims?"
It's like publishing that you passed health and safety inspection for a factory that makes safes. I don't think anyone would reasonably confuse a company publishing that its factory passed inspection for a claim that its safes are hard to break into.
You don't expect it. But if they were to publicize the result of the inspection, promote it and after the said inspection, start manufacturing plutonium safes, you probably would not be happy as a buyer.
(I know plutonium is not a good choice for the comparison as in the VPN case, we don't know if what will replace the script will be secure or not whereas plutonium is known to be unsafe, but the idea remains that changing something critical to safety after the audit is not nice to hear at all.)
I think what you're saying is you don't know whether future versions of the code are safe?
That's fair, but also I don't know any companies in the industry that pay for a fine-toothed-comb audit like this one for every major/minor release because it's simply not practical. I don't think this report pretends or is intended to pretend that a one-time audit is representative of future code any more than a negative COVID test is representative of whether you have COVID two weeks later. But that's not an argument against disclosing that you came up negative on your last test a few days ago, because disclosure is still better than the opposite.
The way Mozilla is handling this is a textbook implementation of typical pretty-good transparency/disclosure practices. A post discussing the big issues, and the full report available publicly. I think it's a cynical take specifically because it's the least charitable take on someone following best practices.
If the code in question was standard code, this would not be such an issue.
However, it is code that runs with full privileges, and is therefore where security issues would hit hardest. That is even more true as this code manipulates what is running on the computer, which is easy to get slightly wrong. (For instance, for a long time, the `ldd` binary could be used to execute arbitrary code, and it was therefore unsafe to run dracut with dependency resolution on untrusted binaries.)
I am not saying that I am against Mozilla's transparency, especially as they were clear on this issue and said by themselves they intended to change this code before release. I'm simply explaining why some may find it either a bad faith or strong security issue.
Yes, because a VPN is a service and you can only meaningfully audit software itself. Any service audit is meaningless one second later as the operator of the service can change it to serve their purposes.
People all too frequently confuse software and services. They aren't the same. This is a software audit, and the thing that needs trust is the service.
Tangential: Can anyone with experience in this field provide an estimate for how much this kind of audit costs? Just considering the viability of open source projects fundraising to cover the cost of an audit.
Penetration tester here - My anecdotal experience:
I've worked on a number of projects where bill rate is something like $250-$400/hr per engineer depending on complexity, access to source code, size of the project, etc.
Usually equating to something like 10-12k for a single engineer on a project for a week. For bigger projects like this I would think it's totally reasonable to see anything from 4 to 12 engineering weeks depending on the different pieces, especially given this is a very high-profile project. Based on that, an estimate of something between ~40k-120k. I know that's a huge range, but I just wanted to share what I do know.
On top of just "compliance" or "customers demand it", these types of penetration tests can and do expose real, serious vulnerabilities in software.
Furthermore, I wouldn't underestimate the positive press that comes from having a third-party security firm assess your product and share the results publicly. VPN services have been under special scrutiny lately, so I think something like this makes total sense for Mozilla, regardless of the cost.
Why not? It's a one-time expense that helps you launch.
If throwing money at a problem solves it, that's rarely a hard argument to make.
If you throw engineers at a problem it might get solved, or it might not. Hiring an engineer to work on something is a high risk investment.
side note: Mozilla does have great engineers, when I was there a few years ago the security was also very competent. But it's probably not the same as getting specialized consultants.
Not sure about that. The major VPNs have similar (not identical) functionality and price. They're competing on the perception that they are more secure than others. In that light, having a passing security audit by a reputable company would be table stakes for a lot of customers. As noted by another commenter already, not having one could also be a deal-breaker on a vendor security assessment by a big company, too.
For a Berlin-based team like the one Mozilla used, $250-$400/hr/engineer is kinda hard to believe. Probably closer to $150-$200/hr. The average software engineer in Berlin makes $71k/yr. The compensation levels are very different compared to SV.
So he is talking about bill rate, which is very different from what someone makes. At my company someone might make $50/hr but their bill rate might be something like $175. Your bill rate factors in all sorts of costs, like having an office building, insurance, taxes, and everything else that goes into having an employee beyond just salary. So even if they are in Berlin, their bill rate is most likely comparable.
I am familiar with Berlin rates and this person is right. 400 USD (337 euros) an hour is unrealistic, unless you are hiring Cure53 specifically because employee X did groundbreaking research on topic Y and that's why you need that expertise; only then would I expect Mozilla to agree to that sort of rate. The range is more likely to be 110 - 250 euros per hour, where both ends are fairly unlikely but it's not as if I have comprehensive industry-wide data on everyone's financials. (I'm not that kind of hacker, heh.)
The sibling commenters are right, though, that the hourly rates charged by the company are not very related to how much you earn as a person. I wish I got my hourly rate as salary, but I see what kind of organisational crap the founder has to do and it's just not worth the headache to me.
Don't forget to add the employer's mandatory social security contributions and any additional employee benefits and equipment[0], the USD<>EUR exchange rate, and the fact that even in Berlin a senior security engineer will definitely make more than 71k€.
[0]: The founder of a (Germany-based) IT consulting firm recently told me that, as an estimate, pretty much any engineer at a tech firm costs at least 100,000€/yr.
It of course depends a lot on the scope of testing you want done (code audit, probing your running SaaS, some light breaking & entering [1]…) and how much time you want to give them to probe.
And how well organized you are in providing them info/how much trouble it seems like you're going to be for them.
And how well known the vendor doing the work is.
And lots of other factors. Possibly somewhat discounted for OSS or high-profile projects, this is also an advertisement for Cure53 to some degree.
But as a random guess for a random project, at least low 5-figures.
Hi OP, I'm Erik CEO of IncludeSec. We do many FOSS audits for Mozilla, OpenTechFund, etc. I can give you some ranges and points of consideration from what I'm seeing in the industry today.
First consideration point is quality of the team and the seniority of the people ACTUALLY DOING THE TESTING (a lot of pentest shops do bait and switch senior presenting but juniors do the actual work.)
Next consideration is location of company; EMEA and Asia are lower hourly rates than US teams.
Next consideration is scope. Do you want the front door checked, or the entire house inside and out? In this case Cure53 spent 25 work days on this asmt, which gives quite a lot of time to analyze the software and check lots of different avenues of attack.
Next consideration is the type of attacks to try and the security assessment methodology. Do you want just fuzzing? Perhaps you can get that for free from Google's OSS-Fuzz; they will sponsor people to set up your FOSS app with their fuzzer via CI/CD. Do you want static analysis from some big COTS vendor like Coverity/Fortify/Checkmarx/etc.? That could be useful, and they often have discounted/free scans they will do for FOSS. Or perhaps you want super smart hacker pentesters to code review and dynamically attack your app (that's what my team does).
Next consideration is publicity, do you want this reporting public? Some charge extra for that.
There are a million other things to consider when hiring a pentester, but this message is already too long. To give you a ballpark estimate: $10k to $40k for small projects, $40k to $80k for medium-sized projects, and $80k to $150k for large projects. YMMV of course, but those ranges and the consideration points should get you well on your way.
Hit us up if you need more tips, happy to help via email <myfirstname>@includesecurity.com
Speaking from a buyer's perspective (I've managed about three penetration tests), they cost around 10-15k per person-week-ish.
The cost can be adjusted depending on how experienced the testers are, timing (I need it now vs I need it next month), and how much time they are expected to spend writing up executive reviews. We always opted to just get a list of vulns and passed on the executive reviews as they tended to take up at least a whole day or so of the budget.
You can also save a lot of time by having a really well prepared dev system set up and ready to go for them. Getting someone familiar with your setup while also trying to sort out VPN access, GitHub permissions, etc... costs time and money, so doing that ahead of the engagement saved me about a day of budget.
You can get things that are a lot cheaper, but that's usually just going to be a newly hired tester running burp suite or some other automated testing tool. It's still worthwhile to do though if you haven't, an OWASP top 10-20 automated scan may cost less than $5k but still can be helpful/insightful if you want to reduce your risk surface area.
Is this figurative or do you really get a discount for planning it one month ahead where you're from? From my (n=2 employers) experience, projects are usually planned at least two months ahead (if it's not busy; end of year you can expect 4-5 months).
The executive summary thing is also interesting. We take maybe 30 minutes at the end to sum up what we tested, which issues we found (particularly the impact in semi-layman's terms, depending on the impression we got from the contact person), and sometimes if there are big omissions from the scope that smell foul and someone (us or another company) really should still have a look at then we might remark that there as well. But we don't charge a day's rate for a short summary. I guess what you mean is more substantial than this?
It wasn't a negotiated line item, but more that we had a specific ask, and the agency we were engaged with had a particular expert available if we could wait > a month. This didn't directly affect the cost - which maybe I should have made more clear, but still affected what I think would be the value of the engagement. I guess I'm saying: be aware that you may pay the same for less if you are in a hurry, but it'll still be better than what you have without paying :) It's still my suggestion that if you know there's a specialist available, wait if you can and bring them in.
For the summary: we always got the short summaries, list of vulns, recommended remediation. Tbh - I never paid for the exec summary, but my guess is that it was just taking all that stuff, spiffing it up into a PDF with clickable sections, and making it a lot more flowery? It sounded like something more desired by larger enterprise companies (maybe like this blog post!) than small ones like the one that I managed these engagements for.
It bothers me that Cure53 is the only auditing agency for VPN providers that actually publishes their findings, finds more than nothing, and responds via email to queries regarding their past audits.
Yes, there is another company, Altius IT, that audited PureVPN for their no-log policy. But they found nothing, and refused to answer my email inquiry with a simple question: "if PureVPN had been caught not storing any logs or connection data, but immediately sending them to a third party instead, would that have been reported as a finding?". The question was asked because, on paper, this is not a violation of the published policy.
I've toyed with some VPN browser extensions. Not naming names. I became a customer of a few VPNs over the years, to try some of these extensions out, and the extensions leak your real IP inadvertently. One such extension said it was 'connected' when I DuckDuckGo'd 'what is my ip' and it reported my real / naked IP.
So my solution was to use a (hardware) VPN router which ensures all your traffic is tunneled through the VPN and you don't have to worry about edge cases like this leaking your real IP. Also: many users don't even turn off WebRTC which can expose you too. Many people it seems, don't know about the WebRTC VPN vuln that was disclosed many years ago.
Edit: I reported these extension vulns and they got fixed, but I still don't trust them.
Regarding WebRTC-type leaks probably the best answer on Linux is to have a separate network namespace with only wg0 inside of it and then run your browser in the network namespace. Then the browser does not get to "see" the real egress interface/its IP and thus can't leak it even if it wanted to. Or, if you want this to apply for everything, move physical interface in its own namespace so all apps from the default namespace only see the WireGuard interface. See https://www.wireguard.com/netns/
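Roughly, following that page (the namespace name, address, and config path here are illustrative; all of this needs root, and the config passed to `wg setconf` must not contain wg-quick-only keys like Address or DNS):

```shell
# create the wg device in the init namespace first, so its UDP socket
# keeps using the physical interface, then move it into the namespace
ip netns add vpn
ip link add wg0 type wireguard
ip link set wg0 netns vpn

# configure and bring it up inside the namespace
ip netns exec vpn wg setconf wg0 /etc/wireguard/mullvad.conf
ip -n vpn addr add 10.64.0.2/32 dev wg0
ip -n vpn link set lo up
ip -n vpn link set wg0 up
ip -n vpn route add default dev wg0

# a browser launched here can only ever see wg0
ip netns exec vpn sudo -u "$USER" firefox
```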
I actually did something similar to this for a slightly different use case. I needed to connect to two different VPNs simultaneously that had overlapping CIDR ranges. So I created a network namespace for one of them, and used that for any connections that needed that VPN, and used the root network namespace for the other VPN. I did have to create two separate browser profiles though, since both Firefox and Chrome won't let you run multiple instances of the same profile simultaneously.
If your route to the DNS server is through your VPN router, you are protected.
DNS leaks are a subset of the problem where not all of your traffic goes through the VPN's routes. If your only connection to the internet is through the VPN, the odds of that happening approach 0%. If you're using some browser extension, the odds approach 100%.
My biggest problem with using a VPN is the overtly hostile treatment by Google and Cloudflare.
Every time I try to browse with a VPN (currently using Windscribe) I just see those nasty captchas everywhere. Google won't even let me do a simple search without making me tag zebra crossings and school buses. Same with Cloudflare, which also adds a CAPTCHA before showing me a site, and that's basically most of the internet now.
Really don't understand why they do this to VPN users. Anyway, anybody know how to fix this?
Unfortunately criminal/fraudulent/unsocial activity also occurs almost exclusively behind a VPN. You might not be doing such things, but websites can't tell if you're not one of those people.
Unfortunately criminal/fraudulent/unsocial activity also occurs almost exclusively in certain neighborhoods. You might not be doing such things, but no one can tell if you're not one of those people.
I'm not sure what you're trying to add to the conversation. Companies and network providers implement regional and country blocking all the time. My company doesn't do business with anyone in China, so they block IP addresses associated with China. It's not insinuating racism or profiling of the people, but it's well-known that certain areas of the world have bad actors.
If you're trying to somehow set an example of where it's OK in one context and not OK in another, that's really not a fair comparison.
This would require identification of rogue "protected" users, to kick them out, thus ruling out using any sort of blind verification and ruining online privacy for those using it. And naturally, it's marginally more convenient while being extremely detrimental to privacy, thus it'll be popular, thus services will eventually block non-"protected" users, thus ruining the internet even more.
Cloudflare works with Privacy Pass, a browser extension through which you can solve CAPTCHAs in advance to accumulate tokens that let you bypass CAPTCHAs served by participating services (including Cloudflare, but not Google) as you browse the web. One pre-solved CAPTCHA gives you 30 tokens for ReCAPTCHA or 5 tokens for hCaptcha.
I didn't encounter any serious CAPTCHA issues when I tried out Mullvad and Mozilla VPN. Windscribe is probably worse for CAPTCHAs because of its free tier and the lifetime membership that it previously sold for a low price. VPNs that require ongoing paid subscriptions tend to have higher-quality traffic.
Extensions like this make users more fingerprintable so that kind of defeats half the stated purpose of using a commercial VPN. They also weaken the browser's security model, which is part of the reason why Tor users are told to never install addons.
Privacy Pass is not your average extension. It has a strong basis in applied cryptographic research and was designed to solve the CAPTCHA problem in a safe way for Tor. It has since been adopted by both Cloudflare and Google, so I'm pretty sure it's well vetted.
The Privacy Pass server uses key rotation to make the tokens expire after a certain period of time. The documentation also hints at other measures, including rate limiting and using a web application firewall.
Google's CAPTCHA hates VPN users, and Firefox users [1, 2]. You can ameliorate this by being logged into a Google account (!). The cynic in me mutters something here about advertising.
Cloudflare, on the other hand, sometimes just decides that the website said "nope" and blocks you. It's such a giant PITA. I spend a lot of time in two countries because of my job and family -- and one restaurant near my home in one of them has decided to block all IPv4 (but not IPv6...) connections to it with Cloudflare. I used to look at their menu online, often on the days that I flew back (in the 'before times'). I have to use a VPN to get around that (their food is good!). Cloudflare recently started introducing obnoxious captcha requirements and occasionally outright blocking.
The more people who use VPNs -- and I am sure they are on the rise -- the more it becomes normalised and, hopefully, the more this goes away. To be honest, it's a price worth paying for privacy at any rate.
It's the website operator who chooses the severity of the Cloudflare interstitial page. I don't recall the default value off the top of my head but I think the captcha page had to be explicitly enabled, and the confidence level could be adjusted by the operator.
The web is still a Wild West in many ways. If stores suffered meatspace equivalents of DDOS, rampant fraud, spam, and harassment they’d also use heavy handed countermeasures.
Because like 90% of the traffic is malicious. VPNs and proxies obviously change your exit IP, and people rotate through these as they do things like spam, make accounts, credential stuff, etc. This means they're burning through the IP ranges and getting them flagged.
If you're on a VPN with a fresh ASN and IP range you won't have any issues (until people start using it for other reasons).
If you wanted to "fix it" the dirty way, there are extensions for Chrome/Firefox/etc. that will automatically submit your captcha to a captcha-solving service, and it costs some tiny amount.
tl;dr: mischief basically has to go through VPNs, and the people behind it rotate through IPs, getting them flagged. You can join the dark side with a captcha-solving service and extension, and there are some "solutions" like Privacy Pass you can try.
You're exaggerating. I use Mullvad on 3 systems and definitely see captchas here and there, but it's not very often. I can't imagine that Mozilla's VPN is much different since the backend is the same.
Despite "renting" my IP address from Linode for over 10 years, occasionally the IP block I'm in ends up on Google's bad side. When I'm using my VPN server, I have to _constantly_ enter captchas to do any search, and oftentimes it's just completely blocked. I'd say it's hostile because they don't just do a captcha and mark you as a human for the next 24 hours; it can happen with every search. MFA isn't an accurate characterization of what they're doing.
That's very annoying. I guess they don't prioritize your user path; I'm sure they would fix that, if there were 100 million users just like you. Sorry to hear that.
Hey, finally I'm catching a report like this at a point in time where I'm not myself working as a software security assessment consultant, which frees me to get publicly back on my hobby horse about public assessment reports.
I think third-party assessment reports like these are a real problem in our industry. There are firms that are worse about them and firms that are somewhat better but it's a near-universal problem.
I don't at all object to technical vulnerability reports being shared. It's good to know when third-party auditors have spotted problems in products (and it's also good for prospective customers of consulting firms to know what kinds of vulnerabilities that firm is likely to spot, and how padded out some of these reports can be with low-quality findings). I also think it's worth remembering that the overwhelming majority of security assessment engagements are never reported out to the public; even when vendors do publish reports, more often than not it's after previous unpublished engagements have been run.
But the way these reports get written creates a huge conflict of interest. They're not simply reports of (1) what was tested and (2) what was found. Too often, they're also product marketing documents, beginning and concluding with "overall assessments" of the target software that are almost invariably positive, even on projects that reduced the target to a smoking crater (to say the least, that didn't happen here, but when it does, it's always framed as "[vendor] has made great strides in improving their security in the wake of this important engagement").
There are firms that specialize in writing public audit reports, and firms that specialize in finding excellent vulnerabilities, and the Venn of those firms is practically two disjoint circles† --- at least in the sense that there's a sort of insider chatter about who the quietly bad-ass firms are, and who the "most credible for shutting down sales objections" firms are.
Even when they're not intended to be tools of persuasion, public audit reports tend to function that way systemically. That's because vendors control the terms on which these audits are done, what's to be tested, when the testing will occur, how much time will be allotted and who's staffing. Vendors pay for public-facing reports, as an extra line item in the SOW, and in some circumstances get to review drafts.
Meanwhile, the public that reads these reports is in no way qualified to weigh or contextualize the report. If you're reading an audit report from a commercial vendor, it is invariably presented as a "clean bill of health". But even if you somehow get principals at Azimuth to assess your system, there is no such thing as a clean-bill-of-health audit. Different teams of auditors will find different bugs (even different teams from the same vendor!).
I think we need a new norm in the industry, and that it can only come from the auditing firms themselves. I think that, roughly, that norm should be that public-facing reports can be provided only in the same dry, technical form they're presented to development teams in ordinary, non-public projects: a methodology, a scope and rules of engagement, and a list of findings. No editorializing and no editorial review by vendors. Probably, though I'm less clear on the mechanics of how this would work, it should also stop being OK to charge different amounts for projects that do and don't have public reports.
I know there are consultants that disagree with me about this; I look forward to reading their takes.
† I'm not editing this out but on reflection this is pretty imprecise and it's probably easy to come up with a counterexample.
> Too often, they're also product marketing documents, beginning and concluding with "overall assessments" of the target software that is almost invariably positive
We have some customers that clearly come to us to get reports they can share with investors, and not that many come back so either they weren't happy or (I like to think) they've checked that box and aren't doing further tests. But never did one of them ask us to rephrase something for overt marketing reasons or provide even a single sentence of marketing material to include. Perhaps we were clear that this is not something we do and that's why they don't come back, though since they also don't ask and still order a test, that doesn't seem logical.
I also can't say that I've seen a Cure53 report where this is the case, including this one. If anything, the statement about a critical vulnerability in the previous pentest could make one wary, so I applaud Mozilla for publishing this rather than silently fixing it and keeping it under wraps.
> Vendors pay for public-facing reports, as an extra line item in the [statement of work]
This is the first I've heard of that, though the public scrutiny does mean we spend quite a bit of extra review time on such reports (from spell checking to going over every substantial statement and rating), so I could understand if Cure53 has different practices from ours. That does not mean they were bought (assuming, for the sake of argument, that a premium was paid, which I don't think there was).
> that norm should be that public-facing reports can be provided only in the same dry, technical form they're presented to development teams in ordinary, non-public projects
That is exactly what we do, and I am quite sure Cure53 works the same way in this regard. We review more deeply if we know ahead of time it's going to be public, as I wrote above, to make sure things are correct (especially since English is none of our native languages), but we don't alter the contents.
> I think third-party assessment reports like these are a real problem in our industry.
I really don't understand where this comment is coming from, unless you have very different experiences with customers or public reports where you are based. At a minimum, it doesn't apply to the report at hand. Too often these things are kept secret, and it's a rare opportunity when we don't have to swear silence about our work. I'd love to do that more.
Coming from a distant background in IT audit for a national accounting firm.
Audit reports like SOX and SSAE 16 that tie back to financial risk can be more staid, because of the legal risk to both the auditor and auditee. Even though it appears to be tied to financial audit, we could still accomplish a lot on the infosec side because of the close relation between infosec failure and significant financial impact. Final reports contained a section specifically for the company's own statements, and a section for the auditor statements that was almost always boilerplate straight from legal. Flourish and marketing were nowhere to be found.
As a former auditor I enjoy reading these reports but I always find myself asking: who are these "auditors" and how are they held accountable, besides not being invited back by the auditee? And: Have I ever seen such a report that concluded the product contained critical exceptions or findings that should discourage consumers from using the product? I can't think of any, maybe someone else can.
I worked on the vendor (not auditor) side of accessibility audits, which have similar structural issues as security reports. The amount of latitude individual vendors have in shaping the scope of testing, setting cadence/private review phases, and picking auditors in the first place is stunning, and even though there are public standards (VPAT/WCAG/Section 508/EN 301 549), these degrees of freedom made reports completely incomparable across even vendors with the same commodity software products. Downstream from us, 90% of customers didn't closely scrutinize the contents or quality of the report and just needed it to exist.
The problem is that the ultimate consumers[1] of these reports, legal and procurement agents at buying companies, themselves don't care about the actual quality of the report except insofar as it satisfies their own transitive legal/sales requirements, and it's turtles all the way down. This harms users/customers at the end of the day, because they don't have time to scrutinize the details of each individual report, nor any real power. If we care about the end goal (secure, accessible software), we need the auditing firms to collaborate with the government, judiciary, and ancillary vendors to tighten standards to include random[2], uniform checks (same auditor, same methodology, multiple vendors at once).
In the US, OSHA designates NRTLs like UL to perform safety testing, which are required everywhere from workplace standards to insurance requirements. In comparison, at least for accessibility, merely having your vendor have any assessment report is likely enough CYA to withstand a legal challenge. I acknowledge the power of recent website lawsuits to use the broader ADA to raise the bar here, but ADA's "enforcement through private lawsuits" enforcement mechanism is spotty and I think won't result in enough structural improvement.
[1] - These reports also serve as PR/marketing, which is probably more so the case with Mozilla VPN, but in most enterprise software where these assessments are taking place, the marketing side is very much a secondary goal compared to the individual legal/sales relationship that hinges on the report.
[2] - I think removing the opportunity for vendors to fine-tune scope or prepare or respond to concerns (at least until the next review cycle) is a big step in the right direction, but unfortunately, the legal climate is very much all-or-nothing and not good at nuance. Section 508 (I'm not personally familiar with PCI/SOX/etc. but suspect those are similar) is, formally speaking, an "all or nothing" check-all-the-boxes affair, and in that climate, good random audits will basically be always-failing; and if you make the standard so hard that even reasonable vendors can't meet it with an earnest effort, you'll end up constricting the market into a meta-game of who can hack the auditing process. See federal government procurement.
I'm not recommending or not recommending any particular firm (I'm not at Latacora), but rather just arguing for a new norm in how all firms comport themselves with respect to public audits. I am not surprised you got a good audit from Cure53; my comment isn't about the quality of their audits.
Could you please stop posting unsubstantive (and/or flamebait) comments to HN? You've unfortunately done it a lot already, and we ban that sort of account.
* is a special character on HN and can cause unusual formatting problems, and especially in a code context can be easily confused with the multiplication operator, and attempting to represent a footnote-superscript asterisk character requires use of ZWJ and "combining asterisk above" and hoping that it's supported ⃰. The 'second footnote' character is a reliable indicator of the presence of a footnote and is not easily confused for the addition operator. You're correct about the etymology of it, but attempting to force adherence to a historical use (that is not prohibited by the site guidelines) is no more likely to succeed than asking a language to stop changing is.
⃰ And that it's even remotely readable to be an asterisk, which my terrible vision can't make out in the text input field on HN, and that it's wrapped correctly (which it's not), and that it's spaced far enough away from the previous character (which it's not) and that <SPACE> <ZWJ> <COMBINING ASTERISK ABOVE> are handled properly by the website (which is still not universal† yet‡).
† Emoji have helped, though HN specifically removes them.
‡ Note, however, that in the HN text input field, the dagger and double dagger are not shown superscript, as they are in the resulting comment you're reading now. It seems to be up to the font to decide what to do, as Unicode opted out of superscripting concerns.
None of those are superscripted at all, except for the doubled daggers, and few people will recognize any of them as signifying footnotes. You're not wrong that they're what typesetters would use, but for a post on HN, I'd personally just use ^^^^ instead of ¶. I'm really sad Unicode chose not to allow superscript ZWJ.
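For anyone wanting to experiment with the characters discussed above, they can be inspected by code point with Python's standard `unicodedata` module (a quick sketch; how the composed string actually renders is still entirely up to the font):

```python
import unicodedata

# The characters discussed above, by code point.
zwj = "\u200d"       # zero width joiner
comb_ast = "\u20f0"  # combining asterisk above
dagger = "\u2020"    # dagger

for ch in (zwj, comb_ast, dagger):
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")

# A base letter with the combining asterisk attached; whether this
# displays legibly depends on the font and text renderer.
print("e" + comb_ast)
```

This also shows why such footnote markers are fragile on the web: the combining character has no standalone glyph, so it depends on the renderer stacking it correctly over the preceding base character.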
The issue with pretty much all of these security audits is that they typically are limited to code audits of desktop or server software.
Much of the security of VPNs, though, depends on the configuration of the servers/endpoints. Does it keep logs (on purpose or accidentally)? Are logs/traces of connections wiped from RAM as well when you disconnect? So even if your VPN client is perfectly secure, if someone is listening on the server, nothing else matters. Importantly, it's not unlikely that the servers were set up by a "subcontractor", so even if Mozilla does everything right, there is the chance that a subcontractor made a mistake and, for example, logs are being saved.
So a proper security audit would do lots of spot checks on servers to see how well they are configured.
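As one concrete (and entirely hypothetical) example of such a spot check, an auditor could script a test that a server's log directory lives on a RAM-backed tmpfs, so logs don't survive a reboot. The sketch below runs against sample /proc/mounts content; the paths and the "logs belong on tmpfs" policy are assumptions for illustration, not any provider's actual setup:

```python
# Hedged sketch: verify that /var/log is a tmpfs mount, i.e. logs live
# in RAM only. Sample /proc/mounts text stands in for a real server.
SAMPLE_MOUNTS = """\
proc /proc proc rw 0 0
tmpfs /var/log tmpfs rw,size=64m 0 0
/dev/sda1 / ext4 rw 0 0
"""

def log_dir_is_ramdisk(mounts_text: str, log_dir: str = "/var/log") -> bool:
    """Return True if log_dir is mounted as tmpfs in the given mounts table."""
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[1] == log_dir:
            return fields[2] == "tmpfs"
    # Not a separate mount: it persists on the root filesystem.
    return False

print(log_dir_is_ramdisk(SAMPLE_MOUNTS))  # True for this sample
```

A real audit would run checks like this across a random sample of production servers (and repeat them over time), since a one-off pass says little about a fleet that changes continuously.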
Thanks, I didn't see this before. In the infrastructure, what exactly did they audit? It seems like 4 servers? Is that correct? I'm not sure I understand how these companies set up their networks, but every exit point would correspond to a server, wouldn't it?
I should probably clarify that I didn't really mean that Mozilla (or Mullvad) is insecure, simply that reports like this don't really tell us a lot, because the infrastructure is quite significant and it might be changing continuously. On the other hand, I don't think there's anything better.
Shouldn't some non-trivial % of the VPN companies popular on the market be run by the intelligence agencies of various countries as honeypot operations? Or is the idea that intelligence agencies have better means to monitor internet traffic?
I mean, supposedly that's exactly what happens with Tor: the NSA and friends run many, many exit nodes and scrape info from there, to such a level that people worry it is possible (or plausible) that they can identify users through different fingerprints. Maybe not to a standard that would hold up in court, but enough that it could make you a person of interest, so they can see whether they can deploy some of their malware packages.
As for intelligence agencies: famously, ProtonMail operates from Switzerland, a country with information-sharing partnerships with the Five Eyes, yet they are not all too concerned about it, nor interested in having their server-side software audited, so that raises eyebrows for me.
In the past we have had the Crypto AG scandal, and BCCI, an entire huge international bank with branches all over the world that basically only existed as a front for CIA money laundering. I personally believe these guys are now operating with Deutsche Bank and cryptocurrencies, which is just what would make sense for them: Deutsche Bank is a corruption-ridden company, and crypto can be obfuscated so much as to be basically untraceable.
I have personally moved away from ProtonMail. I have zero sources, but they give me too much of a bad Crypto AG/honeypot vibe. Sad as it is, if you want secure email you must self-host it, which can be quite a pain in the arse, but these are the costs of peace of mind.
As for their VPN I would not use it, I don't like that ProtonMail centralized so many "privacy things" into a single point of failure
Actually, skip all of that: social engineering is still the best way, even NSA has stated that.
Assuming that social engineering is hard (for example, the targets are computers operating machinery): while Windows is... not bulletproof, most applications (especially in-house apps) tend to be nothing more than a wooden gate, and therefore it's more expedient to attack those than to monitor encrypted communications (obviously that has some benefits, but it also yields a lot of chaff, so it's better to have a wiretap directly).
Once I learned that Mozilla was using Mullvad servers, my trust in Mullvad increased. Though besides credibility, I am not sure what other benefit Mozilla brings to the table by reselling it. Since they both use the WireGuard protocol, the client is mostly about UI rather than functionality.
What would be cool is if Mozilla partnered with multiple VPN providers and offered a multi-hop VPN with at least 3 nodes and multi-layered encryption, similar to Tor.
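For a rough sense of how chaining even two hops can work today, here is an illustrative hand-rolled wg-quick sketch (not anything Mozilla or Mullvad offers as a product). All keys, addresses, and endpoints are placeholders; the trick is that the outer tunnel's AllowedIPs covers only the inner provider's endpoint, so the inner tunnel's packets travel wrapped inside the outer one:

```ini
# /etc/wireguard/hop1.conf -- outer tunnel (provider A). Its AllowedIPs
# only claims provider B's endpoint, so wg-quick installs a /32 route
# sending hop2's encrypted UDP through this tunnel.
[Interface]
PrivateKey = <hop1-private-key>
Address = 10.64.0.2/32

[Peer]
PublicKey = <provider-A-server-key>
Endpoint = a.example.net:51820
AllowedIPs = 198.51.100.7/32   # provider B's endpoint only

# /etc/wireguard/hop2.conf -- inner tunnel (provider B), default route.
# wg-quick's fwmark/policy-routing handling excludes hop2's own endpoint
# traffic from the default route, so it falls back to the /32 via hop1.
[Interface]
PrivateKey = <hop2-private-key>
Address = 10.65.0.2/32

[Peer]
PublicKey = <provider-B-server-key>
Endpoint = 198.51.100.7:51820   # reached through hop1
AllowedIPs = 0.0.0.0/0, ::/0
```

Bringing up hop1 first and then hop2 should route all traffic through provider B, with provider B's packets in turn tunneled to provider A; a third hop layers on the same way. Getting DNS and MTU right across the nested tunnels is left as an exercise.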
Read the report and see if the findings are super basic or super advanced. If you can't tell, then this audit report is not of value to you, similar to how my mom has no use for open source software yet I would still say it's valuable to have open source software in the world.
They do use Mullvad to provide this service, and Mullvad's pricing model is a flat monthly rate of ~$5, yet Mozilla charges almost double at $9.99 per month, and you need to lock into a 12-month subscription to get the same price Mullvad charges.
1. Those who are aware of VPNs but have the money and would like to support Firefox can do so by using their product.
2. Those who aren't very aware of the vpn scene can trust a relatively more prevalent name in Firefox while still getting a reliable service and giving them money.
It's not ideal from a value perspective but I don't feel like it is egregious.
Unlike Mullvad, Mozilla isn't just trying to run a VPN service profitably. They're trying to get reliable recurring revenue on the balance sheet to spend on other things, like redesigning Firefox over and over and increasing their CEO's salary. So they really want to lock people into longer-term subscriptions, whereas Mullvad is fine with users subscribing for a month and then leaving.
If I understand the OP well, the question is, how can Mozilla justify charging twice what Mullvad charges, and lock you into a 12-month contract, for what is largely a repackaging of Mullvad.
So just use Mullvad instead. For those who want to support Mozilla's efforts, this is a good way of doing it. Think of your extra $5 as a sort of donation to them for building such a wonderful browser and browser ecosystem (Lockwise, extensions, Pocket etc)
But Firefox is developed by the Mozilla Corporation, and donations to the Mozilla Foundation don't fund Firefox development; they go to various advocacy/activism causes.
While executives earn money like at any company, most of their budget is spent on Firefox. 2% of the Mozilla Corporation revenue goes back to the Mozilla Foundation.
That downplays the remuneration of the principal Moz exec considerably, AIUI. Mitchell Baker's remuneration from Moz was >$2M last I heard (a few years back; see e.g. https://calpaterson.com/mozilla.html). I can only assume it's more now.
There was also a report Baker had a >$1M "stipend" (uncharitably a bung) from Google, though I've been unable to refute/corroborate it.
The last report is from 2017, I believe, a year when Mozilla's revenue hit an all-time high of $562 million. I personally doubt her salary is still as high, despite her promotion and what I believe was an overall decrease in the number of executives that accompanied it.
Seriously, "Chair pay is .5% of revenue" seems fair.
And I haven't heard this "bung" rumor but Mozilla's revenue and usage were shrinking long before Baker became CEO in late 2019.
Personally I find the idea of a charity begging for cash whilst simultaneously paying outrageously large amounts of money to execs to be immoral.
My limit on reasonable salary is 5 times the UK median graduate salary.
If ".5% of revenue" is reasonable, surely there's no excuse not to increase the salaries of everyone working for a company -- pay everyone ludicrous sums, as each salary is small!
IIRC, studies have shown exec salary doesn't correlate with increased company success, so why should revenue come into it?
The Mozilla Corporation isn't a charity, none of the donated money goes towards their salary. And their revenue is directly tied to a specific deal that the executives negotiated to get over $100 million per year more than their previous deal. They negotiated that despite the years of decreasing users your link showed.
As for the rest, I also dislike standard corporate practices but Mozilla is more or less forced to follow them to compete at all in this world.
Right, let's get an engineering manager to run an international company. They just have to negotiate deals worth hundreds of millions of dollars, handle all of the licensing issues, manage a charity, manage hiring for areas including PR and translation, and more assorted tasks. And let's offer them well below market rate for such a position, though we'll still publicly disclose their salary so the entire world can criticize them for everything that goes wrong with the company.
I'm sure they're flooded with great applicants for that gig. And again I doubt the CEO salary is still over $2 million. It would have been negotiated in April 2020 right before the layoffs.
For clarification, I believe he is saying you pay the same price as a mullvad subscription if you lock into a 12 month subscription, not that you pay double the price and are locked into that.
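Taking the prices quoted in this thread at face value (Mullvad ~$5/month flat; Mozilla $9.99 month-to-month, or $4.99/month with a 12-month commitment; figures may well be out of date), the yearly totals work out as:

```python
# Prices as quoted upthread -- treat these as the thread's numbers,
# not as current pricing.
mullvad_monthly = 5.00          # flat rate, no commitment
mozilla_monthly = 9.99          # month-to-month
mozilla_annual_monthly = 4.99   # requires a 12-month lock-in

print(f"Mullvad, 12 months:      ${mullvad_monthly * 12:.2f}")        # $60.00
print(f"Mozilla, month-to-month: ${mozilla_monthly * 12:.2f}")        # $119.88
print(f"Mozilla, annual plan:    ${mozilla_annual_monthly * 12:.2f}") # $59.88
```

So the annual plan does land within pennies of Mullvad's flat rate; the ~2x premium only applies to the no-commitment monthly option.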
Both Mullvad and Mozilla VPN are the same service (Mozilla leases it from Mullvad).
Mullvad has a flat monthly fee (€5) and only stores a randomly generated 16-digit account number that you can simply stop topping up whenever you want. You can pay with almost anything: crypto, credit card, even cash via postal mail.
Mozilla's VPN requires you to create a Firefox account and provide them your email, your age and a password. Optionally you can enable 2FA.
So you trade off a bit more of your privacy in exchange for the same service you already have. It only makes sense if you want to support the Mozilla Corporation and Firefox's development (after all, money donated to the Foundation doesn't go to the Corporation).
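To illustrate what "only stores a randomly generated 16-digit account number" means in practice, here is a rough sketch of generating such an identifier. This is not Mullvad's actual scheme, just the general idea of an account with no attached identity:

```python
import secrets

def new_account_number(digits: int = 16) -> str:
    """Generate a random numeric account identifier.

    No email, no password: the random number *is* the account, so the
    provider learns nothing about who holds it.
    """
    return "".join(str(secrets.randbelow(10)) for _ in range(digits))

print(new_account_number())  # prints 16 random digits
```

Using `secrets` rather than `random` matters here: account numbers are bearer credentials, so they must come from a cryptographically secure source.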
I don't know, but I'll paste the specific things that article is calling for:
> Reveal who is paying for advertisements, how much they are paying and who is being targeted.
> Commit to meaningful transparency of platform algorithms so we know how and what content is being amplified, to whom, and the associated impact.
> Turn on by default the tools to amplify factual voices over disinformation.
> Work with independent researchers to facilitate in-depth studies of the platforms’ impact on people and our societies, and what we can do to improve things.
I'm at a bit of a loss to guess which of those might be triggering. But then, I agree with them. (Disclosure: I work for Mozilla too, so yeah, I'm biased. I'm a long way from the relevant policy discussions, though.)
I've always had mixed feelings about that article. The whole "deplatforming" thing is kind of a red herring. The article seems to approve of deplatforming, which is something I'm generally not comfortable with, but it isn't actually about deplatforming at all. I suspect that's the part that triggers reactions, for reasons that I sympathize with even though I don't think they apply to the particular case in question.
> Turn on by default the tools to amplify factual voices over disinformation.
If we've learned anything over the past year, it's that tech companies have no business deciding what is factual. "We're going to decide what's true, then make sure people only hear what we want them to hear" is just deplatforming with extra steps.
If you follow the link, it's talking about Facebook reverting a temporary change that penalized "hyperpartisan" news sources. That's not quite the same as FB deciding "what is factual"; it's FB deciding what news outlets are "hyperpartisan". It's still on the same slippery slope, but it seems like the cost:benefit ratio is far better, if you keep in mind the status quo -- regardless of which side of the left/right divide you may be on, you'll probably agree that the people on the extremes are nuts. (Yes, you will have a different opinion of what is "extreme", but the hyperpartisan outlets are in a feedback loop that rewards greater extremism so there's a fairly stark separation between left/right-leaning and the extremes.)
Given the title, it's reasonable to assume that it's saying that suppressing right-leaning people is good and needs to go further. This theory is further supported by the context, i.e. other statements by Mozilla as an organization, and its leader in particular.
I agree with the implication of the title. The article is also anti-Trump, though anti-Trump != anti-right.
Although Mozilla does take an ideological position on some issues, I have not yet seen any mechanisms within the browser that favor one side of the partisan divide, intentionally or practically. I can't say it'll never happen, but Mozilla has been even-handed so far, and I'm sure there's not a lot of appetite to piss off any large contingents.
So then you're boycotting Chrome, Edge, Safari, and Brave too?
If Firefox is being held to a different standard, then I'm afraid that practicality had to win out here. You can either have a pure and dead browser, or a live browser that can still fight winnable fights. (Or you can have a pure and live browser that doesn't play video, for your own use; Firefox still keeps this stuff at arm's length so you can disable it.)
It certifies what it says on the tin: an independent audit was performed by Cure53. You are correct in believing that certifying complex software to be free of bugs is practically impossible at the moment.
But which product would you rather use: one where you have to trust the developers, or one where you have to trust the developers plus an independent team got 5 weeks of paid time to study it for any flaws?
As someone working in this industry, I can also say it's significantly harder to find exploitable bugs after another audit team has already gone over the code. A criminal would have the same problem and might choose another target instead.