The thing about this particular attack is... it's pretty obvious, isn't it?
I had never heard of this app before, but reading the article, as soon as they got to describing how it works (you give it emails/phone numbers of your friends, you only see secrets from them), I correctly guessed what the attack would be.
Now, maybe it wouldn't have been obvious to me without the setup (there's been a successful attack, and now we're going to describe how Secret works like this, setting you up to understand the attack).
But if Secret has security engineers, intimately familiar with their system, and trying to identify possible attacks -- how could they have not identified this? It makes one think the bug bounty program IS their security program. Which is probably true of much software, but this is software focused on secrets!
On the other hand, maybe it just seems that obvious in retrospect? Apparently they have already given out 42 security bounties, and this one wasn't identified until now, so I dunno. It sure seems obvious though.
It's not obvious in retrospect. It's obvious from the outset.
I installed the app once to check it out. The first thing it asked for in order to proceed was an E-mail or phone number. Again, THE FIRST SCREEN REQUIRES PERSONAL INFO. It should be plainly obvious to anyone who gets past the first screen that your use of the app is not anonymous.
It was a known attack; their defense against it is dummy-account/bot-detection systems, and they claim those were somehow broken by an infrastructure upgrade, which is why it worked for this guy.
Google can't reliably detect dummy accounts or tell who is a real person, even with all the data and smart people they have.
Secret doesn't stand a chance if that is the basis for their defense model.
Figuring out who is real or not is one of the current big problems with a huge opportunity. Were someone to figure it out, they'd do something in the ad industry and make billions before building an app like Secret.
See though, there's no way that dummy detection system is going to be good enough to prevent someone determined enough to figure out who made a damaging secret.
For example, remember that seriously hideous post to Secret regarding a prominent GitHub employee?[1] The post has since been removed, but a determined GitHub employee who could see it could, over time, defeat the dummy detection with a method similar to the one outlined in the post. Just keep creating new accounts on Secret and iterate on the friends in your contact list the way git bisect works: create an account with half your friends and see if the message pops up. If it does, create a new account with half that list, and continue until you reach 7, rotating in users you know aren't responsible. In the end the person who made that horrible post is revealed.
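The bisection above can be sketched in a few lines. This is just an illustration of the narrowing strategy, not real Secret tooling: `create_account` and `secret_visible` are hypothetical placeholders for "sign up with this contact list" and "is the target post visible to this account", and the minimum-contact count of 7 is taken from the comment above.

```python
# Sketch of the git-bisect-style de-anonymization described above.
# create_account / secret_visible are hypothetical stand-ins for the
# app interactions; the interesting part is the halving loop.

MIN_CONTACTS = 7  # assumed minimum friends per account, per the comment above

def find_author(suspects, padding, create_account, secret_visible):
    """Narrow `suspects` down to a single contact by repeated halving.

    `padding` is a pool of contacts known NOT to be the author, rotated in
    to keep each throwaway account above the minimum contact count.
    """
    while len(suspects) > 1:
        half = suspects[:len(suspects) // 2]
        # Pad with known-innocent contacts so the account stays valid.
        contacts = half + padding[:max(0, MIN_CONTACTS - len(half))]
        account = create_account(contacts)
        if secret_visible(account):
            suspects = half                           # author is in this half
        else:
            suspects = suspects[len(suspects) // 2:]  # author is in the other half
    return suspects[0]
```

Each new account halves the candidate list, so even a few hundred contacts fall to a couple of dozen throwaway accounts, which is exactly why per-account bot detection is such a thin defense here.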
Going back, it says they had automated attempts to identify bots like this in place since May, when Russian hackers attacked in the same way. The implication is that there were none before then.
Yeah, this does not make me more confident in them having any sort of a security program whatsoever.