This same website gets posted once in a while and it drives me crazy.
Do not use this service. With less than 60 seconds of looking, you can hijack people's deadmanswitch accounts with a simple CSRF on the email change API.
If you have a "high value secret" and you are a normal person, lawyers work pretty well. If you have a "high value secret" and you're Ed Snowden, something like this site is not a solution you should use, because it will get you killed. Nobody has a use for this...
Whoops, thanks for that, will fix. The service is pretty old, but I'm still surprised I didn't add the CSRF fix on one of the rewrites.
That said, tens of thousands of users apparently have a use for this. I'd imagine it's mostly "last goodbyes" type of stuff; many users also email me saying they want something to send a "something happened to me, please come so my dog won't starve" message.
"Millionaire with assets" and "Ed Snowden" aren't the only two possibilities.
> No, it might surprise you to learn that all humans eventually die.
This seems uselessly snarky, especially since StavrosK (who I assume is the developer, based on other posts) gave a polite and snark-free response already:
> That's actually probably a good idea, I will add that, thanks.
How many emails have been triggered thus far with the service? And does it feel kind of weird seeing that data and knowing that you're pretty much being notified of people's deaths (at least if used as intended)?
The vast majority should be testing emails. I don't have data on it, but everyone who emails me is testing the service, so I expect very few actual deaths compared to the test emails every new user adds.
Wouldn't the dog have trouble waiting out the ~70 day delay? It seems pretty hard to make it timely but not overzealous. It seems like this would work better as a mobile app that can pick up more signals that life may have ceased.
Yeah, I'd be more likely to write my own if I wanted such a thing. The trouble is you don't want emails going out prematurely, which is why a 90-day window is probably best.
And if you host that yourself, there's a chance your credit card could get canceled and your VM/docker container/whatever gets canceled and destroyed before it does its thing. But I mean, who cares? Not you. You're dead! :-P
Perhaps the author should serve a long and random token each time you go to the change email page, and require the API call to answer back with that token.
You have described CSRF tokens exactly =]. This is exactly what they should do, but it's necessary for ALL state-changing requests, not just the email change.
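A rough sketch of that pattern (Flask used purely for illustration; this is not the site's actual code, and the route names are made up):

    import hmac
    import secrets

    from flask import Flask, abort, request, session

    app = Flask(__name__)
    app.secret_key = "replace-with-a-real-secret"

    def csrf_token():
        # One unguessable token per session; embed it in every form.
        if "csrf_token" not in session:
            session["csrf_token"] = secrets.token_urlsafe(32)
        return session["csrf_token"]

    @app.route("/settings/email", methods=["GET"])
    def email_form():
        # The token is rendered into the page, so only same-origin pages can read it.
        return (
            '<form method="post" action="/settings/email">'
            f'<input type="hidden" name="csrf_token" value="{csrf_token()}">'
            '<input type="email" name="email"><button>Change</button></form>'
        )

    @app.route("/settings/email", methods=["POST"])
    def change_email():
        # Reject any state-changing request that doesn't echo the session's token;
        # a cross-site form post has no way to know it.
        sent = request.form.get("csrf_token", "")
        if not hmac.compare_digest(sent, session.get("csrf_token", "")):
            abort(403)
        # ... actually update the email address here ...
        return "ok"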
Right! I'm just pointing out that this isn't necessarily malice or negligence on the part of the author. Dead Man's Switch has some ways it could be improved, and remembering to set up a CSRF token seems to be one of them.
> It's not malice but solving sensitive problems without taking security seriously can actually have really big consequences for your users.
This is very true - but often the author does intend to take security seriously and they've simply overlooked something. The best approach in that scenario is just to talk to them and ask them to take a look at a vuln you've found, rather than immediately telling all their potential users to stay away.
I'm not some heartless security snob. This is not the first time I've seen this site, and it's been broken for at least a year (when I pointed this out last time). Because of that people SHOULD stay away.
Even if the authors of such services are highly knowledgeable about security, they're going to rely on reports from hobbyist security researchers to fill in the gaps. There are major companies out there right now with dedicated infosec departments - and even they have bug bounty programs, because they know they can't catch everything.
I don't imagine this service gets an enormous amount of traffic. You might be among the handful of visitors to the site who are capable of noticing the CSRF issue, and maybe even the only one to go in and look at the source. He's had an email address up on his site for a while (hi@stochastictechnologies.com). Did you report the vuln to the author when you first found it?
That's all perfectly reasonable, and I'm not without sympathy for StavrosK on this.
I'm also not sanguine about the idea of using a service which is meant to handle information of such gravity as this, and which has also had an extremely well understood, trivially fixed, and enormously compromising information disclosure vulnerability go unfixed for a year or more. The reason why that situation has obtained is not interesting to me; that that situation has obtained is enormously so.
This one I agree with. I wouldn't use the service personally until the issue is fixed. But it is so counterproductive to find a security problem the average user wouldn't notice, and then not tell the author about it.
I wouldn't use the service even afterward. The dev didn't think about CSRF for a year. What else didn't he think about?
Don't get me wrong - I think very highly of what 'StavrosK is trying to do here, and I agree that it's counterproductive not to report an issue once found in a case like this. (There's a certain degree of nuance made necessary by the fact that kill-the-messenger reflexes make vulnerability reporting so fraught in general. But that doesn't seem likely to obtain in this case, so I'd report, probably not even anonymously.) But at this point that trust just isn't coming back.
I'm not going to do any more responding. The author of the service is an active HN user and has been for 5 years. I have no idea if I reported the issue by email or not...
Anyone who "cares" about security should know about CSRF. It is one of the 3 attacks that all web devs encounter. It's trivial to find.
The app is written in Django, probably all the way from 0.96. I rewrote it at some point, but I must have forgotten to add the CSRF middleware on an upgrade.
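For anyone else upgrading an old Django project, the fix amounts to making sure the stock middleware is in the list and that POST forms carry the token, roughly:

    # settings.py -- the middleware ships with Django; it just has to stay in
    # the list (easy to lose when migrating old MIDDLEWARE_CLASSES-era configs).
    MIDDLEWARE = [
        "django.middleware.security.SecurityMiddleware",
        "django.contrib.sessions.middleware.SessionMiddleware",
        "django.middleware.csrf.CsrfViewMiddleware",
        # ...
    ]

    # templates: every POST form needs the token tag, e.g.
    #   <form method="post">{% csrf_token %} ... </form>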
Yes, this seems like the definition of a service you don't want centralized. Each of us should just find someone (or a few people) we trust, and each do it our own way. Much safer and more robust.
The good news is that it's trivial to implement a dead man's switch on Ethereum, or even Bitcoin. Unfortunately smart contracts can't keep secrets, so you'll still need to trust a third party with a private key if you want to reveal a secret in the event of your disappearance.
It's been okay so far, but yes, I definitely urge everyone to PGP-encrypt their messages before adding them to the service. No matter how much encryption you put in the model layer, the service still needs to be able to read the plaintext to send it at some point...
So, how do you suggest encrypting the data to be sent, while still allowing for the purpose of the service, which is to distribute the data to people later on? The best I can think of off the top of my head is a dedicated PGP key, and two DMS services, one with the data, one with the key. Otherwise you're always at least exposing the data to the DMS service.
I was actually thinking more the standard model, where you encrypt the email to a person's PGP key. This only works if the other person has a PGP key, though. It's the usual security <-> convenience tradeoff, unfortunately.
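Something like the following, using the python-gnupg wrapper around a local GnuPG install (the address and filename are made up); the service then only ever sees the armored ciphertext:

    import gnupg

    gpg = gnupg.GPG()  # default ~/.gnupg keyring

    # The recipient's public key must already be imported (from keybase.io,
    # a keyserver, etc.):
    # gpg.import_keys(open("alice_pubkey.asc").read())

    message = "Dear Alice, if you're reading this..."
    encrypted = gpg.encrypt(message, "alice@example.com")

    if encrypted.ok:
        print(str(encrypted))  # ASCII-armored ciphertext; this is what you store
    else:
        print("encryption failed:", encrypted.status)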
Considering the rate at which people lose their private PGP keys, I wouldn't trust a message encrypted today to still be decryptable a year from now.
Everyone I know that uses PGP has lost their key at least once (including myself - I forgot to write down the passphrase for a key once it expired and I no longer used it daily to refresh my memory).
The failure mode is not DoS, it's accurately determining "death". Certain users will always respond to their emails, and the site operator will never realize they've been compromised.
DoS is still a failure mode. If the site is down and the user can't click the link to confirm their 'liveness', then the deadman switch will be triggered.
Interesting project! I had a similar idea: "My Social Death". When you died, it would post to your Facebook, Twitter, etc. I thought of it when a few friends died and I saw many others changing their profile pictures and continuing to bring up how close they were. People grieve in their own way, but it rubbed me the wrong way and I started thinking about how you could control the message when you died. I would love to post an "I had a good life, don't be sad for me" type of message. But verifying death is certainly tricky...
My verification idea was the self emails, but also a few friends would have to verify (or they could initiate). That way if you lost access to your email, you could avoid a false positive.
First, your point about controlling the message when you pass is TERRIFIC. How many of us have seen an opportunistic acquaintance take over the memorial of a dead loved one? It really, really sucks, especially when the acquaintance colors everything through their own beliefs, and those beliefs contradict the beliefs/opinions of the deceased.
Then you gotta think about the potential to send individualized messages to your loved ones. Damn that would be touching.
Plus, what about stuff like Mark Twain's journals. Supposedly he had written much that he feared to publish because it was too incendiary. He thought it'd be ok for it to be published after his death. However his estate chose to keep it private.
Next you can imagine the fun you could have planning your account to 'haunt' people you didn't want to say nice stuff to.
At first I thought this was Dead Man's Snitch[1], a service based on the same concept but used to alert you when cron jobs fail to execute. A pretty valuable part of my toolbox on every project.
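The job-side integration is tiny. Roughly (the ping URL is made up; real services give you a per-job endpoint):

    # Toy illustration of the cron-job variant: the job pings a monitoring URL on
    # success, and the monitor alerts you when the ping *doesn't* show up in time.
    import urllib.request

    def nightly_backup():
        ...  # the actual work the cron job does

    if __name__ == "__main__":
        nightly_backup()
        # Only reached if the job finished; silence is what triggers the alert.
        urllib.request.urlopen("https://monitor.example.com/ping/nightly-backup")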
I have at least one datapoint to suggest that people are looking for this kind of thing.
I run a service, Cronitor, essentially a dead man's switch for jobs, heartbeats, etc. Doing some analysis on notification method preferences (email is top, followed by Slack), I noticed a webhook that will hit a URL ending in send_secrets.php if the creator doesn't ping at least once every day.
No alerts have been sent yet, so the secrets seem to be safe. I added a todo for a couple more test cases around alert sending -- I'd hate to ship a bug that accidentally sends whatever the heck is in that file. I would suspect the creators of this service will feel that same pressure soon.
> I would suspect the creators of this service will feel that same pressure soon.
The service has been running for eight or so years now, and the safeguards against this have worked pretty well, but yeah, every change is a chance for something to go wrong. I have extensive tests for the sending code and the logic to check whether it's time to send, though.
I would imagine you have a low number of alerts sent every day, even after 8 years? Or do you have enough noise from trialers that it would prohibit a manual review step? Early on we would basically get cc'd on every alert sent -- had to stop that by the 3rd month because jobs/services/etc actually fail all the fricken time.
I did have that in the beginning, I stopped it when erroneous emails fell to zero. Extensive testing helped greatly with that, and the service had barely any false positives even in the beginning.
To EVERYONE screaming at how insecure this service is: PGP. I've ensured key members of my family have PGP keys via keybase.io, and stored only a signed and encrypted message on the servers.
I've used this site for years now and I can firsthand tell you how awesome it is. I once accidentally triggered my death when I dropped out of contact for a few months, but otherwise, it's amazing to know that even if I die, I can, from beyond the grave, set in motion an elaborate chain of events that will end up with my family possessing all my digital assets (including about half a bitcoin), people whose digital assets I manage receiving full control back, and MOST importantly: my best friend deleting the porn on my computer in time, and only then surrendering my passphrase to my family.
This would be a brilliant long game phishing attack. You have regular emails with links arriving that people think they have to click or bad things will happen. Not saying at all that's what this is, just that the thought popped into my mind.
I get that a lot. The answer is "if the service goes down, I'll send you an email well in advance so you can switch. If you die before that, well, your emails will have already gone out, so you don't have anything to lose."
So presumably you have an automated dead man's switch built into the service? Do you also have arrangements for continuation of the service if you 'get hit by a bus'?
I'm a bit worried by the reliance on email to validate aliveness. If something were to ...happen... (ominous music playing in the background) it's possible that such an entity could gain access to my inbox, allowing them to sidestep the DMS entirely, perhaps even gain access to its contents?
Ideally this would require two or three separate methods to verify you're still alive. Or some form of challenge/response relying on at least two different mediums/communication networks/devices.
The worst thing for me would be if their emails started going to my spam folder -- I check it once in a while, but when my life gets hectic it's the first thing that gets dropped.
This is a really great idea, with a really poor execution.
I'm just a regular guy, but the messages and details that I would leave in a service like this are too sensitive to be put on someone else's server for a period of years. (Maybe I'm just too private in this age of social networks).
We really need a PGP like solution for the masses. On that note, I wonder why someone like Mozilla doesn't take this on and push for human friendly client certificate authentication.
> We really need a PGP like solution for the masses.
How would that solve the issue of putting the secret on someone else's server and guaranteeing that it'll be available only after death? They'd have to have a key to decrypt it in order to release it (in any useful meaning of the word), so then you're dealing with storing both the key and the encrypted secret. You still have to ensure that the decryption key gets released on your death, but not before, and is used to unlock your other secret.
Assuming we restrict the solution to this problem space: since I trust the service to send the message, I also trust them to generate the encrypted package and communicate the passphrase needed to decrypt it, and I trust them enough not to keep that passphrase around.
Then the workflow would be something like this: the service sends an email to each intended recipient with a short message saying "keep this saved; it includes a passphrase you will need later." When the switch fires, another email is sent asking the recipient to provide the passphrase, and the service then decrypts the message with it.
I know this still requires a lot of trust, but it at least protects the data at rest.
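A rough sketch of that flow, using the Python cryptography package (the parameters are illustrative, not a vetted design): the service derives a key from a per-recipient passphrase, stores only the salt and ciphertext, and forgets the passphrase once it's been mailed out.

    import base64, os, secrets
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def derive_key(passphrase: str, salt: bytes) -> bytes:
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt,
                         iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(passphrase.encode()))

    # At setup time: generate a passphrase, email it to the recipient, store
    # only (salt, ciphertext), then discard the passphrase and key.
    passphrase = secrets.token_urlsafe(16)
    salt = os.urandom(16)
    ciphertext = Fernet(derive_key(passphrase, salt)).encrypt(b"the message")

    # When the switch fires and the recipient replies with the passphrase:
    plaintext = Fernet(derive_key(passphrase, salt)).decrypt(ciphertext)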
That at least protects it on the server, but the secret sitting in the email is pretty insecure, since the generated decryption key is only as strong as the least secure recipient's security.
I've been thinking about something like this for a long time, but I've never come to a solution where I'm fully comfortable with the technology to fully trust it.
There's too many "What if?"s. The biggest one being: What if their system glitches and mistakenly sends out emails even though I've been checking in?
What if I lose access to my email account?
What if I end up in jail (regardless of reason) and can't hit the switch?
What if they get a subpoena for the information contained within the emails?
What if the service goes down? I'm under 30 - it's unlikely anything like this will be around for 50+ years.
To have a fully safe and secure Dead Man's Switch you would need more than just one layer of confirmation, and more than one layer of failsafes.
My thought so far is:
A service such as this one, except instead of an email with your secrets, it would simply release a private key to certain key individuals. The email would contain only the key, and nothing else. These would have to be somewhat tech-savvy individuals who would know the significance of the key upon getting it. The email should come from my own personal address.
In my will I would entrust a different set of individuals (immediate family, etc.) with one or more lockboxes containing an encrypted drive (HDD, SSD, flash drive... something resilient... maybe even the entire device... maybe all of the above, to protect against obsolescence). The recipients of the first email would have to get in contact with these folks and mutually agree to decrypt the data on the drives.
If the emails are sent by mistake, then all I have to do is seek out my lockboxes and re-encrypt with a new key. The people who got the key will probably just be asking "wtf is this?" Had I been dead, seeing an email from my personal address with a private key would elicit a much different response.
Unfortunately this plan still relies on some technology to continuously log that I hit a button, which I don't like. I could misplace emails, I could lose access to my domain, and if I do it via SMS, I could cancel my cell phone number... the list goes on.
The only thing I can think of that doesn't involve another piece of technology is trusting the private key to a person (or organization) who is impartial and bound to secrecy to only divulge the key upon a copy of your death certificate. Lawyer perhaps?
Come to think of it, this could be a really cool add-on service to a life insurance policy. Instead of just cash, your beneficiaries get cash + vaulted information.
I've been thinking about this on-and-off for a while now. I don't want to entrust my secrets with any single entity; I want a cryptographic solution.
A couple ideas I have considered:
- Using a secret sharing protocol to split a key into N pieces for people you trust (toy sketch after this list). This would be relatively simple, but there's the risk that the shareholders might be hard to get into contact with, or might have come into harm's way themselves. A threshold scheme can be used so that any M of the N shareholders (with M < N) can reconstruct the key. This is vulnerable to collusion. To prevent compromise of individuals in a targeted attack, the shares could be stored on smart cards, though that is also vulnerable to hardware failure.
- Unparallelizable time lock / time release crypto. The problem is, this needs to be able to be decrypted in a reasonable amount of time after my death. But if that's the case, then any bad actor could decrypt it given enough time. One solution is to embed the private key and program in an HSM, and use enough rounds that it will take, say, one week for the HSM to brute force. I could reset it daily, and connect it to my UPS. Should I die, I wouldn't be there to reset it, and it would release the private key. The problem there is, should I ever get held up for any reason, it'd be released. The idea sounds cool, but I'm not sure it'd work very well. Also, what if the hardware becomes damaged after my death? Or a violent criminal breaks into my house and steals the HSM with everything else indiscriminately?
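For the first idea, here is a toy sketch of Shamir-style secret sharing over a prime field, where any M of N shares reconstruct the key (teaching code only; I'd use an audited library for anything real):

    import secrets

    PRIME = 2**127 - 1  # Mersenne prime, big enough for a 16-byte secret

    def make_shares(secret: int, m: int, n: int):
        # Random polynomial of degree m-1 with the secret as the constant term.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        def term(i):
            xi, yi = shares[i]
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if j != i:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            return yi * num * pow(den, -1, PRIME)
        return sum(term(i) for i in range(len(shares))) % PRIME

    secret = secrets.randbelow(PRIME)
    shares = make_shares(secret, m=3, n=5)   # any 3 of the 5 shares suffice
    assert recover(shares[:3]) == secret
    assert recover(shares[1:4]) == secret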
Both schemes could also be used in parallel---the HSM key could be absolutely required in conjunction with everything else. If it is compromised, I generate new keys and redistribute them. This all requires more thought.
The scheme needs to be very unlikely to fail. If I die, I want it to include private keys for my disk encryption, select account passwords, etc so that my wife can access the information she needs, or would find valuable. I would also likely sign (cryptographically with my PGP key) a document that states that I have indeed died, to send to others.
The best part is that it works based on your Google Account activity, which, for most people, consists of searches, Android/iOS use, etc. -- not just clicking a link.
I was building a similar product at deadmansswit.ch for a while, but gave up. I always thought the privacy-leaning Swiss aspect of the URL was fun, and I planned to have it run out of a DC in Switzerland.
I had more than email as the check-in transport, and the trigger would have delivered more than just email (upload files and videos to online services, post to Twitter, etc).
Anyway, the domain is free again if anyone wants it.
Edit: spoke too soon, it appears to have been taken by someone else.
A great idea. When my girlfriend got sick and imminent death was a realistic possibility, she wrote down all her usernames and passwords on a piece of paper, which I then put in a drawer in my home office, to be looked at only if the worst happened.
It struck me at the time as being insecure but the low tech solution worked.
Perhaps this site would have been a better solution.
Hence my question. I'd have to read the technical details of contracts and Ethereum itself to see if it would be feasible. But if that money guaranteed the message would be sent even after the company providing the service ceased to exist, it would probably be worth it.
Of course, it depends on the amount of money needed...
Not to mention, I imagine greater than 90% of the customers would jump ship immediately. People have a perception of Google as gobbling up all their info and learning all their personal information, so I imagine quite a lot of them would be worried that Google would learn whatever secret they don't want exposed yet.
I've been using this service since shortly after an autoimmune attack ate away a chunk of my nervous system about 8 years ago and nearly killed me (technically, it did kill me at one point, for over a minute, before they revived me). I went from being pretty much like you blokes, lost in the details of living without seriously thinking about dying... but I mean, who doesn't in their late 30s? It's easy to get tied up in your day-to-day concerns until those days start becoming distressingly short. Thankfully, I survived that, and the five years of rehabilitation it's taken to get me back to some shade approximating "normal", though I'm still very functionally impaired and will never again be what I was... but such is life. Life is 10% what happens to you, 10% the available choices of your circumstances, and 80% how you CHOOSE to feel about it.
All that being said, it's important to remember that the only people who know how and where they're going to die are suicides and people on death row. The rest of us don't. Having lost many dear friends and family over the years, I can tell you all unequivocally that you should consider what you'd want to say to your friends, your family, even your most cherished rivals... because statistically, at least one in three of everyone reading this will die unexpectedly (heart disease, car wreck, lingering crotch rot from an unsanitary toilet seat, et al.). Another third of you will get a terminal diagnosis only a few months before you expire, because you put off "having that thing looked at like you've been meaning to" until it was too late.
I want that to sink in for all of you. Two of three of you won't have time for carefully crafting what you want to say, or to whom... because you'll either BE dead, or WILL be shortly. And what I'll bet most of you want to say isn't about where you buried Al Capone's treasure chest, or that you're the secret love child of Ronald Reagan and Janet Jackson... it's that you love them, that you treasure this and that shared memory with them, and that you wish for them to remember you a particular way, or perform a certain ritual in your memory and honor, should they choose.
Sure, this database wouldn't be hard to hack. But what's in there is almost completely useless to anyone other than the users' loved ones. Can you imagine how depressing it would be to read a hundred thousand dead people's communiques just to snag the one juicy bit about some senator being the illegitimate son of Prince and Madonna? Yikes!
All that being said, I've spent quite some time thinking on this subject, and the answer I keep coming back to for the "verifying you're dead" problem is that it should be handled by examining the customer's data footprint in some routine way. If any of you have ever done data analytics or doxxing, you'll know that there are distinct signs that start to appear when your subject has died or dropped deeply off the grid, even after a few days. It seems to me that such signals could be mathematically modeled into a set of software scripts and tied to the trigger, with a failsafe of primary and secondary inquiry designated by the client. I have a few other ideas along these lines that prudence dictates I keep to myself until they can be developed and patented properly.
If any of you would like to work with me on such a project, please contact me at dontfeartherepair@gmail.com
If I were an immoral, criminal-minded asshole, I'd probably create a service like this to get my hands on as many juicy secrets as possible and see what knowledge I could get. Maybe I'd find out some hidden business info that I could make money on, or maybe I could get leverage on somebody to threaten and blackmail them.
I'm not saying that this is that site's intent, but I'd be cautious about trusting them with anything particularly juicy.
This implementation is a bad idea to use because of the aforementioned security smell.
Another implementation would have to solve the problem of how to ensure message security while still being able to deliver the message at the right time.
The "right" answer for something like this is as you said, lawyers, but "get you killed" is... overly dramatic. Snowden is, after all, still alive.
If you don't want this to happen, I suggest taking more care to edit flamebait out of your comments. Leading with "Don't be so melodramatic," "That's so unbelievably false it makes me sad that you," "You stop", etc. guarantees poor results and therefore off-topic snippage.
Stop. Broken privacy tools have gotten people tortured and killed. Just because you can't imagine it happening to white dudes in the US and western Europe doesn't mean it isn't the norm pretty much everywhere else.
You were fine before your "Snowden is still alive" thing. I don't have an opinion about whether Stavros's site is a serious privacy application. Maybe it isn't. But other manifestly unqualified people have tried to deliver privacy to people and failed catastrophically.
It is definitely not a serious privacy application. It's meant for sending things like last goodbyes to loved ones, not state secrets. I will make this clearer on the front page, on a second read the "stored securely" part might be misread as "secure against the NSA" rather than "secured against a curious DB admin", thanks for the clarification.
I'd go pretty far in that direction, were I running this service. Something along the lines of "hey, I do my best, but anything that'd significantly impact anyone's life if it were disclosed accidentally or compromised is best stored offline, with information on how to find it being all that's actually set up to go out via my service." And maybe I wouldn't even be comfortable with that.
Actually, on further reflection, I wouldn't be comfortable running a service like this at all, for reasons amply detailed elsewhere in these comments - in short, it's a great way to paint a target on your chest for everybody from random scammers to state-level actors. I must admit I am considerably impressed by your courage in continuing to do so. Don't get me wrong - I think you must be a madman. But I can respect a very brave madman!
I added something here (https://deadmansapphr.appspot.com/help/), hopefully it's clear enough, but let me know if you have any feedback. I would hate to have people thinking this is secure against the NSA or anything.
It's clear as far as it goes, but as I said before, I don't think it goes nearly far enough. It's not just that messages stored with your service aren't secure against state actors - messages stored with your service aren't secure against anyone with the nous to compromise the host(s) on which it runs. And I hope you'll forgive me for saying that, given that you managed to overlook trivial CSRF for a year in the application's request handling code, I'm not entirely sanguine that your host configuration is strongly secure against any, perhaps even most, non-state actors.
That's not something I like saying, and I can't imagine it's all that pleasant to hear, either. I wouldn't say it at all if I did not feel it necessary, and I feel it necessary precisely because there seems to be a severe mismatch here between the gravity of any potential compromise of your service and the measures taken to prevent such a compromise from occurring. You know a lot more about the nature and scope of those measures than I do, of course, and it's possible I'm underestimating them here. I do hope that's the case. If it is, please accept my apologies for having spoken harshly where doing so was unwarranted. If it isn't, I hope you'll attend to that situation as best you can without delay. If you're not sure, then for the sake of your users, I hope you'll bring in someone with the capability to evaluate and resolve whatever security issues exist, and do so with as little delay as possible.
In any case, you've (perhaps accidentally) put yourself in a position that encourages people to invest a great deal of trust in you, and it seems that many people have done so. Had I put myself in such a position, I would not be comfortable remaining in it if I could not be entirely confident I had either done everything within my power to fulfill that trust, or removed myself from that position without betraying the trust I'd found myself ultimately unable to fulfill. But that's just my own evaluation, and perhaps you feel differently.
Sorry. I totally believe you --- but if I were you, I'd take extra care to make it clear that this isn't a security tool, just because of the nature of the secrets you're inviting people to vouchsafe with you.
I'm responding more to the indignant claim that casual security/privacy tools can't harm people. It's true: they are very unlikely to harm the kinds of people who read and write comments like these. Like I said, it was the allusion to Snowden that moved me to comment.
> if I were you, I'd take extra care to make it clear that this isn't a security tool, just because of the nature of the secrets you're inviting people to vouchsafe with you.
Yes, definitely. I need to spend a bit of time clearly communicating "it's fine for telling people you love them, but not that your multinational's CEO defrauded millions of pensions".
That's so unbelievably false it makes me sad that you, a respected member of the security community, are touting it like a clear and present danger to anyone reading this site.
You stop.
Edit: I'm being absolutist because it's easier, rhetorically, but if I have to be explicit, I'm saying that the realistic occurrence of murder as a result of privacy tool usage, one way or another, is low, to understate things. To talk about murder as a problem in cybersecurity is like talking about meteors hitting car windshields.
Are you suggesting it's "unbelievably false" that broken privacy tools have gotten people tortured and killed? A lot of us in the "security community" know this to be true, with specifics.
There's a paid subscription option, and its developer's comments elsewhere in this thread very strongly suggest people are using it in a serious fashion. If you're thinking of it as a toy, your threat model is bogus.
You can, and I think ethically must, take responsibility for the fashion in which people use the tool you make.
If you intend it to be only a toy, then you can, and I think ethically must, make that clear.
If people persist despite all warnings in using it in ways that might cause them harm, then you can, and I think ethically must, cease to make it available.
To do otherwise is dangerously cavalier at best, and incompatible with the minimal degree of responsibility which I would require of an employee or a colleague. Perhaps you feel otherwise. That's your prerogative. But, if so, I do hope, for the sake of any users you might have, that you aren't in a position to make similar decisions yourself.
I'm not talking about 'StavrosK's service here. I'm talking about your evaluation of threat models, which seems to include a great big exception around "well, if people aren't using it the way I want them to, then to hell with them."
Well yeah, actually that's how threat models are built: you can't protect users from themselves. If a user wants to store all their passwords in a greeting card website, in the little "special message to your loved one" field, there's realistically no way you can stop them.
Your greeting card website's threat model doesn't include this.
Does it? It seems like you'd need to argue that this notional greeting card website made some specific claim of security around the content of that field.
No, I'm done. The author of the tool has come out and explained things to you directly, and if you're going to keep playing games, you can do that with someone else.
Who's playing games? I've read the comments in this thread from 'StavrosK. You and he don't appear to be talking about the same service, and he's the dev. But if you don't feel any purpose would be served by your continued participation, that's your call to make, of course.
I'm not sure, but perhaps the parent poster's idea was that if you have secrets that would be released if you die, you might give people who want to know those secrets an incentive to kill you.
These seem to have nothing to do with each other, and if you're trying to imply they stole the name then you should know "dead man's switch" is nowhere near an original term. https://en.wikipedia.org/wiki/Dead_man%27s_switch