I think there are two types of destructive actions: destructive actions you do as part of completing some other task, and destructive actions you do for compliance reasons.
I think the problem arises when product designers aren't sure which case their data deletion function is for.
If you dumped a bunch of Social Security numbers into your non-confidential data-history-and-batch-job system, you'd want to be able to permanently delete them as part of the incident response, to reduce the number of people exposed to data they shouldn't have access to.
But if you delete some lines of code in your text editor, they can pop back into existence with the press of a key, because you were probably just moving them around, or running a buggy function without a piece of code you suspected was irrelevant in order to isolate the defect, and so on. Editors clearly understand that you don't kill text out of an Emacs buffer to solve a compliance issue, so they keep it around even though you technically asked for it to be deleted. (Imagine how crazy it would be if deleting text in your editor also deleted it from memory, the clipboard, version control, and your upstream VC server!)
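To make that distinction concrete, here's a minimal sketch (made-up names, roughly in the spirit of Emacs's kill ring, not any editor's actual implementation) of the "experimentation" flavor of delete: the text leaves the buffer, but nothing is destroyed, and one call brings it back.

```python
# Hypothetical sketch: a "delete" that is really a move onto a recoverable stack.

class Buffer:
    def __init__(self, text: str):
        self.text = text
        self.kill_ring: list[str] = []  # deleted spans, most recent last

    def kill(self, start: int, end: int) -> None:
        """'Delete' a span: remove it from view, but keep it recoverable."""
        self.kill_ring.append(self.text[start:end])
        self.text = self.text[:start] + self.text[end:]

    def yank(self, at: int) -> None:
        """Bring the most recently killed span back into existence."""
        if self.kill_ring:
            span = self.kill_ring[-1]
            self.text = self.text[:at] + span + self.text[at:]


buf = Buffer("keep this\nmaybe unnecessary\nkeep this too\n")
buf.kill(10, 28)  # try running without the suspect line...
buf.yank(10)      # ...and pop it right back when that wasn't the bug
```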
In the case of this GitHub issue, there was clearly a fundamental misunderstanding. GitHub probably imagined the feature as a compliance-type thing: get this stuff off the Internet as fast as possible, at any cost. Delete the list of people who even knew this thing existed! I can see why someone might want that. But the user thought this was just something they needed to do along the way to some other goal.
I have a feeling that product designers pick the compliance route over the "experimentation" route more often than not, not because they expect users to have actual compliance-type issues, but simply because it's easier to implement. The net result is that users are trained to fear computers and fear experimentation, and that's a bad state to have put society in. This article is yet another casualty of "if they said they were sure they wanted to delete it, we can just delete it." But it's actually pretty rare that people are sure.
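If I were sketching the alternative, it would look something like this (a hypothetical API, not anyone's actual product): the everyday delete is reversible, and only a separately named purge does the irreversible, compliance-style removal, so "I wasn't actually sure" has a way back.

```python
# Hypothetical sketch: default delete is recoverable; purge is the compliance path.
import time

class Store:
    def __init__(self):
        self.items: dict[str, str] = {}
        self.trash: dict[str, tuple[str, float]] = {}  # id -> (value, deleted_at)

    def delete(self, item_id: str) -> None:
        """The everyday delete: hide the item, keep it recoverable."""
        self.trash[item_id] = (self.items.pop(item_id), time.time())

    def restore(self, item_id: str) -> None:
        """For the (common) case where the user wasn't actually sure."""
        value, _ = self.trash.pop(item_id)
        self.items[item_id] = value

    def purge(self, item_id: str) -> None:
        """The compliance delete: gone from live data and from the trash."""
        self.items.pop(item_id, None)
        self.trash.pop(item_id, None)
```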