To clarify: There is not, and cannot be, a universal DissidentX encoding detector. It will always, always, be possible to iterate arbitrarily on the encoding technique, and this iterating is easy to do.
Every time someone iterates and releases the code on GitHub, law enforcement can then write a DissidentX encoding detector for that encoder. Every time you publicly release stego code, that stego code becomes ineffective.
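To make that concrete, here's a toy example (my own, not one of the bundled encoders): suppose someone publishes an encoder that hides bits in zero-width Unicode characters. Once the rule is public, flagging its output takes a dozen lines of Python:

    import sys

    # Hypothetical detector for a toy published encoder that hides bits in
    # zero-width Unicode characters. Once the encoding rule is public,
    # scanning for its output is trivial.
    ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\ufeff"}

    def looks_encoded(path):
        with open(path, encoding="utf-8", errors="ignore") as f:
            return any(ch in ZERO_WIDTH for ch in f.read())

    for path in sys.argv[1:]:
        if looks_encoded(path):
            print("possible stego:", path)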
At best, this framework provides a way for people to write stego encoders that they don't plan on releasing publicly. But you should say that! Warn people how dangerous it is to be releasing their stego code. And warn people not to trust any of the default encoders.
It's not as easy to iterate on stego techniques as you're implying. There are only so many ways to creatively hide a message. And if people happen to come up with a scheme which has already been broken in the past, then their encoder will provide no security at all. They'll trick themselves into believing they're secure, when they're not.
The endgame of published stego techniques versus detectors customized to those specific techniques is going to be much, much harder for the detector side than it is today, if not an outright loss. As for your claim that it isn't easy to iterate on stego techniques: seriously, go read through the docs before making claims. You're just plain wrong, and obviously so.
Can you elaborate? I don't see why this must be true. Just as good encryption is indistinguishable from random data, good steganography should be indistinguishable from whatever universe of target plaintexts you've chosen. In both cases the code is public, but the secret key is needed to see that the data is anything other than random bytes, or anything other than an ordinary plaintext.
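Here's a toy sketch of the role of the key, assuming a keyed-hash scheme (my own simplification, not DissidentX's actual format, and the carrier tweak here, appending spaces, is deliberately crude): each cover chunk is nudged until a keyed hash of it yields the right message bit, so the bits can only be read, or even confirmed to exist, with the key.

    import hmac, hashlib

    def keyed_bit(key, chunk):
        # One hidden bit per cover chunk, read out of a keyed hash of the chunk.
        return hmac.new(key, chunk, hashlib.sha256).digest()[0] & 1

    def encode(key, bits, lines):
        # Nudge each line (here: by appending spaces) until its keyed-hash bit
        # matches the message bit; roughly two tries per line on average.
        out = []
        for b, line in zip(bits, lines):
            while keyed_bit(key, line) != b:
                line += b" "
            out.append(line)
        return out

    def decode(key, lines):
        return [keyed_bit(key, line) for line in lines]

    key = b"shared secret"
    cover = [b"Meeting moved to Tuesday.", b"Lunch is on me.", b"See you there."]
    stego = encode(key, [1, 0, 1], cover)
    print(decode(key, stego))   # [1, 0, 1] with the key; meaningless bits without it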
Here's one way it might go down in practice. After law enforcement seizes your computer, they'll scan it for any encrypted containers, along with any code that looks like it's used for steganography. They'll find DissidentX, since its README mentions "steganography," which is a keyword their forensics tools will search for. Then they'll use each encoder in your DissidentX folder to scan your computer for any encoded messages. Unless the message is trivially short (<50 bytes), they'll come up with a list of suspect messages. This list will include any encoded message you've created using DissidentX, along with some false positives. Then, if you're in the UK, they'll have a judge demand that you cooperate; any plausible deniability you may have had is gone at that point. It's "cooperate or go to jail."
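A rough sketch of what that sweep could look like (purely hypothetical, assuming the decoders are standalone scripts that print whatever they extract; real forensics tooling is more sophisticated):

    import os, subprocess

    def sweep(decoder_scripts, root, min_len=50):
        # Run every candidate decoder found on the machine against every file,
        # keeping anything that yields more than a trivial amount of output.
        suspects = []
        for dirpath, _, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                for dec in decoder_scripts:
                    try:
                        out = subprocess.run(["python", dec, path],
                                             capture_output=True, timeout=10)
                    except subprocess.TimeoutExpired:
                        continue
                    if len(out.stdout) >= min_len:
                        suspects.append((path, dec))
        return suspects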
"Shannon provided us with a proof that such systems are secure regardless of the computational power of the opponent [43]. [...] Yet we still have no comparable theory of steganography."
The problem is that there's no such thing as perfectly secure stego (undetectable covert messages), even though there is perfectly secure encryption (unbreakable encrypted messages, regardless of the computational power of the adversary, when implemented correctly, and when not defeated via side channel attacks, and when not compelled to cooperate by a judge).
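For reference, the contrast can be written down. The first condition is Shannon's; the steganographic analogue follows Cachin's relative-entropy formulation, which is my addition rather than something cited in this thread:

    % Perfect secrecy (Shannon): the ciphertext C reveals nothing about the message M.
    I(M;C) = 0 \quad\Longleftrightarrow\quad \Pr[M=m \mid C=c] = \Pr[M=m]

    % Steganographic analogue (Cachin): the stegotext distribution P_S should match
    % the covertext distribution P_C; \epsilon = 0 would be "perfectly secure" stego.
    D(P_C \,\|\, P_S) = \sum_x P_C(x)\,\log\frac{P_C(x)}{P_S(x)} \le \epsilon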
I just read it, and you're extrapolating well beyond what that paper says. It does not say steganography is hopeless. It contains no mathematical proofs, and it's also from over 15 years ago.
More generally, "we do not have a proof" does not mean "it has been disproven." You also completely ignored my point about the secret key, without which the encoder will not work when an attacker tries to run it.
So can't you just embed messages in every file you own? Then, when asked for the keys, give out fake keys for the really, really secret stuff. Law enforcement ends up with a few sensitive documents and a whole bunch of random bytes, with no way to distinguish "actual random bytes" from "bytes decoded with the wrong key." And there is your plausible deniability.
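A toy illustration of that last point (my own sketch, a plain XOR against an HMAC-derived keystream, nothing DissidentX-specific): decoding an embedded blob with the wrong key just yields bytes that look random, with nothing for the examiner to verify against.

    import hmac, hashlib, os

    def keystream(key, n):
        # Toy keystream: HMAC-SHA256 in counter mode (illustration only,
        # not a vetted cipher construction).
        out = b""
        counter = 0
        while len(out) < n:
            out += hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
            counter += 1
        return out[:n]

    def xor_with_key(blob, key):
        # The same operation encrypts and decrypts; only the key decides the output.
        return bytes(a ^ b for a, b in zip(blob, keystream(key, len(blob))))

    secret = b"meet at the old mill at dawn"
    real_key, decoy_key = os.urandom(16), os.urandom(16)
    blob = xor_with_key(secret, real_key)   # this is what gets embedded in a cover
    print(xor_with_key(blob, real_key))     # right key: the secret comes back
    print(xor_with_key(blob, decoy_key))    # wrong key: random-looking bytes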
Obviously it's not perfect. Obviously a totalitarian regime which suspects you of dissident activity will pick any reason out of thin air to lock you up for as long as they like, or just execute you.
But being able to say "here are the keys," with them having no way to know whether those are all the right keys, is at least something.
Though of course, ideally you won't keep those files on your PC in the first place. You'd keep them on a microSD card in a tiny pouch under your skin. You'd keep them encoded in photos you've printed out and hung as wall pictures. You'd have them embedded in a well-torrented movie, backed up willingly by hundreds of thousands of people (though not you). And if you just use them to send encoded messages, neither you nor the recipient ever stores them on a hard drive.
I'm terribly confused by your argument. Who's talking about equipment seizure here? What does that have to do with the on-the-wire security of the encoded messages?
Your argument appears to concern only the risk in openly publishing encoders. Are you also arguing that Bram's framework encourages such publishing? If not, then what exactly is your beef with it (the framework)?
OK, so when you're talking about seizure of equipment, with all the tools and past encodes just sitting there on the machine for the taking, you're really far afield from the kind of argument you seemed to be making. I understand now why you assume the adversary is near-omnipotent here: you're assuming the user is a dolt who does most of the hard work of damning themselves for the state.
An effective dissident is going to employ reasonable opsec practices and have multiple layers of security; they're not going to be foolish enough to think that one program is a magic bullet.