I run one of the largest email sending services on the internet. I have been living the “mess” of internet email for over 20 years.
Here’s the thing: despite the internet’s email system being complex and confusing and riddled with problems, it is universally adopted and interoperable. No other open communication tool can boast of email’s massive interoperability.
Any new system will face the impossible burden of winning hearts and minds while the old system continues to chug along with billions in annual R&D spending and dozens of conferences full of smart people working on solving problems.
Even if “MX2” can peacefully coexist in the DNS, why would anyone spend the millions of dollars in engineering effort to move while their teams are busy building the latest layer that’s been invented to patch over the current system?
By all means if you want to propose a new email system, show up at IETF or M3AAWG and make a bold proposal. Someone will buy you a beer while they explain why you are much better off getting into the mud pit with the rest of us and working on the next pragmatic fix to keep things rolling along.
They would also school them on real-world problems in the process:
- You can't wait until you've received the entire body to compute a signature and validate the sender as your first line of defense. It is just too expensive and opens you up to DDoS attacks. People use IP reputation as a first line of defense because it is cheap, not because it is good.
- You cannot enforce people's behavior through RFCs. I can assure you that the random person at the next desk will not care about your "this is a top-posting thread" header and will bottom-post there anyway, even if they have to manually copy/paste things around.
- Likewise, auto-generated plain-text versions of HTML (or other rich-text formats) are no better than what screen readers can achieve. Most people won't bother writing that alternate version, meaning the obligatory alternative is now less useful than when it was optional and only people who cared included it.
- Your largest client may not update their e-mail infrastructure to comply with the latest standards. If that happens, you don't tell them to update, or else you'll stop hearing from them because their e-mails go to spam. You do whatever is necessary to ensure that their e-mails don't go to spam. Business always comes first.
1. Could a future protocol require an immediate initial message (a “hello”) stating exactly how much content will be sent, and until the “hello” is sent, it’s limited to, say, 128KB before the connection is immediately terminated? (And of course, if the content exceeds the declaration, termination and immediate IP temporary ban, safe to do as this is an obvious violation of a new spec?)
2. The goal is to make it easier for the email client which by itself will encourage good behavior. There’s also no requirement for the messages to all be in one massive blob.
3. The goal is that it would be automatically created by the client. For personal emails, this is easy. For enhanced HTML emails, that is where the requirement comes in. Email providers can come up with their own ways of enforcement from there (e.g. "if it's only one sentence, you obviously didn't do it"), though I get your point and that would become messy unofficial spec again.
4. Could a future email system have versioning, allowing the server to clearly communicate ("Hello, I implement MX2 v3.1.")? In addition, a business can obviously configure its own mailboxes so that old-format email alerts do not go to Junk in their business mailboxes - but they do know they'd better get on it or their messages to clients might go to Junk.
SMTP already has the BDAT command where the size is sent first, and arbitrary bytes can be sent (unlike DATA).
SMTP already has versioning through extensions.
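For reference, a CHUNKING session looks roughly like this (hostnames, sizes, and responses are illustrative):

    S: 220 mx.example.com ESMTP
    C: EHLO mail.sender.example
    S: 250-mx.example.com
    S: 250-CHUNKING
    S: 250 SIZE 52428800
    C: MAIL FROM:<alice@sender.example>
    S: 250 OK
    C: RCPT TO:<bob@example.com>
    S: 250 OK
    C: BDAT 8192 LAST
    C: [exactly 8192 bytes of message data]
    S: 250 Message accepted

The size is declared on the BDAT line itself, so the receiver knows up front how much data is coming in that chunk.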
If you're banning an IP for exceeding a processing resource limit please keep the ban short. Presumably you can afford to process the first 128KB of one bad message per six hours, for instance. There should be no need to make a month-long or permanent ban, and these just hurt interoperability if the sender realizes their problem and fixes it, or if the address is reallocated.
Trying to limit data between the hello and the email data is futile, since the attacker can just flood you with random packets no matter whether you told them to stop (closed the connection) or not. You can only limit things you have control over, mostly your own memory usage, and how much data is accepted into more expensive processing stages.
> 128KB of one bad message per six hours, for instance. There should be no need to make a month-long or permanent ban
As someone who has watched actual brute-force attempts, most bots abandon them after an hour or two. Resources are cheap, but even for spammers (who have nearly unlimited resources) futile attempts are costly.
> I can assure you that the random person at the next desk will not care about your "this is a top-posting thread" header and will bottom-post there anyway.
We should move away from having a single mutable body for email. It should be a series of immutable messages, each referencing the message it is replying to. Each message can contain a hash signed by the private key for the domain that wrote it. Then when you write your message it just gets appended to this chain.
How it is shown is up to the email client so that it can be done in the best way for the user.
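A rough sketch of what I mean, in Python, with HMAC standing in for a real per-domain signature (a real system would use something DKIM-like, signing with the domain's published key):

    import hashlib, hmac, json

    DOMAIN_KEY = b"hypothetical example.org signing key"  # stand-in for a real private key

    def make_message(body, parent_hash=None):
        # Immutable record: the body plus a pointer (by hash) to its parent.
        record = {"body": body, "parent": parent_hash}
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        # HMAC stands in here for a DKIM-style signature by the sending domain.
        record["sig"] = hmac.new(DOMAIN_KEY, payload, hashlib.sha256).hexdigest()
        return record

    root = make_message("Original question")
    reply = make_message("My answer, appended to the chain", parent_hash=root["hash"])

Each reply names its parent by hash, so the thread is a verifiable chain rather than a ball of quoted text.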
What you’re describing is already possible with email as it is, using the In-Reply-To header or whatever its name was. No need for cryptographic signatures. The only issue is that common mail clients still automatically quote the whole message being replied to for no good reason. It should work like it used to on phpbb forums: no quote by default, quote selected part if text is selected.
> The only issue is that common mail clients still automatically quote the whole message being replied to for no good reason.
Here is a good reason: In-Reply-To is a reference, not content. The recipient(s) of your message might not have that email.
Also including the quote is a default. The sender can edit it, splice responses into it and remove irrelevant parts of it. Admittedly quoting norms are in shambles though for various reasons.
> Each message can contain a hash signed by the private key for the domain that wrote it.
Me being able to prove that I wrote something is good. Other people being able to prove that I wrote something… it's good under many circumstances, but not in general.
We do it like Hacker News. It's just another message, with > indicators. In general, inline replies are (a) rare and (b) often used with prank intent (i.e. you can make it look like you're replying to something they didn't say).
I run several of the smallest email sending services on the internet. Been doing it for 26 years or so.
{rest of your comment ... verbatim!}
Email doesn't need fixing or a v2. I've run GroupWise, Exchange, Lotus (various flavours but never peppermint) and others too. For me, for little systems: Dovecot (IMAP) + Exim (MTA) is a golden combo.
Email_v1 itself is just complicated enough without being too funky. You bolt on TLS/SSL and the rest. MX records are a simplified form of SRV records (I think MX came first but the point holds) and remove the requirement for gateway load balancers and clustering technologies if you don't want to do all that stuff.
Nothing is perfect but email does deliver a lot more significant messages than any other method from before or since its inception.
If email didn't need fixing, spam wouldn't exist. There's more spam than there is legitimate email traversing the internet. That is a problem worth solving.
You could solve it with existing infrastructure to some extent, e.g. your email address is actually a cryptographically generated guid rather than something easily guessed or harvested. If you combine that with a background handshake procedure for introductions, so that all of your contacts get their own guid alias mapped to your canonical one, then you can revoke any of those if they get compromised at any time. Spam is effectively solved.
This is basically like the web of trust, but for email.
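A minimal sketch of the minting side, assuming ordinary SMTP stays underneath and using Python purely for illustration (names are made up):

    import secrets

    ALIASES = {}   # server-side table: alias address -> contact it was handed to

    def mint_alias(contact, domain="example.com"):
        guid = secrets.token_hex(16)        # 128 bits: not guessable, not harvestable
        alias = f"{guid}@{domain}"
        ALIASES[alias] = contact
        return alias

    def revoke(alias):
        ALIASES.pop(alias, None)            # future mail to it is refused as unknown

    alice_alias = mint_alias("alice")       # hand this out only to Alice
    # If Alice's alias starts receiving spam: revoke(alice_alias); nothing else breaks.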
> If email didn't need fixing, spam wouldn't exist.
Spam is not a technical problem, it's a societal problem. As usual the tech reflex is to find a technical solution, but it is mistaken, once again. Societal problems require societal solutions, not more tech. The only thing you will achieve with more tech is more segregation between those who have and those who don't; you'll create more issues than already exist.
Email being so cheap and easy is a significant component of spam, which is a technical problem to a degree. Spam on Signal, text message, voicemail, Discord, etc. is significantly less prevalent for various reasons (cost, complexity, etc.)
Being cheap and easy is not a problem, quite the contrary. That we as a society make this a good thing for spam is a problem, but I don't want to make the system shittier just because of the bad use of it.
My work email is virtually useless at this point due to the absurd quantities of spam I receive. I think the OP suggestions would actually make email less shitty.
Any communication medium that is cheap and easy will be relentlessly abused by spammers.
> I suppose snake oil salesmen were a technical problem too?
Snake oil salesmen were not created because of lax technical infrastructure. Spam is not a technical problem because scammers exist, it's a technical problem because the technology is what lets them reach you. The technology can then be amended so they can't reach you and spam is solved. I'm not purporting to solve scams, but to solve the technical mistakes that lets scammers spam you.
wut. I guess we just need to wait for a big enough CME and we won't have to worry about spam anymore; so in that respect, I guess you are right. Though it's like using a hammer on a window to screw in a picture frame.
> ...your email address is actually a cryptographically generated guid rather than something easily guessed or harvested. If you combine that with a background handshake procedure for introductions, so that all of your contacts get their own guid alias mapped to your canonical one, then you can revoke any of those if they get compromised at any time...
Here, you're kicking the problem further down the road, though, to another known attack vector: Directory Harvest Attack[1].
In this case, though, the directory (presumably) contains the guid mapping (which - by definition - would have to be a different guid than the object) and would have to process parsing these guids against the users. (This already occurs on recipient receive for some SMTP servers [just before BDAT/DATA] via the email address).
What would one bad email to an email guid do? Would it force rotation of the guid[s] throughout the entire forest? If so, how would that be communicated externally? How would you communicate it for just the one address, if you just changed the one guid?
Would you, instead, have to keep a guid history to check against -- or lose all of the email between possible compromise and the sender's database update? Would you just keep it in the Transport Queue, until manual intervention could check out email between the possible compromise of the guid and new mail would be received for the new guid? That wouldn't scale for large enterprises.
Keep in mind that nothing has to be sent for recipient validation to occur. The SMTP server[s] just respond[s] to the recipient block with the next step -- the caller doesn't have to complete the SMTP negotiation from this point; they already have confirmation of whether the addresses (even these proposed guids) are valid.
Tarpitting is somewhat of a viable option, here, but it isn't foolproof.
> Here, you're kicking the problem further down the road, though, to another known attack vector: Directory Harvest Attack[1].
Dictionary and brute force attacks don't work against cryptographic ids, so I don't see how this is relevant.
> What would one bad email to an email guid do?
I assume you mean, what would happen if you received a spam message and had to revoke a guid? First, revocation means the guid is no longer valid, so to any incoming message it acts as if the guid simply doesn't exist.
Second, the idea here is that every entity gets their own guid designating you, so the same guid is not known by more than one entity. This is the purpose of the handshake protocol during introductions. If A and B know each other, B and C know each other, and A and C want an introduction, B triggers the introduction protocol which mints new guids for both A and C that are then exchanged with each other. This can happen transparently without the user seeing what's going on under the hood. Revocation is just a mark as spam button, and introduction is triggered by CC'ing more than one person in your address book (introduction is the trickiest part).
So if A gets a spam message from C, you just revoke the guid sent to C and you're done, any message from C now acts as if A's address no longer exists. This doesn't affect any connections to anyone else.
If B's guid for A is compromised in some way, you can trigger the introduction protocol again to mint a new guid after the compromise is resolved, then revoke the old one.
There is simply no way for spam to gain a real foothold here: they can't guess ids, and if they somehow obtain someone's address book, those addresses are valid only for one or two messages at best before they get revoked. The revocation and introduction protocols can happen using the existing protocols in a few different ways, like by exchanging some message types that are not seen by the user. There are definitely some details still to work out but I don't see any real roadblocks.
The only real "problem" is that now all email addresses are effectively private, eg. no globally addressable emails, which is not great for business purposes like info@mycompany.com. You could of course keep running the old email system for this.
Email can be delayed ... for days, hours, even weeks. What if I set up a dead-man email to you, you revoke the id, then I die? Would you somehow magically receive my email for a revoked id?
Well obviously I wouldn't get an email at a revoked address anymore than I would get messages at an email account that I closed. If you want to set up a dead man email, then set that up with an address that isn't shared with anyone else, then there would never be a reason to revoke it.
I don’t see that really working. I regularly delete personal tokens off of GitHub, especially if they haven’t been used in awhile. I could see the same cleanup happening (or even being forced by disk space usage).
Anyway, I don’t think this idea would work with normal human patterns. At work, we regularly saw people opening emails years after we sent them. Hell, I’ve emailed people years after not talking to them. I just don’t see this working.
Why would disk space be an issue? Guids are 16 bytes each. Even if you have 10k contacts, that's only 10k guids your email server has to store. That's 160kB. What's the big deal? You get more spam than that daily. Why wouldn't you persist 160kB to never get spam again?
> work, we regularly saw people opening emails years after we sent them
So? There just really isn't a need to revoke anything until you receive spam on that address. Maybe we're just not on the same page about how this works. Here's a more detailed overview of what I have in mind:
That's ~15gb per billion contacts. There's an estimated ~2 billion gmail users, so we're talking 30gb just to have one guid per user, and you're suggesting multiple guids per user (unbounded). So, let's assume each user has at least two services, we're now at a 60gb table, and that doesn't even include a mapping between users and guids, which will probably double the table size even more.
At scale, you're probably looking at a multiple-terabyte table, right from the start, and spending compute-days, or even compute-weeks, just running migrations; just to get some dubious returns and a lot of additional end-user complexity.
> So, let's assume each user has at least two services, we're now at a 60gb table, and that doesn't even include a mapping between users and guids, which will probably double the table size even more.
That's literally nothing. As I said, each user gets 10x more spam than that daily.
> and spending compute-days, or even compute-weeks, just running migrations
Migrations for what?
> just to get some dubious returns and a lot of additional end-user complexity.
There is no additional user complexity.
Supposing your math is correct, each user has a relatively fixed but larger than normal storage overhead for their address book and an inbox that grows slowly because there's no spam, rather than a small but fixed storage overhead for their address book and an inbox that grows 10x-100x faster due to mountains of spam.
I just really don't think you're comparing the storage requirements correctly.
> ...the idea here is that every entity gets their own guid designating you, so the same guid is not known by more than one entity
Ok, now you're sending a list of guids that _can_ be emailed to, per negotiation? Otherwise, how are they sending to that specific guid? A guid is not a hash of an object but an identifier object (a 16-byte array, if I recall correctly) - it has to map to the recipient _somehow_.
In other words, in each SMTP exchange, that information would have to be stored in some form of look-up table, _somewhere_, on both the sending and receiving servers.
How do you enforce the senders destroying that table, so that many versions of it don't expose your half of the signature? Do you generate a new key per session? If so, where are you storing that key, in memory? How would you prevent the heap from exposing those keys in a process crash (say, where a dump is automatically generated - like in Windows)? How do you prevent a nefarious actor using A, B, or C from generating a flood of SMTP sessions and creating a tonne (yes, the metric kind) of these look-up tables in memory? What happens when back-pressure is hit? Do you force everything else to paging but keep the tables in memory?
I think you're overthinking it. For simplicity, instead of [human-readable]@mydomain.com, let's use [guid]@mydomain.com, a dynamic set of unguessable aliases for your account. Your guids that have been handed out are completely under your control and stored on your server.
There are no cryptographic keys to manage here, just cryptographically secure identifiers that are stored on a server.
If you and I had been introduced, you would have a guid@naasking-domain.com designating me in your address book, and I would have a guid@felsokning-domain.com for your address in my address book.
So revoking the guid you have for me is an operation that happens on my server and simply invalidates the only address that you have. This part is simple and why spam is easily stopped in its tracks.
The introduction protocol is the tricky part, because C and B would have different guids for A, so if B CC's A when messaging C, then there should be a way for C to resolve their guid for A. This is done via a petname system.
If C does not already have a mapping for A (and so doesn't know A), then it can request an introduction from B. C sends B an "introduce-me as C[guid-intro]" message with a new guid for C, and B then sends to A "here's who I call C [guid-intro]". Guid-intro is a use-once guid for introduction purposes.
A then sends to C[guid-intro], "hi, I'm A[new-guid]". C replies, "hi, I'm C[new-guid]". C then revokes guid-intro since it was used, and we're done. A, B and C each have their own guid addresses for each other. You can keep the audit trail of where you got a guid introduction in a database, but that's not strictly necessary for this to work.
This introduction protocol happens transparently to the user just by exchanging specific message types the server recognizes. It's a protocol that can be built atop SMTP just to manage the database of addresses that the SMTP server accepts.
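Here's a toy model of that flow (Python, names and details made up for illustration; it collapses the one-time guid-intro step into a single exchange):

    import secrets

    class Mailbox:
        def __init__(self, domain):
            self.domain = domain
            self.inbound = {}    # aliases I minted -> who I gave them to (revocable)
            self.outbound = {}   # contact name -> the alias they gave me to reach them

        def mint_for(self, contact):
            alias = f"{secrets.token_hex(16)}@{self.domain}"
            self.inbound[alias] = contact
            return alias

    def introduce(a, c, a_name="A", c_name="C"):
        # B triggers this; A and C each mint a fresh alias and exchange them.
        a.outbound[c_name] = c.mint_for(a_name)   # C's new address, known only to A
        c.outbound[a_name] = a.mint_for(c_name)   # A's new address, known only to C

    alice, carol = Mailbox("a.example"), Mailbox("c.example")
    introduce(alice, carol)
    # If Carol's copy of Alice's alias leaks, Alice revokes just that one entry:
    leaked = next(a for a, who in alice.inbound.items() if who == "C")
    del alice.inbound[leaked]

Revocation only touches the revoker's own table, which is why one spammed alias doesn't disturb any other relationship.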
> If you and I had been introduced, you would have a guid@naasking-domain.com designating me in your address book, and I would have a guid@felsokning-domain.com for your address in my address book.
The description, here, is no different than S/MIME encryption exchange - in that the guid exchange has to be done before it could be used.
You still have the issue of [A]Guid and [C]Guid correlating between themselves _and_ it being "unique" per SMTP session (after all, you said before that each guid has to be unique per session). This is where my earlier reference to generated guids being exchanged during the SMTP session comes into play. However, leaving that aside...
A single-use guid is no different than an SMTP address, if we're going based on the single guid inferred from that line -- and that guid has to be stored elsewhere in your system for it to be resolved to a recipient. So, you need something like a guid history array on the object for the forest to be able to resolve that guid (on recipient resolve) to a mail object inside your forest.
You have no mechanism (from your description) for handling B sending junk mail to [A]Guid or [C]Guid (assuming B has been able to discover those guids). You say you would invalidate [A]Guid or [C]Guid -- but this doesn't resolve the issue of [A] and [C] now having to re-exchange guids, for something that B has done.
So, now, all valid email between [A]Guid and [C]Guid is invalidated (per your description) and they're calling into your helpdesk, trying to understand why valid email isn't being delivered.
Do you tell them to re-exchange guids? How do they re-exchange guids when the mail system is dependent (directly) on those guids already being established on both sides? How do they "re-introduce" themselves, in other words, in that scenario?
> The description, here, is no different than S/MIME encryption exchange - in that the guid exchange has to be done before it could be used.
There are formal connections between some encryption protocols and what I'm describing here (effectively a system based on capability security, i.e. this is modelling spam as an access control problem for an unbounded set of actors). Basically encryption lets you do away with extra storage requirements for the guids, but the cost is additional complexity around key management and revocation, and more compute cost. I haven't thought about it enough to see if there's a formal correspondence with S/MIME, but my proposal is very simple so I don't think you need to try to understand it through that lens.
> You still have the issue of [A]Guid and [C]Guid correlating between themselves _and_ it being "unique" per SMTP session
No, these guids are not per-session, they are persisted in a user's address book.
> and that guid has to be stored elsewhere in your system for it to be resolved to a recipient
Yes, each user's address book contains the guid address for a contact just like right now it contains an email address. Just take the existing address book and make the emails cryptographically unguessable guids. If you and I have the exact same set of contacts, none of our guid addresses will match. That's literally it.
> you need something like a guid history array on the object for the forest to be able to
No such history is needed. I really think you're overcomplicating this.
> You have no mechanism (from your description) for B sending to [A]Guid or [C]Guid junk mail (assuming they've been able to discover the guids) using those guids.
I don't understand what you're trying to describe here.
> You say you would invalidate [A]Guid or [C]Guid -- but this doesn't resolve the issue of [A] and [C] now having to re-exchange Guids, for something that B has done.
If A and C have been introduced per the protocol I described, then anything B does has no impact on the relationship between A and C. If B sends them junk mail, the user (A or C) could decide to revoke B's access to them, or may opt to not revoke if they think it was an accident.
You could opt to track who introduced you and make revocation decisions based on that extra info too, but it's not strictly necessary.
> How do they "re-introduce" themselves, in other words, in that scenario?
In the case that a guid address has to be revoked but you want to keep the connection (perhaps the guid address leaked somehow), then the mail agent would have to renew the connection by re-running the introduction protocol before revoking the previous guid, or they would have to request another introduction through someone they both know.
This is as simple as having a "mark as spam" button, and when the user clicks it, it asks if they want to block the user entirely or if this was accidental (or something). If the former, the system revokes immediately, if the latter the system re-runs the introduction protocol using the existing guids to get new ones, then revokes the old ones.
> If email didn't need fixing, spam wouldn't exist.
Let's start with something easy - [1] define spam. [2] Can spam be identified with a purely mechanical (no humans involved) process?
Do you have a definition of "spam" that is substantially different from "mail that I don't want"?
It's also interesting that you assume that spammers can't generate new identities. (Yes, you seem to think that introductions solves everything, but it doesn't)
I ask that because "I don't want" requires mind-reading by the sender.
I don't think you've actually read the details of my proposal because the fact that spammers can generate new identities is irrelevant.
Nevertheless, to answer your question, spam is generally understood to be unsolicited email. The fact that computers can't read minds is exactly why they shouldn't try, and should instead simply remove the core mechanisms that enable spam to begin with: the easy ability to reach you via guessable and harvestable global identifiers, and the difficulty of changing a compromised address, which is what makes collecting and reselling addresses valuable.
Neither of these properties holds in this system. Minting new email guids is a trivial core operation that literally happens all of the time, and addresses cannot be guessed or brute-forced, therefore addresses have almost no value to spammers or brokers.
You're right - I thought that you'd made one fatal error (sender registries) when you'd actually made a different one (introducers).
> spam is generally understood to be unsolicited email
Not so fast. If spam is universally disliked and should be eradicated, it can't include all unsolicited email.
That's because people LIKE some unsolicited email. Unless you can distinguish unsolicited email that someone will like from unsolicited email that they won't like ...
FWIW, you come across like the "advertising should be banned" people. (That's another group that confuses the existence of a problem with the existence of a solution that doesn't have any downsides. They fixate on their solution and try to define-away its downsides.)
Advertising is a lot like spam. It is product information. If you don't care about the information, it is unwanted, but if you do.... The thing is, there's always someone who cares.
FWIW, every time I've physically met someone who claims to be anti-advertising, said someone has happily displayed several pieces of advertising for products that said someone liked. When I point that out, I get "that's different" or "I don't mean that kind of advertising."
> That's because people LIKE some unsolicited email. Unless you can distinguish unsolicited email that someone will like from unsolicited email that they won't like ...
Then they can opt into a service that sends them products they might like. Defaulting everyone to opted-in, which is the current situation, is always terrible. Spam makes up a huge fraction of mail sent over the Internet, the vast majority of people don't want it, and it's a huge security problem (phishing, viruses, etc.).
If Gmail implements this kind of protocol, then they can opt you into their advertising list as part of the sign up process.
Spam exists because email can be sent from any domain on the internet by default without requiring any validation.
The moment DMARC with p=reject becomes mandatory and enforced, a lot of problems will go away, because you will be required to "turn on" email for your domain with SPF and DKIM. In the meantime, every domain that has ever been registered is subject to being used for spam.
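For reference, "turning on" email for a domain today looks roughly like this in DNS (selector and key are truncated and illustrative):

    example.com.                       IN TXT "v=spf1 mx -all"
    selector1._domainkey.example.com.  IN TXT "v=DKIM1; k=rsa; p=MIIBIjANBgkq..."
    _dmarc.example.com.                IN TXT "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"

A domain with no such records has nothing asserting who may send as it, which is exactly the gap spammers exploit.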
I just want to thank you for your contribution to this thread. I've long thought email should go in a direction similar to what you're describing, and I appreciate the specificity you've provided.
I wish I could say I'm surprised by the animosity (and relative lack of substance) in some of the comments you've received, but I guess that's a problem with social media and/or humanity that's unlikely to have a technical solution.
Each attendee gets their own guid introduction as usual, just by scanning a QR code or something, just before or after the talk. The same basic introduction protocol works here, the QR code just names a guid address that has constraints on it (time-limited maybe, or that limits introductions by number of attendees).
Corner cases like this have solutions, and even if they're a bit more awkward, who cares? 99.9% of people who suffer from spam don't encounter these corner cases. Should a solution solve the biggest part of the problem, or prioritize making the corner cases easy?
Attendee? What year is it in your world? (Also, time-limited is stupid for this case.)
This isn't an edge case - people pass out addresses expecting to be contacted by randoms all the time.
And, most contacts are second-order or further, so introductions aren't even possible because there's no common link. (I pass on "you should contact {other person}" to groups that I don't control all the time.)
I get that you're proud of your introductions hammer, but you don't get to ignore the existence of screws.
Perhaps you're unaware that most public talks take place at conferences, which yes, still have attendees.
> (Also, time-limited is stupid for this case.)
Time-limited is perfectly fine policy for some cases. Maybe you have a different case in mind, but that wasn't specified in your scenario.
> This isn't an edge case - people pass out addresses expecting to be contacted by randoms all the time.
And? If you want to open yourself to a flood of possible spam, then create an address just for that and publish it. Nobody's stopping you.
> And, most contacts are second-order or further, so introductions aren't even possible because there's no common link
What does "second order" even mean if not "someone I know knows them"?
There's nothing stopping the use of public addresses, the point of the proposal is that most people don't need it, and the default public nature of email is what creates the security and spam nightmare that it is.
Which address are they buying? Every contact you know has a different guid address for you, and as soon as one of those is used for spam, you can revoke it. What value does such an address have to a broker or spammer? Addresses have value now because they are global, easily guessable and harvestable, and difficult for the user to change. The system I'm describing violates all of those properties, thus devaluing the whole spam enterprise.
That seems region specific. I don't get paper spam in Australia. I did get some for a week, until I put a "no junk mail" sticker on my box, which is respected here. It's less of an issue of paper mail and more of applicable regulation.
Protocols aren’t going to change that. Email has to be open or it won’t work. But being open means there is abuse. And abuse means there is reputation. And reputation… means you will be blocked sometimes.
Question is whether a new or updated protocol could force users such as "email providers" to change their behaviour.
Here is something to downvote, a series of questions:
What if email were "pickup only" _from the sender_. Not from some intermediary recipient, e.g., an "email provider". What if the recipient had to identify acceptable senders before they could "send" mail to the recipient. Arguably this already happens every time an email recipient gives out their email address to some email sender. What if the sender did not "send" mail but instead uploaded it to a host run by the sender and accessible by the recipient. Then the recipient retrieves the mail from the sender.
In the past, one large email provider had an RSS feed for a user's inbox. What if the RSS feed is not provided by some third party "email provider" but by _senders_. What if the feed indicates whether there is new mail waiting to be retrieved by the recipient. Arguably, something like this already happens, albeit using third party intermediaries, for example as millions of people use non-public webpages on third party websites to communicate with each other, instead of using email. Recipients check these pages for comments or messages from "senders".
A simpler idea that requires no changes to any email protocol, which I have tested successfully on home network, is for sender and recipient to be on a peer-to-peer overlay and run their own SMTP servers, like the original internet. The sender SMTP server communicates directly with the receiver's SMTP server, not a third party SMTP server run by an "email provider". Obviously, sender and receiver should not invite anyone onto this network who they do not know and from which they do not want to receive email.
The fundamental problem with email is that personal, non-commercial email is mixed with commercial email, mail that is selling something. That's beneficial to marketers, but not email users. Any change to email that threatens to exclude the unsolicited, commercial email will be opposed fervently.
> What if email were "pickup only" _from the sender_.
Congrats, you've invented DJB's Internet Mail 2000[1]. Definitely a good proposal for moving the burden of spam back to the spammers but I don't think anyone took the time to seriously consider it.
Pickup email makes one problem in spam much worse. It demonstrates that the address being spammed is valid and active. That's one of the reasons most email clients/hosts don't load remote resources by default.
Unless there was a google-scale grab-n-cache going on for those messages, I think there'd be a problem.
Here is a fun thought/question: Could it be that (using rss) only one of the two parties needs to remember/have login credentials for the other to be able to obtain them?
I doubt there will be a technology that can reliably exclude unsolicited or commercial email. How will the system know what's unsolicited or unwanted? It can make a guess, and that's what the big ones do. But it won't get better than this. I don't think there will ever be an alternative where this could be opposed.
As for unsolicited email, this is already taken on by the GDPR, and if you're a company that wants to sell its stuff in the EU, you pretty much have no choice but to adhere to these laws.
Software developers trying to manipulate computer users for financial gain make lots of assumptions about what people want without ever asking them. I am not a software developer.
As a replacement for current email only, you're probably right in that nobody would care enough to do the significant investments necessary in a reasonable timeframe, similarly to IPv6 (and the pain there is arguably even greater than for email).
As an alternative to something with the semantics of what companies currently use SMS for (i.e. having ostensibly higher reliability and security), I think you'd see a lot of interest.
The next phase of email should be for Federal and state governments to host it. We need to move it off of private infrastructure for essential services like payments and recordkeeping. There should be better penalties for misuse of email services and scams now that it handles so many important things, including authentication to many sites everyone uses. I wrote a whole rant on this a while back on my blog: despite objections, email could well replace the outdated physical mail delivery systems currently in place around the world. Vital communication, in the age of corporate surveillance and scams, needs to be regulated, if not outright run and secured, by more accountable government resources.
We have already seen how our personal information can be harvested and weaponized. Leaving important email services to strategically opportunistic companies like Google will not be wise into the future.
The Gmail engineering team is perhaps 500 strong based on various estimates that try to exclude team members who work on Google Workspace more broadly. At Google’s median pay of $300K, the cost is very approximately $150M/yr to engineer the stuff that makes Gmail work. That’s before accounting for infrastructure specifically required for engineering efforts, such as training models.
Let’s speculate that migrating to a whole new email standard would eat up 10% of the team for a couple of years, giving time to build and adapt a software stack that can run at Google scale on the new standard. That’s a $30M spend.
While this figure is peanuts for Google, consider that during the span of time that engineers would be transitioning to “email 2.0,” the full Gmail team would continue to work on just maintaining “email 1.0” and adapting all the new standards that continue to make the existing system improve.
Why would anyone at Google want this disruption unless email 1.0 was so broken that it was hopeless?
From the article, the new software stack is just a slight modification of the existing stack. I just don't see how it would take a few years and 10% of the workforce to add a few new steps to the existing pipeline.
Once implemented in some open source mail agents, everyone can just upgrade on their usual schedule, so it's not like every single person needs to spend millions of dollars to transition.
I got to present at M3AAWG several years back and it was a cool experience...but that is not a conference you get to just show up to. The only reason I was there is because I was invited and memberships are pricey.
Email is amazing and underappreciated. The federated social network that everyone keeps trying to invent has been there all along. If more people controlled their own email address, it would be nearly perfect.
It seems a lot like trying to fight against Javascript on the web. They are eternally linked at this point, and the only way to replace it is to envelop it with some sort of language polyfill for backwards compatibility - which is just yet another layer of abstraction...
There's a whole world of email mess that the article didn't even touch on: ambiguity, for example by repeating headers with conflicting values.
For example, you could have, in an email:
Content-Type: text/plain
Content-Type: text/html
Now, maybe a virus scanner will think that the email is plain text, but the recipient's email reader parses it as HTML.
Same potential confusion with Content-Transfer-Encoding.
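You can see the ambiguity with, for example, Python's stdlib parser, which keeps both values and lets the consumer pick:

    from email import message_from_string

    raw = ("Content-Type: text/plain\n"
           "Content-Type: text/html\n"
           "\n"
           "<b>hello</b>\n")
    msg = message_from_string(raw)
    print(msg.get("Content-Type"))      # 'text/plain'  -- this call returns the first
    print(msg.get_all("Content-Type"))  # ['text/plain', 'text/html'] -- both survive
    # A scanner keying off one value and a client keying off the other will
    # disagree about what this message actually is.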
And then there's the matter that headers are supposed to be ASCII-only, but what happens when you send header values containing 8-bit characters?
And then there is how email threads work. Most of the world uses "In-Reply-To" and "References"-headers, but Exchange, in its infinite wisdom, decides to ignore them and has its own proprietary headers.
Basically every aspect of email has very annoying problems, and every attempt at a version 2 needs to either ignore some of them (annoying those particularly invested in that problem), or reinvent the whole wheel, running into Second System Syndrome worries.
I missed that little detail, but even then, it doesn't matter. That just becomes a parse error and any half-decent parser can recover from that error. The important bit is that the spec doesn't specify how to recover, unlike html5, where it goes into great length to specify recovery so browsers are somewhat consistent. So, as the OC mentioned, some systems will recover by keeping the first one and some by keeping the last one, making it so various systems show different things.
Replaced a complete mess with a slightly less complete mess. And nobody calls them Atom feeds anyway, leading many people to still offer RSS despite it being the worse format.
and they are almost more powerful than email. Look how widely Discord is used, compared to email. Email might have more sites, but outside of corporate reminders, more communication happens on Discord.
A quick google seems to suggest around 350 BILLION emails are sent PER DAY. The statistics I could find show discord has between 0.8 and 4B messages/day, and those will be far shorter in content than an email. It’s two orders of magnitude behind.
Yes a good amount are spam, marketing, scams, etc. But that just comes with the fact that the platform is free and open.
Very strange to me to make such a barrier with corp communication. That’s communication too, and most of these corps now have things like Slack aka Discord anyway.
Meanwhile, if you can't use Discord, then you're locked out of all of Discord forever. And Discord is like the 10th gen of "hot new chat apps"; what do we do when it starts to die (because Discord will probably start dying in the next 10 years)? Just lose everything in all those Discord groups, like we lost ICQ and MSN Messenger and AIM and Skype?
And if Discord decides to ban you due to some kind of automatic flagging, you're suddenly unable to communicate with everyone who insists on using it as their sole means of communication.
It's drastically different from Gmail. If Gmail bans you, you can go and get a different email address (and if you're not naive, you already have your own domain name for email even if you use Google for it). And you would be able to send messages from that different address, even to Gmail users; and they would be able to send messages to you.
The problem isn't the federated nature, it's the combination of:
- loose standardization, and lack of proper versioning
- Postel's law, which is a recipe for disaster
Takeaway: if you want to design a federated protocol, use a "the server chose violence" approach and reject any message that is not perfectly compliant with the expected input for the given version (which must be part of the protocol itself).
Basically every issue with purported email alternatives that various parties are trying to shove onto me (usually companies I need to communicate with for work or customer support purposes) is a direct result of them being non-federated, or that purported solution being SMS.
"A federated protocol is a protocol (defined next) that makes it possible for servers to communicate with each other, regardless of who is running those servers."
Federation implies delegation from a central authority, which makes Mastodon and ActivityPub more "confederated" than anything. Inasmuch as MX records stem from the root servers, email is federated that way, but otherwise it's decentralized, with every MTA being its own authority. The line is fuzzy.
I think it's important to distinguish between application layer and lower layers of the stack here. For the lower layers (firmware updates for protocol changes notwithstanding) it is basically all federated, as you say. But at the application layer there are protocols that you can participate in just by knowing the protocol, and others which require knowing a secret or getting included in trusted peers or otherwise filtered to effectively make them proprietary and not federated.
> If an email has a rich, HTML view; it should be required to come with a text-only, non-HTML copy of the body as well; for accessibility, compatibility, and privacy reasons.
I can't imagine this working. The plain text section would probably be something like "Upgrade your email client to view this message" 90% of the time and be completely pointless.
Maybe it's an unpopular opinion but if I were rebooting email I'd forego HTML entirely and either say it's strictly plain-text, or use some Markdown-like formatting spec that looks fine even when viewed as plain text (email clients could provide WYSIWYG editors for less technically-inclined email authors). The evils of HTML in email (phishing, impersonating companies, and other scams) far outweigh the benefits (none, as far as I'm concerned).
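Roughly the kind of thing I mean; the same bytes read fine whether or not the client renders the markup:

    Subject: Q3 numbers

    Hi team,

    The *preliminary* Q3 figures are in:

    - Revenue: up 4%
    - Churn:   down 1%

    Full breakdown: https://example.com/q3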
And exactly zero of the people who send marketing emails are going to adopt your plain text email replacement. In the year 2024 we ought to be able to format text and add images to our communications.
For the folks who really want to RETVRN to the days of plain text, command line mail clients are still a thing.
My old marketing department would beg to differ. Plain text is easy, graphics and layout is hard. Further, all our emails were written in plain text first, then styled once we were 100% on the copy (so they could be sent off for translations in 20-odd languages).
> I'd forego HTML entirely and either say it's strictly plain-text
Inferior to typical print media text richness would rightly get rejected by most users.
> or use some Markdown-like formatting spec that looks fine even when viewed as plain text
No rich format can "look fine" when reduced to plain text. Reason being that the reduction loses information that the sender relies upon the receiver seeing.
> The evils of HTML in email (phishing, impersonating companies,
I've started reviewing some mail as plain text first. I've noticed some are exactly this kind of junk. The hard part of making a "new email" is that it needs to be substantially better than current options.
Outlook still supports RTF. (I have no idea what clients support that)
Any new format could also be included as a new content type.
For all the evils, I can't see any replacement markup being a significant improvement: What is the sender trying to communicate? Why is it beyond plain text? How do attachments not fill that gap?
I think the answer is: any client could choose to behave differently on the existing ecosystem. They currently choose not to. While an individual may think it's complex, the solutions aren't truly reducing complexity.
The main issue is whatever replaces email MUST be intentionally asynchronous. There are a 1000 email replacements. The problem is they try to be better. No one wants email to work better than it does from a performance or function perspective. Email is literally the last tech bastion keeping us from 24x7x365 work days as a matter of default procedure. Everyone knows this deep down and that's why they refuse to learn or adopt the better systems that already exist.
I think it's important to recognize the article does not suggest any obvious user-facing changes to how email operates; only technical ones that would hopefully fix the many pain points there.
All the behind-the-scenes failures are also a feature. Diffusion of responsibility, plausible deniability. No one needs "mgmt" to be able to nail their coffin easier than they already can.
99% of the world doesn't work in HN land of high pay for high expectations. Most places are trying to force people to work 100hrs a week for as close to min wage as possible.
Unless countries start enacting actual labor laws with teeth I stand by my statement that no one really wants or needs email to work better, there already are alternatives.
While I understand the unspoken rules and norms of text messages and chat messages are significantly different from email, there is nothing prohibiting one from treating chat messages as asynchronous. You can train recipients the same way you train a dog.
Yes, for me (as a recipient) it's about training the senders. Keep their expectations for fast replies low by developing a track record of not responding to texts for variable amounts of time, that sort of thing.
Email sucks, sure. But it sucks less than any of these alternatives:
- Corporate support chats (no way to export/capture a papertrail as a customer, usually horribly brittle)
- Proprietary/company-specific messaging solutions in my account on various sites (no notification channel for responses, often no papertrail for me either)
- Contact forms (unidirectional, not standardized, no papertrail for me as a sender)
- Push notifications (single-device only, no reasonable inbox management, headline-only)
- SMS (just no on so many levels, most importantly that I don't want everything to be tied to my phone/phone plan and that I can't own my phone number in the same way that I can own a TLD)
The EU (or Germany, I still haven't found out) mandates companies to have a support email address, and it's just so much more pleasant than the US pattern of providing only phone support, a horrible support chat experience, or a mailing address.
Get a few years on you and the Grind-set will fade as you realize you don't want the only thing in life to be work. You will be glad email still exists.
I don't personally think it sucks. There are bike-shed levels of improvements that probably any person on HN could dream up to make it more effective and work more reliably.
Precisely. All actual communication (family, friends, etc) has moved to text, facebook messenger, etc, in large part because of how crappy email is/has gotten.
> large part because of how crappy email is/has gotten
The funny bit here is that email itself hasn't actually changed at all (at least not in ways that affect personal communication). It's just that folks have become used to the intrusive instant gratification of push-based chat systems.
Well, for a start, there is no way of knowing if a message I sent actually got delivered.
Messages sometimes take minutes or even longer to come through for no apparent reason whatsoever.
Massive spam issues. Headers are trivially forged.
People can sign up other peoples addresses to mailing lists, and if that 3rd party is a company, good luck not getting re-subscribed to random stuff for the rest of eternity.
> Well, for a start, there is no way of knowing if a message I sent actually got delivered.
Every major email client has supported read receipts for a very long time now. It's just rarely enabled by default outside of corporate environments (I would guess in part because users find it invasive).
The person you're responding to is saying that email sucking is a feature for the user, preventing them from being always at work. That making email like Slack would make it less appealing, and that's why nobody wants to replace it with something quote unquote better.
This is an interesting thought experiment, but I really want e-mail to stay mostly as it is today, except with a nicer way for me to make sure SPF/DKIM/etc. are enabled sanely for small domains.
That is a very good suggestion and it is my highest priority as well, in terms of "thinking about improvements to email".
I do understand the attraction of highly specialized, modular and pluggable tools - so I know why OpenSMTPd does not have a built-in nameserver, spam filter or DKIM service. The architects are thinking about correctness and scalability and modularity, etc.
But Oh My God it would be so much simpler - and comprehensible - if there was just one package with just one config file.
If the only purpose of a particular nameserver is authentication tasks for a mailserver, we should consider moving the nameserver into the mailserver. The same goes double for dkimproxy.
You're describing new solutions like Stalwart and Maddy. Which try to automate away a significant portion of these issues and maintainability concerns.
What do you think needs changing in the standards, rather than simplified implementations? That is, implementations that are suitable for small- and medium-scale turnkey deployments.
Something clicked when he was saying that SendGrid and Mailchimp are clean, and it remained till the end.
Email was born like the internet itself: decentralized. But it is getting harder and more complex to run your own mail server, because you have to comply with a lot of requirements from the major mail providers. Big-scale servers and commercial mass-sending companies ("legal spam") can afford to comply with all those requirements. As for (illegal) spam and malware senders, either they don't care that they only reach a limited set of targets, or the reward is high enough to keep trying to trick the system.
So the smaller mail servers, with fewer users or less knowledgeable maintainers (if any), are getting pushed out of the game by both the bad and the big players; some just move their email administration to one of the big providers (and privacy and territorial requirements may be a problem there). What the article proposes is another change that pushes things in the same direction.
And, for good and bad, some of the present use cases may be harmed by this proposal too, like devices and other simple notification services, or mailing lists.
It's the same on social media and the Internet at large, until Elmo's tantrum accidentally popularized Mastodon. Sure, you don't have to do anything to be visible in a web browser (except you do, since you need a domain name and a Let's Encrypt certificate), but you have to obey Google's rules to be searchable at all, for example, and everyone's just browsing the same big websites all the time except when they click on an outlink, so you have to go to those websites and post outlinks to yours if you want any traffic, which is likely to get you banned for self-promotion even if it was something people actually wanted to read.
Basically everything's centralized now because bad money drives out good, and I don't have any ideas to fix it. It may be fixing itself to some extent as some CEOs keep banning their anchor users in strange tantrums.
The usual checklist for anti-spam ideas applies.[1]
Some minor fixes to implementations might help, though.
- Mail forwarders and the SMTP server side of receiving servers should, when possible, forward immediately, making a connection to the next node while holding the incoming SMTP connection open. Pass back errors immediately as SMTP errors if at all possible. Phone to phone emails should be as fast as SMS.
- Same for spam filtering. Reject at the SMTP level for invalid sender identification.
Gradually tighten up on invalid sender identification. In general, reject and pass back errors to the sender rather than delivering to a "junk" folder. This is useful for making the big services clean up their act. If Gmail gets hard rejects at the SMTP level for a lot of a sender's mail, Google will probably do something about that sender.
> Same for spam filtering. Reject at the SMTP level for invalid sender identification.
This is what Posteo (posteo.de) does by default* instead of accepting emails and putting them in a Spam folder. [1] It checks for a few different things and rejects the mail if the spam criteria are met. In my limited experience, many mail server admins of large organizations don’t even look at these email rejections to take any action. The result is that the mails keep getting rejected and the receiver doesn’t even know the context or content (by design).
* Posteo now has the option of accepting mail and putting it in the Spam folder too, if one desires this mode of operation.
Nice. That, plus the first item I mentioned, real-time forwarding, would mean that fails would show up immediately in your sending program, rather than a possible bounce message from a forwarder. Anything that can reject mail should work that way. The recipient's final IMAP server, which may have spam filtering, should also pass back rejections as SMTP statuses. So, if you get the "send completed" from your sending SMTP client, that should mean it's in the recipient's mailbox waiting to be read. This would improve the user experience for person-to-person email.
This is completely backwards compatible, so it's quite do-able as an enhancement. Most email forwarders today are old and from the store-and-forward era, and from systems where opening large numbers of TCP connections was a problem.
There's a sub-status for rejected spam:
550 5.7.1 Delivery not authorized, message refused
That subcode list hasn't been changed since 2003. A few new codes would be useful. I'd suggest:
550 5.7.8 Message refused - spam
550 5.7.9 Message refused - not compliant with laws in recipient's region. (CAN-SPAM, etc.)
550 5.7.10 Message refused - phishing attempt / hostile code
550 5.7.11 Message refused - previous messages from same sender also refused
Let the spam filters talk back that way to the delivery services.
It would be useful if mail delivery services noted such statuses and gave the sender a spam strike.
They don't have to pay attention to that info, but the better ones would.
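A minimal sketch of how such a rejection could look on the wire, using the hypothetical 5.7.8 subcode from the list above (domains are placeholders):

    C: MAIL FROM:<promo@bulk-sender.example>
    S: 250 2.1.0 OK
    C: RCPT TO:<user@example.net>
    S: 250 2.1.5 OK
    C: DATA
    S: 354 Start mail input
    C: (message content, terminated with a lone ".")
    S: 550 5.7.8 Message refused - spam

The sending service would then see the 5.7.8 status in its logs and could apply the spam strike to whichever customer submitted the message.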
> many mail server admins of large organizations don’t even look at these email rejections to take any action
Because, sadly, it's trivial to forge the return address and generate (many!) rejections to places that didn't send the email in the first place.
(I've only got a tiny email server and I largely stopped caring about bounce messages a decade ago for this very reason. I can't imagine the volume Gmail or Hotmail get.)
Thanks to the OP for writing this up. I'd love to see mail be rebooted.
A few thoughts:
1. I don't know if HTTPS is the best analogy, since it took 20 years for it to really become widely used (vs. non-secure HTTP). I think a better analogy might be the transition from HTTP/1.1 to HTTP/2 (or HTTP/3?).
2. I don't think we can rely on large providers (e.g. GMail) to quickly adopt a new system and even issue switchover ultimatums. A simpler mail system may not be in their best interest. Aiming for slower, organic growth could be more realistic. Even if a next-generation mail system never becomes dominant, it could still provide value. (Maybe Mastodon/ActivityPub is the analogy here?)
3. In any sort of reboot, I think a modern encryption system (something like Signal protocol?) would be a must.
4. Can we have a system where senders need to hold tokens to authorize them to send a message to a recipient? The whole "anyone on earth should be able to send you an unsolicited message" idea is ultimately the source of the spam problem. Messaging systems that rely on bidirectional agreement have much less of a spam problem. (Obviously this raises a slew of other technical and UX questions...)
I know that making too many changes is a risk. But I think that there is also a risk in making too few changes. (If the value proposition isn't sufficiently bold, this may hurt adoption.)
IMO, the linchpin of HTTPS accessibility was that we made cutting certificates easy and, most importantly, free. By reducing the baseline requirement for cutting a certificate to proving that the requester controls the hostname, and by providing an API to automate the renewal process, opting into HTTPS went from "administrative burden that needed a sysadmin to manage" to "service any hosting provider could leverage". LetsEncrypt and the ACME protocol, and Cloudflare before them, did much to radically change the landscape for making HTTPS accessible to everyone. [1] That makes the analogy pretty apt to me.
2. You make a very valid point here. By virtue of letting SMTP languish so long, bandaiding it as we went instead of opting for a major refactor of how email works, we made an environment that was ripe for consolidation. As a result, it makes sense that we might have to boil the frog to make a change.
3. I'd love to see it, but I'd be interested in learning how key exchange/trust would work, and in particular how you'd envision enabling new senders to send you mail.
4. This feeds back into my comments on 3. With established contacts, this is great, but in a world where, say, you were giving a talk, and wanted to offer folks that listened to your talk a way to contact you afterwards, how would you distribute the token? If someone abuses the permissions, how do you invalidate it? I can't foresee this being an all-or-nothing system, and don't really see a system where creating some sort of one-way or two-way trust between individual senders and recipients is at all feasible.
>In any sort of reboot, I think a modern encryption system (something like Signal protocol?) would be a must.
Signal doesn't strike me as a very suitable candidate. It is quite connection oriented and email is not. For example, it requires an online "prekey" server just to do encryption.
We already have two protocols for encrypted email (OpenPGP, S/MIME). I can't see how another incompatible protocol would help. It seems to me that the first problem to address would be the usability of encrypted email, which is quite poor. While we are at it, we should work on the usability of encrypted instant messaging, which isn't that great either, for most of the same reasons.
The problem with email isn't due to a technical problem with email. It's because:
(1) there exists a messaging system where you can message anyone else in the world without prior contact
and (2) it is an open standard anyone can implement.
So even if you made Email2, it would have the same problems.
People will violate any spec you write and Email2 will not fix that. Encryption isn't a thing because no one cares about it, and Email2 will not make people care. The most complex part about email is email authentication (SPF, DKIM, DMARC) and trust, but Email2 will still ultimately have to be based on "how long you've been around" because email is not intrinsically linked to a real identity that has been licensed.
> By simplifying the stack to the above, eliminating SPF, DKIM, and DMARC (and their respective configuration options), and standardizing on one record (MX2) for the future, running your own self-hosted email stack would become much easier. Additionally, the additional authenticity verifications would hopefully allow spam filters to be significantly less aggressive by authenticating against domains instead of IPs.
This assumes that the reason antispam tools make it hard to host your own email is because of spam. That’s only part of it. The other part is that deliverability is a cartel, and two of the biggest players, Google and Facebook/Meta, wish to be the intermediary between you and your audience and sell you access to their eyeballs.
Truly federated email allows you to communicate directly with your audience, bypassing apps and paid ads. This is a threat to their business models.
Additionally, domains are cheap, and moving the trust decision to domain from IP doesn’t get you much. Unknown/fringe domains will still land you in spam by default, same as non-deliverability-cartel IPs do today.
I was wondering about this as well. I think he is suggesting they report the spam to your domain registrar, who either does something about it or their whole cert chain gets blocked by default.
Yeah, I don't see how domains would be that much better than IPs, but it would be easier to understand. It's already quite common for email senders to use multiple domains when they're concerned about deliverability, and domain reputation is already a factor, I believe.
Re: deliverability, I do think incumbents are benefiting from the complexity of email, but I'm not sure I follow your argument about Meta.
Nobody would use Gmail if it didn’t deliver messages from Facebook, same as how nobody would use iPhones if you couldn’t install Instagram and WhatsApp on them.
Google and Meta are in the selling-access-to-eyeballs business. They don’t need to explicitly collude to keep others out of your inbox, their interests happen to be aligned here automatically.
2nd-gen email exists in the form of xmpp. Reuse HTML constructs if you dare (https://xmpp.org/extensions/xep-0071.html) or use simpler, safer markup (https://xmpp.org/extensions/xep-0393.html). Sign and encrypt with OMEMO with the assurance it hasn't been tampered with. Use the SRV records registered for it. Define multiple alternative content types if you want (https://xmpp.org/extensions/xep-0481.html). It's all there for you to take and improve on without rewriting everything from scratch.
One thing I recently realized is how many protocols are basically the same but ended up in different fields of application by accident.
SMTP can transfer arbitrary blocks of text data from a@b to x@y. Usenet can broadcast arbitrary blocks of text data with just a few mandatory headers, so why don't we use NNTP for blogging? You can put mails in your own outbox via IMAP, where the mail server can pick them up and send them. Why don't browsers load HTML via FTP? ActivityPub is basically SMTP if it was JSON (it's so much SMTP that it has a field for 'envelope sender' which was previously thought to be a specific quirk of SMTP), so why don't we use it for email? Or why don't we use SMTP for chatting? Actually, wait, Delta Chat literally does that. Or why don't we use XMPP for email? Or, I don't know, Apache Kafka or Redis. A Kafka server allows you to publish arbitrary messages and stores them until they're retrieved, which is also what a mail server or a chat server does. The only real difference is how authorization works. If you run IRC over Kafka, or IRC over NNTP, you won't lose messages when you disconnect - sounds pretty sensible. Actually what is the difference between NNTP and Apache Kafka, anyway? You get the idea...
Totally agreed. We as a profession do not learn what existed before, only what the newest players are doing, and it's tiring. We value creating instead of reusing and maintaining, as seen in the extreme in the evaluation process at Google. We have Show HN for brand-new stuff and not Remember HN for past, proven technologies put back to use.
Maybe that's the root of the problem. Making technological toys for fun is absolutely ok, but when we set out to solve people's problems we should probably take a look at old ideas first.
Ah yes. Which version of OMEMO are you going to use? Almost all Jabber clients (that have had any update in the last 4 years) use different AND INCOMPATIBLE versions of OMEMO.
There's the original OMEMO version from the Android client Conversations and the Gnome client Dino, the newer OMEMO:1 which is used by the Windows client UWPX, and the newer OMEMO:2 that is used by the KDE client Kaidan. NONE of the different versions can even talk to each other.
I have Jabber clients that talk the protocol from BEFORE it was standardized as XMPP, and they can still talk just fine with a client from this year. However, a client talking OMEMO from 4 years ago cannot talk to a client talking OMEMO from this year.
I join the sibling commenter in saying "PLEASE, NO MORE IDEAS WE HAVE ENOUGH".
> I join the sibling commenter in saying "PLEASE, NO MORE IDEAS WE HAVE ENOUGH".
Funnily enough, that was my comment :)
I agree with you if email is enough, and I tend to believe it is enough for much more than we use it for. Deltachat shows it is possible to use it for direct messages and, thanks to webxdc, enough for collaborative pads and even games. xmpp is the step beyond if you want to add retrievable content, thanks to pubsub, but if it's not needed we should focus on making email more usable as a transport protocol.
As you can see, practically all clients are on the same version (0.3). The outliers are UWPX and Kaidan, which both took the unusual decision of being incompatible with the rest of the ecosystem without working on the upgrade path.
UWPX is not actively maintained (no releases for a couple of years, and the Github repo is archived).
That leaves Kaidan, which is (as far as I know) actively maintained and they deliberately chose to be incompatible with the rest of the ecosystem (for now). I agree it's an unusual choice, but the XMPP ecosystem is diverse and not centralized and nobody can control what people want to do with their time spent working on their open-source projects.
EDIT: To clarify some things that are probably non-obvious to people outside the XMPP sphere, the link I gave shows Prosody and Mongoose as supporting OMEMO 0.8.3. They are not clients, but servers - they are compatible with all versions of OMEMO and so are not relevant when it comes to interoperability. And QXmpp is the backend library of Kaidan, by the same developers.
How is this false? The clients I mentioned cannot talk with each other, and your page is showing exactly that.
You then say UWPX is discontinued as an excuse, which I could accept... if it weren't for the fact that the problem is it implements a _newer_ version of the protocol than most clients in your list! So it's not "incommunicado" because it is too old, it is incommunicado because it is too new! Why would the time since the last release have any effect, then?
Kaidan actually implements what appears to be the _current_ version of the spec, so if anything the problem is going to get even worse over time, not better.
Is there _any_ client which implements multiple versions of the protocol so that it can actually talk with clients from multiple eras at the same time? Otherwise this is literally worse than Matrix, and that's saying something.
No, the other clients apart from UWPX and Kaidan (which for obvious reasons very few people use) implement the same version of OMEMO. They do this because, contrary to what you're implying, developers in the XMPP community care about people being able to communicate with each other successfully :)
The upgrade path for any E2EE protocol is generally non-trivial, and we're not quite there yet with OMEMO. It's something we are working on, as a community, which is why you'll see the transition to newer versions happen across all clients at roughly the same time.
The new spec is being worked on, for example some of the more recent changes were specifically to make the upgrade path smoother. There may be more tweaks to make before we're ready for the first clients to start the transition. This kind of thing is not easy, though I appreciate that it may seem that way when you're looking on from the outside.
> No, the other clients apart from UWPX and Kaidan (which for obvious reasons very few people use)
A funny thing to say for the 2nd and 3rd most downloaded XMPP-specific clients (from the MS Store and Flathub respectively). Certainly this does not include distro data, but Kaidan is also #1 result on Google when I search "KDE XMPP".
> which is why you'll see the transition to newer versions happen across all clients at roughly the same time.
That's precisely what I'm not seeing. Whenever I try to update my Jabber ecosystem, there's always some incompatibility, and lately OMEMO has always been at the center of it. Example from a couple of years ago: Conversations deciding to remove libotr quite early in the OMEMO story, and other clients later following suit.
Now I try to update things again and see there's yet another breaking protocol change. It doesn't help that you say the new protocol is not ready yet when it is already published (as a draft) as a XEP and implemented by multiple popular clients.
I'm sure they had "good" security reasons to move forward, even if most users couldn't care less about it (same as what was argued when OTR was dropped).
Then ask the Kaidan developers why they chose to be incompatible - it was a deliberate choice of theirs, they knew what they were doing and what the result would be. It's a surprising decision and not one I, nor (demonstrably) other client developers, would have made.
In most software projects the developers have ultimate freedom on what to implement and how. But nobody is an island in a federated open ecosystem, and there are more responsibilities. In this case the Kaidan developers chose to ignore interoperability and that resulted in a poor experience for Kaidan users. I don't use KDE, so I'm sorry if there are no better options for you.
I don't know what point you're trying to make here. Obviously developers will choose to break compatibility every other day for very flimsy reasons -- I have a million examples, including "Why did Conversations decide to remove OTR support?" Or even "Why are you forking software at all in Snikket?".
My problem here is that XMPP encourages this by adopting a _breaking change_ to a protocol. To my knowledge this is not a client deciding to go out of its way to break compatibility, it's a client deciding to implement the current version of OMEMO and as a consequence ending up incommunicado. If you look at the most recent version of popular XEPs you'll end up with a client that can talk to nobody. This is not a good example of stewardship and doesn't look good as a candidate for replacing email, a protocol that must last for at least half a century. Core XMPP did look like a good candidate, but the chase for the shiny has corrupted this protocol as it has countless others. Same reason we're down to basically only two workable Jabber server implementations.
Arguments like "oh don't worry, we'll smooth things over by updating all clients in tandem" just don't make me see things better, because I know by experience that it never happens in practice.
"Self-hosted email is not very popular in part because of the complexity of the current email system, so between Microsoft, Google, Amazon, Zoho, GoDaddy, Gandhi, Wix, Squarespace, MailChimp, SparkPost, and SendGrid – you have most of the email market covered for the US; anyone not in the above list would quickly fold."
In my opinion the last thing the world needs is a system that is so complex that only a few people can implement it. I would go farther and take the position that any email system that cannot be self-hosted is not worth the effort.
I am much more interested in a conglomeration of selfhosters.
This completely ignores the issue of trust, which explains why the market is so concentrated. In the current system, there are no financial penalties for spamming (the way there are for, say, landline spam in Germany).
Some great thinking here, and the kind we need. For all the bellyaching about HTML and email, the real problem is that nobody has bothered to build a standard that everyone agrees to.
The gradual approach is a smart one, too.
I think AMP for Email has some great ideas but bad branding. It could be a useful starting point for this discussion.
Are you familiar with what AMP for Email is about? It’s dynamic content. That’s all. And that’s something that has no place in email.
AMP for Email is dealing with a completely different problem from the one discussed in this article—one that no one asked to be fixed, and which few people even agree is a problem (and they’re all trying to sell you something).
One additional point. This gatekeeping about what email should or should not be is just too much sometimes. It is the very reason why email has been in stasis for so long.
Anyway, you understand why I brought it up, right? It is an attempt at a standard in email that already exists. When we are talking about improving email, highlighting existing work is useful. That means there is something tangible that can be used to improve the weaknesses in the current model.
Additionally, he brought up HTML email in the post. AMP email is an attempt at standardizing HTML email. That is relevant to what he wrote.
It was Google trying to shove through something they’d invented, using their market power as leverage. You don’t make good standards like that.
It was also not at all about standardising HTML email—it didn’t improve anything in that way, except insofar as the AMP part being chosen implies that the client has decent HTML support. AMP for Email is purely about dynamic content in emails.
And the very way that each provider that supports AMP for Email has required whitelisting of each sender shows there’s something extremely rotten about the entire thing.
The best thing you could do to accelerate the rollout of MX2 (or any email replacement) is to have Congress bless it as a secure channel for Protected Health Information.
- Redoing the world is a non-starter: it abandons compatibility, avoids solving the problems that exist now, and instead recreates the same problems all over again.
Some good ideas in there I think. But I think unless you can shoe-horn them into the existing MX record and get piecemeal buy-in, the results will be quite similar to what we see with IPv6 vs. IPv4. No?
email has already been sufficiently captured by the big guys that if they chose to support this, there would be buy in. and without their support it's dead.
if outlook and gmail announce that emails that aren't MX2 get ranked more harshly in their spam filter, everybody will adopt it. if they don't do that, nobody will.
Seeing how we got a brand-new and very shiny `HTTPS` RR type instead of using the existing `SRV`, I'd say the hope for improved email is not zero. Still very close to zero, but not zero.
I don't see the problem with implementing it via the MX2 option. That makes backwards compatibility a lot easier than messing with the existing MX option. It also means if the proposal goes over like a lead balloon it at least won't have made the existing situation worse.
Good luck convincing Microsoft to implement anything you're suggesting in Outlook though. Or for Google to add it to gmail.
I think the best analogy here might be WHATWG and HTML5. Instead of creating an entire new and expanded 'second system' (as the W3C was trying to do with XHTML), the existing major players in that field created something that was a much more strictly defined standard, carefully designed to be forwards and backwards compatible with the existing mess, with well-defined behaviour for non-conformant content, and then started building on that new standard.
The big players in email are now in the same situation as the big browser vendors. If they defined a strict subset of the existing body of de-facto email standards, critically with well-defined behaviour for non-conformant content, and then blessed that as email 2.0, they would then have something well-defined and workable to build on.
This might include mandating a restricted subset of HTML5 for HTML content, a canonical transformation of that content to plain text for interoperability, mandating plain-text email as acceptable (perhaps with a canonical transformation to HTML), the use of SPF, DKIM, etc. with specified defaults, SMTP with specified features enabled, etc., etc.
Then what is effectively a well-defined profile of traditional email becomes the new (forwards and largely backwards compatible) well-defined email system, and we can all move forward from there.
But to do that, there would need to be the will to create an email equivalent of WHATWG.
> countless surprise instances of legitimate emails going to Junk
I think the definition of "legitimate" is in the eye of the beholder.
Some folks believe that their "p3n15 extension" email is legit.
Hard-core spam has wrecked the internet. There's a lot of money in it, so these moneyed people will find a way to bypass anything, because services will spring up to sell exactly that.
I have an email address that receives mail on 3 domains. Only one of them is "legit," so any email that comes in on the other two, is automatically spam-canned. Apple won't let me turn off the other two domains (me.com and icloud.com), so I have to set a rule in my client to can them.
They get hundreds of spam messages (many of them scams) each day. These are just the ones that got past the "low-hanging fruit" filters on the email server, so I assume I actually get thousands each day.
I was just talking to someone today, who got his PayPal account pwn3d; probably by following the directions in a well-crafted phishing email. These things work, so no one will stop doing it; especially since the cost is so low, and the ROI is pretty good.
The only thing that is likely to stop the spam tsunami, is a cost to send. No one wants to do that.
As someone old enough to remember early 90s email, before wave after wave of noise and the myriad “solutions”, I find it hard to accept that we can’t do better and somehow discover email 2.0. Especially now there are so many other commonly used services/protocols for things that are “not email”. I can’t help but think that somehow recipient identity and preferences could form a part of it.
> If an ancient, 20 year old email client, tries to send a message – it finds the MX record and sends the message just like normal.
Maintaining back-compatibility: good!
> From there, the email services which implement MX2 would publish a public date, on which all messages sent to them by the old MX record, will be automatically sent to Junk.
And the idiocy kicks in two sentences later: IPv4/IPv6 anyone?
Trying to wrap my head around how bulk email providers (Salesforce, etc) would operate in this? I guess you could give them a subdomain of your main domain and have a separate MX2 record. This might help silo off newsletter, transactional, and actual person to person messages like many already do in the current environment.
It's a little confusing, but my idea is that there would be multiple MX2 records for every authorized sender's key. One of those MX2 records would have a marker on it for incoming mail.
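Roughly something like this, as a sketch (the MX2 record type and its fields are entirely hypothetical, nothing here is standardized):

    example.com.       IN  MX2  10 mail.example.com.   key=ed25519:AAAA...  inbound
    example.com.       IN  MX2  20 crm.example.com.    key=ed25519:BBBB...
    news.example.com.  IN  MX2  10 bulk.example.com.   key=ed25519:CCCC...

So a bulk provider would get its own key (or its own subdomain, as you suggest), and receivers could score or revoke that key without touching person-to-person mail from the main domain.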
Ah, gotcha. Feel like I've got the existing MX stuck in my mental model. Honestly, with all of the existing DNS-record-based "authentication" with SPF, DKIM, and DMARC, it's already a mess for bulk senders too lol.
The above situation is acting like a pseudo-2nd-gen email standard that's evolving in slow motion (without us officially calling it "gen v2.0"). Those email policy changes will be adopted because big cloud email providers like Gmail and Yahoo have a massive influence on the entire email landscape.
Therefore, a new hypothetical "MX2" standard that was more coherent with better authentication, anti-spam, and anti-phishing features could be promoted by a consortium of Google/Microsoft/Yahoo/Apple. The smaller players like Fastmail, ProtonMail, Mailchimp, Sendgrid, etc and most everyone else would all have to follow the cloud email providers' lead because everybody wants to be able to send email to them.
A solution that is 100% interoperable with the existing widely deployed solution but also offers substantial advantages can have good chances to coexist with the old solution for years if not decades, supplanting it only slowly.
Examples of success: monochrome TV -> color TV, landline phones -> mobile phones, the Windows 3.x -> Windows 9x -> Windows NT/2k/XP transition.
Email has a really well-working transport layer. The UX layer can see some innovation while staying compatible.
I found your link interesting and informative. I'm curious if you have any specific comments on this quote from the story:
> From there, the email services which implement MX2 would publish a public date, on which all messages sent to them by the old MX record, will be automatically sent to Junk. If just Microsoft and Google alone agreed on such a date, that would be 40% of global email traffic.
Do you think this change wouldn't be enough? If so, why not?
MX records don't send messages. Assuming that “sent to them via SMTP” is meant… well, moving all messages to ‘junk’ isn't a good idea: it needs to be restricted to messages sent on or after that time. But why not just respond with “554 No SMTP service here” on opening the connection?
My main requirement is verification of the sender. One way to do that could be to send only a link to the message and have the receiver request the data from the sender's server. Then the receiver could cut it off if the file is too big. (Maybe notify the end user of the undelivered mail and they could retry if they're OK with it.) Not sure how long a pending message should stay on the sender's server.
Another idea is to have the payload be a zip file. The end user could then have apps to process different content types. Only the "email" type would get processed by a traditional email client. Attachments would just go in a folder under the email message.
Just thinking out loud. An authenticated asynchronous method of sending "stuff" including "email" messages.
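As a rough sketch (every field name here is made up on the spot), the "notification" pushed to the receiver might be nothing more than:

    {
      "from": "alice@example.org",
      "to": "bob@example.net",
      "size": 48213,
      "fetch": "https://mail.example.org/outbox/3f9c0a",
      "expires": "7 days",
      "signature": "..."
    }

and the receiver decides whether, when, and over which connection to pull the actual content.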
It's all internet-centric. You can't download from the sender if the sender is an onion service and you're getting the email through an onion-to-clearnet relay, even if the sender address is correct.
And MIME is already like a zip file but different.
Not addressed by the article is SMTP. Is there still a need for store-and-forward in the modern internet? I think that a lot of the current headaches with mail management are because receivers bear the cost. It made sense at a time when availability wasn't guaranteed. But server uptime now is much higher than it was in 1982. And although the original plan for email was that any machine could send a message to any other machine, it has since evolved into endpoints only ever connecting to a designated server. Rather than that server having to push all mail content around, it could hold the messages and post notifications to the destination, then release only after delivery has been accepted.
> It made sense at a time when availability wasn't guaranteed.
It still isn't guaranteed. There's any number of reasons two MTAs can't talk to one another. Then there's more reasons an MTA can't talk to an MDA.
> Rather than that server having to push all mail content around, it could hold the messages and post notifications to the destination, then release only after delivery has been accepted.
IIRC this was one of the ideas behind Internet Mail 2000[0]. The open questions on that page are actually pretty good arguments against such a system. Not that e-mail is perfect but an open message sending system has a lot of important details to get right to work properly.
If they keep the original From: header they would not work, as the sending (mailing list) server would look like a forger, so the ML software would have to rewrite that header.
If you wish to keep original From: headers, then ARC would have to be incorporated into this proposal:
I like what the author is saying, but I want everyone to keep in mind that how we send email is independent of how we read/create email. You can solve issues related to one while not dealing at all with the other.
For example, something I'd love to see is SMTP2. What can we do to improve SMTP and make it faster? Can we require TLS? Can we standardize and lock down sender authentication? We wouldn't necessarily need a separate DNS record, because it could be handled in a fashion similar to HTTP and HTTP/2 and use the same ports.
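Purely as a sketch of the "same ports" idea (the SMTP2 capability keyword is invented here, it is not a real extension): the server could advertise it in its EHLO response the same way PIPELINING or STARTTLS are advertised today, and capable clients would switch over while everyone else keeps speaking classic SMTP.

    C: EHLO client.example
    S: 250-mx.example.net at your service
    S: 250-STARTTLS
    S: 250 SMTP2
    C: STARTTLS
    ... TLS handshake, then the peers switch to the hypothetical SMTP2 framing ...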
This new protocol could be tied to a new standardized email format, if that's what people wanted to do.
It could, but why change what isn't broken? Email messages are quite agnostic to how you deliver them, though. IIRC Google and Microsoft use a proprietary protocol with each other. Outlook uses a proprietary protocol with Exchange.
Email used to be transferred through a non-internet packet-switched point-to-point network of intermittently connected dial-up links. The fact that the email messages are agnostic to how they're transmitted is actually very cool, because it enables switching to future technologies without any changes to the user agents, and vice versa. You put a file in your outbox and it magically appears in someone else's inbox later. Originally mail readers had to be executed on the mail server and read directly from your inbox directory, then we created new last-mile protocols so the reader could run on a separate computer (which in my understanding is basically identical to a Fidonet "point").
You can easily imagine a mail server connected to an onion router, or to Yggdrasil or some other meshnet, or even to whatever remnant of the original UUCP net someone is still running for fun. The current design enables this sort of thing.
If these are basic questions, pardon my limited knowledge of email-transport internals.
- Maybe the transport is better than HTTP, but HTTP is well understood and easy to debug for developers (debugging a web app is easy). Similarly, a JSON payload would bring more structure than the current way of creating boundaries. Moving to HTTP+JSON would allow far easier access for developers who want to build on top of it, or self-host. Get a domain, run a web app, set a few records and you are done.
- Put a file in the outbox and it gets out. From my Outlook experience, it seems to be a cron job that scans a particular DB query. Should still be possible.
- I have no idea on how onion routers etc work so won't comment on that.
- And lastly, if Google <> MS and Outlook <> Exchange use proprietary protocols, then I will read that as an indication that the current standard needs improving.
My vote for the most important improvement to SMTP would simply be pipelining. It was designed in a time of nodes with low memory connected by links that are fast relative to everything else, and therefore, it uses a turn-taking command/response paradigm where each side keeps having to stop and wait for the other. "Hello." "Hello." "user1@mydomain wants to send a mail." "Okay, continue." "He is sending it to user2@yourdomain." "Okay, continue." "He is sending a copy to user3@yourdomain." "Okay, continue." "Here is the data." "Mail accepted."
(at least there isn't an extra step after the data to say "that was all")
A redesigned version would send all parameters at once, and then get a single success or fail response at the end. Perhaps one pause could be useful to validate the headers before the main body is sent, but only for large messages (e.g. with attachments). "Hello, user1@mydomain is sending to user2@yourdomain and user3@yourdomain. Here is the message. Bye." "Hello. Sorry, user2@yourdomain is unknown. Bye."
An extension like this exists for NNTP, in RFC4644, since that really is a mesh topology instead of everyone-talks-to-everyone and some central links have extremely high traffic. It requires two round trips per message and processing of different messages can be interleaved while waiting for the reply. "Do you want message 1? Do you want message 2? Do you want message 3?" "I want message 1. I already have message 2." "Here is message 1. Do you want message 4? Do you want message 5?"
Apologies if you already know this, but pipelining has existed in SMTP for decades now. You can see if an SMTP server supports it when using EHLO (instead of HELO).
Trying 74.125.202.26...
Connected to smtp.google.com.
Escape character is '^]'.
220 mx.google.com ESMTP 8926c6da1cb9f-489374a766fsi10850940173.171 - gsmtp
EHLO testing
250-mx.google.com at your service, [47.227.77.52]
250-SIZE 157286400
250-8BITMIME
250-STARTTLS
250-ENHANCEDSTATUSCODES
250-PIPELINING <============= HERE
250-CHUNKING
250 SMTPUTF8
I believe RFC 2920 is the original standard, going back to 2000. I remember pipelining being a thing in Postfix about 20 years ago, at least.
I believe you still have to stop at certain points to check the responses. You might not want to send a message if one of the recipients is invalid, but only that command will return an error code, and the message sending will succeed.
You'd also need to prevent command injection. If the response code to DATA is an error, but you sent the message anyway, the whole message body will be interpreted as commands. Oops! The line-ending bug (SMTP smuggling, discovered early this year) was bad enough.
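For reference, a pipelined exchange under RFC 2920 looks roughly like this; the client batches its commands, but it still reads and checks every response before sending the body, which is also what avoids the injection problem above:

    C: MAIL FROM:<user1@mydomain>
    C: RCPT TO:<user2@yourdomain>
    C: RCPT TO:<user3@yourdomain>
    C: DATA
    S: 250 OK
    S: 250 OK
    S: 550 5.1.1 No such user here
    S: 354 Start mail input
    C: (message body, delivered only to user2 since user3 was rejected)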
... not sure why I decided to write the other comment here as it doesn't really address anything you said.
What value does HTTP add? Probably nothing, but it adds a lot more complication. You may think it's well-understood if you only know the happy path (requests.get("http://blah/") and magically get a response). Do you know about HTTP Request Smuggling, for example? Nobody knew about that for 20+ years. HTTP brings a lot of dark corners which are not well-understood.
The workflow with hypothetical Simple Mail Transfer Protocol (wait, that name is already taken... damn it): 1. Connect to recipient's server; 2. Send the mail as a sequence of bytes; 3. Receive response code; 4. Disconnect.
And the workflow with hypothetical HTTP Mail Transfer Protocol: 1. Connect to recipient's server. 2. Send the magic text string "POST /inbox HTTP/1.1\r\nHost: yourserver.com\r\nContent-Type: message/rfc822\r\n\r\n". 3. Send the mail as a sequence of bytes. 4. Receive the magic string "HTTP/1.1 ". 5. Receive response code. 6. Receive some more bytes you don't care about. 7. Disconnect.
I'm just not clear on what benefit is added by steps 2, 4 and 6. It's not the host header, since emails include their full destination address. It's not the content-type, since you already knew it was an email message.
---
The same, but to a lesser extent, with JSON. I don't think JSON has many vulnerabilities in dark corners, but it does have inconsistent implementations. Trailing commas? Comments?
If you're using JSON to just send the email then it's the same question as HTTP: what is added by the extra steps? If you're also converting the email itself to a JSON-based format (as opposed to MIME which is currently used) that might not be worse or better (MIME is pretty annoying) but it's also incompatible for no real reason. It will be a separate protocol, not an "upgrade" to email - may as well use Matrix instead.
It brings its own problems, some of which MIME also has. Because you can write things in any order in JSON, you could send the whole 800-megabyte attachment before you tell the server who the email is going to, and the server might have to buffer that in RAM or in a temporary file. In MIME, the headers always come before the body. JSON can have deep nesting, and a naive parsing of 9999999 open brackets followed by 9999999 close brackets will lead to a stack overflow. If you reject those brackets, now you're rejecting valid emails just because you happen to not like them. This problem also exists in MIME, but at least MIME nesting more than a few levels is extremely rare since only entire messages and files get nested (e.g. attachments, and plaintext versions of HTML emails), not metadata, while JSON also has nesting for metadata. Additionally, servers don't actually have to process MIME nesting in many cases, but can treat the whole message as one big byte stream. It's mainly only clients who have to parse them. Servers would have to parse the whole JSON message in case there's important metadata after it.
Also since the order can be anything, servers couldn't look at the recipient first, open a file in their mailbox, then write the mail to the file as it arrives, since they might not know who the recipient is until after the whole message has arrived.
Finally, JSON handles binary attachments even worse than MIME does.
---
You mention "get a domain, run a web-app, set a few records and you are done". Why do you believe that is something exclusive to HTTP? Get a domain, run a mail-app, set a few records and you are done, too.
The difficulty with setting up your own mail is with outgoing mail. Incoming mail is generally unrestricted. But when you want to send mail, the recipient's mail server has to be convinced it isn't spam, and that's really hard no matter what the protocol is. Whatever you can do, spammers can do it too. The Fediverse works a bit like you describe, with JSON-based mails going over HTTP, and just hasn't gotten flooded with spam yet because it hasn't. Also spam is a little bit out of fashion thanks to the good filters at places like Gmail. It used to be that literally 99% of emails in everyone's inbox would be spam - only 1 out of every 100 a real email - but that's no longer the case. I imagine that setting up a new spam ring isn't really worth it, but some spammers who set them up long ago just keep operating them since it costs little. It will happen to the Fediverse too; it has almost no spam defences.
A lot of hosting providers block you from sending emails (connecting to other people's port 25). That's because hacked servers that send spam are much more common than servers that want to send legitimate mail. Most of them will unblock it if you ask nicely. This mechanism doesn't exist for HTTP, but I'm not sure if this mechanism is required to stop spam these days anyway.
---
Onion routers means Tor. It has its own address space. The point is, it's a network that's not the internet. It's an overlay network, so you need an internet connection and specific software to use it. If there's ever a serious .onion mail infrastructure, you might get mail at your internet address from a .onion address via some kind of gateway server. I'm on foo.onion, my server (being set up as a special .onion mail server) knows internet mail can go to bar.onion, which is also bar.com on the internet, and it will be passed on to the right internet server at yourdomain.com. The fact that this is even allowed at all is something I find elegant about email (and Usenet) and it would be a shame to get rid of it for... basically no reason. It would still be possible to do this if it was all HTTP-based; foo.onion would connect to bar.onion using HTTP instead of SMTP and that wouldn't really make any difference. Scenarios like this one only become problematic if the new design requires two-way communication between the sender and the receiver, like a key exchange. They don't become problematic just because you added steps 2, 4 and 6 to the protocol.
A related scenario is networks with limited connectivity. There are proposals for wireless ad-hoc store-and-forward networks, with very high latency, where two nodes would exchange messages whenever they happen to be within range of each other - imagine people with cellphones walking around a city, and only a small percentage of people have the app. Whenever two people with the app sit in the same café, they automatically swap messages to get the messages closer to their destinations on average. The one-way latency could be up to days, if the messages arrive at all. If they often don't arrive, then the network is broken, non-viable, so let's assume they usually arrive after some hours to days. Email can work on these networks because it's one-way. You put a message into the network, and it somehow (according to the network design) spreads around until eventually the recipient has a copy and can read it. The recipient can't send a message back to negotiate a secret key, because that would take more hours or days, and then more hours or days while the key negotiation result comes back, and so on.
Most protocols can't work in this kind of network. Email (and Usenet) can. It would be a shame to ever delete this ability just because of... not much reason at all. Protocols in the Internet age are designed for systems to be able to "directly" communicate with each other with up to a few seconds of delay. I put "directly" in quotes because there are actually many IP routers in between them. Email actually pre-dates the Internet, so it doesn't rely on the Internet existing. Actually, the environment which email (as well as other systems like Fidonet and Usenet) were designed for wasn't too different from the wireless mesh scenario, although with a more predictable schedule. (Actually, I think Usenet was designed for more expensive permanent connections, but it still has the same design)
Of course, corporations won't be sending you invoices over any weird networks. Nor do any major mail servers know how to find a gateway for an overlay network. In fact 99.99% of the internet won't be able to send you mail in these weird situations. So feel free to delete it from the protocol if you like... nobody will notice. I think it's a pretty cool ability though. When Earth and Mars establish communication and there's a 2-to-6-hour light-speed delay, people will definitely notice if the email protocol takes extra round-trips.
---
Several months ago I signed up for a Usenet account just to see how it works. The most interesting thing I noticed, other than the one-way store-and-forward synchronize-once-a-day-if-you-want design, is that it has all the exact same problems as modern systems, and the exact same solutions, and even the exact same caveats to those solutions, 30 to 40 years earlier than our modern systems. We have been reinventing the wheel all this time and haven't gotten anywhere. Well, I lied - the problems and solutions are a little bit different, but really not all that different.
I also noticed that ActivityPub is basically just email if you change SMTP to HTTP and change MIME to JSON. It might be exactly the protocol you're looking for, and the fact it's associated with federated Twitter clones is accidental - just like the fact SMTP is associated with email, since SMTP can send any blocks of text data, and ActivityPub can send any blocks of JSON data. You might be interested in experimenting with it.
Mail over Matrix would be amazing; you'd get PFS e2ee; reactions; edits; redactions etc for free... and be able to segue between longform and shortform chat and VoIP and calendaring etc. If the core team wasn't stuck focusing on the core project, I'd build this like a shot.
This all sounds like a great proposal to fix some major problems with email, but it doesn't address the biggest issue. The reason these issues haven't been fixed is vendor lock-in. Making it "insanely difficult" to self-host is a feature, not a bug. Try migrating off of google or whatever your email provider is. You will find that you have 300 accounts that use email as your core identity provider. Getting locked out of your account is on par with having your house burned down. Email is a huge moat that these big companies are not going to let go of easily.
I would love to see some sane, email 2.0 standards like the author talks about.
But, email is only decentralized in theory. In reality, email was fully captured a decade ago by a duopoly of two particular inbox providers.
In consumer, Google has a defacto monopoly and runs the show. In B2B, Microsoft has a defacto monopoly and runs the show.
Nothing can change without Google or Microsoft making the move first. And neither of these companies has any interest in changing/improving anything, given they already have a monopoly in their respective corner of the market.
What we need first, is to lower switching costs to open up the market again. This could mean making DNS less of a nightmare so domain-based email becomes easy again. This could mean a government mandate that your email address (like a phone number) must be allowed easy transfer to other providers (Since Google owns the Gmail.com domain, they own your "phone number" in a sense). Etc. Etc.
Imagine if Google and Microsoft owned your physical mailbox...and they decided what type of letters you could receive from who...and they sold ad-space in it. That's essentially what we've done with our digital mailboxes.
> And neither of these companies have any interest in innovating
I don’t think this is true, or at least I think they have a strong interest in standardizing. Enterprise and personal users are routinely frustrated with Outlook and Gmail for dumb UI problems which are largely due to a lack of standardization. The only solution requires collective action. In addition, a well-written technical specification outsources a lot of difficult or highly specific questions to a committee of experts (kind of like how the C specification is an excellent technical manual, or K&R was a good de facto specification).
Gmail and Outlook both have market lock-in on personal / business email because of how their email clients integrate with other personal / business software. (Gmail is also given a hand by rational consumer apathy; Gmail is fine and free, changing email addresses is a pain.) I don’t think either company would gain or lose any competitive advantage by standardizing things around email itself. But it would probably reduce a lot of technical management headaches.
> What we need first, is a government mandate that your email address (like a phone number) must be allowed easy transfer to other providers. As long as Google owns the Gmail.com domain, they will be able to hold the entire network hostage.
A mandate is perhaps heavy handed, but a statement that the government will no longer communicate via e-mail and only via protocol X would be appropriate.
As for the email transfer... that would certainly have to be a new protocol. There is no mechanism to do anything of the sort today. the @... part literally means @ that server.
Enterprise and regulated communications can break any monopoly if the money/legislators agree that an alternative is better. Let's say the EU issues a directive that certain types of communications must adhere to requirements which email v1 cannot implement. It will automatically create the market for email v2 and v1-to-v2 gateways. Or Salesforce and Meta agree on a new protocol for CRM comms that brings more trust to email v2 campaigns, because SF can certify senders and Meta can certify recipients.
So... intentional and deliberate centralization of email? No, thank you. We already have plenty of proof that corporations cannot and will not do what's best for humankind, and would prefer to do specifically what's not best in order to leverage pain and strife to bolster themselves and/or to punish competitors.
Interesting how later this is mentioned:
"running your own self-hosted email stack would become much easier"
So... the list is supposed to be of people and organizations that don't adopt MX2, and not a list of organizations that aren't "Microsoft, Google, Amazon, Zoho, GoDaddy, Gandhi, Wix, Squarespace, MailChimp, SparkPost, and SendGrid"? Specificity and clarity are good things.
BTW - self-hosting email is not "insanely difficult". It's just difficult to get outgoing IPs that have a decent reputation. But it really doesn't even matter - anyone can self-host easily, and simply smarthost through a reputable provider. This makes self-hosting really not that hard at all.
If we’re really going to MX2, there are so many things that could be done to improve things, but at minimum, at the protocol level, it’d be great if there were a two-way street to let people know you’re treating or reporting them as junk. This is not only a great signal for advertisers to stop spamming me, but it would also let Gmail know some account they host is sending out phishing emails (domain reputation is meaningless here).
the spam filter algorithm can be provided with information on the number of attempts, and the explicit user intervention of marking something as spam, so what if the spammer learns they exhausted their attempts or user tolerance?
Email is one of the services I would like to self-host for small projects, but I ran into spam issues immediately. I like the idea of using public keys tied to a domain, but I feel the email service providers have little incentive to adopt this because the complexity adds value to their service.
I don’t know that email is worth saving. I barely look at my inbox. It’s 90% spam and ads and the only functional use I get out of it is password resets and MFA.
Just let it gradually die of neglect like newsgroups. It was built around the idea of lifting a communications paradigm from the physical world, in exactly the same way that newsgroups were a riff on a message board.
There’s all kinds of newer ways of organizing communication on line. We don’t have to try and recreate the experience of writing a letter on the internet any more.
- No separate body for rich HTML and plain text. This will just result in "Please enable HTML rendering to view this message"
- Maybe we need to define a subset of HTML (no external resources, CSS, etc.)
And require that if any colours are set, both foreground and background are set. (I've seen too much breakage with assumptions about one or the other.)
Not a useful proposal. Markdown is both limited and ambiguous. It's okay for writing by hand more naturally than HTML, but the sender should parse it and then explicitly indicate which characters are bold, which ones are underlined, etc., and then you're back at a subset of HTML.
1) No one changes established tech (software/hardware/services) unless the "new new thing" is 10x better/radically different than the old.
2) "Bullshit talks, code walks."
Here's how something like this becomes an actual standard: Someone codes up a prototype, and gets others to use it. Enthusiasts join in and improve it. Five to 10 years or more passes and if the new standard is actually worthwhile, it will have grown to the point established organizations finally pay attention and - if it's in their best interests - will adopt it.
Take git as a prime example. Launched in 2005, it was still at only 42% adoption by 2014 (according to the Stack Overflow yearly survey). A decade after that, however, it's at more than 95%. It took nearly two decades for one of the most quickly adopted technologies of the 21st century to become ubiquitous.
At this point I just want self-hosting to be possible. I can set up all the SPF, DKIM and DMARC nonsense and still be unable to deliver email. Those things don't actually matter, what matters is whether I'm part of the super special trusted IP club. It's 2024 and this supposedly federated system still doesn't allow me to self-host.
While we are at it, make email that implements MX2 HIPAA-, bank-, etc.-compliant by default, so medical/bank services can send notifications and communications directly to your email instead of telling you to log in to your account to see a message about your next doctor's appointment or test result.
There is no reason why this couldn't exist in the current e-mail technology. In fact, as someone who has written HTML, I'd rather not have HTML anywhere if possible! It's a terrible, nonsensical technology. A simple XML format for display text/images would be much easier to deal with, and also easy to transpile to a HTML/CSS subset. I'd like to hope that e-mail will last longer than HTML!
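Purely as an illustration of the kind of thing I mean (element names invented on the spot):

    <message>
      <p>Hi Bob, the report is attached.</p>
      <p>The <em>updated</em> figures are on page 3.</p>
      <image ref="cid:chart1" alt="Q3 revenue chart"/>
    </message>

Each element would map one-to-one onto a boring HTML/CSS equivalent, with nothing for a renderer to get creatively wrong.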
I think the biggest problem with e-mail is the lack of inter-provider communication protocols, something that distributed social media (the fediverse) has. For example, there is no way for a provider to know whether or not an e-mail sent was opened, so EVERYONE uses tracking pixels! Like, just put this into a protocol already! The problems you're talking about with HTML support exist simply because there is no way for a provider to announce what HTML it supports, partly because this depends on the client that opens the e-mail, not the server that stores the e-mail. But even then, seriously, this should really just be a setting per account, and if you use Thunderbird it announces to everyone "hey, this dude uses Thunderbird, so don't use fancy HTML!" and if you switched to something else you would just start getting different e-mails. You can see why this wouldn't work with HTML, since HTML is a terrible language that has no actual support for being displayed in a zillion different clients despite what it claims.
Return receipts are already a thing aren't they? But most people don't actually want to be tracked, so the correct thing to do there is not allow any external resources to be loaded from an email, and to make user tracking illegal/codify that stalking millions of people doesn't make it okay; it makes it millions of counts of stalking and harassment.
HTML already gracefully degrades. If you don't support a feature, you just ignore it and continue. It works just fine unless you're shooting for exactly matching some figma design on all clients, which is exactly what HTML is not supposed to do.
> Send a real spam email? Block that domain when there’s complaints.
What happens when the spam comes from gmail.com, outlook.com, or any of the other big cartel domains? Their domains can't be blocked without loss of functionality.
The biggest problem with this idea is that its major point is to make self-hosting mail servers a reality again: you will never move the masses for the wet dreams of a few of us nerds!
The past/quoted reply should be a separate part too. The main part should only contain the new content.
Potentially even the signature should be its own part.
Proposes to replace SPF, DKIM, and DMARC with a brand new signature scheme that Trust Me, Will Work This Time. And because this scheme is of course innately perfect, hardwires a reject policy.
Mixes up MUA and MTA technologies as if they were even the same ecosystem.
Yeah, email needs replacing, but it needs actually serious proposals.
Some good ideas, but also stops short on a few things.
* It talks about signing the message to prove the sender, but why stop there? We might as well extend it by having recipient public keys available via DNS records directly (or DNS specifying how to fetch them using some other protocol) and encrypting the contents of the e-mail as well. This alone could be the killer-app reason to adopt this thing.
* As the recipient of spam, I don't really care so much whether the sender owns the domain or not, because it's relatively cheap and easy for them to buy a domain and set up DKIM/DMARC if they had to. What I really care about is only receiving e-mail that I have opted into receiving. Whilst there will always be a need for some people to have public mailboxes that anyone can send to, for most people it'd be better to have something where you can only send a mail to them if you have a token that grants you that right. My mail client should be able to fetch the potential sender's public key and sign a token that says this key can send mail to me until this date (a rough sketch of what that could look like is at the end of this comment). Maybe every time you reply to someone, you send out a new token with a longer window. Maybe you need your mail client to prompt you to re-issue tokens to people you've previously been in contact with. But the ability to choose to end the conversation would be great.
* Maybe for people who don't have a valid token to send to someone, there can be a way of requesting one: the sender's name, public key/address, and a short sentence of justification, and if the recipient agrees, the sender can send the rest of the mail.
* Mailing lists would get a bit more messy, as they'd have to re-sign and maybe re-encrypt every message. Maybe that's not a bad thing though, as with re-encrypted contents it'd be harder to correlate the same message sent to different people as the same, thus tying them to the same list.
* As some other commenters have said, I'm not sure full HTML is the answer. Markdown is better, because it would enable a client to unambiguously cut, quote and cite parts of the original. But people want all sorts of fancy formatting at times, so maybe we need a better markdown and/or a restricted set of features. Most people also don't really need all those colours etc.; they just want something that's visually different from the rest of the conversation so you can tell who said what in a very jumbled thread. Maybe the markup should be required to identify which bits of copied content came from where.
That said, getting critical mass of adoption is going to be hard, and there's not really any money to be made from implementing it, only expense. The article correctly points out that something will only happen if it gets critical buy-in from all the big names, but most of them will probably only want to do it if they can end up controlling something and being the gatekeepers of the rest of the internet.
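To make the token idea above concrete, here is a minimal sketch, assuming Ed25519 keys and an invented JSON grant format; none of the field names or functions come from any real proposal or standard, and it needs the Python 'cryptography' package:

    # Hypothetical sketch: the recipient signs a grant saying "this sender key
    # may mail me until <expiry>", and the receiving server checks that grant
    # before accepting a message. Token format and field names are invented.
    import json, time
    from cryptography.hazmat.primitives import serialization
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey, Ed25519PublicKey)

    def issue_send_token(recipient_key, sender_pub_raw, days):
        grant = {"sender_key": sender_pub_raw.hex(),
                 "expires": int(time.time()) + days * 86400}
        payload = json.dumps(grant, sort_keys=True).encode()
        return {"grant": grant, "sig": recipient_key.sign(payload).hex()}

    def token_allows(recipient_pub, token, sender_pub_raw):
        payload = json.dumps(token["grant"], sort_keys=True).encode()
        try:
            recipient_pub.verify(bytes.fromhex(token["sig"]), payload)
        except Exception:
            return False
        return (token["grant"]["sender_key"] == sender_pub_raw.hex()
                and token["grant"]["expires"] > time.time())

    # Replying to someone would issue them a fresh token with a longer window.
    recipient, sender = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
    sender_raw = sender.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    token = issue_send_token(recipient, sender_raw, days=30)
    assert token_allows(recipient.public_key(), token, sender_raw)

Letting a token expire (or simply not re-issuing one) is then how you end a conversation.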
> We might as well extend it by having recipient public keys available via DNS records directly (or DNS specifying how to fetch them using some other protocol) and encrypting the contents of the e-mail as well.
That's just OpenPGP. There's even an OPENPGPKEY DNS record type. We all know how that went. It requires users to know what public keys are.
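For reference, the RFC 7929 record shape is roughly this (schematic, not a real record; the owner name is the SHA2-256 hash of the local part, truncated to 28 octets and hex-encoded):

    <28-octet-truncated-sha256-of-local-part-in-hex>._openpgpkey.example.com. IN OPENPGPKEY <base64 OpenPGP public key>

Which also illustrates the usability problem: nothing about that is something a normal user will ever set up.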
Yeah, some way of authorizing senders would be nice. I don't think this will ever happen, but one can dream.
When registering on a website, the "validate email address" step would then be an auth flow, instead of "we send you an email and you click that link". And there's no need to authorize the website to send you messages, since successfully completing the flow would be good enough verification.
Could be based on OIDC or IndieAuth.
Forgot password for that website? Enter email, get redirected to your auth flow, verify. (Or just do something like "Login with Google", but with this thing instead).
The website abused the auth flow and requested permissions to send you messages, and now you're getting spam from them? Click some button that says "I don't want to receive messages from [NAME] anymore", where "[NAME]" is whatever name associated to that token, and it revokes that token.
Getting a weird message from some unknown site? Same as above, plus you would know who sold your token without needing email aliases or anything.
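As a toy illustration of that revoke-by-name idea (purely hypothetical; the class and method names are invented, and there's no real OIDC/IndieAuth code here):

    # Each site that completes the auth flow gets a named, revocable send grant;
    # "I don't want messages from [NAME] anymore" is just deleting that grant.
    from secrets import token_urlsafe

    class Mailbox:
        def __init__(self):
            self.grants = {}                      # site name -> opaque token

        def complete_auth_flow(self, site_name):
            token = token_urlsafe(16)             # stand-in for a signed token
            self.grants[site_name] = token
            return token

        def revoke(self, site_name):
            self.grants.pop(site_name, None)

        def accepts(self, token):
            return token in self.grants.values()

    box = Mailbox()
    t = box.complete_auth_flow("shop.example")
    assert box.accepts(t)
    box.revoke("shop.example")                    # spam started? revoke just them
    assert not box.accepts(t)

And if a message arrives carrying the token you issued to "shop.example" but from somewhere else entirely, you know exactly who sold you out.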
> * Maybe for people who don't have a valid token to send to someone, there can be a way of requesting one: the sender's name, public key/address, and a short sentence of justification, and if the recipient agrees, the sender can send the rest of the mail.
It won't be long before you start getting requests from Mr. "Buy MyProduct" and Ms. "Visit MyWebsite".
>Clients which implement MX2 can, optionally, have an updated encryption scheme to replace OpenPGP.
That's entirely vague...
>Something like Apple’s Contact Key Verification.
So a key fingerprint?
>Hopefully there would be forward secrecy this time.
I for one would not want to delete all my email on receipt, and without that there is no point: if the mail is still available to the user, an attacker who obtains the private key (or the mailbox) gets the stored email anyway. Normally people keep their email indefinitely. I hope this is not a proposal to store email insecurely. Having said that, I have sometimes thought that it might be cool and maybe even useful to have a "Mission Impossible" style email type that entirely deletes itself after reading.
I think this part of the proposal needs much work. A good first step would be to study the details of current encrypted email technical culture.
Would be great if Google/Apple/Microsoft could lead some of these Internet upgrade initiatives. I think we have more cross corporate collaboration on Emoji selection.
They did. Google's one is called Gmail, and Microsoft's one is called Outlook 365. They each want you to use their product and offer you improved service if you only use their product and talk to other people who are using their product. But Apple got the best lock-in with iMessage.
- different, more strict message structure and format replacing headers and parts (maybe some less verbose equivalent of XML; one possible shape is sketched after this list)
- signature as a separate part of the message that is never quoted (and more standard way to identify and attribute quotations in the text)
- legal information as a separate part of the message (imprint, privacy policy, confidentiality and copyright notices etc)
- privacy controls as part of the message (unsubscribe, GDPR disclosure/removal etc)
- replace HTML with Unicode-based formatting that is equivalent of Markdown subset and is email-specific, to avoid any attempts to reuse renderers built for other purposes
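Purely to make the shape concrete, one entirely hypothetical layout with each concern as its own named part might look like this (every field name here is invented):

    {
      "headers":   { "from": "...", "to": "...", "subject": "..." },
      "body":      { "format": "text", "content": "only the new content" },
      "quoted":    [ { "ref": "message-id of the quoted mail", "content": "..." } ],
      "signature": { "content": "..." },
      "legal":     { "imprint": "...", "privacy_policy": "..." },
      "controls":  { "unsubscribe": "...", "gdpr_contact": "..." }
    }

Clients could then render, fold or strip each part independently instead of guessing where the signature or the quoted text starts.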
> - different, more strict message structure and format replacing headers and parts (maybe some less verbose equivalent of XML)
Hard agree on that. Differences in header implementation are such a widespread problem with any kind of message-forwarding program design, whether it's reverse HTTP proxies or chains of email middleware.
> - signature as a separate part of the message that is never quoted (and more standard way to identify and attribute quotations in the text)
I think this will inevitably lead to people accidentally mailing each other information they never intended to forward. I do agree that quotes and replies need better standardisation, but I don't think this is the solution.
> - legal information as a separate part of the message (imprint, privacy policy, confidentiality and copyright notices etc)
I can't say I see the need for this. Just a few links at the bottom are enough.
> - privacy controls as part of the message (unsubscribe, GDPR disclosure/removal etc)
There's already a standard unsubscribe header (https://www.ietf.org/rfc/rfc2369.txt has List-Unsubscribe, for instance). In my experience, it's only used by companies that have visible and simple opt-out links.
In a new protocol, the remote side will probably just ignore the unsubscribe request.
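For reference, the existing headers look like this (addresses and URL are placeholders; the one-click companion header comes from RFC 8058):

    List-Unsubscribe: <mailto:unsubscribe@example.com>, <https://example.com/unsubscribe/opaque-token>
    List-Unsubscribe-Post: List-Unsubscribe=One-Click

Bulk senders who care about deliverability already include both; the ones who don't are precisely the ones a new protocol couldn't force either.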
> - replace HTML with Unicode-based formatting that is equivalent of Markdown subset and is email-specific, to avoid any attempts to reuse renderers built for other purposes
This would prevent any company doing marketing from using this standard. That also means sign-ups won't be supported, which will work against any adoption.
People who want this can use text/plain in email already. You can even read most HTML email by telling your email client to view text/plain instead of text/html.
> I think this will inevitably lead to people accidentally mailing each other information they never intended to forward.
Why? A conforming client will just render the signature below the message as well as the other information (e.g. legal one).
> I can't say I see the need for this
Have you seen corporate email signatures in Germany? It’s basically a demonstration of some exec’s lack of sense and taste, often making up more than half of the message. People do need to be constrained here and relieved of signature design duty.
> In a new protocol, the remote side will probably just ignore the unsubscribe request.
Unless the standard requires a digitally signed receipt, in the absence of which the sender's reputation will suffer.
> This would prevent any company doing marketing from using this standard.
Not really. They will have more constraints for the design, but embedding vector graphics should be possible, just with some restrictions. There exist brilliant marketing emails with minimal formatting.
> Why? A conforming client will just render the signature below the message as well as the other information (e.g. legal one).
Because in all other messaging platforms, content is either shown right above or right below where the user types. Attempts to forward a single paragraph will forward an entire email.
> Have you seen corporate email signatures in Germany? It’s basically a demonstration of some exec’s lack of sense and taste, often making up more than half of the message. People do need to be constrained here and relieved of signature design duty.
That's a German problem, not a protocol problem. You can't fix social problems with technology.
> Unless the standard requires a digitally signed receipt, in the absence of which the sender's reputation will suffer.
Large mail providers already track this stuff and it doesn't help. Most of the spam I receive falls squarely in the category of "anyone with a spam filter will catch these".
I also don't think any kind of decentralised reputation system will work, because spammers will try to poison anything small mail servers can contribute to. We'd end up with the same IP reputation list system we currently have.
> Not really. They will have more constraints for the design, but embedding vector graphics should be possible, just with some restrictions. There exist brilliant marketing emails with minimal formatting.
So you're saying the Germans will have vector graphics email signatures?
I agree that a lot of these points could've made email better, but only if they were applied three or four decades ago. Nobody is going to switch to email without at least the abilities they currently have.
Personally, I like the idea behind Delta Chat, using email as no more than a transport for instant messages. You get the benefits of legacy email, with practical messaging shaped like modern instant messaging.
> Because in all other messaging platforms, content is either shown right above or right below where the user types. Attempts to forward a single paragraph will forward an entire email.
Why would any UX designer with knowledge of the protocol design the client's interface so that such mistakes are possible? It can certainly be solved at the interface level.
>That's a German problem, not a protocol problem. You can't fix social problems with technology.
It is not a social problem. Those ugly signatures exist because there’s a legal requirement to include an imprint in business mail, without adequate support from the protocol. It belongs in message metadata, not in the message body (and any well-designed system that took this requirement into account would treat it as metadata, btw).
> I also don't think any kind of decentralised reputation system will work
It’s a matter for a separate discussion, but here it’s not about a generic reputation system; it’s one where cryptographic proof of trustworthiness is possible. It may work, and it doesn’t have to rely on IP addresses.
Yes, please. Every day we don’t fix email is another day organizations will continue shifting to phone numbers as identifiers, which is 10 times worse.
Your post advocates a

(x) technical ( ) legislative ( ) market-based ( ) vigilante

approach to fighting spam. Your idea will not work. Here is why it won't work. (One or more of the following may apply to your particular idea, and it may have other flaws which used to vary from state to state before a bad federal law was passed.)
( ) Spammers can easily use it to harvest email addresses
(x) Mailing lists and other legitimate email uses would be affected
( ) No one will be able to find the guy or collect the money
(x) It is defenseless against brute force attacks
( ) It will stop spam for two weeks and then we'll be stuck with it
( ) Users of email will not put up with it
( ) Microsoft will not put up with it
( ) The police will not put up with it
( ) Requires too much cooperation from spammers
( ) Requires immediate total cooperation from everybody at once
(x) Many email users cannot afford to lose business or alienate potential employers
( ) Spammers don't care about invalid addresses in their lists
( ) Anyone could anonymously destroy anyone else's career or business
Specifically, your plan fails to account for
( ) Laws expressly prohibiting it
( ) Lack of centrally controlling authority for email
( ) Open relays in foreign countries
( ) Ease of searching tiny alphanumeric address space of all email addresses
( ) Asshats
( ) Jurisdictional problems
( ) Unpopularity of weird new taxes
( ) Public reluctance to accept weird new forms of money
(x) Huge existing software investment in SMTP
(x) Susceptibility of protocols other than SMTP to attack
( ) Willingness of users to install OS patches received by email
(x) Armies of worm riddled broadband-connected Windows boxes
( ) Eternal arms race involved in all filtering approaches
( ) Extreme profitability of spam
( ) Joe jobs and/or identity theft
( ) Technically illiterate politicians
( ) Extreme stupidity on the part of people who do business with spammers
( ) Dishonesty on the part of spammers themselves
(x) Bandwidth costs that are unaffected by client filtering
(x) Outlook
and the following philosophical objections may also apply:
( ) Ideas similar to yours are easy to come up with, yet none have ever been shown practical
( ) Any scheme based on opt-out is unacceptable
( ) SMTP headers should not be the subject of legislation
( ) Blacklists suck
( ) Whitelists suck
( ) We should be able to talk about Viagra without being censored
( ) Countermeasures should not involve wire fraud or credit card fraud
( ) Countermeasures should not involve sabotage of public networks
(x) Countermeasures must work if phased in gradually
( ) Sending email should be free
( ) Why should we have to trust you and your servers?
( ) Incompatibility with open source or open source licenses
( ) Feel-good measures do nothing to solve the problem
( ) Temporary/one-time email addresses are cumbersome
( ) I don't want the government reading my email
( ) Killing them that way is not slow and painful enough
Furthermore, this is what I think about you:
(x) Sorry dude, but I don't think it would work.
( ) This is a stupid idea, and you're a stupid person for suggesting it.
( ) Nice try, assh0le! I'm going to find out where you live and burn your house down!
> A standardized HTML specification for email; complete with a test suite for conformance. Or, maybe we just declare a version of the HTML5 spec to be officially binding and that’s the end of it.
keels over in laughter
Oh, there have been quite a few attempts at standardizing HTML rules for email. I was even in one of them, which petered out very quickly.
Functionally speaking, the problem is you have three groups of people with HTML email (well, four groups, if you include the people who wish it died off). You have the marketing folks, who want HTML email to work essentially exactly like regular webpages so they can do all their normal design stuff and get it to work. You have the MUA implementers (particularly webmail), who need to aggressively sandbox and sanitize the HTML because to do otherwise is to risk security leaks. And you have Microsoft, who needs to keep visual compatibility with Word HTML because their userbase would flip out if they broke stuff. These groups want different stuff from HTML, and they're not going to do a good job of reconciling their viewpoints with one another.
Microsoft has done more damage to HTML email than everyone else put together. They’ve single-handedly held it back by at least ten years (maybe fifteen), and created tens or hundreds of thousands of jobs.
In Outlook 97, they used the MSO renderer (Microsoft Word) for editing and presentation. It has an incomplete and buggy implementation of HTML 3.2.
In Outlook 2000–2003, they did the obvious sensible thing: ditch that, and use MSHTML (Internet Explorer).
In Outlook 2007, they switched back to MSO, for reasons that never made a skerrick of sense to me (their explanation was vague nonsense that included the word “security”, but the articles that discussed it vanished from the web long ago so I can’t point you to any). I believe they still use the MSO renderer to this day. Windows Mail still embedded MSO. I think that the new Outlook client they released last yearish? was still using MSO, though I’ve heard claims to the contrary as well.
The MSO component has had, I think, approximately two changes in the last 28 years, one of which was supporting high DPI (… which it does imperfectly) and the other I forget.
Since one of the major email clients is still using a dodgy implementation of 1997 web standards, what incentive have other providers had for supporting newer stuff?
We’re slowly getting places, but when you contrast it with the web’s pace, in both backend and frontend (e.g. HTTPS deployment, and new CSS/HTML/JS features)—well, it’s very obviously a completely different environment.
They were embedding IE — I imagine their security concerns were well-founded. But even gmail has crappy html support, and that runs in a friggin’ browser!
MSO probably had security problems at least as large.
And if there were security issues, they needed to fix them for IE’s sake already!
I don’t remember the details of what they wrote, and wasn’t able to find it even five years ago, but I do remember that the reasons claimed just made no sense.
> MSO probably had security problems at least as large.
Down in the parser and such, no doubt about it. But it would have also lacked much of the attack surface of IE, such as, oh, ActiveX. Granted, that specific example would be easy enough to disable, but that's just one mine in the whole field. They definitely should have wrestled IE into shape, but the IE team clearly wasn't taking marching orders from the Office team. Organizational dysfunction manifested in software.
1. Yeah the marketing folks would hate this but we need to use a subset of HTML.
2. Ok so these are the people who are going to want to help us and they run the largest email servers at the moment
3. I think MS products already have web views in them (either Edge or old Edge); if we had a changeover like MX2, maybe that would be an acceptable trade-off to break with the old.
We don't need a subset of HTML. Actually, we need Markdown emails. You can format stuff that needs structure, but not abusively so (no blinking marquee banners in eyesore colors), and it is sufficiently compatible with plain text that you don't even need the alternative text MIME part. It is also more compact than HTML.
And before somebody says "won't fly": all those fancy new "will replace email someday" messengers use markdown or some part of its formatting.
You know what? We had that: plain-text email. Markdown was modelled after it.
I would also state that Markdown itself is completely unsuitable for the purpose. You’d need to design something new which might look very similar to Markdown, but which would have basically no shared behaviour with even CommonMark as regards parsing, since you don’t want HTML to be a thing. Markdown itself is seriously compromised by its HTML basis.
I can’t actually think of a single comparatively-mainstream messenger that uses even a variant of Markdown; rather, they use their own lightweight markup languages <https://en.wikipedia.org/wiki/Lightweight_markup_language> that are very clearly incompatible with Markdown. (It’s also often a frontend editing feature that gets turned into something like HTML after that.)
text/enriched has been around for decades, and supports basic font styling (bold/italic/underline, color, font face, font size) and that's basically it.
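If memory serves, RFC 1896's text/enriched markup looks roughly like this (a schematic fragment, not a complete message):

    Content-Type: text/enriched

    <bold>Quarterly numbers</bold> are <italic>up</italic>; see
    <underline>the attached report</underline> for details.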
Actually, text/markdown exists as well. However, the definition of markdown syntax is, um, less than precise: https://daringfireball.net/projects/markdown/syntax (the text/markdown RFC literally has a parameter to indicate which flavor you meant by markdown!). And it incorporates HTML too, FWIW--legal HTML fragments are legal markdown as well. Honestly, markdown's million variants makes the HTML support landscape look uniform.
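That variant parameter looks like this in practice, assuming a CommonMark body (header only, rest of the message omitted):

    Content-Type: text/markdown; charset=UTF-8; variant=CommonMark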
Honestly, I wouldn’t consider this trolling. It’s a low-effort dismissal here (probably why it’s been flagged to death), but it’s still insightful and accurate. The first time it came up, certainly over twenty years ago and I’m not sure how much more (but you can find the form from at least that far back), it was honest weariness with claims of having solved spam.
If you’re not familiar with these things, the keyword is FUSSP (Final Ultimate Solution to the Spam Problem).
I just looked through my Spam folder. I have 48 messages in the month to date. 18 of them are from at least ten domain names matching /^(marketexec|marketexecmail|market-exec)?\d{1,2}\.co\.uk$/. Domains cost money, but not that much, and people do do things like this.
(There’s also the problem of senders aggregating messages from untrusted parties, so that reputation is always a tricky balance. For example, I have four spam messages from gmail.com this month. When I’m not getting bursts like this marketexecmail stuff that started two weeks ago and will probably stop within another couple of weeks, I find that at least a quarter of my spam messages come from gmail.com, outlook.com or hotmail.com; in January, it’s 12⁄46, and at least four more are from other general-public domain names that I recognise.)