JSON Mail Access Protocol Specification (JMAP) (jmap.io)
98 points by alfiejohn_ on Jan 29, 2014 | hide | past | favorite | 61 comments



Unfortunately it includes my biggest gripe about IMAP -- the requirement that messages are given a server-side message number. Maybe breaking with IMAP on this would make interop too hard though.

This requirement is convenient for the client but makes implementing robust servers harder than it need be. If instead each message was identified by a {timestamp,UUID} tuple then multiple MX servers could do final delivery to an eventually-consistent shared store. The requirement for strict, durable message ordering is the one thing that forces hard synchronization among multiple machines doing delivery.

POP3 actually took a step in this direction with the common "UIDL" extension; IMAP4 felt like a step backwards on this.

While I'm on the subject, another less-common POP3 extension I liked was "XTND XMIT". This gave a simple way for the POP3 client to send mail; sadly it never caught on, losing out to plain SMTP submission. Originally, though, that only worked OK because SMTP was open-relay. Once spam put an end to that, it became complicated to send a message when accessing your mail remotely. Then, to solve that problem, we had to graft an auth layer on top of the existing SMTP protocol and upgrade all of the clients and servers to support it. If only "XTND XMIT" had won in the beginning, everybody would be submitting outgoing mail over that already-authenticated channel and so much work could have been avoided.

So anyway, those are my two requests for any "IMAP killer": allow the server to use unique (but non-sequential) IDs for messages and provide a way to submit outgoing messages over the same channel.


I'm in two minds about the UID stuff. There are advantages and disadvantages.

But one thing that you MUST have for sensible synchronisation is the global MODSEQ. Otherwise there's no single state you can use to know if there's new data on the server short of (as you do with POP3) fetching the entire UIDL list every time.

The way we do this with Cyrus replication is to reinject the message if we get a UID clash - so you can have delivery at both ends of a (theoretical - not yet finished) master<=>master replica pair. Upon discovering the mismatch, BOTH messages get new UIDs larger than any yet used, which brings all clients into agreement.

But anyway - our unique identifiers in JMAP can be anything the server would like. We use 'fNUMBERuNUMBER' at FastMail, which is a folder/uid pair - but a more modern server could use UUIDs that don't change for moves between folders. We could even implement it in Cyrus on top of the Digest.SHA1 field if we built a reverse mapping database and deduplicated on it.

But synchronised modseqs... I don't see a way around that. We've been discussing "modseq validity range" stuff - where a delivery logs an intent to use a modseq, and then clears the intent upon commit. Any request that includes a range past that intent needs to wait for the commit to complete or abort.


For anti-entropy in a distributed system, one could use a combination of the following:

1. Get everything created since last sync. A created index (as opposed to updated index) would not need to be resorted for subsequent updates. This would get only new mail.

2. A fixed-size Merkle tree, 8 blocks per level, 7 levels deep, with 262144 blocks at the base. This would be the best tree configuration for syncing from a few thousand up to 40 million emails. It would identify all mail updated since the last connection within 4 roundtrips and with a minimum of bandwidth. The tree should be incrementally updated using XOR at both client and server whenever an update is made. It's possible to use a shallower tree for fewer roundtrips, but it would support fewer emails or require more bandwidth.

3. Push sync. Server would push all creates/updates to client while client is connected.
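The XOR-incremental tree in point 2 can be sketched as follows (a hypothetical illustration, not the commenter's actual code; the fanout and depth follow the numbers above):

```python
import hashlib

FANOUT = 8   # blocks per level, as suggested above
DEPTH = 6    # levels below the root; 8**6 = 262144 leaf buckets

def item_hash(item_id):
    """Hash an email id to a fixed-width integer signature."""
    return int.from_bytes(hashlib.sha1(item_id.encode()).digest(), "big")

class XorMerkleTree:
    """Fixed-size Merkle tree whose node signatures are XORs of the
    item hashes below them. XOR makes updates incremental and
    commutative: touching one email only updates its leaf bucket and
    DEPTH ancestors, and applying the same update twice cancels out."""

    def __init__(self):
        # levels[0] is the root; levels[DEPTH] holds the leaf buckets.
        self.levels = [[0] * (FANOUT ** d) for d in range(DEPTH + 1)]

    def toggle(self, bucket, item_id):
        """XOR an item's hash into its bucket and every ancestor node.
        The same call adds an item or removes it again (x ^ x == 0)."""
        h = item_hash(item_id)
        idx = bucket
        for level in range(DEPTH, -1, -1):
            self.levels[level][idx] ^= h
            idx //= FANOUT

    def root(self):
        return self.levels[0][0]
```

Two replicas then compare roots; on a mismatch they walk down the tree comparing the 8 child signatures of each differing node, batching more than one level per request to stay within the roundtrip budget above.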


Specifically, you want a Merkle tree where the final bucket is based on the lowest N bits of the message arrival time, probably at a granularity of several seconds. In the typical sync case all of the unseen messages will then be clustered in a few buckets, and the Merkle tree will be very efficient.

For instance, if each bucket contains 8 seconds worth of arrival time, then 2^17 buckets mean they'll wrap around only every ~12 days. So if you are syncing after a day away there will be only 8% of the buckets potentially dirty (even if you get an incredible flood of mail!)

Having the bucket assignment based on time also allows you to avoid some of the roundtrip costs when doing frequent syncing. In a single request, the client can speculatively send its hashes of the buckets that are likely to have changed since its last sync time, along with its root hash. The server finds which buckets changed and sends those new bucket contents -- it can then see that the top-level hash must now agree on both sides and you're done in one RT.

Of course it's possible that other buckets have changed in rare edge cases; in this case the top hashes will not match and you have to do the full Merkle descent to resync.
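A hypothetical sketch of that bucket assignment, using the numbers above (8-second bins, 2^17 buckets):

```python
BUCKET_SECONDS = 8       # seconds of arrival time per bucket
NUM_BUCKETS = 2 ** 17    # wraps every 8 * 2**17 s, roughly 12 days

def bucket_for(arrival_unix_seconds):
    """Leaf bucket for a message: low bits of its 8-second epoch."""
    return (arrival_unix_seconds // BUCKET_SECONDS) % NUM_BUCKETS

def dirty_buckets_since(last_sync, now):
    """Upper bound on the buckets that can have changed since the last
    sync -- the set a client would speculatively hash in one request."""
    elapsed = min(now - last_sync, BUCKET_SECONDS * NUM_BUCKETS)
    first = bucket_for(last_sync)
    return {(first + i) % NUM_BUCKETS
            for i in range(elapsed // BUCKET_SECONDS + 1)}
```

After a day away that is 86400/8 + 1 = 10801 buckets, about 8% of the 131072, matching the figure above.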


That's an interesting idea to hash emails to Merkle buckets by arrival time. It may lead to uneven distribution of emails to buckets though, increasing the amount of bandwidth required to sync a bucket in the worst case.

There are other optimizations you can bring in when you sync the tree:

1. If you are working your way down the remote's tree and you notice that your local signature is zero for the equivalent remote's node signature, then you know that your entire subtree is empty, and you can short-circuit and start downloading entire buckets.

2. Conversely, if you are working your way down the remote's tree and you notice that your local signature is present but the equivalent remote's node signature is zero, then you know that your entire subtree has data, but the remote's entire subtree is empty, and you can short-circuit and start uploading entire buckets.

3. As you work your way down sections of the tree, you can start to build an idea of the average email to bucket ratio. If several buckets are only likely to contain at most one email, then you can short-circuit again.
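The first two short-circuits above might look something like this (illustrative only; here both signature trees are in memory, whereas in the real protocol each 'remote' lookup is part of a batched network round trip):

```python
def subtree_buckets(node, level, depth, fanout):
    """Leaf-bucket index range covered by one node of the tree."""
    span = fanout ** (depth - level)
    return range(node * span, (node + 1) * span)

def diff(local, remote, node=0, level=0, depth=6, fanout=8):
    """Return (download, upload) bucket sets needed to sync.
    local/remote are per-level signature arrays (level 0 = root)."""
    download, upload = set(), set()
    l, r = local[level][node], remote[level][node]
    if l == r:
        return download, upload            # identical subtrees
    if l == 0:                             # local subtree empty:
        download |= set(subtree_buckets(node, level, depth, fanout))
    elif r == 0:                           # remote subtree empty:
        upload |= set(subtree_buckets(node, level, depth, fanout))
    elif level == depth:                   # mismatched leaf bucket:
        download.add(node)                 # exchange its contents
        upload.add(node)
    else:                                  # recurse into children
        for child in range(node * fanout, (node + 1) * fanout):
            d, u = diff(local, remote, child, level + 1, depth, fanout)
            download |= d
            upload |= u
    return download, upload
```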


I think the LSB bits of arrival time is actually a really good distribution long-term (remember: we want the distribution to be uneven in the short term, but even in the long term) Maybe you could improve it a little bit by making each bin a prime number of seconds so automated messages sent at the same time every day can hit each bucket eventually. i.e. instead of 8 second granularity, use 7 or 11.


(Author of the spec here)

A fixed-size Merkle tree is a great idea if we wanted to sync the entire set of messages between distributed MX destinations. In fact, we should probably consider that approach for our server <-> server replication. However, JMAP is aimed at client <-> server sync, and there are a number of different issues to consider there. The client may have limited or no cache (a webmail app, for instance, must normally start anew each time, and needs to efficiently fetch just what it needs; if we had to wait for a 60GB mailbox to sync every time we loaded our webmail, well, obviously that's not going to work). A mobile mail client often only wants to download the first few items in the mailbox, given the current sort order, so it can display them to the user.

The JMAP spec allows a client to download, say, just the list of the first 50 messages, newest first, in the Inbox (single round trip). Then, at a later point, it can ask for and receive a space-efficient delta update of what has changed within those first 50 messages, regardless of how big the Inbox really is, or what other messages may have been delivered elsewhere (including lower down the Inbox, but outside the top 50). Again, in a single round trip. This is really powerful: you can see it by comparing the refresh time of a folder in the FastMail mobile web app with the iOS or Android native mail app (over IMAP); the mobile web app is much quicker.
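For a rough idea of those two round trips: each request is a batch of method calls. The method and argument names below only loosely follow the early jmap.io draft, so treat them as illustrative, not normative:

```python
import json

# Round trip 1: the list of the first 50 Inbox messages, newest first.
first_load = [
    ["getMessageList",
     {"filter": {"inMailbox": "inbox"},
      "sort": ["date desc"],
      "position": 0,
      "limit": 50},
     "#1"],
]

# Round trip 2 (later): a delta of what changed within that window,
# using the state string the server returned with the first response.
refresh = [
    ["getMessageListUpdates",
     {"filter": {"inMailbox": "inbox"},
      "sort": ["date desc"],
      "sinceState": "<state from first response>"},
     "#2"],
]

body = json.dumps(first_load)  # the HTTP request body is plain JSON
```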

The other thing to remember is that the metadata about the message is mutable, and needs to be kept in sync as well. Again, the modseq system described in JMAP allows this to happen efficiently. If you have a distributed server system with multiple masters, you will again use a different system to keep those in sync (as they each assign new modseqs, whereas the client does not). However, this too can be dealt with relatively easily: if the modseq for a message differs between two masters, it must be set on both to max(the higher of the two modseqs, the next global modseq on the server that currently has the lower modseq). This will ensure that both end up in the same state, and that the client will refresh its state for that message if it has to, no matter which master it last synced with.
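A toy sketch of that reconciliation rule (the data shapes are mine, purely for illustration):

```python
def reconcile(msg_id, server_a, server_b):
    """Resolve a modseq conflict between two masters for one message.
    Each server is a dict: {'msgs': {id: modseq}, 'highest': int}.
    Both ends converge on the same new modseq, chosen high enough that
    a client which last synced against either server sees a change."""
    a = server_a["msgs"][msg_id]
    b = server_b["msgs"][msg_id]
    if a == b:
        return a
    lower = server_a if a < b else server_b
    # max(higher of the two, next global modseq on the lower server)
    new = max(a, b, lower["highest"] + 1)
    for s in (server_a, server_b):
        s["msgs"][msg_id] = new
        s["highest"] = max(s["highest"], new)
    return new
```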

So essentially, there's a difference in what you want in a protocol for server <-> server synchronisation for distributed mail backends compared to client <-> server sync for mail apps. JMAP is focussed on the latter, but tries to make sure it doesn't include assumptions that severely limit the former. The spec does not require a particular format of message ids: they do not have to be based around IMAP's ascending UIDs (although the sync algorithm for partial mailbox listings may be slightly less efficient if not).


> if we had to wait for a 60GB mailbox to sync every time we loaded our webmail, well, obviously that's not going to work

First, the Merkle tree would just be of the IDs, not the message contents. I.e. the problem it is trying to solve is for the heavyweight client that wants to resync its idea of which message UIDs are contained in a mailbox. Which ones it then requests metadata (or full contents) for is up to it.

It's true that for an ephemeral client (like a webapp) you don't even want to get the whole list of 100K messages in your inbox (although you probably will ask the server for a count of them). However, a sequential message number hardly helps here either. For example, when I click on "sort by date" in my mail client it shows me the messages by sending date, not by what integer IMAP UID they were given.

The problem of "give me the metadata for the top 100 messages sorted by criteria X" is just something that has to be offered by the server. This is no different from features like searching -- if bandwidth were infinite the client could do it, but it's not, so you need to do the work near where the data is stored.

The server will have to maintain data structures to make these kinds of queries efficient. The important point, however, is that these are conceptually just caches of information that is in the mailstore. Therefore there is no need to keep them synced between far-flung servers.

> The other thing to remember is that the metadata about the message is mutable, and needs to be kept in sync as well

Yes, message data and metadata are separate problems. If two servers both try to update the metadata for message X at the same instant, one needs to win (which one is probably not that important). If two servers both try to add new messages to a folder at the same instant, they both need to win, ideally without having to coordinate at all in advance.

For the metadata you can use a modseq, a Lamport timestamp, or probably even just a {wallclock time,server ID} tuple. Assuming good time synchronization is reasonable for a mail cluster. If two clients both try to change message flags within 10ms of each other it probably isn't important who wins as long as somebody definitively does.
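A minimal last-writer-wins merge over such tuples (hypothetical shapes; a modseq or Lamport timestamp would slot in where the wallclock is):

```python
def lww_merge(local, remote):
    """Pick the winning (flags, stamp) pair, where stamp is a
    (wallclock_ms, server_id) tuple. Tuple comparison means wallclock
    ties are broken deterministically by server id, so both sides
    converge on the same winner without coordinating in advance."""
    return local if local[1] >= remote[1] else remote
```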


I've been waiting a bit to reply to this to see if there's a polite way to answer, but there isn't.

You suffer from a disease, and it's far too bloody common amongst people who design IETF standards.

You're optimising a rare case, where in the worst case you would be re-IDing the bulk of the messages delivered in a time period of a few hours, leading clients to have to re-fetch the metadata again (note: it's only the modseq that would change in our design - the UUID is still globally unique and unchanged).

That's no worse than "FETCH 1:* FLAGS" now.

And in exchange for this slight benefit, you would cause EVERY SINGLE CLIENT EVERYWHERE to implement a significantly more complex protocol which requires 4 roundtrips to resync.

Talk about a dog with a very big, very waggy tail.

I'm sorry, but I can't take this suggestion seriously. I've seen too many protocols which are designed with this sort of focus on the worst cases. You have to make sure they don't totally suck - but the important place to optimise is the common case.

...

It turns out there is a way you can gain almost all of the benefits you're looking for - which is delayed MODSEQ allocation. Newly delivered messages can be created with no MODSEQ. When you sync with either another client or server, you allocate a MODSEQ higher than any which the other end has seen.
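A sketch of that delayed allocation (names and data shapes invented for illustration):

```python
class Mailbox:
    """Delayed-MODSEQ sketch: new deliveries get no modseq until a
    peer or client syncs, at which point each unassigned message gets
    one strictly higher than anything the other end has seen."""

    def __init__(self):
        self.modseq = {}   # message id -> modseq, or None if pending
        self.highest = 0

    def deliver(self, msg_id):
        self.modseq[msg_id] = None   # no allocation at delivery time

    def sync(self, peer_highest):
        """Allocate modseqs above both our own and the peer's view."""
        nxt = max(self.highest, peer_highest)
        for msg_id in sorted(self.modseq):
            if self.modseq[msg_id] is None:
                nxt += 1
                self.modseq[msg_id] = nxt
        self.highest = max(self.highest, nxt)
        return self.highest
```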

Which means you only pay an extra cost if a JMAP peer or client connects at both ends during the link downtime.


The reinjection on clash (CSMA/CD for numbers :-) works in close-knit clusters but has issues with network splits. Imagine one server is in SF and the other is in Amsterdam, and a network problem makes them unable to talk for a few hours. In the mean time, mail gets delivered and read on each server. Then they reconnect to each other and you find a storm of conflicts to resolve.

Also, I strongly believe that per-message UUIDs should always start with the delivery time. That way as long as your servers are all reasonably synchronized you can get reasonable "sort by folder order" behavior without any sequential ID.

You're right that the modseq issue is hard, but the Merkle tree suggestion from @jorangreef can address that. Refer to my reply to him for more details.


Some IMAP servers allow you to use a send folder to inject mail to be sent. I think that's a pretty reasonable way to go about it with the way IMAP works, but afaik no client implements it.

Agreed on the messageid thing. I frankly think that IMAP is too messy to try too hard for interop with. Using a relatively similar data model and being able to work correctly with a maildir should be considered enough.

But I love this idea and hope it takes off in spite of that. It's really time to revamp the interface to mail servers to be more friendly to webmail. It'll make a huge difference to the level of choice of webmail providers/software.


Do those IMAP servers allow separate control of the envelope recipients, or do they just take them directly from the message headers? To be fair, POP3's "XTND XMIT" had that flaw as well (IIRC qpopper just passed the message to "sendmail -t").

A proper mail submission command would preserve parity with SMTP by keeping envelope and headers distinct. I just want to leverage the fact that I've already auth'ed the user and shouldn't have to redo that to a separate server.

Also, presumably one of the advantages to the REST+JSON design of JMAP is to make it easy for javascript clients to use it. Since they can't do direct SMTP anyway there will presumably be some REST service that they will use for submission.

It just seems that it should be part of the same standard. Most mail readers are also mail senders.


> Unfortunately it includes my biggest gripe about IMAP -- the requirement that messages are given a server-side message number. Maybe breaking with IMAP on this would make interop too hard though.

I don't understand this criticism; the usual criticism is that IMAP does not require maintaining UIDs (the server can announce UIDNOTSTICKY[0]).

> While I'm on the subject, another less-common POP3 extension I liked was "XTND XMIT". This gave a simple way for the POP3 client to send mail

https://news.ycombinator.com/item?id=4826447

> IMAP has actually been extended to support that feature (not just kind of randomly, but by the IETF "Lemonade" working group), even allowing (in concert with other extensions) compositing a message from parts (such as attachments) on the server without having to download them first to the client.

> However, and I think this is actually more relevant in this context, Mark Crispin was an "opponent". (Note: the following e-mail snippet by Mark Crispin, was written years after the Lemonade Submit proposal expired, and thereby should be considered to already take into account that context.)

>> For many years, there have been various proposals to add mail sending capabilities to mail access protocols such as POP and IMAP.

>> These proposals are always strongly opposed. It is one of the "attractive nuisances" of email protocols. The value of the capability is obvious to many people, but the high cost of having it in POP or IMAP is much less obvious.

>> I am one of the opponents. For the past 25 or so years we have been in the overwhelming majority. It is quite unlikely that this consensus will change. If anything, it has become stronger in recent years.

>> […]

>> If you're not really that curious, suffice it to say that the people who design the protocols and systems understand the attraction but have excellent reasons not to do it.

http://mailman2.u.washington.edu/pipermail/imap-uw/2009-Janu...

[0] http://mailman2.u.washington.edu/pipermail/imap-protocol/200...


Mark Crispin was wrong. The underlying model of IMAP is strong, but a lot of the complexity of mail access comes from clients working around things which are either broken or missing in the protocol.

Their reasons aren't excellent, they are some theoretical notion of purity which leads to things like the "MOVE" capability taking years to finally happen, and everyone implementing copy+store+expunge independently and poorly in the meanwhile.

He's dead now, so he can't tell me I'm a feckless Gen X with no clue how to do real protocols - but there's a reason why proprietary protocols like ActiveSync get traction, and it's not because you need to pay a licence fee - it's because they actually solve problems that need solving.


(Author of the spec here)

We will be adding sending to the JMAP spec, although support may be optional. The way it currently works at FastMail is a bit too specific to our architecture though, so we want to clean it up before adding it to the spec.


>>> These proposals are always strongly opposed. It is one of the "attractive nuisances" of email protocols. The value of the capability is obvious to many people, but the high cost of having it in POP or IMAP is much less obvious.


Lucky I'm not considering adding it to either POP or IMAP then. :)


Good luck with your doom-repeating experiment, I guess.


> the criticism is usually that IMAP does not require maintaining UIDs

Yes, exactly. UIDs should be sticky, but they should also be allocatable by widely distributed nodes without the need for central coordination.

If your UIDs are 32-bit monotonically-increasing integers then this is impossible. If they are 128-bit random numbers you get it for free. If you prefix them with a timestamp you even get reasonable ordering.

The entire UIDNOTSTICKY problem is a result of IMAP UIDs being so restricted.
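A sketch of such an id (the layout is my own choice for illustration: a 48-bit millisecond timestamp prefix, then 80 random bits):

```python
import os

def new_message_id(now_ms):
    """128-bit message id any node can mint without coordination:
    a 48-bit big-endian millisecond timestamp followed by 80 random
    bits. Byte-wise (and hex) ordering approximates delivery order."""
    return now_ms.to_bytes(6, "big") + os.urandom(10)
```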

I can see why a proposal as complicated as Lemonade failed to get traction, but I find the argument pretty thin. The "attractive nuisance" was the old open-relay SMTP infrastructure. If you were writing a POP client in 1991 you wouldn't think twice about sending mail since you just had to hit a SMTP server (didn't even have to be the "right" one!) and the mail would get delivered. When this all came crashing down far more effort got put into authenticated SMTP (talk about not separating concerns!) than would have been required to just get message submission right in the first place.


You could `sha1` the messages, similar to how `git` deals with blobs, trees, etc?


HTTP itself has an overhead; I'm skeptical that changing a persistent IMAP connection into a request/response model with a different serialization format is going to be more efficient, especially given compression (like COMPRESS=DEFLATE).

IMAP4rev1 is one thing, but there are many extensions supported in modern imap servers to speed up syncing and mailing, like CATENATE/BURL/CONDSTORE/MULTIAPPEND/MOVE/etc

Granted, the IMAP protocol is pretty hairy and difficult to work with. On the other hand, there's a huge ginormous deployment of it on the client and server, and the IETF WG behind it. I doubt JMAP will replace it anytime soon unless the WG itself takes up the issue. And given the neckbeards there ;-), some of whom have been working in IMAP for decades, you face the same opposition to change as those trying to legalize drugs or gay marriage. ;-)

We're still trying to get IPv6 deployed!


> I doubt JMAP will replace it anytime soon unless the WG itself takes up the issue.

Maybe. I'm curious what the effect would be if a single mobile platform provider (iOS, Android) supported a new protocol with a single main free mail provider. Specifically, Gmail and Android could possibly push major change through.

Look at SPDY, it was originally an experiment with Chrome and only a subset of Google services, and is now being deployed to other properties and iterated on rapidly in other browsers, too.


That's why a major focus of JMAP is to be able to be put in front of an IMAP server with a proxy (and it takes advantage of CONDSTORE if present - we built from an IMAP base)

Honestly, if it's not significantly nicer to work with than IMAP, it won't see traction, and that's fine.

Mind you - MOVE took bloody long enough in IMAP, I was arguing for it for years before it finally got support behind it. There's still not much support for SUBMIT via IMAP though, which means multiple connections via different protocols to do basic emailing, with all the support fun that entails.

Anyway - worst case, we still have a better protocol for our own stuff, and that's not nothing.


HTTP can be made persistent too, both with good ol' KeepAlive and with recent innovations like SPDY. Massive savings in latency, especially when SSL is involved.


One of the main efficiency gains is in massively reducing the number of round trips required to do a series of operations. This batching of operations is hugely important on high-latency connections, relevant for us in Australia all the time but also of importance on mobile connections the world over. Even 4G networks have relatively high latency, but can do a burst transfer pretty efficiently.


> One of the main efficiency gains is in massively reducing the number of round trips required to do a series of operations.

Er…

> 5.5. Multiple Commands in Progress

> The client MAY send another command without waiting for the completion result response of a command, subject to ambiguity rules (see below) and flow control constraints on the underlying data stream. Similarly, a server MAY begin processing another command before processing the current command to completion, subject to ambiguity rules.

IMAP already supports batching.


A binary protocol running over persistent socket would have fewer round trips and parsing latency. HTTP and SPDY can't compete with that, because they introduce too much header overhead and complexity.


(Author of the spec here)

The protocol could just as easily go over a WebSocket as HTTP (you'd have to change the authentication mechanism, but that would be trivial). The only reasons we don't do that at the moment are:

1. Browser support (we support back to IE8, unfortunately).

2. Lack of GZIP compression (particularly of responses from the server); this was being worked on last time I looked, but I haven't checked recently to see if it's been implemented yet. This is crucial for saving bandwidth.

That would get rid of your HTTP overhead; I will put some mention of that in the spec (I think HTTP support should be required, and WebSocket support optional).

As for JSON vs. binary:

* There are libraries to use JSON everywhere – binary protocols would require everyone to implement another parser, adding extra difficulty for adoption and a surefire source of bugs.

* JSON is readable over the wire; binary is not. Trust me, in the real world this is a huge advantage for debugging all sorts of things.

* JSON can be used trivially in a web app. Apparently this World Wide Web thing might catch on.

* The parsing overhead of JSON is not too large; yes, it takes longer than a binary protocol, but it hasn't been a bottleneck for us, even on mobile.


Persistent connections are tough on patchy mobile networks.

Parsing latency isn't really an issue; all this stuff is IO bound anyway. CPU cycles are cheap.


If persistent connections are patchy, then HTTP is no further ahead.

A sure-fire way to decrease latency is to send as little as possible.

An empty default browser HTTP request is likely to cost you around 500 bytes in headers, before you have added any headers or data of your own. Contrast that with a binary command which can be 1 byte. And then multiply this by hundreds of requests. CPU cycles on mobile are not cheap. It makes no sense to parse 500 bytes (and then GC this later) when you don't have to.


Even on servers, parsing isn't cheap. Look at nginx code or other high-performance parsers. All sorts of little tricks to compare 4 or 8 bytes at a time to determine the verb and whatnot.


This is similar to what we're using at FastMail for our web interface, but simplified and made a bit more generic based on our experiences over the past year.

Our plan is to port our web interface across to this spec and implement both a direct backend in Cyrus IMAPd and a standalone implementation which can proxy for existing IMAP servers.


Keep up the good work! 6-year customer here. Notwithstanding some lost features compared to the old UI, your new web interface really lives up to your company's name. Everything is fast and snappy even when I'm accessing it from halfway around the world.

But I'm not sure it's a good idea to emphasize the fact that you use JSON. Sure, it's cool right now, but I find it technically more interesting that it replaces IMAP with something that is built on top of HTTP, a somewhat RESTful protocol.


You need some form of structure unless you want a zillion roundtrips. JSON is cool because it provides the basic structures (scalar, list, map) in a consistent and reliably parsable form. You don't need a context-aware 50 page ABNF to be able to parse it cleanly.

(don't even get me started on trying to write a proxy for IMAP that's aware of the connection state)


Interesting step, but the decisions taken around concurrency seem to assume small-scale, single-user use cases only. I would be the first to admit I haven't given this enough thought yet, but to me it seems that a global lock and MODSEQ make distributed usage etc. very difficult. Isn't this a step backwards?


No. See my answer to jorangreef above, but essentially this protocol is for server <-> client sync, but you can still do efficient server <-> server sync for distributed backends.


Call me curmudgeonly, but why replace a protocol that works? Instead, roll up a library to provide the functionality...

Those things called standards generally find their way through bodies like the IETF. Maybe I missed it while reading on my phone, but I don't see this submitted as an IETF draft...


IMAP works, but not particularly well for a lot of modern tasks. It's highly stateful, requires a fair amount of knowledge on the part of the client to make things work correctly, and too many extensions are optional, requiring a lot of fallbacks for when functions are missing. Implementing a client or a server well is difficult.

JMAP isn't the be-all and end-all of mail protocols, but it goes some way to addressing some of the problems in IMAP. We're using it in production right now at FastMail, so we already know it's pretty good as a low-latency, stateless protocol.

As for the IETF (and standards bodies in general), they're useful in their place, but they're also a slow-moving beast and they've published a great many "standards" that have either never been implemented or are just rubbish.

But there's nobody forcing you to implement it. It's there, it's documented, and we're encouraging others to get involved to make it work. If others get on board then we'll see improvements everywhere. If not, then FastMail will continue to use it and get the advantages that come from it.


Read the IMAP specs. If you still don't want to replace it, I can't help you. When I read it the first time, I wanted to hit people in the face.


Because AFAIR, you need to point to two working implementations of a spec before turning it into an RFC.


Well, no, first you discuss the idea on a Working Group mailing list, which is like HN – you get lots of great negative feedback. When you have fixed all your initial bugs, you write a draft RFC, with which implementers can create independent implementations. If there isn't anyone willing to do this, it's a sign that the idea is not wanted enough, i.e. not widely useful enough to warrant an RFC in the first place. The implementers will find bugs. You fix those, and discuss some more. Eventually you, the mailing list and the implementers will be happy with the standard, at which point you move to have it published as an official RFC.

Anyhow, this is all for a “Standard track” RFC. An “Informational” or “Experimental” RFC has no such requirement, and can basically be submitted by anyone for anything.


Because JSON is cool!


Because IMAP doesn't work.


Does JMAP have an equivalent to IMAP IDLE? I.e. the ability for the server to notify the client when an email arrives. This means clients don't need to poll, and messages are seen instantly.

For me instant notification of inbound message is a must-have.


We're using eventsource push - just to say "you need to poll for changes now". I think that's cleaner than trying to push the actual changes.

Our actual push does include some extra detail, but the important bit is just "here's a new state that exists on the server - if you haven't already heard of it, you probably should poll for changes now".

It's out of band(ish) - because JMAP is connectionless. And because depending on which platform you're using, there might be a nice channel for push notifications already.


How about documenting the EventSource interactions?


We probably will publish this as well. We're focussing on the core API first. Having said that, it's really very simple. Here's the entirety of our (currently internal) spec for push events at FM:

This is a text/event-stream resource, as described in http://www.w3.org/TR/eventsource/. The following events are pushed:

- progress: Sent during a long-running api method call to let the client know not to timeout the connection. The data is a JSON object with a single property: `connectionId`, with the id the client sent for that connection. This event does not set a new id in the event source stream.

- push: Sent whenever the user's highest modseq changes. The event id is the new highest modseq. The data for the event is a JSON object, with the following properties:

  * **clientId**: `String` (optional). If the change was due to an action initiated by
    the API, the `X-ME-ClientId` header will be echoed back as this property. This
    allows clients to ignore push events for changes they themselves have made.

  * **mailModSeq**: `Number` (The highest modseq for a mailbox)

  * **contactsModSeq**: `Number` (The highest modseq for contacts/contact groups)

  * **calendarModSeq**: `Number` (The highest modseq for calendar events).
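A minimal parser for that stream shape, as a sanity check of the format above (a browser client would just use EventSource; this is purely illustrative and ignores some corner cases of the full spec):

```python
import json

def parse_event_stream(raw):
    """Parse a text/event-stream payload into (event_name,
    last_event_id, data_dict) tuples. The last event id persists
    across events, so a `progress` event (which sets no id) leaves
    the id from the previous `push` event in place."""
    events = []
    name, event_id, data = None, None, []
    for line in raw.splitlines() + [""]:
        if line == "":                       # blank line ends an event
            if data:
                events.append((name, event_id,
                               json.loads("\n".join(data))))
            name, data = None, []
        elif line.startswith("event:"):
            name = line[6:].strip()
        elif line.startswith("id:"):
            event_id = line[3:].strip()
        elif line.startswith("data:"):
            data.append(line[5:].strip())
    return events
```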


Interesting. What is the reason for grouping method calls together? Is this to reduce the number of requests sent to the server? If not, is there no way of providing a higher level operation instead of bundled discrete operations?

In any case, I think you could have something a lot closer to REST, even if you carry on with grouping calls together.

Something like:

    [
        [ "GET", "/messages", { "search": "foo" }],
        [ "GET", "/mailboxes", { "etag": "bar" }]
    ]
Obviously, you're not limited to HTTP verbs if you do it like this, but it's a familiar metaphor.


(Author of the spec here)

REST is way too slow. The JMAP format massively reduces the number of round trips, especially when you have sequential operations where you must wait for one to finish before the next can happen. And once you're not doing REST, I don't personally think putting HTTP verbs in the method calls makes anything clearer; it's just likely to confuse matters.
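
For comparison, the batched shape in the jmap.io draft is a JSON array of `[methodName, arguments, clientCallId]` triples, so several calls travel in one HTTP round trip (the method names come from the draft; the arguments here are illustrative):

```javascript
// One HTTP POST carries multiple method calls; the server executes
// them in order and returns one tagged response per call, matched up
// via the client-chosen call ids ("#0", "#1").
const request = [
  ["getMailboxes", {}, "#0"],
  ["getMessageList", { filter: { inMailbox: "inbox" }, limit: 10 }, "#1"]
];

// With plain REST these would be two sequential HTTP requests.
console.log(JSON.stringify(request));
```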


> The JMAP format massively reduces round trip times, especially when you have sequential operations where you must wait for one to finish before the next can happen.

I hear you (REST and batch operations don't mix too well). But wouldn't it be possible to simply provide higher level operations, grouping several of these commands?

> And once you're not doing REST, I don't personally think putting HTTP verbs in the method calls doesn't really make it any clearer, and is just likely to confuse matters.

As I said, it wouldn't necessarily be restricted to HTTP verbs (eg, using 'create' and 'update' instead of 'POST' and 'PUT'), but I do like the idea of separating the resource you act on from the action on this resource.


I haven't taken the time to have a proper look at it yet, but nothing stops you from doing batch REST operations on collections. You can do compound documents with references, à la jsonapi.org. One thing I would consider essential, though, is using RFC 2518 MOVE, COPY, etc.; you really want to do those server-side.

Do you have an example of the type of operation that would map poorly to REST?


IMAP is mostly slow as hell when dealing with large mailboxes. For example, fetching the 10 most recently updated threads (for any sane definition of a thread/conversation, which IMAP doesn't deal with at all) is insanely slow: it entails scanning the whole mailbox to figure out what the threads are and when someone last posted to them. That's not fast when a mailbox can contain millions of mails. Is that the kind of problem JMAP might solve?
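
A JMAP-style version of that query might look something like the sketch below, with the thread collapsing done server-side. The `getMessageList` method name is from the jmap.io draft, but treat argument names like `collapseThreads` as illustrative:

```javascript
// Ask the server for one entry per thread, newest first, first 10.
// The expensive per-thread bookkeeping lives on the server instead of
// the client scanning the whole mailbox.
const request = [
  ["getMessageList", {
    filter: { inMailbox: "inbox" },
    sort: ["date desc"],
    collapseThreads: true,  // one result per thread, not per message
    position: 0,
    limit: 10
  }, "#0"]
];
console.log(JSON.stringify(request));
```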

Also interested to hear if you know anything about GMail's JSON API. Surely they must have something similar to JMAP that their web client is using, even if the API is proprietary?


I'm a big proponent of everything-behind-http because of the way it introduces a common language for integrating disparate services.

However! as someone who has had to debug some particularly gnarly email issues in the past I really do quite like the ability to telnet into an IMAP server to dig around. I also have an appreciation of some of the more horrific things that can go wrong...

So my point is that I'm not sure I agree with "generally much easier to work with than IMAP" - there are two types of "working with". One is building a new client/server from scratch - I'd agree this might be easier, but it's not something that's done very often. The other is being able to quickly investigate issues with hand-crafted queries, and here a simple plain-text question-answer interface is better - so IMAP wins.

Maybe I'm just getting old - it's probably true that almost everyone is happier with HTTP than "telnet-able" protocols (SMTP, IMAP, Redis, etc.) nowadays.
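
For what it's worth, hand-crafted queries remain possible over HTTP with curl; the endpoint, token, and payload below are placeholders, not from the JMAP spec:

```shell
# Build the request body by hand and sanity-check the JSON locally...
BODY='[["getMailboxes", {}, "#0"]]'
echo "$BODY" | python3 -m json.tool

# ...then fire it at the server (placeholder URL and token):
# curl -s -X POST https://mail.example.com/jmap \
#      -H 'Content-Type: application/json' \
#      -H "Authorization: Bearer $TOKEN" \
#      -d "$BODY"
```

It's not quite the interactive back-and-forth of a telnet session, but it is scriptable and repeatable.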


Let's reinvent everything badly in JavaScript.


I maintain an IMAP client library for PHP (https://github.com/tedivm/Fetch), which is far more painful than it really should be. I'm genuinely happy to see attempts at a more modern approach.


As a complete aside, what is the language used under the section "getMessageList"?


That's just their own pseudocode language


Any new access protocol for mail should be designed to work over persistent secure socket, not over HTTP. Additionally, there should be a minimum of work imposed on the server by the client.


See the comments above about mobile networks and persistent sockets.


I swear I was also thinking of inventing a JSON-based replacement for IMAP just last week.


Looks very useful.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: