Microsoft REST API Guidelines (github.com/microsoft)
419 points by excerionsforte on Nov 22, 2019 | 161 comments



Every single person writing a REST API should have to memorize this table:

GET - Return the current value of an object, is idempotent;

PUT - Replace an object, or create a named object, when applicable, is idempotent;

DELETE - Delete an object, is idempotent;

POST - Create a new object based on the data provided, or submit a command, NOT idempotent;

HEAD - Return metadata of an object for a GET response. Resources that support the GET method MAY support the HEAD method as well, is idempotent;

PATCH - Apply a partial update to an object, NOT idempotent;

OPTIONS - Get information about a request, is idempotent.

Most importantly, that PUT is idempotent.
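To make the PUT/POST distinction concrete, a minimal sketch using Python's requests library (the API and endpoints are hypothetical):

    import requests

    BASE = "https://api.example.com"  # hypothetical API

    # PUT is idempotent: replaying the same full representation
    # leaves the server in the same state, so retries are safe.
    requests.put(f"{BASE}/users/42", json={"name": "Ada", "role": "admin"})
    requests.put(f"{BASE}/users/42", json={"name": "Ada", "role": "admin"})  # harmless replay

    # POST is NOT idempotent: each call may create another resource,
    # so a blind retry after a timeout can create duplicates.
    requests.post(f"{BASE}/users", json={"name": "Ada"})
    requests.post(f"{BASE}/users", json={"name": "Ada"})  # likely creates a second user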

Credit to arkadiytehgraet for retyping the table to be readable. Please give them an upvote for the effort.


Every single person writing a comment on Hacker News should have to memorize this rule:

Never use code formatting for text, as it makes it unreadable on mobile devices.

For my own (and others') convenience, here is the unformatted rule table from the parent comment:

GET - Return the current value of an object, is idempotent;

PUT - Replace an object, or create a named object, when applicable, is idempotent;

DELETE - Delete an object, is idempotent;

POST - Create a new object based on the data provided, or submit a command, NOT idempotent;

HEAD - Return metadata of an object for a GET response. Resources that support the GET method MAY support the HEAD method as well, is idempotent;

PATCH - Apply a partial update to an object, NOT idempotent;

OPTIONS - Get information about a request, is idempotent.


> Never use code formatting for text

Until such time as the site supports quoted text, a little civil disobedience is warranted. It's not like it's a hard problem.


Using code formatting for quoted text is not civil disobedience, it is an innocent mistake that should be gently pointed out and corrected.

Continuing to use code formatting after being made aware of the problem wouldn't be civil disobedience either. It would merely inconvenience the large number of people who read HN on mobile devices, for no purpose.

(Edited to remove a somewhat unrelated philosophical note.)


> Continuing to use code formatting after being made aware of the problem wouldn't be civil disobedience either. It would merely inconvenience the large number of people who read HN on mobile devices, for no purpose.

We all do it from time to time; even you did it a few days ago [1]. Monospace has its purpose; maybe HN should simply fix it.

[1] https://news.ycombinator.com/item?id=21436522


Oh come now. I used code formatting quite deliberately in that comment. I wanted to preserve the centering and single-spacing of the text on the back of the radio as best I could. The lines are short and render fine on a mobile device.

So yes, given HN's meager formatting options, monospace is good for that! :-)

You may note that later in that same comment there is a longer quote that I formatted in non-code italics.

But hey, if I encouraged you to create an HN account, that's great! Speaking as someone who writes code in a proportional font, we may have some interesting discussions ahead of us. ;-)


It is a sign of an unmet need. Why do we even have code formatting? If minimalism is the goal then remove that overwrought feature too.


> Why do we even have code formatting?

Because once upon a time the “H” in “HN” meant something.


> Until such time as the site supports quoted text, a little civil disobedience is warranted. It's not like it's a hard problem.

Italics work fine for quotes IMO.


____

Italics work fine for quotes IMO

____

I propose using four underscores as a delimiter of a quote block, like asciidoctor, in addition to italics. I think it looks better than the markdown way of using ">" in front of a quote as it can get quite cumbersome and ugly when you're quoting multiple lines of something. However, I generally enjoy using markdown more if it works on a website. It just looks ugly IMO if it doesn't.


the markdown way of using ">"

I think you'll find the practice of prefixing quoted lines with ">" originated with old Unix email and Usenet.


I’m rather young (on the border of Gen Y and Gen Z) so I guess I’m not old enough to have used either old Unix email or Usenet, so thank you for that tidbit.


No problem. Wasn't meant to be sarcastic, but reading back it may come across that way.


It didn’t come across like that at all. I found it very educational actually as I imagine that’s where Gruber/Swartz got the syntax from.


Another rule to memorize: if one of the reasons for using code formatting is to preserve line breaks, simply add an extra blank line between paragraphs. Think of each line as its own paragraph. Then the text will be readable on any device.


That’s still super annoying. An annoyance shared with markdown (which at least has the double space at end of line trick, which is super ugly).


As someone who mostly uses mobile, I agree with you, but in this case I couldn't figure out a way to make it readable without code formatting and without retyping it.

Thank you for the effort. I've updated my post with your text.


Expected caching behaviour is as important as idempotency IMHO, and too often ignored.


There should be an option for deleting many objects. For that, you have to do a POST request on AWS S3:

https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteOb...

Instead of polluting existing method names to mean new things, HTTP could offer more method names.

Having POST alternatively mean "send a command" makes it meaningless. The command could do anything.
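For reference, a sketch of that S3 bulk delete through boto3 (the bucket and keys here are made up); on the wire it is a single POST to the bucket's ?delete subresource:

    import boto3

    s3 = boto3.client("s3")

    # Delete up to 1000 keys in one request; boto3's delete_objects
    # wraps S3's "POST /?delete" multi-object delete operation.
    response = s3.delete_objects(
        Bucket="example-bucket",  # hypothetical bucket
        Delete={
            "Objects": [{"Key": "logs/a.txt"}, {"Key": "logs/b.txt"}],
            "Quiet": True,  # only failures are reported back
        },
    )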


> There should be an option for deleting many objects.

What about fetching many objects or creating many objects or updating many objects?


That's right, that's the same issue. But in both cases, you can just send your list of items to fetch (respectively delete) in the body of the GET or DELETE request. (Because AFAIK, it's HTTP compliant to put a body in a GET request, even if it's rarely used).

Edit: I was wrong. It's allowed to put a body in a GET request, but it isn't OK to use it according to the HTTP spec (which is admittedly kind of weird). Source: https://stackoverflow.com/a/983458

Edit2: according to an edit in the SO response, it looks like I wasn't actually wrong; or at least not since 2014, when these RFCs [1][2] became the standard for HTTP.

[1]: https://tools.ietf.org/html/rfc7231#section-4.3.1

[2]: https://tools.ietf.org/html/rfc7230#section-3.3


A body is and has always been syntactically permitted on any HTTP request.

However, it is semantic nonsense to put it on a DELETE or GET.

DELETE is defined to remove the resource identified by the URL; GET is defined to fetch the resource identified by the URL. There is no purpose for an entity.


This is the point made in the SO answer I quoted, and this was the official position of RFC 2616 [1], but as I said in my second edit, this rule was relaxed in the newer version of the specification: RFC 7231, published in 2014.

[1]: https://tools.ietf.org/html/rfc2616#section-9.3

[2]: https://tools.ietf.org/html/rfc7231#section-4.3.1


It looks more or less equivalent to the previous situation.

> A payload within a GET request message has no defined semantics


It went from

> The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI.

To:

> A payload within a GET request message has no defined semantics

Which, to me, sounds like: “it was forbidden to put info in the body, and now it's just not especially encouraged”. But maybe I'm overthinking the whole thing.


Seems like you could just use DELETE with query params on the collection path. Is that not RESTful?


It doesn't really scale. There's a de-facto limit of 2,000 characters [0] in a URL. HTTP/1.1 is not opinionated about URL length, but practical implementations are.

If your ids are UUIDs, each has a length of 36. If your URL consisted of nothing but UUIDs, you could safely delete 55.5(repeating) of them.

[0]: https://stackoverflow.com/questions/417142/what-is-the-maxim...


I usually implement this as an HTTP DELETE with a JSON array of ids in the body.

Probably in violation of some standard, but seems sensible enough.
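A minimal sketch of that approach (hypothetical endpoint); a DELETE body is syntactically legal HTTP, but since the spec assigns it no semantics, some proxies and servers will drop it:

    import requests

    # ids of the items to delete, sent as a JSON array in the body
    ids = [
        "1b9d6bcd-bbfd-4b2d-9b5d-ab8dfbbd4bed",
        "6ba7b810-9dad-11d1-80b4-00c04fd430c8",
    ]
    resp = requests.delete("https://api.example.com/items", json=ids)
    resp.raise_for_status()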



PATCH on the collection.


We don't need a profusion of operations in the core protocol. Do you need idempotence or not? Pick from one of these two things and elaborate on top of that.


Having "create many", "update many", and "delete many" is three additional operations. Profusion may not be the word I'd use. Also, I will say they are very commonly requested operations, and they've led to POST becoming overloaded in meaning.


Alternatively, in a GraphQL server, everything becomes POST and it all becomes meaningless.


If you don’t need it to be atomic, you can just make parallel requests. Should not be an issue with HTTP/2


Except now your server has to handle a lot more connections.

Was recently dealing with a list of 100+ simple text fields each with unique identifying meta info.

Most usage of the form would only involve updating a few at a time, but on initial entry or a large update all of them could be updated or deleted.

A RESTful approach would have me submit 100+ individual PUT requests. However, a smaller instance of our server would likely be affected by hundreds of requests at a time, and it would significantly complicate the client code to make all of those requests and do the error handling for them.

It’s actually a small payload, even with 100 items, so it’s ridiculous to send them individually in order to be “correct” if it

- leads to a worse user experience,

- increases load on the server,

- complicates the codebase.


What advantage is there to having a bulk operation vs many operations?

With HTTP2 you can send a ton of requests in parallel.


>What advantage is there to having a bulk operation vs many operations?

For one, having it be transactional.

>With HTTP2 you can send a ton of requests in parallel.

That's orthogonal. And they still have a cost (in time and size of transfer).


HTTP/2 has efficiency gains at the transport layer but not typically at the application tier. Having a single HTTP request lets you do things like authentication, fetching common state (the user object), etc. a single time for a batch request. HTTP/2 doesn’t help you here.


A lot of sites might have an HTTP/2 upstream server, but everything downstream is still communicating via HTTP/1.1. I would imagine you can only get the benefits of HTTP/2 multiplexing if you fully utilize HTTP/2 end-to-end.

That means your CDN, your load balancers, and finally your actual applications -- and any other intermediaries -- all communicating via HTTP/2.

Not to say that HTTP/2 isn't the solution, but figuring out whether you actually get its benefits is going to be slower than looking up "number of sites using HTTP/2", since you can't easily inspect the behavior of the intermediary servers.


Getting onto H2 is worth investing in, then, and it drastically simplifies things that would otherwise need bulk operations everywhere.

I'm not entirely sure what you mean by downstream, but in my experience H2 is really well supported almost everywhere these days. Ymmv ofc


What I mean is, typically, there are many layers between an end user's network request and reaching the server where it will be fulfilled. For example, you might have your site fronted by Cloudflare. They provide DDOS protection, etc. That Cloudflare server might accept HTTP/2 requests, and then open a new request to your load balancer server. Then, your load balancer opens another network request to the server fulfilling the request. The connections between these servers may only support HTTP/1.1. So you lose the benefits of HTTP/2 since you can't multiplex the network request end-to-end.


In this particular example (S3), I don't think this is really a concern.

Anyway, I'm curious... what components do you run into that don't support this yet? At least in your examples I can't think of any major players that don't do HTTP/2 out of the box, except, well, old versions.

Is this more of a hypothetical? In my experience HTTP/2 is pretty much ubiquitous for anything current.


There are environments that do not allow http2 or websocket traffic on the network. Most government/big enterprise have these restrictions.


The load balancer could append a header though with a session ID that is the same for the same HTTP/2 session, and then the server can batch based on the session ID and timeframe.


Have you ever heard of n+1 queries, the common antipattern that back-end developers try to avoid? What you are suggesting would create that pattern on the front-end too.


Yes, I understand the concept. I totally get that certain situations call for batching operations, including things that need to work transactional.

I think though that most operations don't need this. It's not an optimization I would blindly add to any operation, unless there's a specific case that warrants it.

I don't think that this is an anti-pattern. Most APIs do '1 thing at a time'.


The backend can batch deletes made in the same timeframe in the same HTTP/2 session, like how you can batch queries with Facebook Dataloader in GraphQL APIs to avoid the n+1 query problem.


Careful with PATCH - some enterprises block PATCH requests (had some clients that did this)


I haven't yet encountered the horrible, deplorable organization that both MITMs its SSL traffic and filters on HTTP method--but I know it's out there, just waiting to ruin my day and diminish my faith in humanity.


True. Some firewalls will add/remove headers as well. An easy way to lose a day debugging network traces.


I wondered recently - where do large queries fit into this? URL encoding has size limits and is inefficient - but you can't use a GET body.


If you want to be purely RESTful, you'd POST your request with all the parameters, which would create a new query resource and return the query resource ID. You would then make a second GET request with just the query ID.

The GET would be cacheable because the query would be repeatable and idempotent. In theory you'd never have to make the POST more than once unless you are creating a new query, which would make a new ID.
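A minimal sketch of that two-step flow in Python (requests library; the endpoints and field names are hypothetical):

    import requests

    BASE = "https://api.example.com"  # hypothetical API

    # 1) POST the query definition; the server stores it and returns an id.
    created = requests.post(f"{BASE}/queries", json={
        "filter": {"name": "Milk"},
        "sort": ["-created_at"],
    })
    query_id = created.json()["id"]

    # 2) GET the results by id; this request is repeatable and cacheable.
    results = requests.get(f"{BASE}/queries/{query_id}/results")
    print(results.json())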


That would be an operation in this API definition.

See section 13.


I use POST for complex queries for this reason. Some things aren't easily URI encoded. Easier to just post JSON. Not pure REST, but I'm not really seeking purity...


Yea that's what everyone does but it seems like a pretty big oversight in REST


It's not an oversight in REST, it's one of the ways of trying to faithfully implement REST over HTTP, which has a gaping hole in the standard set of methods that messes with that use case. While HTTP (specifically 1.1) is an implementation of REST principles, that use case wasn't front-and-center at the time (PATCH is also a recent fix to another hole.)

There have been proposed standardized fixes such as generalizing the HTTP SEARCH method that first appeared in WebDAV[0], but none have yet gone far.

[0] https://tools.ietf.org/html/draft-snell-search-method-01


I suppose you could POST the query object and receive a 201 Created with Location header and then retrieve the results via GET?


Or complex nested queries, such as those frequently created in a CRM specifying many key value pairs with different comparators: =, >, <, !=, LIKE. Add complex sorting instructions, pagination instructions, etc.


Technically, GET requests allow a body in the HTTP spec, if I remember correctly. It's just that most implementations don't support it.


GET and POST seem a *lot* easier.


How do any of these HTTP verbs interact with HTML? When you have a pure HTML site (no JavaScript) how do you use these HTTP verbs when all you have is <a> links?

For example, how do you have an up-vote link? What verb is that supposed to be? And links are all GET, aren't they? Up-vote isn't idempotent. How do you make a correct up-vote link?

How do we have this disconnect between HTTP verbs and HTML? How did it work before AJAX?

(I'm not a web developer.)


POST is effectively the catch-all verb. There's nothing that violates the HTTP spec in having a site with only one endpoint, `POST /`, that specifies the actual JTBD somewhere in the request body. It's simply nasty and unidiomatic.

c.f. https://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html

User agents still don't support PUT et al. in forms without JavaScript -- you're limited to GET/POST, c.f. https://developer.mozilla.org/en-US/docs/Web/HTML/Element/fo...


> User agents still don't support PUT et al. in forms without JavaScript

So why were they in the original HTTP spec if they weren't supported by original HTTP agents?


> So why were they in the original HTTP spec if they weren't supported by original HTTP agents?

Bad assumption.[1]

HTTP/1.0 spec (1996) defined GET, POST, and HEAD.[2]

HTTP/1.1 spec (1997) defined all of the other, newer methods.[3]

> supported by original HTTP agents

The first web browsers I know about were Mosaic (1993) and Netscape Navigator (1994) predating the HTTP/1.0 RFC.

[1] https://stackoverflow.com/questions/41411152/how-many-http-v...

[2] https://tools.ietf.org/html/rfc1945#section-5.1.1

[3] https://tools.ietf.org/html/rfc2068#section-9


What I still don't understand is why they didn't add support for them to forms.


They were only added in HTTP 1.1, and its initial drafts had even more variants that were clearly for file management: https://tools.ietf.org/html/draft-ietf-http-v11-spec-00#sect...

This is before me being old enough to follow technology, so below is a good portion of conjecture and hearsay (I'd love to be corrected by someone who actually remembers it):

Tim Berners-Lee's initial web browser was intended as an editing tool as well as a viewer, and these ideas came back when they went to write up HTTP as a "proper" IETF standard. It was not uncommon (a bit less so today, but back then especially) to design specs around what the authors thought the protocol should do, not what they'd tried/done in practice. Making an official spec is the chance to get this stuff in.

At some point, a bunch of the file-specific stuff was split off to WebDAV as an extra thing (work on that started in 1996; HTTP 1.1 was finished in 1997) - or was it rejected, with WebDAV created as a new home for those ideas? But PUT, among some others, survived.


You’d use forms with buttons in them, and style them to look however you like. I used to do this a lot when building pages using progressive enhancement; it’s less common now.

An “upvote” is probably an idempotent action. Calling an “upvote” endpoint moves an object into an “upvoted” state, regardless of what its previous state was. However, it’s probably semantically a PATCH, since it’s a partial update of a resource.

There is no real disconnect here.
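A minimal sketch of that difference (hypothetical endpoints): setting an absolute vote state is idempotent, while an increment-style command is not:

    import requests

    BASE = "https://api.example.com"  # hypothetical API

    # Idempotent: sets the vote state; replaying it changes nothing.
    requests.patch(f"{BASE}/posts/42/vote", json={"direction": "up"})

    # Not idempotent: each replay increments the count again.
    requests.post(f"{BASE}/posts/42/votes", json={"delta": 1})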


I see things like <a> GET links for destructive operations all the time - are they in violation of the HTTP spec? Do browsers speculatively load these resources and delete things without the user interacting?


It’s definitely a problem. You should never have any destructive or modifying actions of any sort done from a GET request. In addition to browser prefetching, users might share a link without the receiving user realizing they are taking an action as soon as they click the link, which can be a source of security issues.

That being said, I too have seen things like this on the Internet. I would assume Google has some sort of heuristic to determine whether or not they can safely prefetch a link, but who really knows? You should absolutely avoid making this mistake in your own code.


You can have buttons and forms with pure HTML. Sure, links are all GET requests, but you can specify the method of a form.


I agree REST APIs should be built this way; however, I disagree that any end user should ever have to think about APIs this way. I think a good SDK should hide most of this complexity away, and no real human should ever have to know the difference between a PUT and a POST. APIs should think more like people; people shouldn't have to think more like APIs.

EDIT: Since I didn't make my case well enough, here's an example. Let's say you want to charge a credit card. Are you doing a `POST /charge`? Or would it be better to say `card.charge(500)`? My point isn't that there shouldn't be a proper REST API below it, but rather that we should offer abstractions so normal semi-technical people can use APIs and don't have to think too much like a computer.


“SDK” is a much less useful hard boundary than “Web API” for where I stop providing service. Should I provide an SDK in every language a client might use? If they want to access my service in a language I don’t support, should I tell them to piss off? I could offer for them to just hit the API themselves, but then we’re back where we started.

On the other hand, if I offer a web service and someone asks “but what if I can’t make HTTP requests?” I’m _perfectly_ happy to tell them to piss off.


That’s what Evernote did: binary protocol and implementations in a bunch of languages. Their API was apparently easier to develop this way.


A human should know because it affects user agent behavior. Overuse of POST where it is not needed causes pointless friction with confirmation dialogs that could be avoided with PUT.


I feel like, if we're not writing data-interfacing applications with PUT and POST as specifically different actions in mind, we're doing something wrong.

People think in ambiguity; adding a new item or replacing an existing item, as in this example, shouldn't be something an application needs to guess at.


How could someone use an API correctly without understanding it?


The client SDK exposes just the actual API and not the web API which is typically much more complex.

I really don’t want to worry about setting http headers to make an add-if-not-exists call for example. The API in a client library with a method “TryAdd(...)” is understandable as an API, and the setting of headers and 4XX code you might receive on a duplicate is just returned as false.


People have been using banks for centuries.

POST /my-account/charge is not an abstraction, it's pretty much exactly what people do and have done.


When this happens, it’s usually meant as a RPC endpoint, yes. However, it can be an abstraction, especially if you call it “POST /my-account/charges”, in plural form: you’re sending a new object to the “charges” resource (this architectural technique is usually called reification)


What is the problem that such conventions solve? Why can't it just be JSON in a POST request, with a field specifying the operation?


You can always do your own custom thing, but you give up on leveraging anything that anyone has written and anyone trying to implement your thing has to figure out what’s going on from scratch instead of being able to understand it as a variation of something s/he has seen many times before.


that is true. people overcomplicate things. but i went the other way, because the number of articles about how people get rest wrong is staggering, so all my api calls are post requests no matter what. what matters is the rpc method being invoked, and i truly don't care if someone cannot use a plain web browser to perform requests. thats what documentation and an api client are for. i have spoken. this is the way.


I'm glad you haven't described your API as RESTful.

You're perfectly fine to design an API around RPC calls and POST. There are all of the issues around RPC that you will have to deal with (long running operations, versioning, serde and marshalling of arguments, argument and response typing and interface definitions, codegen of the IDL to product client and server stubs that will require modification and back/forward porting of implementation).

But an API definition like this one, covers a lot of that as part of the protocol, most of which is defined by HTTP anyway. You also get encryption, compression, caching, tooling "for free".


You can make anything idempotent if you add an "If-Match" header and return a 409/412 if the current resource version doesn't match. Not only does this prevent the same request from being applied multiple times, it prevents losing updates when multiple different requests are submitted at the same time. Google Cloud Storage has a good example of this: https://cloud.google.com/storage/docs/generations-preconditi...

The only exception is on resource creation (POST), since there's no existing version/generation number to match on. You can get around that by letting the client generate a random id and rejecting the request if the id already exists. I haven't decided if this is a good idea or not, since the client can potentially choose really weird vanity ids (especially if you're using uuids in hex or base64). But if you don't expose your internal ids in any UI, that seems fine too.
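A minimal sketch of that read-then-conditional-write flow (the resource URL is hypothetical):

    import requests

    url = "https://api.example.com/documents/42"  # hypothetical resource

    # Read the current version; the server labels it with an ETag.
    current = requests.get(url)
    etag = current.headers["ETag"]

    # Conditional update: applied only if the resource still matches
    # the ETag; otherwise the server answers 412 Precondition Failed.
    update = requests.put(url, json={"title": "v2"},
                          headers={"If-Match": etag})
    if update.status_code == 412:
        # Lost the race (or our earlier attempt already landed):
        # re-read the resource and decide whether to retry.
        pass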


I've designed quite a few REST APIs, but I've come to the conclusion that all those semantic/HATEOAS or other REST guidelines don't always apply or make sense, depending on the problem domain.

I worked in finance and designed a REST API, and besides the standard user/account objects, basically ALL the data and operations were neither idempotent, cacheable, nor durable, and often couldn't possibly be designed using HATEOAS et al.

Quotes, orders, offers and transactions carry lots of monetary amounts which are sent in the user's currency, which is auto-converted depending on the user requesting it (and currency conversion is part of the business). Most offers are only valid for a limited amount of time (seconds to minutes) because of changing market rates. There is also no "changing" of objects as in PATCH/DELETE; all you do is trigger operations via the API, and every action you ever did is kept in a log (regulatory-wise, but also to display for the end user).

There is some way to try to hammer this thing to fit with HATEOAS et al., and I put some effort into it, but I would have ended up splitting DTOs into idempotent/mutable and non-idempotent/mutable parts and spreading them across different services, bloating the DTOs themselves (i.e. including all available currencies in a quote/offer) and handling the validity/expiry of objects via HTTP caching (instead of in the DTOs). That would have ended up in a complex and hard-to-read API, with significantly worse performance (due to a lot of unneeded data & calculations) and some insane design decisions (like keeping expired offers/quotes around just so they are still available at their service URL with an id, even though the business requirements would never store expired offers).

Sometimes you just need to use your own head, accept that the problem domain might not be covered by other "guidelines", and come up with a sane design yourself.


REST shines where there is a big path-dependent interaction graph - i.e. feature switches, permissions, plans/packages and other stateful stuff.


At MS, in our team no manager gave the slightest damn about being RESTful or any proper consistent API design. Everything was constantly rushed and just 'tacked on', random API versions were created, contracts were broken, nothing was consistent. It was a mess. That said, I believe most of these guidelines are completely ignored in most teams.


Grammar nit: "That said," is usually followed by a contradictory statement, not a supporting statement. i.e. "That said, most folks outside of my team follow these guidelines".


As a fairly heavy consumer of the Azure DevOps REST APIs, this sounds pretty accurate.


There are many actually RESTful APIs out there. Some even have good documentation to accompany them. I've heard Stripe given as an example that's especially notable.

Managers will care if your APIs are a contract with your end customer (e.g., your product is providing APIs) and you're watching your NPS scores.


It’s almost as if there are different groups in Microsoft with different priorities. Why is it so hard to get 130k people in complete alignment across all areas? Seems like a simple problem.


> It’s almost as if there are different groups in Microsoft with different priorities. Why is it so hard to get 130k people in complete alignment across all areas? Seems like a simple problem.

I feel like this could have been said without the sarcasm and it would have been a great discussion starter. Instead a potentially valid point/retort will be obscured by the tone and discussion surrounding it.

For example:

> Unfortunately large organizations often struggle to get groups into alignment on issues like this, unless there's a strong mandate from the top down (e.g. Amazon).

No sarcasm or snark, same basic substance.


I am unconvinced that there is a meaningful conversation to have about this. Every time any topic comes up in the context of Microsoft, someone chimes in to state that it’s not their experience. Yeah, Microsoft is not a monolithic entity. It’s literally 130 thousand employees and tens of thousands of contractors. For literally any topic, there will be a subset of people who feel their immediate organization doesn’t match the direction the company intends.

Top down mandate cannot align 130k employees on every topic, even if that attempt were an appropriate thing for upper management to spend their time on.

There might be a more interesting conversation if context were given. If the comment about management not caring about RESTful APIs is coming from someone working as part of a team focused on public APIs, it’s a lot more interesting than someone working on desktop clients.

Edit: You know what, you’re right. I read the comment as a flippant “pft, that’s not my experience” and responded as such. I should have assumed better intent. Maybe the commenter was questioning why their experience doesn’t match up with this. Maybe they were asking why management isn’t pushing this effectively. I assumed a specific intent and should have considered different potential intentions.


And not correct either. As mentioned, if Satya said so it would happen within months.


There are a limited number of mandates that can come from the top. Upper management loses credibility when they issue endless mandates, even if all of the mandates are reasonable.

It’s also unrealistic that everyone underneath can focus on many mandates at once. Which leads to the loss of credibility when mandates start getting ignored out of necessity.


Good thing, then, that this isn’t an endless mandate but only a single idea with a short list of requirements.


“Endless mandates” meaning an endless number of mandates, not that individual mandates are endless.

At Microsoft the number of simple mandates like this one are numbered at least in the hundreds.


It could be done easily if deemed a priority. Defeatism is not a compelling argument.


Yes, for any specific mandate like this, it could easily be accomplished. The point is that in aggregate there are too many of these to accomplish. Competent management will choose the highest value things to focus on. This isn’t defeatism. This is realism. Focusing on everything is the same as focusing on nothing.

Realistically, mandating RESTful APIs at the exec level is unlikely to be a big win for Microsoft. The teams working on APIs at scale are largely already doing this (and you’ll notice multiple groups represented by the authors). The teams that aren’t doing this are largely not building APIs that benefit a great deal from RESTful APIs, because they’re building internal APIs or similar and RESTfulness would be nice to have but not particularly impactful.


The primary innovation in REST is HATEOAS (which isn't mentioned in the document at all). JSON isn't a hypertext; it just isn't a good format for REST-ful services.

http://intercoolerjs.org/2016/01/18/rescuing-rest.html

http://intercoolerjs.org/2016/05/08/hatoeas-is-for-humans.ht...

Doesn't anyone notice this? I feel like I'm taking crazy pills.

https://www.youtube.com/watch?v=HOK6mE7sdvs


The trouble with HATEOAS is that it requires extra work on the client. The client is supposed to start at the root of the API, request URLs, and cache them. Getting the data one wants requires multiple HTTP calls, many of which are supposed to be cached.

I tried implementing this and found it toilsome. It was far easier to use versioned URLs that followed a documented pattern.

When I checked about three years ago there wasn’t much in the open source community that I could build atop for clients. I also didn’t want to maintain an SDK client in addition to the API itself.
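For illustration, a minimal sketch of the navigation style HATEOAS expects, assuming a hypothetical HAL-style API (the _links layout and URLs are made up):

    import requests

    BASE = "https://api.example.com"  # hypothetical API root

    # Start at the API root and follow named links instead of
    # constructing URLs from a documented pattern.
    root = requests.get(BASE).json()
    orders_url = root["_links"]["orders"]["href"]

    # Each response carries the links for the next step; in principle
    # these can be cached, and the server is free to move the URLs.
    orders = requests.get(orders_url).json()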


"Toilsome" is probably the most kindly yet still accurate thing one could say about exposing HATEOAS to the real world, at this particular point in time.


Every 1.0 web app developer in the world implemented HATEOAS to a great extent without even thinking about it.

It's only toilsome when you try to shoe-horn it into a traditional data API, rather than accept it as a unique descriptive aspect of the early web architecture.


It's all a category error. HATEOAS was descriptive, he was describing the early web architecture, and it was implemented without thinking about it in web 1.0 apps.

It only became problematic when we tried to shoe-horn it into traditional data APIs.


So that's what HATEOAS is. I had been wondering why the Github APIs had these absurdly large responses with a shit-ton of urls.

> JSON isn't a hypertext, it just isn't a good format for REST-ful services.

The point of REST was to take ideas from the browser and apply them to other services.

But HATEOAS is proving that REST only really makes sense if you've got a browser on the other end.

> Doesn't anyone notice this? I feel like I'm taking crazy pills.

The first comment I saw on this article was, _every single person writing a REST api should have to memorize this table_.

If you were trying to sell people on a new library and said, "everyone here has to memorize this table," you'd get laughed at as a crazy person.


No, HATEOAS is not for humans but was imagined for API consumers. All those links that you mentioned in the GitHub API responses are named, and you as a client can use them, while the actual URL can change. Also, you as an API consumer can know what actions are available.

That being said, I have never seen development of an API client driven mainly by HATEOAS.


> All those links that you mentioned in the GitHub API responses are named, and you as a client can use them, while the actual URL can change.

I think that's a good idea, just like REST and HATEOAS are both ideas trying to solve problems.

I'm just skeptical that it'd work in practice. It seems vanishingly unlikely that consumers will use it correctly enough that changing the URLs in that mechanism wouldn't result in making the breakage even more mysterious.

When I say "it only makes sense if you've got a browser on the other end," the issue is you need a client approaching the complexity of a browser to do all the redirection and such that makes this level of flexibility for the API implementer work.

I believe something like that is possible, but this isn't the way.


Imagine an API returning HTML, too. I've built such APIs (not public). I consider it best practice. The HTML representation can be utilized as the frontend. At the least, it's the interactive documentation for the API. In fact, when I built such APIs, I usually started with the HTML representation.


But isn’t the link almost always just the current link + the link name?

Apart from being able to tell the client what functions they are allowed to use, and a very superficial form of documentation, I don’t really get the advantage of it.


Every time I see a discussion about REST guidelines, I get a little bit more happy about having switched to gRPC.


Man, I wanted to love gRPC. The disconnect between Protobufs and languages just felt too great though. You could do some weird things that just made every language feel, to some degree, non-idiomatic.

I switched to Twirp at one point to retain the simplicity of RPC + Protobuf, but avoid some of the complexity we didn't need via gRPC... but even that suffered, of course, from the Protobuf problem.

Finally I'm back to plain HTTP and JSON. We don't worry too much about REST fundamentals, and honestly we're more like an ad-hoc (JSON) RPC over HTTP, but it's simple.

The only problem is documentation. The one thing that I found perfect with Protobuf. Seems really hard to have everything here.


How would you articulate "the protobuf problem" to a gRPC novice like me?

Also, re http/rest docs -- check out my open source project -- it's sort of like Git but for REST APIs https://github.com/opticdev/optic


I go over it in a bit more detail here: https://news.ycombinator.com/item?id=21621592

But in short, Protobuf is inherently a language of its own. Like JSON or etc. But it's feature rich enough that it can cause a fair number of incompatibilities between a language's preferred style or usage of features.

Where the incompatibility shows up depends on the language. I found it to be very different between Rust and Go, for example.


> You could do some weird things that just made every language feel, to some degree, non-idiomatic.

Could you elaborate on this?


PREFIX: It's been ~1.5 years since I've used Protobuf and Go together, so forgive my memory.

Sure, take Go for example. Protobuf to Go works well, but there are some features of the Protobuf language that just don't exist in Go. Enums, for example. While Go does have constructs that are similar to Enums, they're just different enough to make it a bit weird.

This basic problem gets worse when you try to use it though. The `One of` construct, for example, is sort of impossible in Go. Iirc the Go implementation had to use runtime type checking to give some resemblance to the Protobuf spec.

Rust (which I focus on now) was far better with Protobufs. As far as I remember, there wasn't much of Protobufs that broke idiomatic Rust. However, there was plenty of idiomatic Rust that broke Protobufs iirc. Things like complex data structures behind enum variants. Enum variants as structs, tuple values, etc. iirc it was bad enough that I usually used an abstraction library to write my idiomatic data structures and convert to/from the Protobuf structs.

Which is how I left it, altogether. I realized I had a ton of glue code trying to make up for the incompatibilities in Protobufs + (Go|Rust), such that it would just be easier to drop it - at least as far as my code is concerned.

We now struggle with documentation, something Protobuf did excellently, but at least the code smell is gone.


Thanks, that's extremely helpful as I'm working on a language that needs to compile into usable objects. It does have a union type because they wind up being the least common denominator between enums, inheritance, switches, etc. I figured that was going to be tricky to translate into languages without explicit sum types, so I'll definitely take a look at protobuf and Go and see if there's a way to make it fit better.

> The `One of` construct, for example, is sort of impossible in Go. Iirc the Go implementation had to use runtime type checking to give some resemblance to the Protobuf spec.

It looks like they took a reasonable approach[1], and maybe the deeper issue is that protobuf assumes you want direct access to those structures. That's not an unreasonable assumption given its domain, but I can see a red flag: the code generator is solving the problem by generating a forest of types, but that's also something a developer would never do.

Another approach would be to make the oneof implementation more opaque and let you access things via methods. While you'd always want to allow the consumer to ask "which kind of avatar" is this, you could also let the consumer query "get me the avatar image url" and that could return either success or an error.

> I realized I had a ton of glue code trying to make up for the incompatibilities in Protobufs + (Go|Rust), such that it would just be easier to drop it

That's the acid test for whether it works. And it means you can figure out if your language is good by porting a non-trivial codebase using an existing API and see how much glue is required.

[1]: https://developers.google.com/protocol-buffers/docs/referenc...


I've seen that too, but it's really only a big deal if you're doing something else smelly: Carrying the codegen'd proto objects throughout your application. Better to handle proto messages the same way you should be handling any interface with the outside world: Translate it into your code's own domain model, which can (and should) follow any idiom you like ASAP, and get on with your day.


I think I agree with you, BUT I will say that it's a problem I just don't have with JSON.

I think it's because JSON is inherently smaller, and the spec primarily focuses on basic types that handle data.

With JSON I can write idiomatic code, in any language, and the translation to and from my code is correct. I don't need to abstract away my JSON code for arbitrary reasons, it works.

I'm not sure why that is TBH, I just know that it's a restriction of Protobuf I don't find myself running into with JSON.


The load balancing still seems like a headache for gRPC. What do you use?


Envoy proxy seems to solve it reasonably.


Why isn't this called "HTTP API Guidelines"? It doesn't seem to have much to do with REST at all. For example, it says that people should be able to construct URLs, whereas the REST style uses the URLs found in resources.

To be clear, I'm not saying that there is anything wrong with the practices they propose here, just that they're not what they're claiming they are.



Ideally, they wouldn't reinvent the wheel again and would just stick with the OData protocol. They already have the platform to do so - https://docs.microsoft.com/en-us/odata/resources/roadmap


What do other folks here think of how MS handles query params, specifically stuff like filtering?

https://github.com/Microsoft/api-guidelines/blob/master/Guid...

When working with the Graph API for 365 I thought it was really weird how you had to pass some params

   GET https://api.contoso.com/v1.0/products?$filter=name eq 'Milk'


Aren't things like $filter, $orderBy etc. straight out of the OData standard [1]?

[1] https://www.odata.org/


It looks weird when you write it that way, but the ' ' character gets translated into '+' when you encode the URI before making a request. If it were me, I'd prefer to see the '+' encoding rather than ' ', because it looks strange when it's not a valid URI.
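You can see the translation with Python's standard library: urlencode percent-encodes the '$' and the quotes, and turns the spaces into '+':

    from urllib.parse import urlencode

    # Encoding the OData filter from the example above.
    query = urlencode({"$filter": "name eq 'Milk'"})
    print(query)  # %24filter=name+eq+%27Milk%27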


At first look I think using spaces is a bad idea but maybe it’s ok.


yeah seeing spaces is kinda weird IMO


I find that MS are often quite good at developing good APIs and documenting them.

In this new doc I particularly like the Delta Queries section [0]. It's something that's difficult to get right but with this you can pretty much copy and paste their guidelines for your project.

0: https://github.com/Microsoft/api-guidelines/blob/master/Guid...


> An example URL that is not friendly is:

> https://api.contoso.com/EWS/OData/Users('jdoe@microsoft.com'...

A well-deserved dig at the sharepoint API.


This keeps bothering me:

>7.1 URL structure

>Humans SHOULD be able to easily read and construct URLs.

>This facilitates discovery and eases adoption on platforms >without a well-supported client library.

The URL is ephemeral in REST. That is because you create the documents on the fly. They can be linked or not to things that you store in the datastore. This allows you to easily change things around as needed, because the URL is not the API. The hyperlinks are the API. The URL is like a memory pointer. You shouldn't care about it.


Too much emphasis on interfaces, rather than on programmability. I don't even know if they follow their own rules themselves.

Also, they define DELETE as idempotent, which is a little different from how some of us write APIs.


Note that idempotent in this context means “2 calls will end up with the same server state as one call”.

It does not mean for example that the second call can’t return a different response than the first.


The reason people like idempotency is because it is great for "at least once" systems (as opposed to "exactly once", which is hard to guarantee).

But REST's version of idempotency isn't good for this. If you retry your request multiple times (due to a flaky connection or whatever), it only guarantees the same server state if your duplicate requests are bunched up.

For example, if you do a DELETE and then create the resource again with a POST, a duplicate straggler DELETE floating around will end up re-deleting your new creation.


You can make it idempotent again by requiring API users to use an If-Match header with the ETag of the state they're currently expecting.

Also allows you to implement optimistic concurrency. See: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If...


I said this in another comment and got downvoted too. Can the downvoters explain what they are downvoting for? What's wrong with if-match?


That's why you should only allow DELETE, PATCH and PUT with a UUID or a similar ID. Exactly the same could be said for a DB query not running in a transaction.


That's because DELETE is only idempotent, not commutative. Commutativity is solved by conflict-free replicated data types, but they can be extremely hard to implement without redefining correctness.


If "idempotent" means "do the same thing", one would expect the same response on a subsequent call, no?

"Do the same thing" isn't specific enough. Does it mean the state of the system? Does it mean the response code? Does it mean the response body? Does it mean all of these things? All seem like reasonable interpretations.


In the context of REST it means the same server state (a database table state, a mailbox state, file system state etc)

It doesn’t mean “do the same thing” or “get the same response”, only “end up in the same place”.

You can do a delete x, get an OK response, and the important state is that x is now deleted on the server. Then the next call to delete x does nothing (which is different from the first call, which deleted x!). So to be idempotent it has to do something different. The response in the second case can be both “ok” (because the thing is deleted) but also e.g. “x doesn’t exist”.
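In sketch form (hypothetical resource): the two responses differ, but the server ends up in the same state either way:

    import requests

    url = "https://api.example.com/items/42"  # hypothetical resource

    first = requests.delete(url)   # e.g. 204 No Content: item is gone
    second = requests.delete(url)  # e.g. 404 Not Found: item is still gone

    # Idempotent: one call or two leave the server in the same state,
    # even though the status codes differ.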


How are you thinking of doing it differently? DELETE is typically considered idempotent.


They don't define DELETE as idempotent, the HTTP standard does.

It also feels very natural for DELETE to be idempotent IMHO, so I would be curious what strange thing you do.


Also, it seems they never heard about versioning via content negotiation :)


Versioning through HTTP headers was the spark for why I was looking around for guidelines. I think it is the better way to go, compared to URL versioning.


I think so too, but even in case one disagrees, they should have mentioned it. Otherwise you’d think they’d never heard of it :)


Ok, this may be a dumb question but is there any risk to using/enforcing PATCH in your APIs in this day and age? I still avoid it but that's probably just cargo culting now, right?


Where's the section saying "so get rid of your SOAP WS already OK?" That would be lovely!


What's wrong with SOAP? SOAP is a lot more than REST, so you'd need to explain an additional technology beyond REST in order to recommend moving away from SOAP.


Cool guidelines!


„Some services MAY add fields to responses without changing versions numbers. Services that do so MUST make this clear in their documentation“

WTF


Remember doing an integration with a finance company's API. I read their documentation, write the code, all works fine, we are moving towards the go-live date. A week later, my integration fails. I try hell knows how many things, eventually contact their lead dev asking what am I doing wrong... The answer was "Oh, emm, yes, we've kind of changed the response format slightly, it's not in the documentation"... I got that fixed... It's working again. Then, we're supposed to go live next week. 2 days before the go-live date I get told we are cancelling all business with the company... All the integration code goes into the bin.


Assuming you’re working with JSON, it’s very easy to ignore extra fields, and there's no reason to deploy any fixes for that :)


Where did it say it was an extra field? It says format change, which could make for problems for sure!

edit: ah you must be referring to the GP. But the anecdote just says format change.


Yes, I think BstBln was using a context aware parser to formulate his reply.


Indeed :)


I'm not sure if your scenario is the same? An addition of a new field in a JSON response generally shouldn't cause breakage.

Though doing it without bumping a version should be rare, especially without a big documentation notice.


"Services MUST increment their version number in response to any breaking API change. See the following section for a detailed discussion of what constitutes a breaking change. Services MAY increment their version number for nonbreaking changes as well, if desired.”

Of course you want to mention that a new field was added and what it is for, but why do you need a big notice? Your clients should just continue to work.


Especially since they are suggesting that, if not mentioned in the docs, a non-breaking addition of a field should result in a new API version...


I'm guessing this is the same as "adding another column to a db schema should not break your app". So basically don't iterate over objects (or iterate and only use keys you recognize) and you are fine.

That does not seem at all weird to me. The client side of it is the same as "don't SELECT *".
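A minimal sketch of that tolerant-reader style in Python -- read only the keys you recognize:

    import json

    raw = '{"id": 7, "name": "Milk", "newField": "added later"}'
    payload = json.loads(raw)

    # Only touch known keys; a provider adding "newField" without a
    # version bump cannot break this client.
    item = {"id": payload["id"], "name": payload["name"]}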


You should be validating inputs to your application anyway.


This is more an opinionated design document for HTTP than it is for REST. There is not a single word about semantic formats and HATEOAS. I really expected more from Microsoft in this area.

Had I the time to write such a design document, I would start with resources, versioning, URIs and semantic documents. I would write about entity models, linking (links, link templates) and actions. I would write about representations and about how representations can support optimizations, embedding of resources and entity expansion, which would otherwise be addressed by inventions like GraphQL. And only afterwards, I would write about HTTP as a transfer protocol. But that part can be brief, because there is already the HTTP specification out there.



