Using code formatting for quoted text is not civil disobedience, it is an innocent mistake that should be gently pointed out and corrected.
Continuing to use code formatting after being made aware of the problem wouldn't be civil disobedience either. It would merely inconvenience the large number of people who read HN on mobile devices, for no purpose.
(Edited to remove a somewhat unrelated philosophical note.)
> Continuing to use code formatting after being made aware of the problem wouldn't be civil disobedience either. It would merely inconvenience the large number of people who read HN on mobile devices, for no purpose.
We all do it from time to time; even you did it a few days ago [1]. Monospace has its purpose; maybe HN should simply fix it.
Oh come now. I used code formatting quite deliberately in that comment. I wanted to preserve the centering and single-spacing of the text on the back of the radio as best I could. The lines are short and render fine on a mobile device.
So yes, given HN's meager formatting options, monospace is good for that! :-)
You may note that later in that same comment there is a longer quote that I formatted in non-code italics.
But hey, if I encouraged you to create an HN account, that's great! Speaking as someone who writes code in a proportional font, we may have some interesting discussions ahead of us. ;-)
I propose using four underscores as a delimiter of a quote block, like asciidoctor, in addition to italics. I think it looks better than the markdown way of using ">" in front of a quote as it can get quite cumbersome and ugly when you're quoting multiple lines of something. However, I generally enjoy using markdown more if it works on a website. It just looks ugly IMO if it doesn't.
I’m rather young (on the border of Gen Y and Gen Z) so I guess I’m not old enough to have used either old Unix email or Usenet, so thank you for that tidbit.
Another rule to memorize: if one of the reasons for using code formatting is to preserve line breaks, simply add an extra blank line between paragraphs. Think of each line as its own paragraph. Then the text will be readable on any device.
As someone who mostly uses mobile, I agree with you, but in this case I couldn't figure out a way to make it readable without code formatting and without retyping it.
Thank you for the effort. I've updated my post with your text.
That's right, that's the same issue. But in both cases, you can just send your list of items to fetch (respectively delete) in the body of the GET or DELETE request. (Because AFAIK, it's HTTP compliant to put a body in a GET request, even if it's rarely used).
Edit: I was wrong. It's allowed to put a body in a GET request, but according to the HTTP spec it isn't OK to actually use it (which is, admittedly, kind of weird). Source: https://stackoverflow.com/a/983458
Edit2: according to an edit in the SO answer, it looks like I wasn't actually wrong after all, or at least not since 2014, when these RFCs [1][2] became the standard for HTTP.
A body is and has always been syntactically permitted on any HTTP request.
However, it is semantic nonsense to put it on a DELETE or GET.
DELETE is defined to remove the resource identified by the URL; GET is defined to fetch the resource identified by the URL. There is no purpose for an entity.
This is the point made in the SO answer I quoted, and it was the official position of RFC 2616 [1], but as I said in my second edit, this rule was relaxed in the new version of the specification, RFC 7231, published in 2014. The wording changed from:
> The GET method means retrieve whatever information (in the form of an entity) is identified by the Request-URI.
to:
> A payload within a GET request message has no defined semantics
Which, to me, sounds like: “it was forbidden to put info in the body, and now it's just not especially encouraged”. But maybe I'm overthinking the whole thing.
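To make it concrete: constructing a GET with a body is trivial in most HTTP clients; whether anything between you and the origin honors it is the real gamble. A minimal Go sketch (hypothetical endpoint):

    package main

    import (
        "bytes"
        "fmt"
        "net/http"
    )

    func main() {
        // A GET request with a JSON body: syntactically legal,
        // but RFC 7231 assigns it no defined semantics, so many
        // servers and intermediaries will ignore or reject it.
        body := bytes.NewBufferString(`{"ids": ["a1", "b2", "c3"]}`)
        req, err := http.NewRequest(http.MethodGet,
            "https://api.example.com/items", body)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Content-Type", "application/json")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        fmt.Println(resp.Status)
    }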
It doesn't really scale. There's a de facto limit of 2,000 characters [0] in a URL. HTTP/1.1 is not opinionated about URL length, but practical implementations are.
If your ids are UUIDs, each is 36 characters long. If your URL consisted of nothing but UUIDs, you could safely fit about 55 of them (2,000 / 36 ≈ 55.5).
We don't need a profusion of operations in the core protocol. Do you need idempotence or not? Pick from one of these two things and elaborate on top of that.
Having "create many", "update many", and "delete many" is three additional operations. Profusion may not be the word I'd use. Also, I will say they are very commonly requested operations, and they've lead to POST becoming overloaded in meaning.
Except now your server has to handle a lot more connections.
Was recently dealing with a list of 100+ simple text fields each with unique identifying meta info.
Most usage of the form would only involve updating a few at a time, but on initial entry or a large update all of them could be updated or deleted.
A RESTful approach would have me submit 100+ individual PUT requests. However, a smaller instance of our server would be noticeably affected by hundreds of requests arriving at once, and it would significantly complicate the client code to issue all of those requests and handle the errors for each.
It’s actually a small payload, even with 100 items, so it’s ridiculous to send them individually in order to be “correct” if it
- leads to a worse user experience.
- increases load on the server.
- complicates the codebase.
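A sketch of the single bulk request instead (endpoint and field names are hypothetical):

    package client

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // Field is one of the 100+ simple text fields.
    type Field struct {
        ID    string `json:"id"`
        Value string `json:"value"`
    }

    // bulkUpdate sends every changed field in one request, so the
    // server can authenticate once and apply them in one transaction.
    func bulkUpdate(fields []Field) error {
        payload, err := json.Marshal(map[string][]Field{"fields": fields})
        if err != nil {
            return err
        }
        resp, err := http.Post("https://api.example.com/fields/bulk",
            "application/json", bytes.NewReader(payload))
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        if resp.StatusCode != http.StatusOK {
            return fmt.Errorf("bulk update failed: %s", resp.Status)
        }
        return nil
    }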
HTTP/2 has efficiency gains at the transport layer but not typically at the application tier. Having a single HTTP request lets you do things like handle authentication, fetch common state (the user object), etc. a single time for a batch request. HTTP/2 doesn’t help you here.
A lot of sites might have an HTTP/2 upstream server, but everything downstream is still communicating via HTTP/1.1. I would imagine you only get the benefits of HTTP/2 multiplexing if you fully utilize HTTP/2 end-to-end.
That means your CDN, your load balancers, your actual applications, and any other intermediaries, all communicating via HTTP/2.
Not to say that HTTP/2 isn't the solution, but verifying that is slower than looking up "number of sites using HTTP/2", since you can't easily inspect the behavior of the intermediary servers.
What I mean is, typically, there are many layers between an end user's network request and the server where it will be fulfilled. For example, you might have your site fronted by Cloudflare. They provide DDoS protection, etc. That Cloudflare server might accept HTTP/2 requests and then open a new request to your load balancer. Then, your load balancer opens another network request to the server fulfilling the request. The connections between these servers may only support HTTP/1.1, so you lose the benefits of HTTP/2, since you can't multiplex the network request end-to-end.
In this particular example (S3), I don't think this is really a concern.
Anyway, I'm curious... what components do you run into that don't support this yet? At least in your examples I can't think of any major players that don't do HTTP/2 out of the box, except, well, old versions.
Is this more of a hypothetical? In my experience HTTP/2 is pretty much ubiquitous for anything current.
The load balancer could append a header though with a session ID that is the same for the same HTTP/2 session, and then the server can batch based on the session ID and timeframe.
Have you heard of the common antipattern of n+1 queries that back-end developers try to avoid? What you are suggesting would create that pattern on the front-end too.
Yes, I understand the concept. I totally get that certain situations call for batching operations, including things that need to work transactional.
I think though that most operations don't need this. It's not an optimization I would blindly add to any operation, unless there's a specific case that warrants it.
I don't think that this is an anti-pattern. Most APIs do '1 thing at a time'.
The backend can batch deletes made in the same timeframe in the same HTTP/2 session, like how you can batch queries with Facebook Dataloader in GraphQL APIs to avoid the n+1 query problem.
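A minimal sketch of that time-window batching (names hypothetical; you'd keep one of these per session ID, and a real Dataloader also fans results back out to callers):

    package batch

    import (
        "sync"
        "time"
    )

    // Coalescer collects ids arriving within `window`
    // and flushes them as one batch, Dataloader-style.
    type Coalescer struct {
        mu      sync.Mutex
        pending []string
        flush   func(ids []string)
        window  time.Duration
    }

    func NewCoalescer(window time.Duration,
        flush func(ids []string)) *Coalescer {
        return &Coalescer{flush: flush, window: window}
    }

    func (c *Coalescer) Delete(id string) {
        c.mu.Lock()
        c.pending = append(c.pending, id)
        first := len(c.pending) == 1
        c.mu.Unlock()
        if first {
            // The first id in a window starts the timer.
            time.AfterFunc(c.window, func() {
                c.mu.Lock()
                ids := c.pending
                c.pending = nil
                c.mu.Unlock()
                c.flush(ids) // one DB call for the whole batch
            })
        }
    }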
I haven't yet encountered the horrible, deplorable organization that both MITMs its SSL traffic and filters on HTTP method, but I know it's out there, just waiting to ruin my day and diminish my faith in humanity.
If you want to be purely restful, you'd POST your request with all the parameters, which would create a new query resource, and then return the query resource ID. You would then make a second GET request with just the query ID.
The GET would be cacheable because the query would be repeatable and idempotent. In theory you'd never have to make the POST more than once unless you were creating a new query, which would get a new ID.
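A client-side sketch of that two-step flow, with hypothetical /queries endpoints:

    package queries

    import (
        "bytes"
        "encoding/json"
        "fmt"
        "net/http"
    )

    // runQuery first creates a reusable query resource via POST,
    // then fetches its results with a cacheable, idempotent GET.
    func runQuery(params map[string]any) (*http.Response, error) {
        body, err := json.Marshal(params)
        if err != nil {
            return nil, err
        }
        resp, err := http.Post("https://api.example.com/queries",
            "application/json", bytes.NewReader(body))
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        var created struct {
            ID string `json:"id"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&created); err != nil {
            return nil, err
        }
        // Repeating this GET never changes server state.
        return http.Get(fmt.Sprintf(
            "https://api.example.com/queries/%s/results", created.ID))
    }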
I use POST for complex queries for this reason. Some things aren't easily URI encoded. Easier to just post JSON. Not pure REST, but I'm not really seeking purity...
It's not an oversight in REST, it's one of the ways of trying to faithfully implement REST over HTTP, which has a gaping hole in the standard set of methods that messes with that use case. While HTTP (specifically 1.1) is an implementation of REST principles, that use case wasn't front-and-center at the time (PATCH is also a recent fix to another hole.)
There have been proposed standardized fixes such as generalizing the HTTP SEARCH method that first appeared in WebDAV[0], but none have yet gone far.
Or complex nested queries, such as those frequently created in a CRM specifying many key value pairs with different comparators: =, >, <, !=, LIKE. Add complex sorting instructions, pagination instructions, etc.
How do any of these HTTP verbs interact with HTML? When you have a pure HTML site (no JavaScript) how do you use these HTTP verbs when all you have is <a> links?
For example, how do you have an up-vote link? What verb is that supposed to be? And links are all GET aren't they? Up-vote isn't idempotent. How do you make a correct up-vote link?
How do we have this disconnect between HTTP verbs and HTML? How did it work before AJAX?
POST is effectively the catch-all verb. There's nothing that violates the HTTP spec in having a site with only one endpoint, `POST /`, that specifies the actual job to be done somewhere in the request body. It's simply nasty and unidiomatic.
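For illustration, the degenerate but spec-legal version might look like this hypothetical Go handler, with the real verb tunneled through the body where no cache, proxy, or log can see it:

    package rpcish

    import (
        "encoding/json"
        "net/http"
    )

    // Everything arrives as POST /; the "verb" lives in the body.
    type Envelope struct {
        Method string          `json:"method"`
        Params json.RawMessage `json:"params"`
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        var env Envelope
        if err := json.NewDecoder(r.Body).Decode(&env); err != nil {
            http.Error(w, "bad request", http.StatusBadRequest)
            return
        }
        switch env.Method { // invisible to caches, proxies, and logs
        case "getItem":
            // ... look something up ...
        case "deleteItem":
            // ... destroy something, via a "safe" POST ...
        default:
            http.Error(w, "unknown method", http.StatusNotFound)
        }
    }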
This was before I was old enough to follow technology, so below is a good portion of conjecture and hearsay (I'd love to be corrected by someone who actually remembers it):
Tim Berners-Lee's initial web browser was intended as an editing tool as well as a viewer, and these ideas came back when they went to write up HTTP as a "proper" IETF standard. It was not uncommon (a bit less so today, but back then especially) to design specs around what they thought the protocol should do, not what they'd tried or done in practice. Making an official spec is the chance to get this stuff in.
At some point, a bunch of the file-specific stuff was split off to WebDAV as an extra thing (work on that started in 1996; HTTP/1.1 was finished in 1997) - or were those ideas rejected, with WebDAV created as a new home for them? But PUT, among some others, survived.
You’d use forms with buttons in them, and style them to look however you like. I used to do this a lot when building pages using progressive enhancement; it’s less common now.
An “upvote” is probably an idempotent action. Calling an “upvote” endpoint moves an object into an “upvoted” state, regardless of what its previous state was. However, it’s probably semantically a PATCH, since it’s a partial update of a resource.
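A toy Go handler making that concrete (in-memory state, not concurrency-safe; the route shape is hypothetical):

    package votes

    import "net/http"

    var upvoted = map[string]bool{} // toy in-memory state

    // PATCH /vote?post={id}: sets the state to "upvoted"
    // regardless of the previous state, so repeating the
    // request leaves the server in the same place.
    func upvote(w http.ResponseWriter, r *http.Request) {
        if r.Method != http.MethodPatch {
            w.Header().Set("Allow", "PATCH")
            http.Error(w, "method not allowed",
                http.StatusMethodNotAllowed)
            return
        }
        id := r.URL.Query().Get("post")
        upvoted[id] = true // same end state on every call
        w.WriteHeader(http.StatusNoContent)
    }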
I see things like <a> GET links for destructive operations all the time - are they in violation of the HTTP spec? Do browsers speculatively load these resources and delete things without the user interacting?
It’s definitely a problem. You should never have any destructive or modifying actions of any sort done from a GET request. In addition to browser prefetching, users might share a link without the receiving user realizing they are taking an action as soon as they click the link, which can be a source of security issues.
That being said, I too have seen things like this on the Internet. I would assume Google has some sort of heuristic to determine whether or not they can safely prefetch a link, but who really knows? You should absolutely avoid making this mistake in your own code.
I agree REST APIs should be built this way, however I disagree any end user should ever have to think about APIs this way. I think a good SDK should hide most of this complexity away, and no real human should ever have to know the difference between a PUT or a POST. APIs should think more like people; people shouldn't have to think more like APIs.
EDIT: Since I didn't make my case well enough, here's an example. Let's say you want to charge a credit card. Are you doing a `POST /charge`? Or would it be better to say `card.charge(500)`? My point isn't that there shouldn't be a proper REST API below it, but rather that we should offer abstractions so normal semi-technical people can use APIs and don't have to think too much like a computer.
“SDK” is a much less useful hard boundary than “Web API” for where I stop providing service. Should I provide an SDK in every language a client might use? If they want to access my service in a language I don’t support, should I tell them to piss off? I could offer for them to just hit the API themselves, but then we’re back where we started.
On the other hand, if I offer a web service and someone asks “but what if I can’t make HTTP requests?” I’m _perfectly_ happy to tell them to piss off.
A human should know, because it affects user-agent behavior. Overuse of POST where it is not needed causes pointless friction with resubmission confirmation dialogs that could be avoided with PUT.
I feel like, if we're not writing data-interfacing applications with PUT and POST as specifically different actions in mind, we're doing something wrong.
People think in ambiguity; adding a new item or replacing an existing item, as in this example, shouldn't be something an application needs to guess at.
The client SDK exposes just the actual API and not the web API which is typically much more complex.
I really don’t want to worry about setting HTTP headers to make an add-if-not-exists call, for example. The API in a client library with a method “TryAdd(...)” is understandable as an API, and the setting of headers and the 4XX code you might receive on a duplicate is just returned as false.
When this happens, it’s usually meant as an RPC endpoint, yes. However, it can be an abstraction, especially if you call it “POST /my-account/charges”, in plural form: you’re sending a new object to the “charges” resource (this architectural technique is usually called reification).
You can always do your own custom thing, but you give up on leveraging anything that anyone has written and anyone trying to implement your thing has to figure out what’s going on from scratch instead of being able to understand it as a variation of something s/he has seen many times before.
that is true. people overcomplicate things. but i went the other way, because the number of articles about how people get rest wrong is staggering, so all my api calls are post requests no matter what. what matters is the rpc method being invoked, and i truly don't care if someone cannot use a plain web browser to perform requests. thats what documentation and an api client are for. i have spoken. this is the way.
I'm glad you haven't described your API as RESTful.
You're perfectly fine to design an API around RPC calls and POST. There are all of the issues around RPC that you will have to deal with (long-running operations, versioning, serde and marshalling of arguments, argument and response typing and interface definitions, codegen of the IDL to produce client and server stubs that will require modification and back/forward porting of implementation).
But an API definition like this one covers a lot of that as part of the protocol, most of which is defined by HTTP anyway. You also get encryption, compression, caching, and tooling "for free".
You can make anything idempotent if you require an "If-Match" header and return a 409/412 when the current resource version doesn't match. Not only does this prevent the same request from being applied multiple times, it prevents losing updates when multiple different requests are submitted at the same time. Google Cloud Storage has a good example of this: https://cloud.google.com/storage/docs/generations-preconditi...
The only exception is resource creation (POST), since there's no existing version/generation number to match on. You can get around that by letting the client generate a random id and rejecting the request if the id already exists. I haven't decided if this is a good idea or not, since the client can potentially choose really weird vanity ids (especially if you're using UUIDs in hex or base64). But if you don't expose your internal ids in any UI, that seems fine too.
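A minimal server-side sketch of that precondition check (the lookup and route are hypothetical):

    package precond

    import "net/http"

    // current returns the stored ETag for a resource
    // (hypothetical lookup; a real one hits storage).
    var current = func(id string) string { return `"v42"` }

    func update(w http.ResponseWriter, r *http.Request) {
        id := r.URL.Query().Get("id")
        // Reject the write unless the client proves it has seen
        // the latest version; retries of an already-applied request
        // then fail with 412 instead of clobbering newer state.
        if r.Header.Get("If-Match") != current(id) {
            w.WriteHeader(http.StatusPreconditionFailed) // 412
            return
        }
        // ... apply the update, bump the version ...
        w.WriteHeader(http.StatusNoContent)
    }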
I've designed quite a few REST APIs, but I've come to the conclusion that all those semantic/HATEOAS and other REST guidelines don't always apply or make sense, depending on the problem domain.
I worked in finance and designed a REST API, and besides the standard user/account objects, basically ALL the data and operations were neither idempotent, cacheable, nor durable, and often couldn't possibly be designed using HATEOAS et al.
Quotes, orders, offers, and transactions carry lots of monetary amounts which are sent in the user's currency, auto-converted depending on the requesting user (and currency conversion rates are part of the business). Most offers are only valid for a limited amount of time (seconds to minutes) because of changing market rates. There is also no "changing" of objects as in PATCH/DELETE; all you do is trigger operations via the API, and every action you ever did is kept in a log (for regulatory reasons, but also to display to the end user).
There are ways to try to hammer this thing into fitting HATEOAS et al., and I put some effort into it, but I would have ended up splitting DTOs into idempotent/mutable and non-idempotent/mutable parts and spreading them across different services, bloating the DTOs themselves (i.e. including all available currencies in a quote/offer), and expressing the validity/expiry of objects via HTTP caching (instead of in the DTOs). That would have resulted in a complex and hard-to-read API with significantly worse performance (due to lots of unneeded data & calculations) and some insane design decisions (like keeping expired offers/quotes around just so they are still available at their service URL with an id, even though the business requirements would never store expired offers).
Sometimes you just need to use your own head, accept that the problem domain might not be covered by other "guidelines", and come up with a sane design yourself.
At MS, in our team no manager gave the slightest damn about being restful or any proper consistent API design.
Everything was constantly rushed and just 'tacked on', random API versions were created, contracts were broken, nothing was consistent.
It was a mess.
That said, I believe most of these guidelines are completely ignored in most teams.
Grammar nit: "That said," is usually followed by a contradictory statement, not a supporting statement. i.e. "That said, most folks outside of my team follow these guidelines".
There are many actually RESTful APIs out there. Some even have good documentation to accompany them. I've heard Stripe given as an especially notable example.
Managers will care if your APIs are a contract with your end customer (e.g., your product is providing APIs) and you're watching your NPS scores.
It’s almost as if there are different groups in Microsoft with different priorities. Why is it so hard to get 130k people in complete alignment across all areas? Seems like a simple problem.
> It’s almost as if there are different groups in Microsoft with different priorities. Why is it so hard to get 130k people in complete alignment across all areas? Seems like a simple problem.
I feel like this could have been said without the sarcasm and it would have been a great discussion starter. Instead a potentially valid point/retort will be obscured by the tone and discussion surrounding it.
For example:
> Unfortunately large organizations often struggle to get groups into alignment on issues like this, unless there's a strong mandate from the top down (e.g. Amazon).
I am unconvinced that there is a meaningful conversation to have about this. Every time any topic comes up in the context of Microsoft, someone chimes in to state that it’s not their experience. Yeah, Microsoft is not a monolithic entity. It’s literally 130 thousand employees and tens of thousands of contractors. For literally any topic, there will be a subset of people who feel their immediate organization doesn’t match the direction the company intends.
A top-down mandate cannot align 130k employees on every topic, even if that attempt were an appropriate thing for upper management to spend their time on.
There might be a more interesting conversation if context were given. If the comment about management not caring about RESTful APIs is coming from someone working as part of a team focused on public APIs, it’s a lot more interesting than someone working on desktop clients.
Edit: You know what, you’re right. I read the comment as a flippant “pft, that’s not my experience” and responded as such. I should have assumed better intent. Maybe the commenter was questioning why their experience doesn’t match up with this. Maybe they were asking why management isn’t pushing this effectively. I assumed a specific intent and should have considered different potential intentions.
There are a limited number of mandates that can come from the top. Upper management loses credibility when they issue endless mandates, even if all of the mandates are reasonable.
It’s also unrealistic that everyone underneath can focus on many mandates at once. Which leads to the loss of credibility when mandates start getting ignored out of necessity.
Yes, for any specific mandate like this, it could easily be accomplished. The point is that in aggregate there are too many of these to accomplish. Competent management will choose the highest value things to focus on. This isn’t defeatism. This is realism. Focusing on everything is the same as focusing on nothing.
Realistically, mandating RESTful APIs at the exec level is unlikely to be a big win for Microsoft. The teams working on APIs at scale are largely already doing this (and you’ll notice multiple groups represented by the authors). The teams that aren’t doing this are largely not building APIs that benefit a great deal from RESTful APIs, because they’re building internal APIs or similar and RESTfulness would be nice to have but not particularly impactful.
The primary innovation in REST is HATEOAS (which isn't mentioned in the document at all). JSON isn't a hypertext; it just isn't a good format for RESTful services.
The trouble with HATEOAS is that it requires extra work on the client. The client is supposed to start at the root of the API, request URLs, and cache them. Getting the data one wants requires multiple HTTP calls, many of which are supposed to be cached.
I tried implementing this and found it toilsome. It was far easier to use versioned URLs that followed a documented pattern.
When I checked about three years ago there wasn’t much in the open source community that I could build atop for clients. I also didn’t want to maintain an SDK client in addition to the API itself.
"Toilsome" is probably the most kindly yet still accurate thing one could say about exposing HATEOAS to the real world, at this particular point in time.
Every 1.0 web app developer in the world implemented HATEOAS to a great extent without even thinking about it.
It's only toilsome when you try to shoe-horn it into a traditional data API, rather than accept it as a unique descriptive aspect of the early web architecture.
It's all a category error. HATEOAS was descriptive: Fielding was describing the early web architecture, and it was implemented without conscious thought in web 1.0 apps.
It only became problematic when we tried to shoe-horn it into traditional data APIs.
No, HATEOAS is not for humans; it was imagined for API consumers. All those links that you mentioned in the GitHub API responses are named, and you as a client can use them by name while the actual URL can change. Also, you as an API consumer can know what actions are available.
That being said, I have never seen an API client developed mainly around HATEOAS.
> All those links that you mentioned in the GitHub API responses are named, and you as a client can use them by name while the actual URL can change.
I think that's a good idea, just like REST and HATEOAS are both ideas trying to solve problems.
I'm just skeptical that it'd work in practice. It seems vanishingly unlikely that consumers will use it correctly enough that changing the URLs in that mechanism wouldn't result in making the breakage even more mysterious.
When I say "it only makes sense if you've got a browser on the other end," the issue is that you need a client approaching the complexity of a browser to do all the redirection and such that makes this level of flexibility work for the API implementer.
I believe something like that is possible, but this isn't the way.
Imagine an API returning HTML, too. I've built such APIs (not public). I consider it best practice. The HTML representation can be utilized as the frontend. At the least, it's the interactive documentation for the API. In fact, when I built such APIs, I usually started with the HTML representation.
But isn’t the link almost always just the current link plus the link name?
Apart from being able to tell the client what functions they are allowed to use, and a very superficial form of documentation, I don’t really get the advantage of it.
Man, I wanted to love gRPC. The disconnect between Protobufs and languages just felt too great though. You could do some weird things that just made every language feel, to some degree, non-idiomatic.
I switched to Twirp at one point to retain the simplicity of RPC + Protobuf, but avoid some of the complexity we didn't need via gRPC... but even that suffered, of course, from the Protobuf problem.
Finally I'm back to plain HTTP and JSON. We don't worry too much about REST fundamentals, and honestly we're more like an ad-hoc (JSON) RPC over HTTP, but it's simple.
The only problem is documentation. The one thing that I found perfect with Protobuf. Seems really hard to have everything here.
But in short, Protobuf is inherently a language of its own, like JSON etc. It's feature-rich enough that it can cause a fair number of incompatibilities with a language's preferred style or usage of features.
Where the incompatibility shows up depends on the language. I found it to be very different between Rust and Go, for example.
PREFIX: It's been ~1.5 years since I've used Protobuf and Go together, so forgive my memory.
Sure, take Go for example. Protobuf to Go works well, but there are some features of the Protobuf language that just don't exist in Go. Enums, for example. While Go does have constructs that are similar to Enums, they're just different enough to make it a bit weird.
This basic problem gets worse when you try to use it, though. The `oneof` construct, for example, is sort of impossible in Go. IIRC the Go implementation had to use runtime type checking to give some resemblance to the Protobuf spec.
Rust (which I focus on now) was far better with Protobufs. As far as I remember, there wasn't much of Protobufs that broke idiomatic Rust. However, there was plenty of idiomatic Rust that broke Protobufs iirc. Things like complex data structures behind enum variants. Enum variants as structs, tuple values, etc. iirc it was bad enough that I usually used an abstraction library to write my idiomatic data structures and convert to/from the Protobuf structs.
Which is how I left it altogether. I realized I had a ton of glue code trying to make up for the incompatibilities between Protobufs and (Go|Rust), such that it would just be easier to drop it, at least as far as my code is concerned.
We now struggle with documentation, something Protobuf did excellently, but at least the code smell is gone.
Thanks, that's extremely helpful as I'm working on a language that needs to compile into usable objects. It does have a union type because they wind up being the least common denominator between enums, inheritance, switches, etc. I figured that was going to be tricky to translate into languages without explicit sum types, so I'll definitely take a look at protobuf and Go and see if there's a way to make it fit better.
> The `oneof` construct, for example, is sort of impossible in Go. IIRC the Go implementation had to use runtime type checking to give some resemblance to the Protobuf spec.
It looks like they took a reasonable approach[1], and maybe the deeper issue is that protobuf assumes you want direct access to those structures. That's not an unreasonable assumption given its domain, but I can see a red flag: the code generator is solving the problem by generating a forest of types, but that's also something a developer would never do.
Another approach would be to make the oneof implementation more opaque and let you access things via methods. While you'd always want to allow the consumer to ask "which kind of avatar" is this, you could also let the consumer query "get me the avatar image url" and that could return either success or an error.
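From memory, the generated shape looks roughly like the following (hand-simplified from what protoc-gen-go emits; the avatar message is hypothetical), with a helper in the spirit of that opaque-accessor idea:

    package pb

    // Hand-written approximation of generated code for:
    //   oneof avatar { string image_url = 1; bytes image_data = 2; }

    type isProfile_Avatar interface{ isProfile_Avatar() }

    type Profile_ImageUrl struct{ ImageUrl string }
    type Profile_ImageData struct{ ImageData []byte }

    func (*Profile_ImageUrl) isProfile_Avatar()  {}
    func (*Profile_ImageData) isProfile_Avatar() {}

    type Profile struct {
        Avatar isProfile_Avatar
    }

    // Consumers recover the variant with a runtime type switch,
    // which the compiler cannot check for exhaustiveness.
    func avatarURL(p *Profile) (string, bool) {
        switch a := p.Avatar.(type) {
        case *Profile_ImageUrl:
            return a.ImageUrl, true
        case *Profile_ImageData:
            return "", false // raw bytes, no URL to give
        default:
            return "", false
        }
    }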
> I realized I had a ton of glue code trying to make up for the incompatibilities in Protobufs + (Go|Rust), such that it would just be easier to drop it
That's the acid test for whether it works. And it means you can figure out if your language is good by porting a non-trivial codebase using an existing API and seeing how much glue is required.
I've seen that too, but it's really only a big deal if you're doing something else smelly: carrying the codegen'd proto objects throughout your application. Better to handle proto messages the same way you should be handling any interface with the outside world: translate them into your code's own domain model, which can (and should) follow any idiom you like, as early as possible, and get on with your day.
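A sketch of that seam (both types are hypothetical stand-ins):

    package domain

    // Wire is the codegen'd transport shape; User is the
    // domain model the rest of the app actually uses.
    type Wire struct {
        Id   string
        Name string
    }

    type User struct {
        ID   string
        Name string
    }

    // toDomain is the single place where transport quirks are
    // absorbed; everything past it is idiomatic application code.
    func toDomain(w Wire) User {
        return User{ID: w.Id, Name: w.Name}
    }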
I think I agree with you, BUT I will say that it's a problem I just don't have with JSON.
I think it's because JSON is inherently smaller, and the spec primarily focuses on basic types that handle data.
With JSON I can write idiomatic code, in any language, and the translation to and from my code is correct. I don't need to abstract away my JSON code for arbitrary reasons, it works.
I'm not sure why that is TBH, I just know that it's a restriction of Protobuf I don't find myself running into with JSON.
Why isn't this called "HTTP API Guidelines"? It doesn't seem to have much to do with REST at all. For example, it says that people should be able to construct URLs, whereas the REST style uses the URLs found in resources.
To be clear, I'm not saying that there is anything wrong with the practices they propose here, just that they're not what they're claiming they are.
It looks weird when you write it that way, but the ' ' character gets translated into '+' when you encode the URI before making a request. If it were me, I'd prefer to see the '+' encoding rather than ' ', because it looks strange when it's not a valid URI.
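A quick demonstration of the two escapings in Go:

    package main

    import (
        "fmt"
        "net/url"
    )

    func main() {
        // Query components encode a space as '+',
        // path components as '%20'.
        fmt.Println(url.QueryEscape("last seen")) // last+seen
        fmt.Println(url.PathEscape("last seen"))  // last%20seen
    }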
I find that MS are often quite good at developing good APIs and documenting them.
In this new doc I particularly like the Delta Queries section [0]. It's something that's difficult to get right but with this you can pretty much copy and paste their guidelines for your project.
> Humans SHOULD be able to easily read and construct URLs. This facilitates discovery and eases adoption on platforms without a well-supported client library.
The URL is ephemeral in REST. That is because you create the documents on the fly; they can be linked or not to things that you store in the datastore. This allows you to easily change things around as needed, because the URL is not the API. The hyperlinks are the API. The URL is like a memory pointer; you shouldn't care about it.
The reason people like idempotency is because it is great for "at least once" systems (as opposed to "exactly once", which is hard to guarantee).
But REST's version of idempotency isn't good for this. If you retry your request multiple times (due to flaky connection or whatever), it only guarantees the same server state if your duplicate requests are bunched up.
For example, if you do a DELETE and then recreate the resource with a POST, a duplicate straggler DELETE floating around will end up deleting your new creation.
That's why you should only allow DELETE, PATCH, and PUT with a UUID or similar ID. Exactly the same could be said of a DB query not running in a transaction.
That's because DELETE is only idempotent, not commutative. Commutativity is solved by conflict-free replicated data types, but they can be extremely hard to implement without redefining correctness.
If "idempotent" means "do the same thing", one would expect the same response on a subsequent call, no?
"Do the same thing" isn't specific enough. Does it mean the state of the system? Does it mean the response code? Does it mean the response body? Does it mean all of these things? All seem like reasonable interpretations.
In the context of REST it means the same server state (a database table state, a mailbox state, file system state, etc.).
It doesn’t mean “do the same thing” or “get the same response”, only “end up in the same place”.
You can do a delete of x, get an OK response, and the important state is that x is now deleted on the server. Then the next call to delete x does nothing (which is different from the first call, which deleted x!). So to be idempotent, it actually has to do something different on the second call. The response in the second case can be “ok” (because the thing is deleted) but could also be e.g. “x doesn’t exist”.
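A toy handler showing that shape (in-memory state, not concurrency-safe): the end state converges even though the status code differs:

    package del

    import "net/http"

    var store = map[string]string{"x": "value"} // toy state

    // DELETE /things?id={id}: after any number of calls the
    // server state is the same (the thing is gone), even though
    // the first call returns 204 and later ones 404.
    func remove(w http.ResponseWriter, r *http.Request) {
        id := r.URL.Query().Get("id")
        if _, ok := store[id]; !ok {
            w.WriteHeader(http.StatusNotFound) // state unchanged
            return
        }
        delete(store, id)
        w.WriteHeader(http.StatusNoContent)
    }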
Versioning through HTTP headers was the spark for why I was looking around for guidelines. I think it is the better way to go, compared to URL versioning.
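For what it's worth, header versioning is often done via the media type. A hypothetical Go handler dispatching on it:

    package version

    import (
        "net/http"
        "strings"
    )

    // The client asks for a version in the Accept header, e.g.
    //   Accept: application/vnd.example.v2+json
    // and the handler picks the representation to serve.
    func handler(w http.ResponseWriter, r *http.Request) {
        if strings.Contains(r.Header.Get("Accept"), "vnd.example.v2") {
            w.Header().Set("Content-Type",
                "application/vnd.example.v2+json")
            w.Write([]byte(`{"version": 2}`))
            return
        }
        // Default to v1 for older clients.
        w.Header().Set("Content-Type",
            "application/vnd.example.v1+json")
        w.Write([]byte(`{"version": 1}`))
    }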
Ok, this may be a dumb question but is there any risk to using/enforcing PATCH in your APIs in this day and age? I still avoid it but that's probably just cargo culting now, right?
What's wrong with SOAP? SOAP is a lot more than REST, so you'd need to explain an additional technology beyond REST in order to recommend moving away from SOAP.
I remember doing an integration with a finance company's API. I read their documentation, wrote the code, all worked fine, and we were moving towards the go-live date. A week later, my integration failed. I tried hell knows how many things and eventually contacted their lead dev asking what I was doing wrong... The answer was "Oh, emm, yes, we've kind of changed the response format slightly; it's not in the documentation"... I got that fixed, and it was working again. Then, we were supposed to go live the next week. Two days before the go-live date, I got told we were cancelling all business with the company... All the integration code went into the bin.
"Services MUST increment their version number in response to any breaking API change. See the following section for a detailed discussion of what constitutes a breaking change. Services MAY increment their version number for nonbreaking changes as well, if desired.”
Of course you want to mention that a new field was added and what it is for, but why do you need a big notice? Your clients should just continue to work.
I'm guessing this is the same as "adding another column to a db schema should not break your app". So basically don't iterate over objects (or iterate and only use keys you recognize) and you are fine.
That does not seem at all weird to me. The client side of it is the same as "don't SELECT *".
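In Go, for example, the standard library already behaves as a tolerant reader: encoding/json drops unknown keys unless you opt into DisallowUnknownFields. So a new server-side field is a non-breaking change:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // The client decodes only the keys it knows about;
    // encoding/json silently skips the rest.
    type Item struct {
        ID   string `json:"id"`
        Name string `json:"name"`
    }

    func main() {
        payload := `{"id": "1", "name": "a", "added_later": true}`
        var it Item
        if err := json.Unmarshal([]byte(payload), &it); err != nil {
            panic(err)
        }
        fmt.Printf("%+v\n", it) // {ID:1 Name:a}
    }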
This is more an opinionated design document for HTTP than it is for REST. There is not a single word about semantic formats or HATEOAS. I really expected more from Microsoft in this area.
Had I the time to write such a design document, I would start with resources, versioning, URIs, and semantic documents. I would write about entity models, linking (links, link templates), and actions. I would write about representations and about how representations can support optimizations, embedding of resources, and entity expansion, which would otherwise be addressed by inventions like GraphQL.
And only afterwards, I would write about HTTP as a transfer protocol. But that part can be brief, because there is already the HTTP specification out there.
GET - Return the current value of an object, is idempotent;
PUT - Replace an object, or create a named object, when applicable, is idempotent;
DELETE - Delete an object, is idempotent;
POST - Create a new object based on the data provided, or submit a command, NOT idempotent;
HEAD - Return metadata of an object for a GET response. Resources that support the GET method MAY support the HEAD method as well, is idempotent;
PATCH - Apply a partial update to an object, NOT idempotent;
OPTIONS - Get information about a request, is idempotent.
Most importantly, that PUT is idempotent.
Credit to arkadiytehgraet for retyping the table to be readable. Please give them an upvote for the effort.