What exactly is RESTful programming? (stackoverflow.com)
121 points by klochner on Feb 1, 2012 | 109 comments



REST is a scam.

For most people, REST is simply the negation of SOAP. They should really use some term like POX (Plain Old XML) or POJ (Plain Old JSON) for simple RPC protocols implemented over http.

If I were still interested in being a maintenance programmer, I could make a pretty good living fixing broken systems built by people who drank Fielding's Kool-Aid.

For instance, I once saw a Silverlight app that took 20 minutes to initialize because it traversed a tree of relationships using REST. It started out O.K. but as the app grew more complicated it took tens of thousands of requests and an incredible amount of latency.

I reorganized it so that it got all this information in one POX call that took 2 seconds. A bad architecture slowed the application down by a factor of 600 and made the difference between something that would have worked and something that failed.

People who are building toy applications can blubber about "the decoupling of the client from the server" but the #1 delusion in distributed systems is that you can compose distributed operations the same way you compose function calls in a normal program.

The issue is that the latency involved in a distributed call is about 100 million times more than that involved in a function call, and the failure rate is more than 10 trillion times higher. All of the great distributed algorithms, such as Jacobson's heuristic for TCP and BitTorrent, have holistic properties possessed by the system as a whole that are responsible for their success.

Hypothetically you could deal with the reliability problems of REST applications by using distributed transactions, but so far as performance and latency goes, this is like putting out a fire with gasoline.

Security is another problem with REST. Security rules are usually about state transitions, not about states. To ensure that security (and integrity) constraints are met, REST applications need to contain error prone code that compares the before and after states to check if the transition is legal. This is in contrast to POX/RPC applications in which it is straightforward to analyse the security impacts of individual RPC calls.


REST is not a scam, it's THE term for the underlying architectural style of the web.

Your straw-man interpretation is a scam.

The crappy Silverlight app is badly designed, and the performance issues you were seeing can be optimised away trivially (create a new composite resource which can represent the entire tree in one go). That has nothing to do with REST.

"the #1 delusion in distributed systems is that you can compose distributed operations the same way you compose function calls in a normal program" - This is exactly what REST addresses as a style.. i.e. REST is not RPC.

Latency is another issue which REST addresses directly. This is why caching is an explicit constraint.

I have no idea what your point is re: security, sorry. Modelling state transitions is not that difficult, many languages have existing tools to help you do this.


People say stuff like "[REST is] THE term for the underlying architectural style of the web", but that assertion is pretty much back-rationalized onto the web. I know I'm not the only HN'er who was there at the beginning for HTTP, and this conceptual purity that RESTians allude to just wasn't there.

For as long as there has been a web, there has been at least one (often more than one) conceptual ideal claimed to be the heart of the World Wide Web. The reality is that the whole system is hacked together. Things that work well tend to be discovered, re-discovered, and perpetuated --- but that doesn't make any of them the web's "underlying architectural style".

I tend to buy the idea that REST-ish-ness makes for better, clearer, more usable APIs than RPC. But I also think REST needs to win on the merits, not by waving some imaginary "REST is the web" flag.


The reality is that both are true - there was an "HTTP Object Model" that TBL and Roy Fielding had in mind in the 90s, but it was never formalized until Roy's thesis. Not everyone on the various IETF and W3C committees understood or agreed with it, clearly, and there are a lot of hacks in practice. There were attempts like HTTP-NG to supplant HTTP with CORBA-like Object RPC, but the arguments against this were basically informal arguments in favour of the uniform interface of URI + HTTP + MIME. Even today, the current HTML5 leads don't particularly seem to have much appreciation for the style.

That said, starting with the Apache web server, Roy had a lot of control with its architecture and approach to supporting particular HTTP features, using his mental model of the "Web's architectural style" to guide it. After 2000 and the thesis, there are many people that started implementing their clients and servers with the style as a philosophical guide.

The style also tends to be widely misunderstood and buzzword-ified, which is scammy.


I'm not sure I buy the argument that being a key contributor to Apache gave Fielding a huge amount of influence over the architectural style of web applications. The barn door on that was opened with CGI, and since then, the architecture of web applications has mostly been up to the market and the lowest common denominator of programmers.


I was not referring to the internal architecture of web applications inside an origin server. I was referring to the architecture of a web application across the various components (agent, proxy, gateway, server), connectors (client, server, cache, resolver, tunnel), and data elements (resource, identifier, representation, metadata) in the style.

The architecture of those interactions has remained fairly stable over the past 15 years, even specs that route around REST like WebSockets are trying to at least adopt the HTTP upgrade & tunnel connector (HTTP CONNECT) to ensure it integrates consistently with the style.


are you saying you are not convinced that the dissertation holds water?

afaict, REST is (right now) the undisputed way to interpret the web's architecture.


I'm saying what I said in my comment.

Do you disagree with any of it? I might be wrong, so a specific disagreement would be interesting.

An appeal to authority: less so.


honestly, I'm not sure what you were trying to say.

Yes it is a post-rational analysis of the web.. yes the web was not 'designed' - it evolved.. but do I think, because it evolved, that means it doesn't/can't have an underlying architectural style? no I don't. Do I think Fielding's analysis is sound? yep - and so do most others who have read it (which is a lot of people).

There is a big missing part of the dissertation though - it doesn't go into much detail about hypertext. Apparently Fielding had intended to include it but just ran out of time.


I'm saying something simple: that claims about REST being "the underlying architecture of the web" do not matter, and are moot. The web doesn't have an "underlying architecture", or if it does, it's much simpler than even REST.

The web of 1995 clearly did not work the way Fielding describes REST.

That doesn't make REST bad, but it does mean you have to advocate for it on the merits, not by waving a flag.


Tom, in what ways did the web not work the way Roy describes REST? I'm honestly curious why you think this.

All of the elements he describes in the thesis were deployed by 1994 and codified in the first internet draft for HTTP 1.0: http://tools.ietf.org/html/draft-fielding-http-spec-00

I am generally in agreement that it's best to advocate REST on the merits, but I also think you may be making light of the constant barrage of attacks against the Web architecture through the 90's (with CORBA) and early 2000's (with SOAP) that were defended against by a small core of people, some of whom (Mark Baker) did so at great personal and health expense.

There's a reason people wave the flag - we can't keep arguing complicated topics from first principles if we are ever going to progress.


I think what you're hearing me express is skepticism that the (self-appointed) elite that makes standards or writes dissertations about architecture is in any way the final word on what constitutes the web. Without the hundreds of thousands of crappy applications that actually populate the web, it would be an academic exercise.

Regardless of what the RFC says, the CGI interface that provided the foundation for the original wave of web apps was just a series of getenv()'s. It was absolutely not the case that web developers in 1995 were careful to make sure that they represented state consistently or provided discoverable resources. Then as now, most web applications (and APIs, where they existed) were black boxes with parameters defined by expediency.

Also, even Fielding doesn't claim that RPC interfaces are wrong. He seems to have a much bigger (and more reasonable) problem with people pretending that RPC interfaces are REST. There are plenty of interfaces that make just as much sense in RPC as they would in REST; it's not unlikely that there are plenty of them that make more sense expressed as RPC.

I also don't understand the emotional appeal. CORBA vs. REST isn't a moral issue; people who sacrifice their health to make sure we're not using SOAP have their priorities screwed up.

(Thomas, please, btw).


Thomas, sorry.

I would actually think Roy (and the actual elite that built HTTP in the 90s) would agree with you on your first point - he defines REST, but that doesn't define the Web, which is hundreds of thousands of scrappy applications. REST was intended to cover the common case of the Web so that the standards themselves will be optimized to the common case. The "self appointed" elite are wrong to suggest that REST is the Web, but to me it's just as wrong to suggest that REST doesn't provide a useful abstract model of what makes the web work.

Regarding CGI, yeah, it was a hack, and it worked. But look at how website design evolved -- sometimes those websites sucked because they changed URIs all the time breaking bookmarks, search engines, etc. These practices weren't thought of in academic terms, no, but the point is that there was a theory underlying what emergent properties made websites good and popular (stable URIs for example, which led to better search, and bookmarking), which led to mainstream aphorisms like "cool URIs don't change" in the 97-98 timeframe. This was eventually codified academically in Roy's thesis a few years later. My point is that this wasn't completely accidental, there were people actively advocating "good practices" in the 90s, even without a formal published theory - it was informally understood by those who were on the mailing lists.

And I agree, RPC isn't wrong; the point is that it is a fundamentally flawed approach if your goal is interoperability at a global scale. If you want client A to talk to server B, have at it, just don't call it REST or assume you'll get the desirable properties. Many so-called REST APIs aren't globally interoperable due to this conceptual baggage, but some are better than others.

As for the emotional appeal, my point is... these debates have been discussed ad nauseam for nearly 15 years, with many cases involving personal sacrifice (justified or not). Debating on the merits is fine, but having to go back to first principles every 5 years for a new generation means we are just spinning our wheels. That's why there often is an appeal to history or authority, i.e. "Read the mailing lists, they're archived", etc. That's not a great situation, I admit. But most new technologies have traditionally had a "vendor engine" behind them pushing marketing, training, books, etc. to perpetuate the meme. That seems to be less common with the open source / internet vendor world, or maybe it's just that REST is still too new and misunderstood.


I can't find anything to disagree with here (well, I could pick nits about RPC not scaling). Damn you.


can you really get 'much' simpler than REST already is? it's only 5 constraints.

Why is it significant that the web evolved between 1995 and when Fielding defined REST?

To most people, the paper is well-reasoned and REST is observable in the web where the constraints appear to produce their supposed beneficial effects. I think it's reasonable to see REST as the underlying style of the web, and the web has some very beneficial properties (scalability, evolvability) which therefore stand to REST's merit.


None of this has anything to do with what I said, or with the criticism of REST that you replied to originally. I don't agree with his take on REST (at least not entirely) but it wasn't lazy. This argument, which uses the string "Fielding" at least once per comment, is lazy. That's all I'm saying.


> (create a new composite resource which can represent the entire tree in one go)

This is the problem I face with REST all the time. I have yet to see someone pull this off gracefully. Take the hierarchy Authors/Books/Characters. If I wanted the full list of all Authors, with all Books, with all Characters what are you suggesting I do? Now what if there was another level after that? And after that?

This simple example could work with the use of a 'depth' value which has been suggested. But it doesn't work all the time. Especially when there are forks in the hierarchy and you want to go deeper in one and not the other. Basically I've determined that it seems impossible to have a pure 'model' of your data and an efficient API.


    Take the hierarchy Authors/Books/Characters. If I wanted
    the full list of all Authors, with all Books, with all
    Characters what are you suggesting I do? 
The service could have a /characters resource, which returns a list of all characters, along with the author/book.


Or the top level document looks something like (in JSON):

{ "characters": "<uri>", ... }

Doing a GET on the supplied characters URI gets you a list of characters, each with a mixture of relevant properties and a URI for the character itself so you can interact with it directly.
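
Concretely, a sketch of those two responses (the URIs and field names here are made up):

    // GET on the top-level document might return something like:
    var root = {
        "authors":    "http://api.example.com/authors",
        "books":      "http://api.example.com/books",
        "characters": "http://api.example.com/characters"
    };

    // GET on the characters URI then returns the whole list in one request,
    // each entry carrying its own URI (plus links to its book and author)
    // so the client can interact with it directly:
    var characters = [
        { "name":   "Sherlock Holmes",
          "self":   "http://api.example.com/characters/1",
          "book":   "http://api.example.com/books/12",
          "author": "http://api.example.com/authors/3" }
        // ...
    ];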


OData is one solution, I think.


(1) The "composite resource" which initializes the application is bound specifically to the application. This codesign greatly simplifies the implementation of an application that uses async comm (Javascript, GWT, Silverlight, etc.) because you'll never get communications choreography right unless you stick to the "one user action -> one request -> update UI" paradigm.

(2) Caching is part of the problem as much as it is part of the solution in making reliable apps. Anyone who's actually written AJAX apps knows that you frequently need to use tricks to disable caching to get things working right.

(3) As for security, this isn't just state transitions in the Turing sense but could involve numeric ranges and other kinds of variables. Generally code that compares composite objects is the kind of code that hides errors, particularly given the fact that for most languages there is something funky about the equals operator. (This is certainly true of C++, Java and PHP.)


Is a particularly noisy HTTP API really the fault of a RESTful design? The principles as I understand them should have little effect on this [1].

If something requires multiple API calls when it could just require one, this points to the resource being too finely-grained and ill-fitting to the behavioural domain of the client.

The resources attached to URIs do not need to be directly related to the underlying data model implementation. The interface which you provide to a client-side developer should closely match their domain and not just your implementation [2]. Provide useful abstractions.

> "To ensure that security (and integrity) constraints are met, REST applications need to contain error prone code that compares the before and after states to check if the transition is legal."

Can you expand on this? I've not run into similar problems myself.

[1] RESTful Web Services correctly inherit and use HTTP as their interface.

[2] http://en.wikipedia.org/wiki/Domain-driven_design


> The resources attached to URIs do not need to be directly related to the underlying data model implementation. The interface which you provide to a client-side developer should closely match their domain and not just your implementation [2]. Provide useful abstractions.

Great point. I think this is where I get hung up a lot. It's just too much work sometimes to try and imagine how people will consume your API. In a perfect world people should have the ability to mash up your data to create any kind of application they want, and so we tend to drill straight down to the fine-grained model. It feels easier since we have already figured out that model; now all we need is a URL structure to also represent it.

But, when I explore creating a model which represents the consumer's domain I feel like it's really dang close to RPC. Basically I end up slipping in the opposite direction from before. Why not just make every API call have its own resource representation? FullCatalogWithAuthors might as well be GetFullCatalogWithAuthors.


Exactly, this is just the case of a poorly-designed, lazily-implemented REST API. Good APIs generally can't be auto-generated from some ORM model definition, which is what it sounds like the one mentioned in the OP did.


Regarding your experiences with the Silverlight app, I don't see that as a case against REST at all. IMO, it's perfectly reasonable to have a view which returns some kind of aggregate resource, while still having endpoints for each individual resource. Or including "sub-resources" as full objects instead of arrays of URIs, or letting the consumer specify their preferred depth of recursion.
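
For instance, a sketch of such an aggregate view (the query parameter and payload shape are invented, not any standard):

    // A hypothetical aggregate resource that embeds sub-resources to a
    // client-chosen depth, so the whole tree arrives in one round trip:
    //
    //   GET /authors?embed=books.characters
    //
    // might return:
    var authors = [
        { "name": "Arthur Conan Doyle",
          "self": "/authors/3",
          "books": [
              { "title": "A Study in Scarlet",
                "self": "/books/12",
                "characters": [
                    { "name": "Sherlock Holmes", "self": "/characters/1" },
                    { "name": "Dr. Watson",      "self": "/characters/2" }
                ] }
          ] }
    ];
    // The individual /authors/3, /books/12, ... endpoints remain available.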


HATEOAS[1] is the one part of REST that I don't understand. I can see that it's great for discoverability, but the reality of the web is that a request round-trip takes a non-negligible amount of time. Also, it seems to fly in the face of persistent IDs, with the URL being the ID.

1: The reason the app in question had to traverse a tree and the principle that the poster probably breached by reorganising.


One trouble with discoverability is the problem with precise semantics. Even if an automated system can discover the existence of a service, to know ~exactly~ what to do with it requires progress in upper and middle ontologies that still isn't here in 2012.

I hate to defend SOAP, but WSDL does a good job of documenting web services. My jaw dropped when I pointed Visual Studio at Salesforce.com's WSDL file and it correctly created a set of statically typed stub functions for a very complex API... I'm so used to this stuff not working that when it does it's like WOW.

HATEOAS addresses just a fraction of what WSDL does.


My experience is limited, but I've never had a WSDL provide any tangible benefit. In one instance, the WSDL for a single method (+login) SOAP service generated a 26,000 line Apache Axis2 stub that didn't compile without manual attention. I was able to replace it with about 30 lines of custom code once I had the wire-trace and could figure out the weird undocumented incantations required for it to work.

I suspect that .NET WSDLs (of which this was one) work well with .NET clients - but in that case it might as well be any proprietary RPC protocol.

Correct me if I'm wrong, but REST discoverability isn't meant to provide machine discoverability, but human discoverability.


The applications that tend to get pentested most are for a variety of reasons more likely to have SOAP interfaces than apps in general, so we spend a fair bit of time with WSDL files, and this just isn't my experience; for the most part, given a WSDL for a service, a Ruby RPC binding for that service is mostly painless.

That doesn't mean I like SOAP (I don't), but I don't find that this particular critique of it rings true.


In 11 years of working with SOAP that is definitely not my experience.


HATEOAS is a design constraint, WSDL is a spec. HATEOAS could accomplish a lot more than what WSDL does with appropriate specs and a widely adopted programming model or two. This has taken a long time to improve.

So far we've only seen a few, like AtomPub and its extensions, or the discovery protocol stack (http://hueniverse.com/2009/11/the-discovery-protocol-stack-r...).

The challenge has been, IMO, a programming model that fits the Web, which can then fit into a media type spec or two that adopts it (similar to how HTML was codified for the experience of a web browser). WSDL basically is the procedural programming model, where networked interactions are usually mapped to procedure calls. This model has a whole bunch of long-discussed problems when dealing with wider-scale interoperability.

I've tried to outline this challenge in detail in my keynote at last year's WS-REST workshop on the "Write Side of the Web": http://www.infoq.com/news/2011/03/web-write-side

Is WSDL richer and more productive than documenting a bunch of URI patterns for a complex API? I'm not sure about that. WSDL tends towards code generation for very specific sorts of interaction scenarios, and ignores things like cacheability of results, the safety or idempotency of the request, and the ability to access data across endpoints without a lot of a priori knowledge in the client. Many of those properties are easier to implement with plain HTTP used in a RESTful manner.


Identifiers like URIs are persistent, but the representations they provide are not persistent over time.

So, if I provide a URI that means "a list of links to various resources with interesting data", those linked URIs can be bookmarked (they're persistent), but the original URI has every right to change the links later on if it thinks there are more relevant ones (e.g. newer, more relevant, etc.). Think of how a news site's main page changes with "Top News Stories", for example. That doesn't mean the older URIs it pointed to yesterday are necessarily bad.
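
As a sketch (a made-up representation of that front page):

    // GET /top-stories -- the list URI is stable, its contents are not:
    var today = {
        "self": "/top-stories",
        "stories": [
            { "title": "Story A", "href": "/stories/8831" },
            { "title": "Story B", "href": "/stories/8840" }
        ]
    };
    // Tomorrow the same GET may link to different stories, while
    // /stories/8831 remains a valid, bookmarkable URI.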

For a RESTful machine-to-machine interface, this HATEOAS approach is arguably one of the more effective ways to do extensibility and versioning: http://www.mnot.net/blog/2011/10/25/web_api_versioning_smack...


That just sounds like a badly designed interface - hardly a problem with REST as an approach.


REST is not RPC; well, I don't think of it that way anyway.

RPC gives the server parameters and asks it to actually do something, whereas REST asks for a representation of some resource.


I keep hearing that, but I don't understand exactly what it means. As far as I know, RPC in this case means "remote procedure call". So could someone provide simple examples of one thing that would be valid for RPC and a different thing that would work for REST?
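
Is the difference roughly something like this? (Hypothetical order API; the endpoints and payloads are made up.)

    // RPC style: one endpoint, and the "verb" travels in the payload.
    var rpc = new XMLHttpRequest();
    rpc.open("POST", "/api");
    rpc.setRequestHeader("Content-Type", "application/json");
    rpc.send(JSON.stringify({ method: "cancelOrder", params: { orderId: 42 } }));

    // REST style: the order is a resource with its own URI, manipulated
    // through the uniform HTTP methods.
    var rest = new XMLHttpRequest();
    rest.open("DELETE", "/orders/42"); // cancel the order by deleting the resource
    rest.send();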


> For instance, I once saw a Silverlight app that took 20 minutes to initialize because it traversed a tree of relationships using REST. It started out O.K. but as the app grew more complicated it took tens of thousands of requests and an incredible amount of latency.

Not that I agree with that particular architecture, but my first question is: why did the Silverlight app discard the knowledge that it had worked hard to discover? If a technician spent 20 minutes figuring out how a thing worked, would he just willfully forget it?

> People who are building toy applications can blubber about "the decoupling of the client from the server" but the #1 delusion in distributed systems is that you can compose distributed operations the same way you compose function calls in a normal program.

The whole idea behind object oriented programming is that you don't compose functions, you let objects pass and respond to messages and record their observations (state).

> All of the great distributed algorithms, such as Jacobson's heuristic for TCP and BitTorrent, have holistic properties possessed by the system as a whole that are responsible for their success.

Meaning that the "objects" observe and record information about the world and respond appropriately based on their (recorded) observations. Interestingly enough, Alan Kay said that TCP was one of the few things that was designed well from the start. (Or maybe it was IP, I'm trying to find a source.)

>Security is another problem with REST. Security rules are usually about state transitions, not about states. To ensure that security (and integrity) constraints are met, REST applications need to contain error prone code that compares the before and after states to check if the transition is legal. This is in contrast to POX/RPC applications in which it is straightforward to analyse the security impacts of individual RPC calls.

That's a good point about security rules and state transitions, but I don't understand the problem: objects are supposed to be responsible for some kind of state, and we should let them worry about their state transitions. Can you provide an example of error-prone code that needs to compare the before and after state?

------------------------------------------------------------------------------------------------------------------------

REST isn't a scam but the tooling (programming languages and frameworks) just aren't there. Our languages, even the so-called object-oriented ones are still based around the idea of calling procedures. REST works when you're dealing with objects that respond to messages, not imperatives.

Alan Kay made the case for something like REST in his 1997 OOPSLA keynote[1] (preceding Fielding's dissertation by 3 years). The idea is that in really big systems (distributed over space and time), the individual components need to be smart enough to learn how to interact with the other components because the system is simply too big to accommodate static knowledge (and I say this as a fan of statically typed languages).

[1]http://video.google.com/videoplay?docid=-2950949730059754521 at 43:00


> REST works when you're dealing with objects that respond to messages

You mean GET, PUT, POST, DELETE? This is my criticism of REST. If we transform all our processing objectives into nouns which we can address with these verbs, we will arrive straight away in the http://steve-yegge.blogspot.com/2006/03/execution-in-kingdom...

> The whole idea behind object oriented programming is that you don't compose functions, you let objects pass and respond to messages and record their observations (state).

But this was SOAP in the first place?


There is nothing in REST that suggests you can't have more verbs. The restriction is that they need to be potentially applicable to all resources. They can't be definitionally specific to a subset of resources, or you lose the benefits of a uniform interface.

The point of all of this is interoperability. Communication requires shared understanding, which is impossible if everyone invents their own nouns and verbs at whim.


This just sounds like a repeat of XML and similar debacles, where people tried to address design decisions outside the problem domain. XML is no more interoperable than a well-documented binary protocol. I'm sure REST is great for some applications, but this attitude that we can do all our design upfront is bad for the web. I mean, the browser just recently rediscovered interrupt-driven programming with the introduction of websockets. That's pretty embarrassing if you ask me.


REST is the opposite of upfront design. The big idea is that objects can, in an ad-hoc fashion, discover what services another object offers (HATEOAS). If an object chooses to remember the state and list of services it discovers (caching), the remote object can offer advice about how long to store the list (cache control).

In fact, REST specifically lets systems grow organically since each object never assumes knowledge about either state or state transitions, all knowledge is contingent and empirically discovered.


Do you know of a real-world working example of a HATEOAS system? This strikes me as an AI-Complete problem.


The intelligence driving the object doesn't have to be artificial. Usually the local object is acting as an agent for a human user. Common examples are REPLs and browsers.


You just described a bunch of design constraints. The basic OS facilities and the design of the Internet already make some design decisions for you. REST is adding more. That is designing things up front. People said the same thing about XML, and of course its only advantage is some level of self-documentation (that is going to break down pretty quickly when things get complicated).


> Using REST over a real network is not equivalent to object-oriented programming. Don't be asinine.

Okay. I'll let Alan Kay argue the exact same point then.

http://video.google.com/videoplay?docid=-2950949730059754521

The whole video is worth watching but the relevant part starts at 43:00. This video was made three years before Fielding submitted his dissertation.

(note: this isn't an appeal to authority fallacy since he's making an actual argument.)


I am not interested in what Alan Kay has to say. I asserted that you specified a bunch of design constraints. You did. You tried to counter by pointing out that it's just OO design. However, I'm not interested in arguing over what categories things fit into. The point is that the network imposes limitations and you often need a domain-specific design to get around them. For example, HTTP is awful for soft real-time applications.


I was not countering the claim that REST doesn't entail design constraints since at the time I believed OOP itself is a (useful) design constraint (the biggest being no access to state variables, "getters" and "setters" are bad).

But now that I think about it, I realize why people have a hard time understanding REST. It's not a "design" constraint any more than OOP is. It's an implementation constraint. In the case of OOP I think people confuse those notions because they conflate designing a system with designing class hierarchies. But OOP doesn't have anything to do with classes. A system is object oriented when the state is private, objects communicate by passing messages and methods are late-bound. Similarly, you don't really design a system to be RESTful, you simply commit to the idea that you never know — a priori — what methods will be available on a resource. You determine them by asking the object and those methods are only valid as long as the cache-control header says they are. Just about everything else follows from that.

> the point is that the network imposes limitations and you often need a domain-specific design to get around them

That's why REST emphasizes caching, stateless communication, the appropriate use of status codes, and the appropriate use of VERBs (idempotent vs non).

> for example, HTTP is awful for soft real-time applications.

I wouldn't write an OS kernel in Ruby either.


>But now that I think about it, I realize why people have a hard time understanding REST. It's not a "design" constraint any more than OOP is. It's an implementation constraint.

Your attempt to enforce some dichotomy between design and implementation is hopelessly misguided.

>That's why REST emphasizes caching, stateless communication, the appropriate use of status codes, and the appropriate use of VERBs (idempotent vs non).

These don't solve the major problems. If I want caching I can trivially implement it myself. It's a useless thing to implement at the architecture level.

Similarly for idempotency etc. It's all a huge academic wank. An appropriately designed protocol will solve all these problems without the constraints.

>I wouldn't write an OS kernel in Ruby either.

What a smug yet clueless response. You're begging the question.

I'm frankly bored of talking to someone who knows nothing about protocol design, but defends HTTP as a "proven" technology. It clearly isn't, it fails on many levels, and the whole web stack is completely fucked. Good day.


> These don't solve the major problems. If I want caching I can trivially implement it myself. It's a useless thing to implement at the architecture level.

Legions of computer scientists and computer engineers disagree with you. Caching is fundamentally an architectural concern.

>I'm frankly bored of talking to someone who knows nothing about protocol design....

Please point to your widely commercially deployed protocols before accusing others of ignorance.

> ..but defends HTTP as a "proven" technology. It clearly isn't, it fails on many levels, and the whole web stack is completely fucked. Good day.

You've demonstrated an astounding lack of respect for pretty much everyone on this topic here, and I'm finding it hard myself to tolerate the amount of bitterness emanating from you. Why are you participating here at all, unless you just want to tell us we're all clueless, and you can feel superior?


My last statement was rude. Please accept my apologies.


I've described object-oriented programming[1] where each object in the system has a URL.

[1] The Smalltalk variety of OOP which is about encapsulation, message passing, and extreme late binding.


Using REST over a real network is not equivalent to object-oriented programming. Don't be asinine.


REST's problem domain is the design of a distributed hypermedia system on a global scale. It's pretty general because it needs to be.

A "well documented protocol" (binary or not) by definition is interoperable within a certain context. The question is, how big is the scope of that context?

The point of the Web protocols is to provide a framework for evolving bits of agreement - not "big design up front", except for the essential bits that have proven themselves, or are essential to bootstrap communication. HTTP, MIME, and URI are those essential bits of agreement at the moment. Witness how HTML isn't as essential as it used to be, given the growth of JSON APIs and mobile native apps.


Except HTTP hasn't proven itself, except to a growing set of specialist programmers who don't know any better. The web is hideously unreliable. People have become accustomed to reclicking links and reloading pages. Aside from hyperlinks, the only "innovations" of the web already existed in a more efficient form in operating systems decades ago.

As for REST, I'm sure it's nice for some things. However, if you think interoperability is going to magically spring out of it you're ignoring the lessons of XML, which is that having an (ostensibly) human-understandable format is not going to magically allow machines to become more interoperable. All it will do is allow people to make bad guesses about the meaning of documents instead of reading the documentation.


There's nothing magic about interoperability - it's all about the architecture and the documentation. XML isn't an architecture, it's a tagged data format. I cannot understand your line of argument about how the two are related.

Similarly, to suggest that HTTP hasn't proven itself (compared to what?) or that "innovations" of the web existed decades ago (really!?) is nonsensical to me. It's the most widely deployed application protocol on Earth. It handles billions of dollars of transactions.

You claim the web is unreliable. I think that's a layer error. The issue is that networks are unreliable. That's not something one can paper over.


You put XML and REST in different categories. So what? They are both attempts to provide systems interoperability for a huge domain. What we got out of XML: some ability to browse the format with standard tools, and some human-readability. That is all we will get out of REST. If that's the point, so be it. If you want to use it, so be it. I might use it for some projects. However, to put it forward as something new and special is being willfully ignorant of history.

>Similarly, to suggest that HTTP hasn't proven itself (compared to what?)

Compared to writing your own protocol that is appropriate for your application. The fact that people think HTTP is good enough has meant the browser has only recently acquired the ability to have a proper duplex channel without polling. Sorry, but that's just pathetic.

>or that "innovations" of the web existed decades ago (really!?) is nonsensical to me.

Name one thing that can't be constructed more efficiently and flexibly on the desktop, aside from web links.

>It's the most widely deployed application protocol on Earth. It handles billions of dollars of transactions.

So what? Because it's popular that means it's good? I used to be dismissed because, as long as 7 years ago, I was telling people we needed sockets and proper client/server in the browser. Now I'm getting the last laugh as I watch the browser vendors take decades to slowly do this stuff. At each step, it is labeled "innovative" and interesting and hyped up beyond all comprehension. The progression of this trend is retarded by the insistence on inventing stupid premature optimisations like caching as a basic architectural feature.

>You claim the web is unreliable. I think that's a layer error. The issue is that networks are unreliable. That's not something one can paper over.

Then you're ignorant of the potential of low-level protocol design. Sorry, but I absolutely can tailor my network protocol to my particular application. The web takes the position that everyone should use HTTP. It is a terrible protocol for many things. How are you going to design your soft real-time applications using HTTP? The answer is you can't, because it is impossible to get the right guarantees. HTTP has only "proven itself" in the same sense that Windows has; it was good enough at the time, so now it's blown up and everyone's using it. So what? I want more.


> So what? They are both attempts to provide systems interoperability for a huge domain [...] However, to put it forward as something new and special is being willfully ignorant of history.

Firstly, they were not both attempts at systems interoperability for a huge domain. XML was about building a specific format. REST is an entire architecture. It's the difference between designing a car and designing the interstate freeway system.

Secondly, let me get this straight. It's been a long standing goal for decades of both academia and industry to build a global scale interoperable distributed system. That this was done with distributed hypermedia, bridging languages, graphics formats, operating systems and computer architecture is not "new and special"?

> [HTTP isn't proven] Compared to writing your own protocol that is appropriate for your application.

Most hand-written protocols suck.

It's also rarely required these days, given the strength of existing application protocols and the widespread desire for interoperability, but clearly you don't value that.

> The fact that people think HTTP is good enough has meant the browser has only recently acquired the ability to have a proper duplex channel without polling. Sorry, but that's just pathetic.

No, that's not pathetic, it's a consequence of the economics of scale. It is very difficult to economically sustain an event-driven internet scale system. e.g. http://roy.gbiv.com/untangled/2008/economies-of-scale

> Name one thing that can't be constructed more efficiently and flexibly on the desktop, aside from web links.

Firstly, most of what we do with computers these days is communication, processing, and commerce over web links.

Secondly, most data management applications are vastly more flexible, interoperable, and efficient with web technologies than they were with desktop technologies such as Access or Powerbuilder.

> I used to be dismissed because as long as 7 years ago I was telling people we needed sockets and proper client/server in the browser. Now I'm getting the last laugh as I watch the browser vendors take decades to slowly do this stuff.

I wouldn't be laughing yet. WebSockets is a sideshow that's not going to change a whole heck of a lot of how web apps are built. There will be interesting uses for it, but it's ultimately limited in scale and scope due to a lack of shared design constraints.

> Sorry, but I absolutely can tailor my network protocol to my particular application.

Sure you can, but you're basically making a value judgement that interoperability is of no concern to you, and that reuse is of no concern to you.

I mean, why use TCP, when we can just roll our own transmission layer on UDP? There are times that's needed (RTP), but we get a lot of productivity benefit with TCP.

> HTTP has only "proven itself" in the same sense that Windows has; it was good enough at the time, so now it's blown up and everyone's using it.

Sorry, that's just nonsense. The web has been a vast success story for global interoperability, and that can be directly attributable to the design constraints embodied in the main protocols of the web (HTTP, URI, and MIME).

I highly suggest you read Roy's thesis and reflect before postulating your opinions on this subject, since you really don't seem to have any appreciation for the amount of thought and engineering that went into the Web.

I would be more than happy to read to any sources you may have of what other protocols and/or techniques are clearly superior to the Web protocols (presuming I retain interoperability at scale as a major value).


SOAP wants you to call procedures and therefore you need a static list of procedures to call. An object, on the other hand, simply responds to messages. If you send it a message it doesn't understand it can respond with something like METHOD_MISSING which nicely translates to HTTP as 404.


Yes, I want a static list of procedures to call. GET, PUT, POST and DELETE are too coarse-grained.

Method missing is in my book completely different from Page Not Found. Method != Resource.


404 doesn't mean "Page Not Found". It simply means "Not Found".

This isn't pedantry, the result of a request (message) for an object is the representation of the state of that object. 404 is an application level error saying that the remote object has no method for dealing with the message. Obviously that also means it's not going to be able to send you a representation of the state of some object that was the response to the message.

Also, REST doesn't preclude having a static list of messages. It just prescribes that it's up to the remote object to report the messages to which it can respond and up to the local object to decide to statically record those messages somewhere so that it doesn't have to unnecessarily ask again. The remote object can even helpfully tell you when that list of "procedures" might expire so that the local object can know when it should ask again.


404 means the requested URI is not found, not "I can't/won't use this method".

For "I won't use this method on the resource you specified" you need a 405

For "I do not understand this method" you need a 501.


Well, this is a textbook case of a semantic divide: deciding what the codes should mean for your application. This is similar to figuring out whether NULL should mean "not applicable" or "not known" for a particular database table.

Now, I don't really think it's worth arguing and I can certainly see why you would use those codes that way but for me I've always treated 4xx codes as being about the resource/object and 5xx codes as being about the object's environment (i.e. the server). Said another way, I treat 5xx as runtime exceptions and 4xx as application error messages.

I'd never write code that returned a 5xx error from within my application for instance, that's for the server to handle.

405 means that the object knows about the method but for some reason won't allow you to call it. In OOP parlance, you would use it if someone tried to call the "Drive()" method without first calling the "StartEngine()" method. Indeed, RFC 2616 prescribes that you return a list of valid methods if you return a 405.

Thus, the only real candidates for something like METHOD_MISSING are 400 and 404. But 400 seems like it's for something that the application can't even understand. Like if the client is using some weird encoding in the host headers or something.
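
For what it's worth, a minimal Node.js sketch of that distinction (the resource is hypothetical):

    var http = require("http");

    http.createServer(function (req, res) {
        if (req.url !== "/orders/42") {
            res.writeHead(404);                      // the URI identifies nothing
            return res.end();
        }
        if (req.method !== "GET") {
            res.writeHead(405, { "Allow": "GET" });  // known resource, method not
            return res.end();                        // allowed; Allow lists what is
        }
        res.writeHead(200, { "Content-Type": "application/json" });
        res.end(JSON.stringify({ id: 42, status: "pending" }));
    }).listen(8080);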


Are you saying SOAP won't tell you it doesn't understand if you send a bad message?


> I don't see any reason why SOAP can't support dynamically changing sets of procedure calls. Furthermore, I think that is a bad idea since it will break backward compatibility. No amount of web-buzzwording will get around that problem.

Of course enclosing a message in a SOAP envelope does not preclude RESTfulness. If the set of messages to which an object responds is a function of its current state then it's RESTful (or at least leads to a RESTful design).


So what you're admitting is that REST doesn't add any new novelties in terms of being discoverable?


REST doesn't offer new novelties because the basic idea is at least as old as Smalltalk.

REST isn't even about discoverability. That's just a natural consequence of calling methods that might only exist when an object is in a particular state (i.e. the object creates the methods at runtime).

The difference between REST and (usual) SOAP is the difference between late-binding and "extreme late-binding"[1]

[1] http://www.google.com/#sclient=psy-ab&hl=en&source=h...


I didn't mean to say that (but I guess it looks like I did). Obviously there are other ways of signalling receipt of a bad message (throwing exceptions for instance). METHOD_MISSING seems more natural to me at least.

What I meant to say is that in REST, the set of messages to which an object will respond is a function of the object's current state. With SOAP, a URL endpoint always takes the same set of messages (until the code is rewritten or something).
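
A sketch of what that difference can look like on the wire (hypothetical order resource and link format):

    // GET /orders/42 while the order is still pending:
    var pending = {
        "status": "pending",
        "links": [
            { "rel": "cancel", "href": "/orders/42",         "method": "DELETE" },
            { "rel": "pay",    "href": "/orders/42/payment", "method": "PUT" }
        ]
    };

    // The same GET once the order has shipped: "cancel" is simply no longer
    // offered, so the client discovers the change rather than hard-coding it.
    var shipped = {
        "status": "shipped",
        "links": [
            { "rel": "track", "href": "/orders/42/shipment", "method": "GET" }
        ]
    };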


I don't see any reason why SOAP can't support dynamically changing sets of procedure calls. Furthermore, I think that is a bad idea since it will break backward compatibility. No amount of web-buzzwording will get around that problem.


> I once saw a Silverlight app that took 20 minutes to initialize because it traversed a tree of relationships using REST

This is why clients should cache REST requests.


What, so it only takes 20 minutes every {TTL_expiry} seconds? How does that help? The cache becomes a pastiche of what the service should have had in the first place - a "GET /starting_tree" handler.


I think you have gravely misunderstood me. It's not necessary for the client to expire their entire cache all at once - this would be incredibly foolish.

If you make a GET request against a REST interface then you should cache the results of that request and note the datetime you made it.

If you need to make a subsequent request against the same resource then you can first make a HEAD request to get only the headers. By comparing timestamps you can see whether or not the resource has changed since you last accessed it.

If the REST interface is appropriately designed then the "last changed" value will propagate upwards from the leaves of the tree towards the API root. So caching the results of requests can save you from making an immense number of requests.
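
A sketch of that client-side cache, here folded into a conditional GET (If-Modified-Since) so the freshness check and the fetch are a single request; it assumes the server sends Last-Modified, and the storage is just an in-memory object:

    var cache = {};  // uri -> { lastModified: ..., body: ... }

    function cachedGet(uri, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", uri);
        if (cache[uri]) {
            // Ask the server to send a body only if the resource has changed.
            xhr.setRequestHeader("If-Modified-Since", cache[uri].lastModified);
        }
        xhr.onload = function () {
            if (xhr.status === 304) {                // not modified: reuse our copy
                return callback(cache[uri].body);
            }
            cache[uri] = {
                lastModified: xhr.getResponseHeader("Last-Modified"),
                body: xhr.responseText
            };
            callback(xhr.responseText);
        };
        xhr.send();
    }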


It will still be hideously slow on the first run due to overheads.


Yes, the first run will be slow. But subsequent runs (which presumably are very large in number) will be much quicker.


The first run will be intolerably slow. Caching will not be sufficient.


REST is an anti-anti-pattern where the anti-pattern is WS-* and SOAP. If there had never been SOAP, I strongly doubt there would have been a strongly opinionated REST movement.

"Little-r REST" is very little on it's own. It's mostly just common sense as settled by about a decade of trial-and-error. I remember dabbling in PHP-RPC about 10 years ago, and most of the takeaways aren't lightyears from what REST is today.


I write RESTful services, to the best of my ability. Half the tools I use enforce or encourage it.

Confession: I don't understand what the advantages are supposed to be. I want to believe it isn't just arbitrary dogma. I really do.

Applications don't care about the method or protocol a web service is using. Just get the data to and from.

Humans should care about how easy it is to maintain and implement the web services and anything they choose to do to make that goal a reality is good.

Oh, it makes for cute looking URIs, I guess.


In my experience, the biggest win is that it helps me as a developer to create clean interfaces. Whereas a home-rolled RPC solution would encourage me to write controllers for different paths, taking arguments in various ways, with a REST interface I can create some kind of object:

  {
    index: function (req, res) { ... },
    create: function (req, res) { ... },
    retrieve: function (req, res) { ... },
    update: function (req, res) { ... },
    delete: function (req, res) { ... }
  }
And then map that whole resource under a certain path, "/resources".
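
For example, a sketch of that mapping with an Express-style router (the mount helper and handler bodies are mine, not a framework convention):

    var express = require("express");
    var app = express();

    var myResource = {
        index:    function (req, res) { res.send("list"); },
        create:   function (req, res) { res.send("created"); },
        retrieve: function (req, res) { res.send("item " + req.params.id); },
        update:   function (req, res) { res.send("updated " + req.params.id); },
        delete:   function (req, res) { res.send("deleted " + req.params.id); }
    };

    // Mount the whole resource object under a path.
    function mount(path, resource) {
        app.get(path,             resource.index);     // GET    /resources
        app.post(path,            resource.create);    // POST   /resources
        app.get(path + "/:id",    resource.retrieve);  // GET    /resources/42
        app.put(path + "/:id",    resource.update);    // PUT    /resources/42
        app.delete(path + "/:id", resource.delete);    // DELETE /resources/42
    }

    mount("/resources", myResource);
    app.listen(3000);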

Further, it is built on sound principles such as: you can't destroy or change anything using GET, and you can issue requests other than POST any number of times and nothing will go wrong.

Other things like using HTTP headers to decide in which format the consumer wishes to consume the resource, HTTP status codes for error handling, and HATEOAS, gives me the feeling of working with the system instead of building cruft.

I argue that REST in fact makes it easier "to maintain and implement the web services". YMMV.


Here are some benefits:

* A RESTful API is discoverable. With no background knowledge, a client can do a GET request on the root URL to get a list of resources with URLs. It's easy to dive right in and quickly figure out what is available. (A sketch of this flow follows the list below.)

* Developers understand HTTP. REST is essentially just HTTP, and we already understand HTTP. REST means the API developer doesn't need to waste time and energy (badly) reinventing/re-implementing what HTTP does on top of HTTP; and the client doesn't need to waste time and energy learning a bunch of ad hoc protocols.

* Our tools understand HTTP. Just about every client language/framework on earth already understands HTTP. Without jumping through any extra hoops, a client working in Python or Ruby or C# or Java or JavaScript or PHP or even VBScript can execute an HTTP request on an API resource and understand the response.

* REST keeps us honest. A RESTful API architecture forces the API developer to be clear about what is a resource and what methods clients can execute against each resource. This makes for a better-organized, more orthogonal API.
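
The discovery flow from the first bullet, sketched with plain XHR against a hypothetical API:

    function getJSON(uri, callback) {
        var xhr = new XMLHttpRequest();
        xhr.open("GET", uri);
        xhr.onload = function () { callback(JSON.parse(xhr.responseText)); };
        xhr.send();
    }

    // 1. Hit the root with no prior knowledge of the API's layout...
    getJSON("http://api.example.com/", function (root) {
        // 2. ...then follow whatever resource links it advertises.
        getJSON(root.authors, function (authors) {
            console.log(authors);
        });
    });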


>> A RESTful API is discoverable

So is SOAP.

>> Developers understand HTTP

SOAP runs on just about anything including HTTP or even SMTP.

>> Our tools understand HTTP

SOAP doesn't care what the clients or servers are written in or run on.

>> REST keeps us honest

I disagree, you can change something in a JSON packet and the client will only know it broke when it's run. With SOAP, if your client uses a compiled language then you as a developer will know first, before you give it to a customer to run, and you can fix it then.


> So is SOAP.

No it's not. In theory, you can just consume the WSDL and have all the methods available to you, but in practice, there are huge and unbridgeable issues with interoperability. A C# SOAP client will have trouble consuming a Java WSDL, and so on.

> SOAP runs on just about anything including HTTP or even SMTP.

SOAP runs on top of HTTP but merely uses HTTP as a tunnel. All the stuff that should be in HTTP - the resource, the method, the data - is inside the XML envelope.

If you want to consume a SOAP web service, expect to get your hands dirty.

> SOAP doesn't care what the clients or servers are written in or run on.

In theory, no. In practice, interoperability is by no means a given.

> you can change something in a JSON packet and the client will only know it broke when it's run.

That's not really what I was talking about, but if you make a breaking API change in any system, you should have a way to do it so clients aren't caught flat-footed when their app breaks.

Neither REST nor SOAP inherently solves this problem, but REST does allow for custom media types that include versioning.
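
For example (a hypothetical vendor media type):

    // The client names the representation version it understands via content
    // negotiation, so old and new clients can share the same URI.
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/orders/42");
    xhr.setRequestHeader("Accept", "application/vnd.example.order.v2+json");
    xhr.send();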


>> A C# SOAP client will have trouble consuming a Java WSDL, and so on.

I have had no issues getting a C# client on Windows to work with a Java SOAP server running on Tomcat using AXIS.

I even have an Android client talking to that SOAP server with no issues.

>> In theory, no. In practice, interoperability is by no means a given.

That's because it's contract-driven; with REST, any changes to the API on the server will not break the client's compile, so you won't know it's broken until the customer calls at 2 AM. Unless of course you have 100% unfailing testing processes.

>> if you make a breaking API change in any system, you should have a way to do it so clients aren't caught flat-footed when their app breaks.

Cool yes right, SOAP.

>> REST does allow for custom media types that include versioning.

If REST ever gets to that level of specification we will have SOAP.


> No it's not. In theory, you can just consume the WSDL and have all the methods available to you, but in practice, there are huge and unbridgeable issues with interoperability. A C# SOAP client will have trouble consuming a Java WSDL, and so on.

This is something I have a hard time understanding. Your hypothetical generic REST client will have a hard time understanding my HATEOAS, discoverable, fully REST-ified service, if it has no idea what the service is supposed to do contextually. If my 'blog' resource has links to 'metadata' and 'related' things, without knowing what those are, how does a 'generic' REST client do anything with them?

How is that significantly different than SOAP?

I would love to see an actual example of the difference, rather than abstract discussion, as this is the point I have the hardest time understanding.


The only difference between REST-in-practice-today and WSDL is that you aren't expected to map each network interaction to a single procedure call. This makes a number of things easier: you can cache the results, GET data across many different endpoints if you know the format, enable data and metadata extensibility through the payload, and follow hyperlinks if you know the link relation and target format.

The Facebook Graph API is a great practical example of this ease of use in practice: https://developers.facebook.com/docs/reference/api/

You're quite right that things like "related" and "metadata" link relations mean nothing if you don't know what those mean - they are hooks for future bits of agreement. The difference between REST and WSDL in the long run, in my opinion, is the practical composability of small bits of agreement (or specs). The Web Services stack (SOAP, WSDL, WS-*) has had a very difficult time composing a bunch of specs together as the kernel of their agreement is basically the XML Infoset. With the RESTful Web, the kernel is HTTP, URI, and MIME (for now), which has endured a number of extensions (Flash, HTML5, Atom/RSS, etc.) and evolutions (the rise of mobile devices).


Caching at the architecture level has caused more problems than it has solved. How many times have you heard the phrase "empty your cache, then try again..". Utterly stupid. This is an optimisation that should be implemented by the application designer as needed, and if it is within their capabilities.


Experience shows that your benefits are actually myths. A 'RESTful API' is not more discoverable, understandable and 'honest' than any conventional, local or remote API.


> Developers understand HTTP.

Actually, prior to REST, most developers didn't even know about PUT/DELETE/etc. which are key operators that RESTful services are supposed to exploit.

Most developers also have shockingly poor understandings of all kinds of aspects of the HTTP protocol (headers, which ones are important, semantics of the various verbs, pipelining, multipart MIME/server push/etc., they typically know at best a handful of error codes that make sense in a RESTful context...).


> With no background knowledge, a client can do a GET request on the root URL to get a list of resources with URLs. It's easy to dive right in and quickly figure out what is available.

Where is that specified for REST? I don't remember it from Fielding's paper, and I've seen plenty of self-described RESTful services that aren't even remotely discoverable.

More than that though, service discovery is a very old concept and not anything new that REST brought to the table.


> Where is that specified for REST?

I'll be the first to acknowledge that the REST specification, at least as laid out in Fielding's Ph.D. dissertation, is unfriendly. Here's Fielding trying to explain what he meant:

"A REST API should be entered with no prior knowledge beyond the initial URI (bookmark) and set of standardized media types that are appropriate for the intended audience (i.e., expected to be understood by any client that might use the API). From that point on, all application state transitions must be driven by client selection of server-provided choices that are present in the received representations or implied by the user’s manipulation of those representations."

http://roy.gbiv.com/untangled/2008/rest-apis-must-be-hyperte...

In general, this goes by the tortured acronym HATEOAS:

https://en.wikipedia.org/wiki/HATEOAS

And REST advocates wonder why people have a hard time understanding it.

> I've seen plenty of self-described RESTful services that aren't even remotely discoverable.

You're right. Most services that call themselves RESTful aren't. I think the REST movement itself deserves a share of responsibility for how widely REST is misunderstood.

> service discovery is a very old concept and not anything new that REST brought to the table.

What REST brings to the table is the ability to discover your way through a service with no required background knowledge other than HTTP - which will be the transport protocol in (almost) any case - and some common media types.
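A rough sketch of what that entry point might look like (the resource names, URLs, and JSON shape are all made up; there is no single mandated format):

  GET / HTTP/1.1
  Host: api.example.com
  Accept: application/json

  HTTP/1.1 200 OK
  Content-Type: application/json

  {
    "links": [
      {"rel": "orders",    "href": "https://api.example.com/orders"},
      {"rel": "customers", "href": "https://api.example.com/customers"}
    ]
  }

The client is only assumed to understand HTTP, the media type, and the link relations it cares about; the actual URLs come from the server and can change without breaking anything.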


That's a navigable interface, not a discoverable one. This is one of the key differences between a hypermedia-style interface and a traditional directory service. Navigable interfaces tend to make it easier for people to find what they are looking for, but for automated service discovery, a directory service is far more straightforward.


Theoretically if done correctly then your REST service can be accessed by generic REST clients that don't know about your particular service.

Though a generic client isn't really going to know what to do with your service. Aside from performing basic CRUD operations, it has no understanding of what any of it actually means. But you can, for example, use code libraries that are designed to work with REST services in your client and not have to write as much boring "plumbing" code.


Though a generic client isn't really going to know what to do with your service.

This is where hypermedia comes in. A rich set of standard media types allows your generic client to do useful things without knowing about any particular service.


Each of the constraints brings some advantages to the system. To me, the most important is the decoupling of the client from the server.

I think this doesn't seem that important because we still code each client against a particular API, but if we adopted truly uniform APIs - not only using the HTTP methods and following HATEOAS, but also standardizing on widely adopted data formats¹ - we would be able to write much more generic clients that could work with different APIs, perhaps even with ones the developer never considered, without wasting hours upon hours writing code against each service.

¹ No, JSON isn't a standard data format, but more a standard serialization format. You still have to manually code against each service's response types.


For me, it's just a helpful description. I know something about an API if it's RESTful, and I know what others expect from me when I say it. And practically, Backbone.js expects REST conventions from the server API, which is a (small) gain in setup time.


>> I know something about an API if it's RESTful

With SOAP, you know everything, just ask the server for its WSDL.


I'm currently working on a project where the WSDL is broken. The Java application that constructed the WSDL was unaware it was running behind an HTTPS proxy. This led to it importing HTTP URLs that didn't actually work.

The solution was to have Suds re-route specific URLs to resolve correctly. The fact that Suds is designed this way is testament to how common this problem is.

Yes, you can get HTTP servers that don't respect the safety of GET, but it's a much less common issue.


I run my Tomcat server with AXIS providing SOAP capabilities on localhost:8080 and front-end it with Apache using ProxyPass and ProxyPassReverse.

My SOAP web app runs fine, my Android test client also hits it with no complaints.

My Apache server can be hit with both HTTP and HTTPS, which lets me use Apache to handle certificate management while I can run any type of app server on localhost that I want.

So far no issues with this scenario. I used this for a contract I was just on and it also worked fine for both the SOAP and REST clients.


I can appreciate making web services a lot simpler than horseshit like SOAP. If all we want to do is get data in and out, let's do it simply and without a bunch of extra ceremony. I can appreciate the notion of using concepts we've already had forever in HTTP rather than reinventing them, poorly.

Beyond good common-sense things like this, REST seems to involve a lot of people yelling about whether something is or is not REST.


How about this: SOAP does computational distribution, REST does data distribution, and both are needed today.


I am beginning to gather from this SO post as well as the other HN link to ShareFile, that REST is not something that can be summed up in a paragraph on HN or SO.

This is why the arguments over it are getting heated without seeming to go anywhere. All of the descriptions of REST are met with "That's X, that's not REST. X May be part of REST, but it's not all that REST is".

Turning the SO article into a community wiki is probably the way to go, allowing the discussion to expand until someone who reads it is able to get the concept of REST, even though they'll never be able to explain it in a paragraph.


My $0.02. In general, I've found that many people describe something as "RESTful" without actually knowing what RESTful truly means. Instead, they use it as shorthand for "a web-based API", which is not necessarily the same thing as RESTful.

The best follow-up question to ask someone is: "What level on the Richardson Maturity Model is your RESTful API?" If they can't answer that, they're probably using the term RESTful to describe something else.

Richardson Maturity Model:

    Level Zero Service  - URI API. Usually just POST. SOAP, XML-RPC, POX.
    Level One Service   - URI Tunneling via URI Templates. GET/POST.
    Level Two Service   - CRUD (POST, GET, PUT, DELETE). Amazon S3.
    Level Three Service - "Hypermedia as the Engine of Application State" (HATEOAS).
The higher up the level, the more "RESTful" your API (you can argue about the order of levels 0 and 1), but the more complex it will be. Not everything fits perfectly in those levels either; you're free to borrow concepts and mash up your own custom API.
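To make the ends of that scale concrete, here is a hypothetical contrast (the endpoints and payloads are invented, and only request lines are shown for level two): a level-zero service tunnels everything through one URI with POST, while a level-two service gives each resource its own URI and lets the HTTP verbs carry the semantics.

    Level 0:
      POST /api HTTP/1.1
      Content-Type: text/xml

      <getOrder><id>42</id></getOrder>

    Level 2:
      GET    /orders/42 HTTP/1.1
      PUT    /orders/42 HTTP/1.1
      DELETE /orders/42 HTTP/1.1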

HATEOAS is the most complicated. Imagine you're browsing the internet. You go to a webpage, and it loads with lots of text, links, images, and videos. You can interact with it, click on links, watch videos, submit a form, and go elsewhere. You have no idea what appeared on the webpage until after you visited it, and the next time you visit it might change entirely. The link might become broken and you get 404'ed, or you need a username/password otherwise you get 401'ed. HATEOAS is similar to that, but instead of a "person" browsing a "website", it's "client software" browsing "XML/URIs/Resources" received from a server API.
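In wire terms (invented resource and URIs), the client never constructs the next URI itself; it picks its next move out of the links in the representation it just received, the way you would click a link on a page:

  GET /orders/42 HTTP/1.1
  Host: api.example.com

  HTTP/1.1 200 OK
  Content-Type: application/xml

  <order id="42" status="open">
    <link rel="payment" href="https://api.example.com/orders/42/payment"/>
    <link rel="cancel"  href="https://api.example.com/orders/42"/>
  </order>

If the order has already been paid, the server simply leaves the "payment" link out, and the transitions available to the client change with it, just like links appearing and disappearing on a web page.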

REST is not easy to understand or explain, as it entails learning about many different concepts and ideas, and custom-building something based on them. This is probably why there's no single sentence that easily describes REST. Saying that something is RESTful is like saying something is Drivable. You can drive a car, truck, motorcycle, bicycle, or boat, but they're all very different from one another. There are similar concepts between them, like accelerating, steering, and braking, but how they do each of those things is very different.

A good book I recommend is "REST in Practice" by Jim Webber, Savas Parastatidis, and Ian Robinson (http://www.amazon.com/REST-Practice-Hypermedia-Systems-Archi... or http://shop.oreilly.com/product/9780596805838.do). That helped me grasp what REST means.


HATEOAS is similar to that, but instead of a "person" browsing a "website", it's "client software" browsing "XML/URIs/Resources" received from a server API.

A person browsing a web site is not "like" HATEOAS, it is the canonical example. The client software, in that case, is a web browser.


In the accepted answer, what's so bad about the "non-restful" approach the author describes? Who cares if I use GET/DELETE/PUT/POST vs. putting the "verb" in a query string argument or the name of the file that's being called?


RESTful, as opposed to REST, seems to be clear about the use of HTTP and all of its verbs. By definition, it is not RESTful if you do it any other way.

REST itself does not concern itself with the protocol you use. Theoretically you could build a REST protocol over HTTP that does not use HTTP verbs, using it as only a transport layer. Though I'm not sure what you would gain by implementing a REST protocol over another REST protocol.

Pragmatically, if you choose to use HTTP, you need to think about the infrastructure that exists. Each verb comes with a very specific meaning, and systems like proxies and other software that come between you and the client will act in different ways depending on which verb you choose. To meet the goals outlined in the REST dissertation, you need to be aware of those behaviours.


I would say the real power is the simplicity of stateless programming. It's easier to debug and easier to code, which makes for more robust and faster development.

I think a lot of devs assume it's just about "using HTTP to its true potential", which is incidental. Other protocols could be implemented in a RESTful manner.


Producing RESTful applications is more of an art than a science.

There are two key objectives to bear in mind when designing a web app to be RESTful:

- your app should produce client/server interactions which complement the semantics of HTTP. (this makes the interaction visible and helps with intermediate processing e.g. caching)

- expose your application in a way that minimizes the assumptions clients can make about it - aka 'HATEOAS' (this reduces risk when making changes and therefore improves the evolvability of your application)


Just thought I'd add that a lot of people are raising the issue of discoverability in RESTful services. From what I have read, this is dealt with by three key constraints: mediatypes, link relations, and HATEOAS (hypermedia as the engine of application state).

Mediatypes provide a standardised way of processing the representations provided to and from a resource (HTML is processed in particular way, as is JSON, as are PNGs, etc). REST has a constraint that requires you use registered mediatypes. On the web, this is the IANA (http://www.iana.org/assignments/media-types/index.html). If you mint your own mediatypes and use them on the public web, they must be registered with IANA for a service to be considered RESTful.

Link Relations again provide a standardised way of manipulating resources. They describe how resources at the end of a link can be manipulated. Examples include the Atom "edit" relation and the "stylesheet" relation.

HATEOAS is a constraint that demands that the client application move between states by processing the hyperlinks contained within a representation, via the relations provided.

There are two things you should pick up from this: a general REST client will not be able to do much with a service without knowing how to process the mediatypes of the representations returned by resources, and will not be able to move between states on the client without understanding how to interact with the service via the relations the service uses. Think about the most common REST clients out there: web browsers. If I mint a new mediatype that represents "a visual representation of a kitten with a comical caption", I cannot realistically expect a web browser to understand how to process this. The knowledge of how to process my custom mediatype must be baked into the web browser, much like web browsers know how to process image/* media types.

Much the same problem exists with link relations. Most of the time Link Relations are defined along with a mediatype (think of how a web browser must move to the state of processing a stylesheet by understanding that the link rel "stylesheet" dictates that the user-agent must retrieve the stylesheet resource by performing an HTTP GET on the linked resource).
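The stylesheet relation is probably the most familiar example; the markup below is illustrative (the path is made up), and the GET underneath it is what the browser does in response:

  <link rel="stylesheet" href="/styles/site.css">

  GET /styles/site.css HTTP/1.1
  Host: www.example.com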

RESTful client applications must be able to understand the mediatypes and link relations used in a RESTful web service.

These three constraints, which must be adhered to for a service to be considered RESTful, allow a REST client to navigate a REST service. The beauty here is that the service may change its URL space at any point without breaking a client: the client already knows everything it needs to move safely through the application state, and the rest is provided by the service on a per-request basis.

As others have pointed out, caching (another constraint) cuts down on the constant round trips that such a setup would suggest, but I'll leave that constraint for another commenter to flesh out :)

Finally, two points: HTTP headers are part of the representation. This means that mediatypes that may at first appear to violate the hypermedia constraint can be made into hypermedia representations via the link header (http://tools.ietf.org/html/draft-nottingham-http-link-header...). So PNGs can link to resources that contain metadata with a link relation such as "meta" (for example).

Second, many people mention the common HTTP verbs but ignore the OPTIONS verb. Calling OPTIONS on a resource can provide a lot of information back to the client about how they can interact with a resource. I've built systems that provide a list of HTTP verbs that a logged in user may perform on the resource, for example. Different users with different roles may be able to POST or DELETE, whilst others would see that they can only GET.
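Something like the following hypothetical exchange (the resource, tokens, and most headers are invented or omitted), where the same OPTIONS request yields a different Allow header depending on the caller's role:

  OPTIONS /articles/42 HTTP/1.1
  Host: api.example.com
  Authorization: Bearer editor-token

  HTTP/1.1 200 OK
  Allow: GET, PUT, DELETE, OPTIONS

  OPTIONS /articles/42 HTTP/1.1
  Host: api.example.com
  Authorization: Bearer reader-token

  HTTP/1.1 200 OK
  Allow: GET, OPTIONS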

Hopefully this sheds some light on how RESTful services are made discoverable.


What syntax should one use to define link relationships in a RESTful service?

Most of the examples I've seen extolling the virtues of HATEOAS use XML, specifically the <link uri="..." rel="..." /> element. That's great for a single media type (XML), but what about JSON, which is by far the dominant media type used in RESTful APIs?

A google search for "hateoas json": https://www.google.com/#q=hateoas+json&fp=1

reveals...a bunch of people asking how to format link relations in JSON, without a definitive answer that I can see.

HATEOAS and discoverability are principles that sound great in high-level discussions but I have yet to see them implemented in a meaningful, helpful way.

Anyhow, you seem quite knowledgable on the subject so perhaps you can enlighten me. Usually it's my own ignorance/obstinateness that's to blame rather than a flaw in the concepts themselves...


I would argue that you don't. JSON isn't a format for HATEOAS, for the reason you state. I'm leaning towards saying HTML 5, with some of the new semantic markup, though it's not perfect.

You can use both, and simply use the Accept HTTP header to determine which format you want. It gives you both the discoverability of HTML (with <a>) and data transfer that is easy to parse.
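A minimal sketch of that (the resource and markup are invented): the same URI serves a hypermedia-friendly HTML representation to one client and plain JSON to another, selected by the Accept header:

  GET /articles/42 HTTP/1.1
  Accept: text/html

  HTTP/1.1 200 OK
  Content-Type: text/html

  <article>
    <h1>Example article</h1>
    <a rel="comments" href="/articles/42/comments">Comments</a>
  </article>

  GET /articles/42 HTTP/1.1
  Accept: application/json

  HTTP/1.1 200 OK
  Content-Type: application/json

  {"id": 42, "title": "Example article"}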


I agree, I too have seen lots of discussions of how to format links in JSON. As you noted, JSON is not a hypermedia format per se, but if you use the Link HTTP header you can add hypermedia to non-hypermedia mediatypes. Here is a quick example:

HTTP Response:

  HTTP/1.1 200 Ok
  ...
  Link: <http://example.com/some-resource>; rel="edit";
         title="Edit some resource"
  Content-type: application/json

  {"prop":"val"}

You should be able to see that the Link header contains a target URL, a relation, and an optional title. The Link header maps almost exactly to the HTML link element.
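For comparison, the rough HTML equivalent of that header would be:

  <link rel="edit" href="http://example.com/some-resource" title="Edit some resource">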

I agree, I haven't seen a "true" RESTful service either, except for very simple examples. As others have noted, REST is quite a difficult architectural style to summarise in a few short, pithy statements, which contributes a lot to the confusion around implementation. It also doesn't help that the architectural style is defined by constraints, which flies in the face of every previous architectural style I have seen; those emphasise features and what you CAN do, whereas REST is about what you DON'T do. It's these restrictions that combine to make something very powerful, flexible, and scalable. Think of chess: a stupid game with a board limited in size, where the pieces all move in complicated but specific ways. It can't possibly be an interesting and deep game with all these constraints!

If you are interested in discussions on the subject, I recommend joining the rest-discuss mailing list on Yahoo Groups. Sure, the conversation veers into hand-wavy, ivory-tower territory, which can be forgiven seeing as the discussion group exists for exactly this, but you get some very comprehensive and detailed explanations. Definitely search the archives, because a lot of stuff has been answered in great detail before. Roy Fielding even pops in sometimes to correct a few points, and you don't get much more authoritative on the subject than that :)


There are two aspects to discoverability: A) listing all of the operations, and B) getting detailed method signatures for a particular operation.

A is a noble idea, but so far few people need to automatically discover and consume web services. B is the real issue: something like SOAP was necessary to make RPC possible over the web for statically typed languages like C# and Java.

REST was a counterargument made by some people coming from a dynamic-languages standpoint who really didn't need as much information as the statically typed language people. It's too bad that people have taken A so seriously, though, and gotten confused or misled (probably deliberately) about what SOAP was for, and then tried to make REST into that.

The most convenient and useful API, even over the web, is one that is as close to normal programming as possible. In other words, RPC-style (or some improvement on that, like NowJS). That doesn't mean that you can or should forget that you are making remote calls; it just means that you have less unrelated plumbing to think about.

If you want to do RPC in a dynamic language over the web, NowJS is your model. The problem is that dynamic languages and statically typed languages have different requirements. I don't think that either SOAP or REST are a good solution to making the two styles work together on a web RPC. I think we need to work on that more.

I guess one more thing complicating this is the fact that most APIs (just like most software systems in general) are in fact dealing with the standard create, read, update, delete operations on data, with some variations. Batch updates are the most common variation, which you could actually easily extend the CRUD model for. The thing is, you almost always have important deviations from that where the CRUD model doesn't fit. But regardless of whether you are doing CRUD or something else, you still want your API to be as simple and normal as possible and that means RPC.

Maybe at some point we can focus more on languages or frameworks that better integrate the CRUD concept and things like calculated fields and aggregation, and transparently handle tasks like persistence and shared state.

Just about all of the programs I write need to persist their data and transmit/share their state. Maybe we want a semantic, data-oriented network programming system (or more of them). This would be inspired by dynamic languages like JavaScript but would necessarily incorporate some kind of typing more integrally (inferred?).

Maybe just start with NowJS, bake in some kind of CRUD generation based on nested schema definitions, put that on the server, add a way to indicate that operations need to be batched (for transactional consistency and practical network performance), and bake the nice MongoDB criteria (and now aggregation) features into the client side as well. That would be cool (much cooler and more useful than me thinking or caring about HTTP PUT or DELETE).



