
I appreciate the conceptual analogy, but that's not really HATEOAS. HATEOAS would mean your browser/client would be entirely responsible for the presentation layer, in whatever form you desired, whether it's buttons or forms or pages or not even a GUI at all, such as a chat interface.




The Web is not only true HATEOAS, it is in fact the motivating example for HATEOAS. Roy Fielding's paper that introduced the concept is exactly about the web, REST and HATEOAS are the architecture patterns that he introduces primarily to guide the design of HTTP for the WWW.

The concept of a HATEOAS API is also very simple: the API is defined by a communication protocol, 1 endpoint, and a series of well-defined media types. For a website, the protocol is HTTP, that 1 endpoint is /index.html, and the media types are text/html, application/javascript, image/jpeg, application/json and all of the others.
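A minimal sketch of that idea (all names here are hypothetical, using an in-memory fake server rather than real HTTP): the client knows only the entry point and how to render each media type; everything else is discovered from server responses.

```python
# Fake server: maps paths to (media_type, body) pairs. A real client would
# issue HTTP requests instead of dictionary lookups.
FAKE_SERVER = {
    "/index.html": ("text/html", "<a href='/logo.jpeg'>logo</a>"),
    "/logo.jpeg": ("image/jpeg", b"\xff\xd8"),
}

# The client's only site-independent knowledge: how to handle each media type.
RENDERERS = {
    "text/html": lambda body: f"render hypertext: {body}",
    "image/jpeg": lambda body: f"render image ({len(body)} bytes)",
}

def fetch_and_render(path):
    media_type, body = FAKE_SERVER[path]
    # Dispatch purely on the declared media type, never on the path or site.
    return RENDERERS[media_type](body)

print(fetch_and_render("/index.html"))
```

The client has no code specific to any particular server; adding a new site requires no client changes, and adding a new media type requires no server changes.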

The purpose of this system is to allow the creation of clients and servers completely independently of each other, and to allow the protocols to evolve independently in subsets of clients and servers without losing interoperability. This is perfectly achieved on the web, to an almost incredible degree. There has never been, at least not in the last decades, a bug where, say, Firefox can't correctly display pages served by Microsoft IIS: every browser really works with every web server, and no browser or server dev even feels a great need to explicitly test against the others.


It's a broader definition of HATEOAS. A stricter interpretation with practical, real-world benefits is a RESTful API definition that is fully self-contained that the client can get in a single request from the server and construct the presentation layer in whole with no further information except server responses in the same format. Or, slightly less strictly, a system where the server procedurally generates the presentation layer from the same API definition, rather than requiring separate frontend code for the client.

It is the original definition from Roy Fielding's paper. Arguably, you are talking about a more specific notion than the full breadth of what the HATEOAS concept was meant to inform.

The point of HATEOAS is to inform the architecture of any system that requires numerous clients and servers to interoperate with little ability for direct cooperation; and where you also need the ability to evolve this interaction in the longer term with the same constraint of no direct cooperation. As the dissertation explains, HATEOAS was used to guide specific fixes to correct mistakes in the HTTP/1.0 standard that limited the ability to achieve this goal for the WWW.


> HATEOAS would mean your browser/client would be entirely responsible for the presentation layer, in whatever form you desired, whether it's buttons or forms or pages or not even a GUI at all, such as a chat interface.

Browsers can alter a webpage with your chosen CSS, interactively read webpages out loud to you, or, as is the case with all the new AI browsers, provide LLM powered "answers" about a page's contents. These are all recontextualizations made possible by the universal HATEOAS interface of HTML.


Altering the presentation layer is not the same thing as deriving it from a semantic API definition.

Altering the presentation layer is possible precisely because HTML is a semantic API definition: one broad enough to enable self-description across a variety of domains, but specific enough that those applications can still be re-contextualized according to the user's needs and preferences.

Your point would be much stronger if all web forms were served in pure HTML and not 95% created by JS SPAs.

I think the web itself would be stronger if it was served in pure HTML and not 95% created by JS SPAs.

That's a little picky, maybe it's HATEOAS + a little extra presentation sauce (the hottest HATEOAS extension!)

It's not. The whole point of HATEOAS is that the presentation can be entirely derived from the API definition, full stop.

That is just wrong.

https://ics.uci.edu/~fielding/pubs/dissertation/net_arch_sty...

The server MUST be stateless, the client MAY be stateful. You can't get ETags and stuff like that without a stateful client.


Deriving a presentation layer from an API definition has no bearing on whether the client has to be stateful or not. The key difference for 'true' HATEOAS is that the API schema is sufficiently descriptive that the client does not need to request any presentation layer; arguably not even HTML, but definitely not CSS or JavaScript.

https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...

> any concept that might be the target of an author's hypertext reference must fit within the definition of a resource


Dude, he literally mentions Java Applets as an example (it was popular back then, if it was written today it would have been JavaScript). It's all there. Section 5.1.7.

It's an optional constraint. It's valid for CSS, JavaScript and any kind of media type that is negotiable.

> resource: the intended conceptual target of a hypertext reference

> representation: HTML document, JPEG image

A resource is abstract. You always negotiate it, and receive a representation with a specific type. It's like an interface.

Therefore, `/style.css` is a resource. You can negotiate with clients if that resource is acceptable (using the Accept header).
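A hypothetical sketch of that negotiation (the paths and representations below are made up for illustration): one abstract resource, several concrete representations, selected by the client's Accept header.

```python
# Each resource (URI) offers one or more representations keyed by media type.
REPRESENTATIONS = {
    "/style.css": {
        "text/css": "body { color: black; }",
    },
    "/account/12345/balance": {
        "text/html": "<p>Balance: 100</p>",
        "application/json": '{"balance": 100}',
    },
}

def negotiate(path, accept_header):
    """Return the first representation matching the client's Accept header."""
    reps = REPRESENTATIONS[path]
    for media_type in accept_header.split(","):
        media_type = media_type.strip()
        if media_type in reps:
            return media_type, reps[media_type]
    return "406", "Not Acceptable"

print(negotiate("/account/12345/balance", "application/json, text/html"))
```

The resource stays abstract; whether the client receives JSON, HTML, or CSS is decided per request, which is why a stylesheet is just as much a resource as anything else.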

"Presentation layer" is not even a concept for REST. You're trying to map framework-related ideas to REST, bumping into an impedance mismatch, and not realizing that the issue is in that mismatch, not REST itself.

REST is not responsible for people trying to make anemic APIs. They do it out of some sense of purity, but the demands do not come from HATEOAS. They come from other choices the designer made.


I will concede the thrust of my argument probably does not fully align with Fielding's academic definition, so thank you for pointing me to that and explaining it a bit.

I'm realizing/remembering now that our internal working group's concept of HATEOAS was, apparently, much stricter to the point of being arguably divergent from Fielding's. For us "HATEOAS" became a flag in the ground for defining RESTful(ish) API schemas from which a user interface could be unambiguously derived and presented, in full with 100% functionality, with no HTML/CSS/JS, or at least only completely generic components and none specific to the particular schema.
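To make that stricter interpretation concrete, here is a small sketch (the response shape and field names are invented for illustration, not from any standard): a generic renderer builds a complete UI from a self-describing API response, with no schema-specific frontend code.

```python
# A self-describing API response: fields, labels, and available actions are
# all in-band, so a generic client can render the full UI.
API_RESPONSE = {
    "title": "Transfer funds",
    "fields": [
        {"name": "to_account", "type": "text", "label": "To account"},
        {"name": "amount", "type": "number", "label": "Amount"},
    ],
    "actions": [
        {"label": "Submit", "href": "/transfer", "method": "POST"},
    ],
}

def render_generic_form(desc):
    """Render any response of this shape; knows nothing about transfers."""
    lines = [desc["title"]]
    for field in desc["fields"]:
        lines.append(f"[{field['type']}] {field['label']}")
    for action in desc["actions"]:
        lines.append(f"<button> {action['label']} -> {action['method']} {action['href']}")
    return "\n".join(lines)

print(render_generic_form(API_RESPONSE))
```

The renderer is entirely generic: point it at a different response of the same shape and you get a different, fully functional UI with zero page-specific code.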


It happens.

"Schema" is also foreign to REST. That is also a requirement coming from somewhere else.

You're probably coming from a post-GraphQL generation. They introduced this idea of sharing a schema, and influenced a lot of people. That is not, however, a requirement for REST.

State is the important thing. It's in the name, right? Hypermedia as the engine of application state. Not application schema.

It's much simpler than it seems. I can give a common example of a mistake:

GET /account/12345/balance <- Stateless, good (an ID represents the resource, unambiguous URI for that thing)

GET /my/balance <- Stateful, bad (depends on application knowing who's logged in)

In the second example, the concept of resource is being corrupted. It means one thing for some users, and something else for others, depending on state.

In the first example, the hypermedia drives the state. It's in the link (but it can be on form data, or negotiation, for example, as long as it is stateless).

There is a little bit more to it, and it goes beyond URI design, but that's the gist of it.
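The GET examples above can be sketched as code (a hedged illustration with invented handler names, not any real framework): every response names the resource unambiguously and tells the client what it can do next via links, so the server keeps no per-client session.

```python
def get_balance(account_id):
    """Stateless handler: the URI alone identifies the resource."""
    return {
        "balance": 100,
        # Hypermedia controls: the client discovers its next possible
        # state transitions from the response itself, not from out-of-band
        # knowledge or server-side session state.
        "_links": {
            "self": f"/account/{account_id}/balance",
            "deposit": f"/account/{account_id}/deposit",
        },
    }

resp = get_balance("12345")
next_action = resp["_links"]["deposit"]
print(next_action)
```

A `/my/balance` handler, by contrast, could not be written as a pure function of the URI: it would need a session lookup, which is exactly the statefulness being criticized.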

It's really simple and not as academic as it seems.

Fielding's work is more a historical formalisation where he derives this notion from first principles. He kind of proves that this is a great style for networking architectures. If you read it, you understand how it can be performant, scalable, fast, etc, by principle. Most of the dissertation is just that.


Yes, which is exactly true of the Web. There is no aspect of a web page that is not derived from the HTML+JS+CSS files served by a server.

...which are a presentation layer and not a semantic, RESTful API definition.

No, they are a semantic layer for the browser-server communication. They encapsulate human-readable content in a machine interpretable definition.

From what I read on wiki, I'm not sure what to think anymore - it does at least sound in line with the opinion that current websites are actually HATEOAS.

I guess someone interested would have to read the original work by Roy (who seems to have come up with the term) to find out which opinion is true.


I worked on frontend projects and API designs directly related to trying to achieve HATEOAS, in a general, practical sense, for years. Browsing the modern web is not it.

I think you are confusing the browser with the web page. You probably think that the Javascript code executed by your browser is part of the "client" in the REST architecture - which is simply not what we're talking about. When analyzing the WWW, the REST API interface is the interface between the web browser and the web server, i.e. the interface between, say, Safari and Apache. The web browser accesses a single endpoint on the server with no prior knowledge of what that endpoint represents, downloads a file from the server, analyzes the Content-Type, and can show the user what the server intends to show based on that Content-Type. The fact that one of these content types is a language for running server-controlled code doesn't influence this one bit.

The only thing that would make the web not conform to HATEOAS would be if browsers had to have code that's specific to, say, google.com, or maybe to Apache servers. The only example of anything like this on the modern web is the special login integrations that Microsoft and Google added for their own web properties - that is indeed a break of the HATEOAS paradigm.


I'm not confusing it. I was heavily motivated by business goals to find a general solution for HATEOAS-ifying API definitions. And yes, a web page, implemented in HTML/CSS/JS, is a facsimile for it in a certain sense, but it's not a self-contained RESTful API definition.

Again, you're talking about a particular web page, when I'm talking about the entire World Wide Web. The API of the WWW is indeed a RESTful API, driven entirely by hyperlinks. You can consider the WWW as a single service in this sense, with a single entry point, and your browser is a client of that service. The API of this service is described in the HTTP RFCs, the WHATWG HTML living standard, and the ECMAScript standard.

Say I as a user want to read the latest news stories of the day in the NYT. I tell my browser to access the NYT website root address, and then it contacts the server and discovers all necessary information for achieving this task on its own. It may choose to present this information as a graphical web page, or as a stream of sound, all without knowing anything about the NYT web site a priori.



