Sometimes, I feel that we ought to have a simple protocol, on top of HTTP, to simply do remote procedure calls and throw out all this HTTP-verbs crap. Every request is an HTTP POST, with or without a body, and the data transfer is in binary, so that objects can be passed back and forth between client and server.
Sure, there is gRPC, but it requires another API specification (the proto files).
There, I said it. HTTP-verb-constrained REST APIs are the worst thing ever. I hate them.
They introduce unnecessary complexity and unnecessary granularity, and they almost always stray from the "REST principles". To hell with the "Hypermedia" stuff.
I find it such a joy to program server-rendered pages. No cognitive overhead of thinking in "REST".
But, of course, all this applies only where the client and server are developed by the same person or company.
For publishing data and creating APIs for third-party use, we have no serious, better alternative to REST.
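A single-endpoint, everything-is-POST protocol like this fits in a few lines. A minimal sketch in Python (the handler names and payload shape are made up for illustration; a real version would sit behind an HTTP server and might use a binary encoding rather than JSON):

```python
import json

# Hypothetical handlers; the names are invented for this sketch.
def get_user(user_id):
    return {"id": user_id, "name": "example"}

def delete_user(user_id):
    return {"deleted": user_id}

HANDLERS = {"get_user": get_user, "delete_user": delete_user}

def handle_rpc(body: bytes) -> bytes:
    """Every request is a POST to one endpoint; the action lives in
    the payload, not in the HTTP verb or the URL path."""
    request = json.loads(body)
    handler = HANDLERS.get(request["method"])
    if handler is None:
        return json.dumps({"error": f"unknown method {request['method']}"}).encode()
    result = handler(**request.get("args", {}))
    return json.dumps({"result": result}).encode()
```

The whole "which verb, which path, which status code" debate disappears; there is one verb, one path, and errors are just data in the response.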
As someone who has spent a decade working with APIs, I 100% agree. The use cases that are a good fit for “RESTful” APIs pale in comparison to those that would benefit from RPC.
What is the point of having your client translate an action into some operation on a document (read or write), only to then have your server try to infer what action was intended by said document operation?
It pains me that this article doesn’t mention any of the trade-offs of each suggestion (POST vs PUT vs PATCH, and expandable objects especially), or of using REST APIs generally.
+1, each time I return 404 for an object that is not found in the DB, the customer gets a red error message in their UI as if something failed more severely than an object being unavailable, and the metrics report that something is unavailable.
I have burned my fingers every time I have mapped HTTP verbs and return codes to REST verbs and codes.
Also, error codes at the API level often need remapping when used in a user context. For example, if the OAuth token expires, we don’t present it the same way for a user action (where re-authenticating is mandatory) as when passively displaying data (where the error shouldn’t be too red, because the user may not care).
This answer is correct, but lacks context. REST wasn't conceived with APIs in mind. In fact, it's an awful fit for APIs, as many of the other comments point out. Rather, REST today is a buzzword that took on a life of its own, bearing only superficial resemblance to the original ideas.
HATEOAS is a generalization of how something like a website would let a client navigate resources (through hyperlinks). It requires an intelligent agent (the user) to make sense. Without HATEOAS, according to Roy Fielding, it's not real REST. Some poor misguided API designers thought this meant they should add URL indirections to their JSON responses, making everything more painful to use for those unintelligent clients (the code that is consuming the API). Don't do this.
If you must do REST at all - which should be up for debate - you should keep it simple and pragmatic. Your users will not applaud you for exhausting HTTP verbs and status codes. The designers of HTTP did not think of your API. You will likely end up adding extra information in the response body, which means I end up with two levels (status code and response) of matching your response to whatever I need to do.
If something doesn't quite fit and it looks ugly or out-of-place, that's normal, because REST wasn't conceived with APIs in mind. Don't go down the rabbit hole of attempting to do "real REST". There is no pot of gold waiting for you, just pointless debates and annoyed users.
> The central idea behind HATEOAS is that RESTful servers and clients shouldn’t rely on a hardcoded interface (that they agreed upon through a separate channel). Instead, the server is supposed to send the set of URIs representing possible state transitions with each response, from which the client can select the one it wants to transition to. This is exactly how web browsers work
The classic book on the subject is RESTful Web APIs[1], and it spends a while explaining HATEOAS by using the example of the web as we've come to expect it as the exemplar REST API using HATEOAS. I also have this essay[2] on HATEOAS in my open tabs, and it uses the example of a web browser fetching a web page.
This should come with a big warning for people looking to do real work. This is not what most REST APIs are like in practice, nor what they should be. The vast majority of REST APIs are RPC-like, because that's the pragmatic way to deal with the problem 99% of the time. The "REST" branding is just for buzzword compliance.
I used some API recently that returns URLs in the response body. It's really useful because they maintain those URLs, and we don't have to rewrite our URL-building code whenever the server-side rules change. Actually, we don't even have to write that code. It saves time, bugs, and money.
I don't remember which API was that, I'll update the comment if I do.
Better yet, those URLs communicate what you may do.
Instead of building the logic to determine if, say, a Payment can be cancelled, based on its attributes, you simply check 'is the cancel link there'.
I find this a critical feature. Between the backend, various mobile clients, React, some admin, and several versions thereof, clients will implement such business logic wrong. Much better to make the backend responsible for communicating abilities, because that backend has to do this anyway already.
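The "is the cancel link there" check is tiny on the client side. A minimal sketch, assuming the server returns a conventional `links` array (the payload shape here is invented for illustration):

```python
def can(resource: dict, action: str) -> bool:
    """The server advertises what you may do as links; the client
    checks for the link instead of re-implementing business rules."""
    return any(link.get("rel") == action for link in resource.get("links", []))

# Hypothetical response body for a Payment resource.
payment = {
    "id": "pay_123",
    "status": "pending",
    "links": [
        {"rel": "self", "href": "/payments/pay_123"},
        {"rel": "cancel", "href": "/payments/pay_123/cancel"},
    ],
}
```

If the backend decides a settled payment can no longer be cancelled, it simply stops emitting the `cancel` link, and every client picks that up without a release.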
I had a new hire on my team criticize the API we built for our product because we don't use put or patch, and we don't allow a GET and POST to share the same path. He said "it's not very RESTful"
I pointed him at HATEOAS and suggested if he wasn't familiar with it, he probably hasn't ever seen a truly RESTful API.
I don't think I convinced him that our approach is good (I'm not sure I am convinced either, but it works well enough for our purposes)
I do think I convinced him that "it doesn't correctly conform to a standard" isn't necessarily a useful critique, though. So that's a win.
I feel like the whole of the 1990s was devoted to this. How to serialize an object and then what network protocol should be used? But increasingly over time, between 2000 to 2005, developers found it was easier to simply tunnel over port 80/443. In 2006 Pete Lacey wrote a satire about SOAP, which is funny but also accurate, and look at how late people are to discover that you can tunnel over HTTP:
I was puzzled, at the time, why the industry was cluttering HTTP in this way. Why not establish a clean protocol for this?
But people kept getting distracted by something that seemed like maybe it would solve the problem.
Dave Winer used to be a very big deal, having created crucial technologies for Apple back in the 1980s and 1990s, and he was initially horrified by JSON. This post is somewhat infamous:
He was very angry that anyone would try to introduce a new serialization language, other than XML.
My point is, the industry has long needed a clean RPC protocol, but it has failed, again and again, to figure out how to do this. Over and over again, when the industry gets serious about it, it comes up with something too complex and too burdensome.
Partly, the goal was often too ambitious. In particular, the idea of having a universal process for serializing an object, and then deserializing it in any language, so you can serialize an object in C# and then deserialize it in Java and the whole process is invisible to you because it happens automatically -- this turned out to be beyond the ability of the tech industry, partly because the major tech players didn't want to cooperate, but also because it is a very difficult problem.
While I totally agree with the overkill that REST can be, I really do NOT agree with your statement:
> but it requires another API specification
This implies API-specs are part of the problem; and I think they are not.
Specs that have generators for client-libs (and sometimes even server-stubs) are verrrrry important. They allow us to get some form of type safety across the API barrier, which greatly reduces bugs.
One big reason for me to go with REST is OpenAPIv3: it allows me to completely spec my API and generate client-libs for sooo many languages, and server-stubs for sooo many BE frameworks. This, to me, may outweigh the downsides of REST.
GraphQL is also picking up steam and has these generators.
JSON-RPC (while great in terms of less-overkill-than-REST) does not have so much of this.
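For illustration, this is roughly what a minimal OpenAPI v3 document looks like (all names here are invented); codegen tools can turn a spec like this into typed client libraries and server stubs:

```yaml
openapi: 3.0.3
info:
  title: Example API        # illustrative name
  version: "1.0.0"
paths:
  /users/{id}:
    get:
      operationId: getUser
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/User"
components:
  schemas:
    User:
      type: object
      required: [id, name]
      properties:
        id:
          type: integer
        name:
          type: string
```

The `operationId` and schema definitions are what the generators key off to produce typed method names and request/response types.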
My current company has settled on "We use GET to retrieve data from the server and POST to send data to the server, nothing else" because it was causing quite a lot of bikeshedding style discussions where people were fussing over "Should this be a post, put or patch"?
It all came to a head when someone wrote an endpoint using a PATCH verb that some people were adamant should have been a PUT.
It was some of the silliest nonsense I have ever been a part of, and these discussions have thankfully gone to zero since we decided on only GET and POST.
Active use is really irrelevant if you only plan on using it inside a company, because you can implement a client and server within 30 mins to an hour, no external tools needed. The spec is clear and readable. It's excellent. We use it with a TypeScript codebase and just share the interfaces for the services in a monorepo. The spec is so simple it doesn't really need an update.
Arista Networks uses it in their eAPI protocol. It lets you have machine-parsable output and avoid ye olde days of screen-scraping network device output to view interface status and other details.
I believe most users make use of it via an open source json-rpc python lib. You can find a few examples online if you'd like to know more.
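The spec really is small enough to implement in minutes. A minimal sketch of the JSON-RPC 2.0 envelope in Python (the method names are made up; `-32601` is the spec's "method not found" error code):

```python
import json

def make_request(method, params, req_id):
    # JSON-RPC 2.0 request envelope.
    return json.dumps({"jsonrpc": "2.0", "method": method,
                       "params": params, "id": req_id})

def handle(raw, methods):
    """Dispatch one request against a dict of callables and
    return the JSON-RPC 2.0 response envelope."""
    req = json.loads(raw)
    if req.get("jsonrpc") != "2.0" or req["method"] not in methods:
        return json.dumps({"jsonrpc": "2.0", "id": req.get("id"),
                           "error": {"code": -32601,
                                     "message": "Method not found"}})
    result = methods[req["method"]](**req["params"])
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})
```

Transport is deliberately out of scope for the spec, which is why it drops so easily onto a single HTTP POST endpoint, a socket, or a message queue.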
Agreed 100%. Slapping a REST API over a piece of software reduces that software to a set of resources and attribute updates over those resources. And that never feels like the right way to talk to software. It may be convenient for the majority of CRUD apps out there, but not everything we build is a CRUD system. For example, how would you design the operations of a cloud word processor as REST APIs?
A better perspective: most software can be viewed as a set of domain-specific objects and the set of operations (verbs) that can happen to those objects. These operations may not be a single attribute update, but a more complex, dynamic set of updates over a variety of business objects. If you try to model this with a REST API, it either quickly becomes chatty or you end up compromising on REST principles.
GraphQL seems to make much more sense than REST, IMO.
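To make the REST-vs-operations contrast concrete, compare the two request shapes for the word-processor example (paths and field names invented for illustration):

```python
# Resource-style: force the edit into an attribute update on a document,
# which loses the intent of the action (and often ships the whole state).
rest_style = {
    "method": "PATCH",
    "path": "/documents/42",
    "body": {"content": "...entire new document text..."},
}

# Operation-style: name the domain action and its parameters directly,
# so the server knows exactly what the user did.
rpc_style = {
    "method": "POST",
    "path": "/documents/42/operations",
    "body": {"op": "insert_text", "position": 120, "text": "Hello"},
}
```

The operation-style payload is also what makes features like undo, conflict resolution, and audit logs tractable, since the server sees intents rather than diffs it must reverse-engineer.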
I’m not sure what issue the verbs are creating. Can someone help me get through my thick skull what this person’s issue with them is? I don’t see how they add much complexity; just check the API docs and see what verb you need to use to perform a certain action.
In practice though, the tooling is cumbersome enough that you can't readily sub in some other protocol besides protobuf, json, and allegedly flatbuf. I've had little success finding ways to e.g. use msgpack as the serde. Maybe it's out there but I haven't found it.
>Sometimes, I feel that we ought to have a simple protocol, on top of HTTP, to simply do remote procedure calls and throw out all this HTTP-verbs crap. Every request is an HTTP POST, with or without a body, and the data transfer is in binary, so that objects can be passed back and forth between client and server.
So the problem with “data transfer is in binary” is that it really requires both the source and the recipient to be running the same executable; otherwise you run into some really weird problems. If you just embrace parsing, you of course don't have those problems, but that's what you are saying not to do... Another idea is for a binary blob to begin with the program necessary to interrogate it and get your values out. This has existed on CDs, DVDs, floppies, and tape forever, but those media have a separate chain of trust and the internet does not, so WebAssembly (plus, say, a distributed hash table) really has a chance to shine here as the language that lets the web do this quickly and safely. But it isn't mature yet.
The basic reason you need binary identicality is that a parser gives you an error state; by forgoing the parser you lose the ability to detect errors. You think you can still detect those errors because you both depend on a shared library or something, and then you get hit anyway because you each depend on a different version of that shared library. So you implement a version string or something, and that turns out not to play well with rollbacks, so the first time you roll back, everything breaks... You finally solve this, and then someone finds a way to route a Foo object to the Bar service via the Baz service, which (because Baz doesn't parse it) downgrades the version number but does not change the rest of the blob, due to library mismatches, and it turns out that when they do this they can get RCE in the Bar service. There are just a lot of side cases. If you're not a fan of whack-a-mole, it becomes easier to bundle all your services into one binary plus a flag, “I should operate as a Bar service,” to solve these problems once and for all.
> So the problem with “Data transfer is in binary” is that it really requires both the source and the recipients to be running the same executable, otherwise you run into some really weird problems.
I think you're misinterpreting "data transfer is in binary" with something like "a raw memory dump of an object in your program, without any serialisation or parsing step".
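Right, a binary wire format can still have an explicit layout and a real parse step, which preserves the error state the parent comment is worried about losing. A minimal sketch in Python (the framing, magic bytes, and field layout are invented for illustration):

```python
import struct

# Invented frame layout: magic "RP", version (u8), payload length (u32,
# big-endian), then the payload bytes. The parser can reject bad input
# instead of silently misinterpreting memory.
MAGIC = b"RP"

def encode(version: int, payload: bytes) -> bytes:
    return MAGIC + struct.pack(">BI", version, len(payload)) + payload

def decode(blob: bytes):
    if blob[:2] != MAGIC:
        raise ValueError("bad magic")
    version, length = struct.unpack(">BI", blob[2:7])
    payload = blob[7:]
    if len(payload) != length:
        raise ValueError("truncated payload")
    return version, payload
```

Because the layout is declared rather than assumed from a shared executable, a reader built from a different codebase (or language) can still decode the frame, and malformed or truncated input fails loudly instead of corrupting state.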