API Design: REST, GraphQL, or gRPC – which one to choose?

While designing an API we need to make a lot of choices. Among others, we need to decide which protocol to use to communicate between services. Common designs consist of many services, so we need to choose the protocol for all of them.

In the past, in the monolith architecture era, we had an API only for frontend and/or mobile apps. Nowadays, with the common microservices approach, we also need to set up internal communication between services.

In this article, I’ll give an overview of the three most popular protocols working at the HTTP layer – REST over HTTP, GraphQL and gRPC – and point out the advantages and disadvantages of each of them. Queues and message brokers, which are also sometimes an option, are not covered in this article.


REST

The history of REST starts in 2000, when it was described in Roy Fielding’s doctoral dissertation; Google Trends, whose data starts in 2004, shows interest in it from that year onwards (https://trends.google.com/trends/explore?date=2004-01-01%202022-10-31&q=GraphQL,REST%20API,gRPC)

It is still the most popular of the three. However, it is not a real standard. REST is more of an architectural style for building APIs, which contains only 6(!) constraints that you should stick to while designing or implementing a REST-based API:

  1. Uniform Interface
  2. Client-server
  3. Stateless
  4. Cacheable
  5. Layered System
  6. Code on Demand

REST makes use of URIs and HTTP verbs to build an API centered on resources. The action itself is defined in the URL, in contrast to GraphQL and gRPC, where the action is in the payload. HTTP verbs (GET, PUT, POST, DELETE and PATCH) define the kind of operation to perform on the resource identified by the URI. This approach makes a URI self-descriptive: it contains all of the information necessary to execute an action.

An example of REST endpoints:

GET /movies – returns a list of movies

GET /movies/1 – returns the movie with ID 1

POST /movies – adds a new movie (defined in the payload) to the data storage

PUT /movies/1 – updates the movie with ID 1 using the data from the payload

DELETE /movies/1 – deletes the movie with ID 1

GET /movies/1/actors – responds with the list of actors in movie 1
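The dispatch logic that such endpoints imply can be sketched in a few lines of framework-free Python (the route table, handler names and in-memory “database” are all illustrative assumptions, not part of any REST tooling):

```python
# Sketch: the HTTP verb selects the operation, the URI identifies the resource.
import re

movies = {1: {"id": 1, "title": "Alien"}}  # stand-in for real storage

def list_movies():
    return list(movies.values())

def get_movie(movie_id):
    return movies.get(int(movie_id))

# Each route pairs a verb with a URI pattern, exactly as the endpoint
# list above describes.
routes = [
    ("GET", re.compile(r"^/movies$"), lambda m: list_movies()),
    ("GET", re.compile(r"^/movies/(\d+)$"), lambda m: get_movie(m.group(1))),
]

def dispatch(verb, uri):
    for method, pattern, handler in routes:
        match = pattern.match(uri)
        if method == verb and match:
            return handler(match)
    return None  # would map to a 404 Not Found response
```

Any real REST framework performs essentially this matching before your handler code runs.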


REST also aims to overcome HTTP’s limitations by supporting Hypermedia As The Engine Of Application State (HATEOAS). In short, it extends hypertext to other media, such as video or images, and the links embedded in responses let us access resources we didn’t know about before performing an action.
You can find a detailed description of all the constraints here: https://restfulapi.net/rest-architectural-constraints/
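To make this concrete, a hypothetical HATEOAS-style response for GET /movies/1 could embed links to related resources (the `_links` layout follows a common convention such as HAL; it is not mandated by REST itself):

```json
{
  "id": 1,
  "title": "Alien",
  "_links": {
    "self":   { "href": "/movies/1" },
    "actors": { "href": "/movies/1/actors" }
  }
}
```

The client discovers the actors endpoint from the response instead of hard-coding it.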


GraphQL

GraphQL (Graph Query Language) was created by Facebook in 2012 and open-sourced in 2015. The idea behind this standard was to make it possible to fetch exactly the amount of data the client needs in one query. To fulfill that need, the authors of the specification created a language in which we can specify the tree of entities/properties we need at a particular moment, and the server responds with the same tree, but filled with values. This is exactly what RPC means: the server defines what can be invoked remotely by the clients, which places GraphQL and gRPC in the same “RPC-like” group.
An example query:

query HeroNameAndFriends($episode: Episode = JEDI) {
  hero(episode: $episode) {
    name
    friends {
      name
    }
  }
}
could be fulfilled with JSON data:

{
  "data": {
    "hero": {
      "name": "R2-D2",
      "friends": [
        { "name": "Luke Skywalker" },
        { "name": "Han Solo" },
        { "name": "Leia Organa" }
      ]
    }
  }
}

GraphQL usually uses a single API endpoint to which queries are sent, and the proper resolvers process data from the datasources to prepare a response. Both sides, the client and the server, need the schema the API offers, and there are tools that help prepare a valid query. For example, the GraphiQL editor is quite often served along with the API, so we can test our data or browse the schema the server provides.

So a simple architecture consists of a few parts: the HTTP Layer, Schema and resolvers. 

HTTP Layer

GraphQL needs an HTTP layer to work properly. As mentioned before, it usually exposes a single endpoint where queries are sent. All of the HTTP logic (such as authentication/authorization) happens there, and the parsed query is passed on to the GraphQL server.

URIs are barely used at all: a single endpoint on a single domain is enough. In some cases, when a service offers data separation, servers provide one endpoint per client, but as has been said, this is not necessary.

GraphQL also allows multiple operations to be performed during a single HTTP call, which is described below.


Schema

The schema is an important part of the architecture. It defines exactly how the data is shaped, and it needs to be defined up front. The first thing the server does after start-up is validate the schema and check whether all of the implementations are correct. Statically typed languages, such as Scala, can additionally check this during compilation. On the client side, the exposed schema also helps a lot: it shows the entire API and data structure, which allows queries to be validated before they are sent.

When the HTTP layer sends a query – the server compares it to the schema and checks whether it’s valid or not.
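The schema behind the hero query above could be defined like this (a minimal sketch; the type names follow the Star Wars example from the GraphQL documentation):

```graphql
enum Episode { NEWHOPE, EMPIRE, JEDI }

type Character {
  name: String!
  friends: [Character]
}

type Query {
  hero(episode: Episode): Character
}
```

Any query asking for a field or argument not declared here is rejected during validation.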


Queries

A query sent from the client has a JSON-like structure and represents the shape of the data it expects in response. Actually, the word ‘query’ is too generic here, because one payload can consist of many separate queries, and alongside queries it can also carry mutations and subscriptions.

The difference between them is that a query reads data from the server while a mutation writes to it. A subscription opens a stream through which data is sent repeatedly.
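For illustration, a mutation and a subscription in the same syntax (the `createReview`/`reviewAdded` fields and the `ReviewInput` type are hypothetical, modeled on the examples in the GraphQL documentation):

```graphql
mutation AddReview($episode: Episode!, $review: ReviewInput!) {
  createReview(episode: $episode, review: $review) {
    stars
  }
}

subscription OnReviewAdded($episode: Episode!) {
  reviewAdded(episode: $episode) {
    stars
  }
}
```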


Resolvers

Each entity in the schema needs a paired resolver. Many of them are very simple, others are more complex, but most importantly, each of them can communicate with a separate datasource. For example, some data could be fetched from a database, some from another service with an HTTP call. GraphQL can work as a facade over many HTTP REST APIs, for example.

For example, where we need user data, we could have one resolver fetching the user entity from the database and then resolvers for each field the entity has. Because a GraphQL schema is strongly typed, we know exactly what type of response is expected, and the resolver makes the proper conversions if needed.
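The facade idea can be sketched with plain functions standing in for resolvers (all names and both “datasources” here are illustrative assumptions):

```python
# Each resolver owns its datasource; the query engine only composes them.

db = {"users": {1: {"id": 1, "name": "Ada"}}}             # pretend database
http_api = {"/users/1/orders": [{"id": 7, "total": 30}]}  # pretend REST API

def resolve_user(user_id):
    # Root resolver: fetches the entity from the database.
    return db["users"][user_id]

def resolve_orders(user):
    # Field resolver: fetches a nested field from a different source,
    # here a (faked) REST call - GraphQL acting as a facade over REST.
    return http_api[f"/users/{user['id']}/orders"]

def execute_user_query(user_id):
    # A real engine walks the requested tree, calling one resolver per field.
    user = resolve_user(user_id)
    return {"user": {"name": user["name"], "orders": resolve_orders(user)}}
```

Swapping `resolve_orders` for a database-backed version would be invisible to the client.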

You can find the full GraphQL specification here: https://spec.graphql.org


gRPC

The popularity of gRPC has been rising since 2015, just like GraphQL’s, and for the same reason: it was open-sourced that year.

Before that date it had been used internally by Google since around 2001, under the name Stubby. It was invented to resolve scaling problems in loosely coupled distributed systems.

gRPC uses the HTTP/2 protocol, which powers features such as bidirectional binary communication, compression and flow control; on top of that, gRPC adds compiled, strongly typed contracts.

Client-server communication can be done in the traditional way, with a synchronous request and response, but that strips the protocol of its most powerful features. gRPC’s strong side is streaming data, which can actually be done in three ways: client streaming, server streaming and even bidirectional streaming, all within a single TCP connection.
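In Protocol Buffers syntax, the call shapes differ only in where the `stream` keyword appears; a hypothetical service showing all four:

```protobuf
service MovieService {
  rpc GetMovie (MovieRequest) returns (Movie);                // unary (request/response)
  rpc ListMovies (ListRequest) returns (stream Movie);        // server streaming
  rpc UploadRatings (stream Rating) returns (UploadSummary);  // client streaming
  rpc Chat (stream ChatMessage) returns (stream ChatMessage); // bidirectional streaming
}
```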

For communication, gRPC uses HTTP/2, and the data is serialized with Protocol Buffers, which also form the contract for both sides. The disadvantage is that HTTP/2 is not supported everywhere, and in such cases we need some kind of proxy to do the conversion – there is no fallback on the protocol side.

Protocol Buffers

This is an open-source, language- and platform-independent library for serializing structured data. Similar to GraphQL’s schema, its advantage is type safety: the client and server can be validated during compilation. It also forces developers into a schema-first approach – making a contract for the data structure before the proper implementation. Protocol Buffers itself offers safe versioning of entities, so it is easier to evolve an API.
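A minimal, hypothetical message definition: each field carries a type and a unique tag number, which is what the wire format and the versioning guarantees are built on:

```protobuf
syntax = "proto3";

message Movie {
  int64 id = 1;               // tag numbers, not names, go on the wire
  string title = 2;
  repeated string actors = 3; // "repeated" is a typed list
}
```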

You can find more about gRPC here: https://grpc.io/docs/what-is-grpc/core-concepts/

gRPC vs REST vs GraphQL, so which one?

As was said at the beginning, in the microservices world we can even use all of them in one project – it is all about the context. Now let’s look at the different roles an API can have and point out the strong and weak aspects of each protocol.


Learning curve

REST is the easiest to learn. I assume every developer has come across it at least a few times, perhaps even in the exercises for learning a new language, when building a first CRUD app. There are so few constraints to remember that we can assume there is no learning curve here at all. On the other hand, GraphQL needs the most effort. It requires not only learning the protocol but also changing your mindset to see data as a graph model. Somewhere between these two sits gRPC. It also needs some effort to learn, but it uses more table-like structured data, which is easier to grasp for someone who only knows REST. If, however, a service exposes a gRPC API for frontend calls, frontend developers need to go the extra mile with their learning, because browsers cannot fully use gRPC over HTTP/2 and some kind of proxy down to HTTP/1 is needed. Of course there are a lot of libraries to manage this, but it’s worth understanding what is going on under the hood.

Ease of development

Again, REST is the easiest here: it doesn’t need any additional libraries for either the server or the client. The others undoubtedly need more effort; both require an established contract between the client and the server of the API. GraphQL uses the schema definition as its contract, gRPC uses .proto definitions. Without these, neither the server nor the client will work. For a REST API, establishing a contract in the shape of an OpenAPI definition is recommended, but not obligatory.
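For comparison, such a REST contract could be a short OpenAPI fragment like this sketch of the GET /movies/1 endpoint from earlier (illustrative, not a complete document):

```yaml
openapi: "3.0.3"
info:
  title: Movies API
  version: "1.0"
paths:
  /movies/{id}:
    get:
      parameters:
        - name: id
          in: path
          required: true
          schema: { type: integer }
      responses:
        "200":
          description: A single movie
```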
GraphQL developers usually have the GraphiQL console, so while working they can check the data structure they need. GraphQL’s idea of resolvers also helps a lot with development, because every entity’s datasource can be written separately, so it is no problem for even a big team to work on the same API in parallel.

With gRPC, the proto definitions are compiled first, producing the interfaces used on both sides of the API. There are also plenty of API testing tools, such as Postman, which support it, so testing is not an issue.

When it comes to tooling for API testing, REST has the most options to choose from, while GraphQL and gRPC are more limited, but all of them have at least a few.


Caching

A good caching solution can dramatically improve the efficiency of an API, on either the client or the server side, so it’s important to take it into account while designing the architecture.
Again, the best support here is for REST, probably because its conventions had no competitors for a long time. In addition, one of its constraints explicitly adopts HTTP’s caching capabilities, so it also has strong browser support. In fact, REST responses can be cached at any point on the request path: at the HTTP server, through the service mesh, load balancers and API gateway, all the way to the browser itself.
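A sketch of that HTTP-level caching for the movie endpoint used earlier (header and ETag values are made up): the first response carries caching headers, and a later conditional request can be answered without resending the body:

```http
GET /movies/1 HTTP/1.1

HTTP/1.1 200 OK
Cache-Control: max-age=3600
ETag: "abc123"

GET /movies/1 HTTP/1.1
If-None-Match: "abc123"

HTTP/1.1 304 Not Modified
```

Any intermediary on the path – browser, CDN, gateway – can serve the cached copy during those 3600 seconds.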

GraphQL uses only one endpoint, but queries can be split into a fixed query template and separately sent variables, and this makes caching possible. The protocol is still young and not all implementations support this out of the box, so it needs additional effort. Another common solution is to cache the entity calls inside the resolvers instead of entire queries, but this is still not as efficient as in REST.
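The idea can be sketched in a few lines: hash the fixed query text once (a “persisted query”) and key the cache on that hash plus the canonicalized variables (the cache layout and function names are illustrative assumptions):

```python
import hashlib
import json

HERO_QUERY = """
query HeroNameAndFriends($episode: Episode) {
  hero(episode: $episode) { name }
}
"""

cache = {}

def cache_key(query, variables):
    # The static query text hashes to a stable identifier.
    query_hash = hashlib.sha256(query.encode()).hexdigest()
    # Variables are serialized canonically so equal inputs share an entry.
    return (query_hash, json.dumps(variables, sort_keys=True))

def execute_cached(query, variables, execute):
    key = cache_key(query, variables)
    if key not in cache:
        cache[key] = execute(query, variables)  # only runs on a miss
    return cache[key]
```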

Similarly to GraphQL, in gRPC the caching possibilities sit deep in the implementation, and not every library offers them out of the box. Requests aren’t split into a fixed query and dynamic variables as in the competitor above, which makes caching more difficult to design.

API composition

REST is so resource-centric that making a multi-resource endpoint goes against the convention. Of course, you could merge requests on an API gateway, but is that still REST?

The others, however, have no such limitations. A gRPC method is free to take data from many sources if needed; everything is in the implementation.

GraphQL seems the best prepared in this area. The concept of resolvers, where each entity can be taken from a different datasource and possibly exchanged without the client’s knowledge, makes it the most flexible protocol here. It’s also quite common, when migrating an API from REST to GraphQL, for resolvers to first fetch data from the old REST API and then be replaced one by one with more efficient datasources.

Streaming support

The REST constraints assume synchronous communication only, so there is no way to use it for streaming directly. Of course, you could extend an API with webhooks for bidirectional communication, or with WebSockets, which are also quite popular.

GraphQL specifies subscriptions for asynchronous communication, but many libraries don’t implement them, so in many cases the client is limited to synchronous queries and mutations.

The last of the group, gRPC, is the strongest player here; I would even say it was designed for this. The protocol is fully bidirectional, and implementations don’t hold this feature back.

Inter-microservices communication

In today’s designs, asynchronous communication is one of the most important factors, along with the stability and effectiveness of exchanging data. This makes gRPC the strongest here: firstly, it encourages asynchronous communication mechanisms; secondly, compiled and serialized Protocol Buffers data is more efficient than the text payloads that REST and GraphQL implementations have to serialize and deserialize.

API Versioning

REST has no specification, so there are as many ways to version an API as there are developers. On the internet you can find never-ending discussions about the best approach, URI-based or header-based, but there is no single standard. Versioning in REST mostly depends on the team that develops it.

GraphQL, on the other hand, has been ready for an evolving API from the beginning; the specification defines how to manage it, and it’s really easy. Adding new fields doesn’t break the API, because clients have to be updated to use them anyway. In the opposite direction, the schema supports a @deprecated directive, which tells the client that a field may disappear in the future. This approach allows the API to evolve while the server and the client sides are updated separately.
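In schema form, deprecation is a one-line annotation (field names are illustrative):

```graphql
type Character {
  name: String!
  nickname: String @deprecated(reason: "Use name instead.")
}
```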

Protocol Buffers support versioning and backward compatibility out of the box too, and the client and the server are generated from the .proto definitions. It is not as fluent an approach as GraphQL’s, but it is easily achievable.
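In .proto terms, evolution typically means adding fields under new tag numbers and reserving the tags of removed fields so they can never be reused (a hypothetical example):

```protobuf
message Movie {
  reserved 3;              // an old field was removed; its tag must not be reused
  reserved "actors";
  int64 id = 1;
  string title = 2;
  int32 release_year = 4;  // new fields get new tags; old clients simply ignore them
}
```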


Summary

As you can see, there are many factors to consider when choosing a protocol, and everything depends on the role the API will have in your architecture. To recap, let’s point out the strong sides of each.

REST has almost no limitations, strong tooling, is easy to understand and is well supported by a lot of libraries, so its best role is communication between frontend and backend. gRPC shines in inter-service and asynchronous communication, with weaker tooling but stronger type safety, so it is better placed on the backend side. GraphQL sits somewhere in the middle, and its best role is API composition, or frontend-backend communication with rich data structures that have to cover many cases (such as public APIs).

Mariusz Nosiński

I’m an experienced developer who has acquired a broad knowledge. I’m always ready for new challenges and learning new skills.
