3 methods to resolve GraphQL endpoints

Published on September 25, 2018


The main purpose of GraphQL is to provide flexible access to the underlying data through composition, selection, and mutation. This blog post compares three ways of resolving GraphQL endpoints across different backend architectures.

What's the purpose of GraphQL?

GraphQL is a specification that defines how to fetch data from a backend system. In many ways it is similar to REST, and it often uses the same HTTP(S) transport. However, rather than using various path-based URIs and HTTP verbs, it uses a single endpoint with a defined schema that specifies not only how to fetch data but also how to mutate, or change, it. Schemas are the heart of GraphQL and provide a much richer interaction with the data. GraphQL is sometimes seen as a competitor to REST-based frameworks, but it can also go hand-in-hand with them.

The main purpose of GraphQL is to provide flexible access to the underlying data through composition, selection, and mutation. Rather than having to fetch multiple documents via REST, only to use a handful of fields from each of those responses, GraphQL allows specifying precisely which fields to select and then composing them together. This allows clients to reduce network cost and latency by avoiding multiple round trips. However, GraphQL can introduce more complexity than a simple REST application. For this reason, GraphQL is best suited to systems with multiple types of clients, each with its own requirements, working over deep data sets.

GraphQL schemas

At the heart of every GraphQL implementation is the schema. The schema is the contract between the server and the client. It specifies what data is available, what type each field is, and how the types relate. Every field has either a primitive type (such as Int, String, Float, or Boolean) or a complex type. This makes type checking within client applications a first-class citizen rather than a purely documentation- or validation-based tool such as JSON Schema. Schemas are composed of types made up of one or more fields. Clients may query those types, precisely choosing the fields they need. The following is a simple example of a movie-based schema using interfaces, types, and enumerations.
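A minimal sketch of such a schema might look like this; the type and field names are illustrative:

```graphql
# Illustrative movie schema: an interface, object types, and an enumeration
enum Genre {
  ACTION
  COMEDY
  DRAMA
}

interface Person {
  id: ID!
  name: String!
}

type Actor implements Person {
  id: ID!
  name: String!
  movies: [Movie]
}

type Character {
  name: String!
  actor: Actor
}

type Movie {
  id: ID!
  title: String!
  genre: Genre
  releaseYear: Int
  characters: [Character]
  actors: [Actor]
}

type Query {
  movies(genre: Genre): [Movie]
}
```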

Example of querying the schema:
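A query against the illustrative schema above might look like the following sketch:

```graphql
# Select only the fields the client actually needs
query {
  movies(genre: COMEDY) {
    title
    releaseYear
    characters {
      name
      actor {
        name
      }
    }
  }
}
```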

Schemas also support arguments within field selections to give clients further customization. As an example, a numerical metric field may provide a unit argument that specifies which unit to return the value in. This is in contrast to typical systems that output a value in a single standard unit and rely on documentation to express which unit it is, putting an unnecessary onus on each client to manage the conversions. With GraphQL, the client can specify the precise unit as an argument to the data selection. The GraphQL resolver can then manage the conversion and return the appropriate value to the client. Ultimately, this customization allows the logic and control to happen server side, which is often more effective and easier, removing the burden from each client application.

The following is an example of using arguments, specifically units of measurement (UoM) for lengths.

UoM (Unit of Measurement) for length
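A sketch of how such a schema could look; the enum values and field names are assumptions used for illustration:

```graphql
# A length-valued field that accepts a unit argument; the resolver converts
# the stored value into the requested unit before returning it
enum LengthUnit {
  METER
  FOOT
}

type Screen {
  width(unit: LengthUnit = METER): Float
  height(unit: LengthUnit = METER): Float
}

# Extends the illustrative Query type from the earlier schema sketch
extend type Query {
  screen(id: ID!): Screen
}
```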

Below is an example of querying this particular schema.
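A sketch of such a query, asking the server to perform the conversion:

```graphql
# The client chooses the unit; the resolver handles the conversion server side
query {
  screen(id: "imax-1") {
    width(unit: FOOT)
    height(unit: METER)
  }
}
```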

GraphQL schemas are incredibly expressive, with far more features than this article can cover, including directives, which provide expressive conditional support. The schemas are ultimately the capability that separates GraphQL from any other REST-based framework. The schemas, however, are purely a specification; their implementation is backed by data resolvers.

Resolvers

Resolvers are the key to GraphQL implementations, since every object, field, argument, etc. is backed by a resolver. The resolver contains instructions on how to resolve a particular field based on the active context. Resolvers are also only invoked for fields the client actually selects, rather than for every field on every request, making for highly efficient data processing.

Using the previous movie schema and query, we may end up with a movies query resolver such as:
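A sketch of that resolver, assuming a hypothetical movieService on the request context; the exact wiring depends on the GraphQL server library:

```js
// Illustrative resolver class for the movies field of the Query type
class MovieQueryResolver {
  // data: parent value (empty at the root), args: field arguments,
  // context: per-request state, info: metadata about the selection
  async fetchMovies(data, args, context, info) {
    // Use shared request state to reach the underlying movie data
    return context.movieService.getMovies({ genre: args.genre });
  }
}
```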

This class and method will be assigned to the movies field of the Query type. This assignment happens as part of the bootstrapping process or configuration in the GraphQL server. The request handler in GraphQL maps the query node to the root Query resolver, and then maps the movies field to this fetchMovies resolver. This process continues until all fields have been resolved. For example, GraphQL would next map the actors field selection to a fetchActorsByMovie method declaration.

The basic signature of a resolver is: fetchData(data, args, context, info)

  • data provides the previously-fetched data from the parent field and is useful for creating associations or context to fetch the requested data.

  • args provide a map of key/value pairs corresponding to the arguments, if any, passed to the field.

  • context is specific to a given request and provides the state information shared by resolvers.

  • info provides various metadata about the request, including the selection context. This is often used to traverse the parent objects to give a field contextual awareness.

Resolvers are responsible for using the context and active state to fetch the underlying data and return it to the server. The server then maps the returned data to the requested fields while calling any child resolvers. Once all resolvers have completed, the entire document is returned to the client in the requested structure.
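For instance, the fetchActorsByMovie resolver mentioned earlier might look like the following sketch, where the actorService on the context is an assumption:

```js
// Illustrative child resolver for a Movie's actors field
class MovieResolver {
  async fetchActorsByMovie(data, args, context, info) {
    // data is the previously resolved parent Movie; its id links the two
    return context.actorService.getActorsByMovieId(data.id);
  }
}
```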

Three methodologies

Choosing how to implement the resolvers and what to back them with is often the most critical decision in the design of a GraphQL server. Often, it is highly dependent on existing systems and how to interoperate with them. Other times, it depends on organizational boundaries and ownership. There are endless methodologies for resolving data access; the three covered here are commonly used variants: REST, direct data access via DAOs, and compositional access.

1. REST

REST is a common method to back GraphQL. Rather than rewriting entire REST stacks to convert to GraphQL, organizations often just stack GraphQL on top and resolve schemas through RESTful API calls. This is a good strategy that allows bootstrapping a GraphQL schema quickly and effectively. It essentially provides the customization and data selection process through GraphQL, enabling more effective clients, while maintaining the integrity of the RESTful system.

It also allows the architecture to work within organizational boundaries. API services are typically owned by data or backend engineering teams, which may not wish to build and support GraphQL, whereas the frontend teams may want to leverage GraphQL and its flexibility. By using the APIs already established by those teams, the frontend teams can easily build resolvers and establish their own GraphQL framework. This also allows the GraphQL instance to be backed by multiple, distinct APIs managed by multiple, distinct teams, providing a single interface into the entire organization.

The example below uses pseudo-code to map to the movies schema above in order to resolve movies, the characters, and the actors. In this example, there are two distinct backends, which can also be completely separated and managed individually without impacting the GraphQL service:

  • movies-backend provides RESTful API access to the movie catalog

  • actors-backend provides RESTful API access to the actors catalog
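A sketch of what those resolvers could look like as an Apollo-style resolver map; the URLs and response shapes are assumptions, and fetch is assumed to be available (for example via node-fetch):

```js
const resolvers = {
  Query: {
    // Resolve the top-level movies field from the movies-backend REST API
    movies: async (data, args, context) => {
      const res = await fetch('https://movies-backend/api/movies');
      return res.json();
    },
  },
  Movie: {
    // Each movie's characters come from a separate movies-backend endpoint
    characters: async (movie) => {
      const res = await fetch(`https://movies-backend/api/movies/${movie.id}/characters`);
      return res.json();
    },
  },
  Character: {
    // Each character's actor is resolved against the actors-backend
    actor: async (character) => {
      const res = await fetch(`https://actors-backend/api/actors/${character.actorId}`);
      return res.json();
    },
  },
};
```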

However, REST can also be detrimental in many aspects. One of the main driving forces of GraphQL is the ability to precisely select what data is needed, allowing highly efficient data resolutions. However, when the resolvers are backed by REST, then the entire request must be fetched via REST and only certain fields selected from the response. This causes the backend REST system to fetch all the data even though it may not all be needed, leading to slight inefficiencies in the stack. In this particular example, the movie catalog may provide expanded data for the distribution company, musical tracks, etc. This data would be fetched by REST but unused by GraphQL.

Another way REST becomes a hindrance to GraphQL is the N+1 problem. In order to avoid the inefficiencies of selecting large JSON documents, APIs may fragment themselves into smaller data sets, allowing resolvers to fetch less data and become more efficient. However, this also requires making an API call for every resolver and can potentially lead to hundreds or thousands of API calls, which even under high parallelism quickly becomes problematic. This turns the N+1 database selection anti-pattern into a GraphQL anti-pattern.

Using the above example, we can see the characters for each movie are fetched separately, and then the actor for each character is also fetched separately. If the main movies query resulted in 5 movies, each with 10 characters, we would end up making 56 total REST calls: one for the movies, five for the characters, and 50 for the actors. Due to the inefficiencies in REST and HTTP, this has the potential to create higher latencies. The primary solution to overcome this issue is batching.

Overall, the hardest part of any GraphQL implementation is choosing the most efficient data resolution handlers. When using REST, requests should be batched together as much as possible by using the active context and state to determine what types of data need to be fetched and resolving them all at once. Batching also automatically collapses requests to the same endpoint in order to avoid making the same call twice. This leads to more complex situations, yet more efficient implementations. In this particular example, we could batch each actor into the active context state and then fetch all 50 actors in one query. This also avoids making the same calls twice in the same request, such as when the same actor appears in multiple movies.
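One common way to implement this batching is a per-request loader such as the DataLoader library; the batch endpoint on actors-backend below is an assumption, and fetch is assumed to be available as before:

```js
const DataLoader = require('dataloader');

// Build a fresh loader per request so batching and de-duplication are scoped
// to a single GraphQL operation
function createContext() {
  return {
    actorLoader: new DataLoader(async (actorIds) => {
      // One REST call for the whole batch of actor IDs collected so far
      const res = await fetch(`https://actors-backend/api/actors?ids=${actorIds.join(',')}`);
      const actors = await res.json();
      // DataLoader expects results in the same order as the requested keys
      const byId = new Map(actors.map((actor) => [actor.id, actor]));
      return actorIds.map((id) => byId.get(id));
    }),
  };
}

const resolvers = {
  Character: {
    // Repeated lookups of the same actor collapse into a single batched call
    actor: (character, args, context) => context.actorLoader.load(character.actorId),
  },
};
```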

2. DAO

If REST is one end of the spectrum for resolving data queries, then direct data access would be the other end of that spectrum.

Using direct data access to resolve data involves placing the GraphQL implementation nearest the data source. A general rule of data architecture is that the closer to the data source the logic lives, the more efficient it will be. If logic is needed to aggregate different types of data together, then selecting and aggregating the data within the database will be much more efficient than doing so at the client level, where the client tier may have to make several requests just to aggregate specific fields together. Typically, the closer to the data source, the faster the access: querying a database is faster than querying an API, and this effect is compounded the more tiers that exist. The same ideology holds true for GraphQL resolvers, which is why direct data access is more efficient than REST: the number of tiers is reduced and the data moves closer.

To use direct data access within GraphQL data resolvers, you attach DAO-based calls to the resolvers. For example, the application may have a MovieDAO that knows how to fetch movies by various criteria such as getMoviesByActor, getMoviesByGenre, etc. The GraphQL schema may then provide data selection within those contexts such as the following:
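A sketch of how those query fields might be exposed, extending the illustrative types from earlier:

```graphql
extend type Query {
  moviesByActor(actorId: ID!): [Movie]
  moviesByGenre(genre: Genre!): [Movie]
}
```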

The data resolver will wire up the appropriate DAO to fetch the data. The DAOs themselves may communicate to varying data stores, independent of each other.

Direct data access may also run into the N+1 problem. However, N+1 queries against the database tier are far less costly than N+1 calls against an API tier. Even so, an implementation must be cautious about invoking this type of behavior, and it is still preferred to group queries together where possible. For example, rather than invoking one select statement for movies and another for actors, the context can be used to wire up a single select statement that selects both movies and actors together. The big advantage of direct data access is that it is more forgiving of poor implementations than an API tier, due to the more efficient querying into the data store.

The pseudo-code below demonstrates using DAOs with wired-up database objects to query a data store. These resolvers could be backed by any database, both relational and non-relational. This example is similar to the REST example, but is typically more performant and more capable because it can query the data stores directly. For example, we could easily add batching or compositional support, selecting precisely how the queries are mapped.
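A sketch of those DAO-backed resolvers; the DAO classes, their method names, and the movieDb and actorDb connections are illustrative:

```js
// DAOs wired to their respective data stores; these could be relational,
// document-based, or anything else behind the DAO abstraction
const movieDao = new MovieDAO(movieDb);
const actorDao = new ActorDAO(actorDb);

const resolvers = {
  Query: {
    movies: (data, args) => movieDao.getMovies({ genre: args.genre }),
    moviesByActor: (data, args) => movieDao.getMoviesByActor(args.actorId),
  },
  Movie: {
    // Could be joined or batched into the parent query for efficiency
    characters: (movie) => movieDao.getCharactersByMovie(movie.id),
  },
  Character: {
    actor: (character) => actorDao.getActorById(character.actorId),
  },
};
```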

The biggest issues with direct data access are organizational boundaries and ownership. Where REST-based architectures allow multiple teams to be backed by a single GraphQL server, doing the same with direct data access is not as straightforward. GraphQL can be backed by multiple data sources and works very well, but when those data sources cross organizational boundaries, ownership of the server becomes an issue and managing the relationships between those backend sources gets more difficult. For example, one team may own personalization data and recommendations whereas a separate team may own the movie data itself. In this particular example, one team may own the movieDb and another team the actorDb; these teams may not want applications directly querying their data stores, instead preferring access through REST, an SDK, or a binary transport such as gRPC. As each tier is added to work around these boundaries, the server becomes less flexible and less performant.

3. Composition

The final methodology is composition, which can help resolve organizational boundaries. Composition is the process of stitching together multiple distinct GraphQL servers by defining relationships between them. This allows each organization to define its own GraphQL instance for its specific data sets. The composition tier then maps the relationships and data sets together. For example, the recommendation server may provide a movie identifier with its GraphQL server. The movie server would provide movie data for a given identifier. The composition tier would create the relationship from movie identifier to movie data. The resulting GraphQL schema would allow selecting the recommendations and movie data together, automatically fetching the backend data from each GraphQL server. This selection process is also highly efficient, selecting precisely the data needed from each server.

The Apollo GraphQL server provides the best example of implementing schema stitching. The server resolves each backend schema provided and then uses the rules given to it to stitch the schemas together with relationships. The following example demonstrates how we could stitch together the movie schema and recommendation schema if they were provided separately.
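A sketch using the graphql-tools mergeSchemas API of that era; movieSchema and recommendationSchema are assumed to be already-built executable schemas, the field names are illustrative, and exact signatures vary by library version:

```js
const { mergeSchemas } = require('graphql-tools');

const stitchedSchema = mergeSchemas({
  schemas: [
    movieSchema,          // exposes movie(id: ID!): Movie
    recommendationSchema, // exposes recommendations(userId: ID!): [Recommendation]
    // Add a relationship from a recommendation to its movie
    'extend type Recommendation { movie: Movie }',
  ],
  resolvers: {
    Recommendation: {
      movie: {
        // Ensure movieId is fetched even if the client did not select it
        fragment: '... on Recommendation { movieId }',
        resolve(recommendation, args, context, info) {
          // Delegate the movie lookup to the movie schema's own resolvers
          return info.mergeInfo.delegateToSchema({
            schema: movieSchema,
            operation: 'query',
            fieldName: 'movie',
            args: { id: recommendation.movieId },
            context,
            info,
          });
        },
      },
    },
  },
});
```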

Composition still requires multiple hops to each backend microservice, which can lead to complex data distribution. It is more efficient to fetch data directly from the data source itself to minimize the hops, but for organizations built on microservices with several distinct teams, composition helps to bridge those boundaries.

The other place where composition breaks down is when not every system uses GraphQL. In these situations, you cannot directly stitch together the GraphQL schemas. The best methodology, instead, is to manually stitch together the relationships and use binary protocols or REST to fetch each data set. Binary protocols, such as gRPC, allow for defining these relationships and stitching the data together. The GraphQL server then provides the frontend process and schema for selecting the data while the transport tier fetches from each distinct microservice. This form of composition allows for a three-tier architecture.

Three-tier architecture

In a three-tier architecture, data is separated into a core data access tier, a business or product focused tier, and a presentation or view focused tier. This provides a very loosely coupled system with high flexibility, allowing applications to select the data they need without coupling every data system to every other dependency.

The core data access tier allows one or more groups to expose their backend data systems with a data-focused representation through either GraphQL or a binary-based transport such as gRPC, using microservices. This tier merely provides the data and identifiers into other data sets managed by separate teams or microservices, and each service should use the architecture most suited to its needs. This means one schema may rely on SQL, such as movie data, while another relies on NoSQL, such as personalization or recommendations, while others are fronted by REST or gRPC to better abstract the backend systems. The more complex data systems may choose to use GraphQL and rely directly on schema stitching at the product tier.

The business tier uses GraphQL to create a common product-focused schema as well as to define the end relationships between the data sets. The business tier is meant to convert the data-focused sets into product-focused sets while applying the business logic rules for the product. This allows the core data to remain agnostic and separated from any specific product, while allowing the products to be shared across multiple applications or views. This tier is important for creating common alignment across all views of a particular product. The GraphQL server may use any of the above methodologies depending on the particular architecture and backend systems. When both the product tier and the core tiers use GraphQL, schema stitching is the best methodology. For cores that rely only on REST as an abstraction over the data, REST can be used to map each relationship. For cases where the same team owns both the data stores and the product tier, using the appropriate DAO for each data store is more efficient. Typically, however, the end result will be a mixture of all three as systems grow and evolve over time.

The presentation tier represents each individual application or view of a product, for example a mobile application, a web application, and a TV-based application. These applications utilize the product-focused schemas from GraphQL, which provide the common data and relationships. Each application then maps that data to its specific views, adding any view-centric logic.

Ultimately, this type of architecture allows each tier to grow and evolve independently while ensuring flexibility for each product.

Wrapping up: GraphQL endpoints

GraphQL is incredibly powerful and flexible. It offers a wide assortment of possibilities when it comes to designing the most appropriate architecture, and choosing among them is often the hardest, most critical decision.

The best recommendation is to first understand the organizational boundaries and ownership. Who will ultimately own the implementation and architecture? Who owns each of the data sets? How are or how will those data sets be exposed? These types of questions can help decide how to formulate each tier of the architecture.

For small organizations, or organizations that own everything from data to product end-to-end, it is recommended to stay simple and use direct data access to ensure high efficiency across products. For larger organizations built on many microservices, it is recommended to follow a three-tier architecture that allows microservices to grow independently, each as either its own distinct GraphQL server or behind a binary transport and schema. Product distribution teams would then be able to own the GraphQL tier, connecting the relationships and data sets together. It is best to place the resolvers nearest the data stores without crossing organizational boundaries. This means preferring direct data access first, then GraphQL stitching/composition, followed by REST. In general, REST should only be used when required by backend teams or legacy systems.

Regardless of which architecture is finally chosen, allow GraphQL to grow and be as flexible as possible. Resolvers, field arguments, and even more complex capabilities such as directives can allow a GraphQL schema to be highly flexible while remaining loosely coupled to its users. The more logic that can move to the server while remaining agnostic to clients, the more efficient and maintainable the end-to-end system will be. The resolvers and associated schema are ultimately the most critical components that define the implementation. Choosing how to effectively implement and manage those resolvers will make or break not only the server itself, but also the entire end-to-end architecture.

Read more about Contentful's GraphQL Content API.
