What is serverless architecture?

Updated on December 11, 2024 · Originally published on March 14, 2023

Serverless architecture is a cloud-based approach to delivering content and online services that lets developers deploy the code they write without having to manage the underlying infrastructure it runs on.

How does it do that? Essentially, the serverless approach shifts the complexity of managing the infrastructure required to execute code onto a third-party cloud service provider — just like a pizza place provides the ingredients, ovens, baking skills, and delivery for people who don’t want to bake their own pizzas. So, if you were throwing a party, all you’d have to do is order the specific pizzas that you and your guests wanted, and eat them when they arrived.

OK, it’s not quite the same experience as ordering (or eating) freshly baked pizza, but serverless is an increasingly popular way for organizations to facilitate application development. In 2023, the global serverless market was worth $12.08 billion and, with a compound annual growth rate of around 24%, is projected to reach $98.8 billion by 2033.

In this post, we’re going to provide a high-level explanation of serverless architecture, and explore how it might be useful for your software development process.

Let’s start with the basics: servers serve by providing the resources and data that applications need to carry out their programmed operations.

If you’ve been anywhere near an office in the past two decades, chances are you’ve passed a “server room”, but the space required for servers changes depending on the scope and complexity of operations. In small businesses, a server could be a single computing tower housed under someone’s desk, or, in enterprise contexts, an entire building’s worth of machines complete with security and industrial cooling systems. 

The point is, servers can be difficult to deal with, not just in terms of the physical space they occupy but in the need to manage infrastructure and maintenance, and the sheer cost involved in running them. 

That’s where the serverless approach comes in.

By shifting server functionality to the cloud, serverless architecture enables functions to be executed virtually, without a need for developers to manage underlying hardware and software infrastructure. 

From a developer’s perspective, serverless architecture and serverless functions are a way to break down complex problems into smaller units of business logic that can be deployed separately. It's an approach that leans into the virtual possibilities of cloud computing and that lets teams and individuals work on code in parallel.

Putting the server in serverless

The term “serverless” is actually a little misleading because there’s still a server involved in the code execution process, and your application still needs to use it. Just like you’d always need an oven (of some stripe, at some point) to bake your pizza.

A more accurate framing of serverless would focus on the shift of the server “unit” from literal hardware hooked up to the internal network in a room somewhere, to an abstracted server hosted and managed in the cloud. 

Even then, the cloud provider would still have its own physical server somewhere, along with all the conventional server management requirements. But you get the idea: for all intents and purposes, your own software architecture would no longer be reliant on a physical server (and all the challenges that involves) because another organization is running that infrastructure on your behalf.  

Serverless and DevOps

DevOps is all about empowering teams to make the software development process smoother. With that in mind, DevOps teams often prefer the serverless approach because it supports automatic scaling. When your apps receive a lot of traffic, the system scales up to meet demand, executing multiple functions or multiple instances of functions; when demand drops, the system automatically scales back down and can, if necessary, stop running altogether. That flexibility translates into substantial cost savings when servers would otherwise sit idle.

By contrast, traditional physical servers require you to deploy all of your infrastructure up front, which comes at financial and time costs, and requires specialists to manage. This inflexibility means you’re paying for infrastructure to always run, even when it’s not in use.

Function as a service

The serverless approach is particularly well suited to applications that need to perform short-lived, single-function tasks. 

In fact, serverless computing goes hand in hand with functions as a service (FaaS), where a single, self-contained code snippet is deployed and executed in the cloud. The most popular examples of FaaS include AWS Lambda from Amazon and Azure Functions from Microsoft.
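To make that concrete, here’s a minimal sketch of what a single FaaS function might look like, written in TypeScript as an AWS Lambda-style handler. The event and response shapes assume an HTTP trigger such as an API gateway; the exact details vary by provider.

```typescript
// A minimal, self-contained function-as-a-service handler (AWS Lambda style).
// The event/response shapes assume an HTTP trigger; other providers differ.
export const handler = async (event: {
  queryStringParameters?: { name?: string };
}) => {
  const name = event.queryStringParameters?.name ?? "world";

  // All the business logic for this unit lives in this one function.
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};
```

Deploying that one function is the whole unit of work: there’s no web server, routing framework, or process manager to run alongside it.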

Think about our pizza party again, where we’re ordering food rather than running code. Even if you were a really good pizza chef, or could fit an authentic pizza oven in your kitchen, it might still make sense to order from the pizzeria for time and cost reasons if you were hosting lots of guests with different tastes, each wanting a particular type of topping or crust.

Only pay for what you use

Serverless functions can be triggered by anything, from websites to IoT and smart home devices. Each event starts an invocation of the relevant function: the provider provisions the infrastructure and runs the code in the background. The time it takes to provision the infrastructure is called the “cold start.”

This process, in which invocations are only triggered on a per-event basis, means that, with a serverless system, you only pay for what you use. When you order from your pizzeria, you wouldn’t be charged (directly) for the cost of keeping their ovens running or the individual ingredients used on the pizzas, but rather for each order that you call in — in other words, each time you “invoke” a pizza-making function.


Here’s another example. Say you want to build an application programming interface (API). Traditionally, you would run the API on a server behind an API gateway that would always be running and accepting requests, and which would incur ongoing costs. 

By contrast, if the same API was serverless, the infrastructure would only be up for the time required by each request. This would also mean that the provider would bill you on demand for the execution of the specific piece of business logic that handled the request, every time the request was made.  
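To see why on-demand billing matters, here’s a back-of-the-envelope comparison. Every number below is hypothetical — real rates vary by provider, region, and plan — so treat this as a sketch of the calculation, not a quote.

```typescript
// Hypothetical rates for illustration only -- check your provider's pricing page.
const requestsPerMonth = 100_000;
const avgDurationSeconds = 0.2; // average execution time per request
const memoryGb = 0.5;           // memory allocated to the function

// Serverless: pay per request plus per GB-second of compute (made-up rates).
const perRequestRate = 0.0000002;  // $ per request
const perGbSecondRate = 0.0000167; // $ per GB-second
const serverlessCost =
  requestsPerMonth * perRequestRate +
  requestsPerMonth * avgDurationSeconds * memoryGb * perGbSecondRate;

// Always-on server behind an API gateway: a fixed monthly fee, used or not.
const alwaysOnCost = 30; // $ per month (made up)

console.log(`Serverless: ~$${serverlessCost.toFixed(2)}, always-on: ~$${alwaysOnCost}`);
```

With traffic this light, the serverless bill comes to pennies; as sustained load grows, the two curves eventually cross, which is why constantly busy APIs can still favor an always-on setup.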

The serverless process

Now that we know a little bit about serverless architecture, let’s go into more detail about the components that make up the serverless process: 

Invocation

The first step in serverless code execution is an invocation. This is the trigger that prompts your cloud server to begin a function. 

Cold start

If a function is being invoked for the first time, or if it’s been a while since the last time it was invoked, there will be a period of latency before the cloud server is able to get infrastructure up and running and start the function. This period is known as the cold start. 

Throttling

Cloud providers can impose a limit on the number of functions that can run at the same time in one region, and will throttle functions if this limit is exceeded.

Duration

Each code function takes a certain amount of time to execute. This is known as its duration. 

Timeout

Most providers also limit the length of time that a single function can run. Once this limit is exceeded, the function is terminated. 

That introductory list sets out the key serverless concepts. There’s obviously going to be variation between different serverless environments, and individual cloud providers may impose different restrictions on the code functions that they host. 
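To see where some of those concepts show up in code, here’s a hedged sketch following common Node.js-based FaaS runtimes, where module-level code runs once per cold start and the exported handler runs once per invocation. The structure is illustrative rather than provider-specific.

```typescript
// Module-level code runs once when the provider provisions a new instance --
// this work happens during the cold start, so keep it as light as possible.
const startedAt = Date.now();
const lookupTable = new Map<string, number>([["example", 42]]);

// The handler runs once per invocation; its execution time is the "duration"
// that providers meter and cap with a timeout.
export const handler = async (event: { key?: string }) => {
  const value = lookupTable.get(event.key ?? "example");

  return {
    statusCode: 200,
    body: JSON.stringify({
      value,
      instanceAgeMs: Date.now() - startedAt, // grows while the instance stays warm
    }),
  };
};
```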

The benefits of serverless architecture

There are plenty of reasons that serverless computing might work for you. 

Reduced costs

In traditional server architecture, you need to account for the maximum load upfront, which is cost-effective only if your APIs are constantly under load. If your APIs are called only occasionally, switching to a serverless framework, where you only pay for the functions you actually use, will reduce your costs.

That’s not to mention the fact that you won’t need to pay for and maintain physical servers in a room somewhere. 

Scalability 

The operational flexibility that serverless systems provide is also going to help you out when your business grows. Whether you’re an established organization with shifting seasonal demand, or an ambitious startup still finding its feet, a serverless platform can help you scale to meet the traffic demands on your system. 

Engineer efficiency

Opting for serverless infrastructure can speed up software development because providers offer preconfigured code solutions that can shortcut certain parts of the process. Engineers can then be assigned to more strategic, value-adding tasks, such as optimizing code, rather than setup and maintenance, helping you get your app off the ground faster.

Furthermore, in a serverless framework, any infrastructure configuration is stored in config files alongside the code, which not only simplifies deployment but allows developers unfamiliar with the project to understand how the code is run.
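What does “configuration stored with the code” look like in practice? One illustrative option (a sketch using the AWS CDK; serverless frameworks also use YAML or JSON config files for the same purpose) is to describe the function’s runtime, memory, and timeout in the same repository as the handler it deploys:

```typescript
import { App, Duration, Stack } from "aws-cdk-lib";
import * as lambda from "aws-cdk-lib/aws-lambda";

// Infrastructure described as code, versioned alongside the handler it deploys.
class HelloStack extends Stack {
  constructor(scope: App, id: string) {
    super(scope, id);

    new lambda.Function(this, "HelloFunction", {
      runtime: lambda.Runtime.NODEJS_18_X, // language runtime
      handler: "index.handler",            // file + exported function name
      code: lambda.Code.fromAsset("src"),  // directory containing the handler
      memorySize: 256,                     // MB
      timeout: Duration.seconds(10),       // hard cap on duration
    });
  }
}

const app = new App();
new HelloStack(app, "HelloStack");
```

Because the definition lives in the repo, a change to memory or timeout goes through the same review and deployment process as a change to the handler itself.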

Automated convenience

The maintenance of serverless architectures is automated, in the sense that it’s the responsibility of the cloud provider. This adds a level of day-to-day convenience to running your website or other online channels, since there’s no need to implement updates and security patches, perform infrastructure maintenance, or any other server management chores.   

Speed of delivery

Deploying code to a serverless cloud environment is quicker and easier than it is in traditional server architecture. This makes it possible for engineers to experiment with new features and functions for serverless applications, or respond quickly to the shifting demands of the market. 

Multi-language

Serverless architecture gives your developers a wider choice of programming languages (Node.js, Go, Java, .NET, etc.), essentially letting them choose to code in the one(s) they’re most comfortable with. That choice also means flexibility for developers when writing code to perform an array of different tasks.  

Challenges of serverless architecture

So, what’s the trade-off with serverless architecture? Here are a few of the key challenges:

Security risks

Because you’re no longer managing the infrastructure yourself, security breaches in serverless systems can have different consequences than those involving traditional applications. For example, if they’re misconfigured, serverless architectures are particularly prone to denial-of-service attacks, because hackers can exploit their automatic scalability, invoking functions in large volumes — and racking up your bill by doing so.

Vendor lock-in

The freedom of having someone else manage your software infrastructure necessarily means you’re closely tied to that third party. While you’ll have functional flexibility for your code, if you need to change providers for any reason, you may encounter friction. For example, some serverless tools are bespoke to their providers and it could take time to find a suitable replacement. 

Managing technical issues

Serverless architecture also takes server management and server hardware maintenance out of your hands, which is convenient but can be a double-edged sword. For example, if your provider has a technical issue — maybe there’s a power outage at their server location — then you’ll be waiting on them to fix it, and reliant on them to provide timely updates about what’s going on. 

Constrained runtime and cold starts

Serverless functions usually have a constrained runtime and, in most cases, the upper limit is 10 to 15 minutes, which makes them impractical for long-running tasks.

You’ll also need to think about frequency of requests to the server. If serverless functions aren’t kept active at regular intervals, they can be de-provisioned — which means they’ll need to cold start again. If you get a request after de-provisioning, the function will take its usual amount of time to complete plus the cold start. That process will take a toll on the latency and performance of serverless apps.

Parallel processing

When a function is invoked many times in parallel, the provider may start to throttle it. If that happens, your infrastructure might no longer be able to serve requests, and it can struggle to recover from the failure state.

Serverless use cases

The market is growing but what are people actually doing with their serverless architectures? 

Let’s go into some use cases. These examples are well suited to serverless architecture because they each follow an event, action, and scale process: a triggering event invokes a function, and the function executes an isolated action. Then, as demand increases, more instances of the function are triggered to meet demand. 

Autoscaling websites and APIs

With serverless applications, you can spin up as many instances of your website or API functions as necessary, and scale down as soon as web traffic wanes. You pay only for what you provision from your cloud provider, which means less worrying about overburdened servers and crashes, and a more consistent experience for your end users. 

Multimedia processing

Cloud services give you enough computing power to incorporate demanding media processing capabilities into your applications. For example, if your organization accepts card-based payments, you could integrate a real-time image recognition API that lets customers upload photos of their credit cards and instantly extracts the card details.

Event-based triggers

Serverless architecture is a useful option for activities triggered by a specific event (or series of events). That could be a confirmation email sent after a user signs up for a service, a daily report posted at a specific time, or a delivery status that updates when a package reaches a certain location.
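As a sketch, a signup-confirmation function might look like the example below. The event payload and the sendEmail helper are hypothetical stand-ins for whatever queue, webhook, or email service you actually use.

```typescript
// Hypothetical event payload from a "user signed up" trigger (queue, webhook, etc.).
interface SignupEvent {
  email: string;
  name: string;
}

// Stand-in for your real email provider's SDK call.
async function sendEmail(to: string, subject: string, body: string): Promise<void> {
  console.log(`Sending "${subject}" to ${to}: ${body}`);
}

// The function runs only when a signup event arrives, then shuts down again.
export const handler = async (event: SignupEvent) => {
  await sendEmail(
    event.email,
    "Welcome aboard!",
    `Hi ${event.name}, thanks for signing up.`
  );

  return { status: "sent" };
};
```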

RESTful APIs

Serverless architectures offer specific benefits for the development of RESTful APIs, including the ability to scale endpoints independently to meet demand, and to let developers implement changes to endpoints without disrupting the rest of the system.

Behind-the-scenes tasking

Serverless architecture is well suited to asynchronous, behind-the-scenes tasks, such as transcoding video or processing images in an already-launched app. By going serverless, you can execute these types of tasks without adding frustrating latency for users.
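For example, a thumbnail-generation function might be wired to a storage upload event. The sketch below assumes an S3-style event shape and a hypothetical createThumbnail helper, so treat the details as illustrative.

```typescript
// Hypothetical image-processing helper (e.g., wrapping a resizing library).
async function createThumbnail(bucket: string, key: string): Promise<string> {
  const thumbnailKey = `thumbnails/${key}`;
  console.log(`Resizing ${bucket}/${key} -> ${thumbnailKey}`);
  return thumbnailKey;
}

// Runs asynchronously whenever a new image lands in the bucket, so the
// user-facing app never waits on the processing work.
export const handler = async (event: {
  Records: { s3: { bucket: { name: string }; object: { key: string } } }[];
}) => {
  for (const record of event.Records) {
    await createThumbnail(record.s3.bucket.name, record.s3.object.key);
  }
};
```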

Continuous integration/Continuous delivery

It’s very easy to extend serverless architecture, and add more functions as you need to. In practice, this means that serverless architecture can contribute to your continuous integration and continuous delivery (CI/CD) pipelines by automating much of the process. Code commits, for example, would trigger an automated build, while a pull request could trigger an automated testing process. With that said, it may ultimately be easier to implement CI/CD with tools like CircleCI or GitHub, rather than building custom serverless solutions.   

Microservices

The extensibility of serverless architecture also lends itself to the modularity of microservices. Combined with the cost-effectiveness of FaaS (only pay for what you use), it may just make more financial sense to build your application with serverless architecture. 

Build at the cutting edge of cloud computing

If you need cost efficiency from your software, and want to create an environment in which your developers can be at their most productive, serverless architecture is worth considering. 

But serverless potential extends beyond process automation and efficiency. One of the most interesting things about this kind of architecture is what you can build with it or, more specifically, how you can build with it. Going serverless is a way for organizations to create innovative microservice ecosystems at the cutting edge of cloud computing — just like you can with Contentful. 

On Contentful, your microservice ecosystem is brought to life via APIs, with serverless applications broken down into component functions in order to stay agile, efficient, and engaging for users. In this environment, you’ll be able to scale up and down effortlessly, future-proof your tech stack endlessly, and lean in to the possibilities of cloud computing to deliver truly unique content experiences.
