Published on December 5, 2019
From shipping lanes to power lines, the logistics chains of the world are the invisible strings that tie everything together. The food industry has spent decades figuring out the best way to get crates of avocados across the Atlantic; power grids are constantly evolving to make sure that wind power gets from the coast to coffee machines.
But if just one tiny step in the process breaks down, you end up with grouchy customers who have to start their day without breakfast.
Now that so many products are either purchased digitally or are purely digital from the outset, infrastructure has become the digital logistics chain. Your customers might not be looking for breakfast, but they expect their content to be served up hot, wherever they are, whenever they want it.
When you’re managing customer experiences at scale across many different touchpoints and end devices, infrastructure is a critical part of your strategy. Content must be contextualized by market, interface and user, and sent in the fastest and most reliable way to the right digital channel. To meet these requirements, modern content management and delivery demands infrastructure that can keep up with that complexity.
IT professionals know how much time and effort goes into managing infrastructure, and it’s no different for content infrastructure. The number of moving pieces grows exponentially with increasing demand for new and bigger digital experiences. One of the biggest barriers to adopting cloud services in general is having to give up control — what if something goes wrong? But if your content infrastructure is flexible enough to adapt to this complexity, and the provider is trustworthy, you can save yourself the anxiety and focus on shipping excellent products.
Content infrastructure, like so many other parts of the stack, is not one-size-fits-all — that would be like comparing coffee to avocados. Here’s what to consider when deciding on your setup.
When teams first start to use a content management system, they might not know from the outset how widely it will be used and for which products. Often, they start with an individual project such as a knowledge base or marketing site. However, they can quickly hit limitations when they’re using a less flexible, traditional CMS and try to expand to multiple products or platforms.
Your content infrastructure needs to be able to scale with the scope of your projects as they develop. This means being able to modularly add content repositories or containers (what we call spaces) when you add another channel or product to the mix. Further, when you start to implement a content layer as a central part of your stack, you might need purpose-built spaces for your content — for product catalogs or design systems, for example.
Because spaces serve different purposes, they have different requirements. More technically complex projects might need multiple replicated environments for testing, staging and production of various components; a space for a marketing landing page may need many locales to serve versions in different languages. These requirements will also change over time as your projects grow.
Content infrastructure should be an adaptable system, down to the space level. A space or content repo should be able to flex with the needs of a project, by replicating environments, adding users, adding locales or adding more spaces when necessary. Ideally, the provider should be able to advise you on how to structure all of this to best suit your needs.
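To make that concrete, here’s a minimal sketch of what addressing a specific space, environment and locale can look like from the delivery side. The request follows the general URL pattern of Contentful’s Content Delivery API, but the space ID, environment name, locale and token are all placeholders rather than a real setup.

```typescript
// Minimal sketch: fetching entries from one space, environment and locale.
// All identifiers and the token are placeholders, not real credentials.
const SPACE_ID = "your-space-id"; // e.g. a purpose-built product-catalog space
const ENVIRONMENT = "staging";    // a replicated environment for testing
const LOCALE = "de-DE";           // one of the locales configured on the space
const ACCESS_TOKEN = "your-delivery-token";

async function fetchEntries(contentType: string): Promise<unknown> {
  const url =
    `https://cdn.contentful.com/spaces/${SPACE_ID}` +
    `/environments/${ENVIRONMENT}/entries` +
    `?access_token=${ACCESS_TOKEN}&content_type=${contentType}&locale=${LOCALE}`;

  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Delivery request failed: ${response.status}`);
  }
  return response.json();
}

// Serving another channel or market is mostly a matter of swapping
// the space, environment or locale — the calling code stays the same.
fetchEntries("productPage").then((entries) => console.log(entries));
```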
Getting closer to the metal, content infrastructure considerations are similar to any other infrastructure: you want low latency and excellent uptimes across the board, and a good balance between costs and availability depending on the needs of a specific product.
Having several infrastructure options to choose from helps you meet your business requirements as well as your budget — and gives you the flexibility to scale up over time. For smaller projects like landing pages or digital displays, you might only want to invest in multi-tenant infrastructure. A medium-density multi-tenant infrastructure with guaranteed fewer “noisy neighbors” helps bridge the large gap between a typical shared cluster and dedicated infrastructure.
For any shared infrastructure, effective load balancing makes the difference between acceptable and unacceptable downtime — look for features such as autoscaling, built-in DDoS protection and low-to-zero downtime during maintenance.
Traffic fluctuation quickly becomes a primary concern when you’re undertaking bigger and more ambitious projects, and in this case, you may want more assurance against competing traffic by choosing dedicated infrastructure.
Major ecommerce platforms want to minimize the risk of an outage on Black Friday; gaming companies want to know their highly awaited titles will launch without a single hiccup for their players. Will your provider be able to handle an order-of-magnitude spike in requests within just a few minutes? Look for providers that have a proven track record of handling traffic spikes from big events like a new TV campaign, Black Friday sales or a major product launch. It’s beneficial when your provider can offer you hands-on assistance, like working with you to create caching strategies and improve request and query performance.
You can almost entirely eliminate the risk of downtime by choosing a provider that offers a 99.99% SLA based on multi-region dedicated infrastructure. Multi-region dedicated infrastructure protects against even the extremely unlikely case of a region-wide outage, and is a good choice for mission-critical applications or for highly regulated industries.
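To put those numbers in perspective, here’s some rough back-of-the-envelope arithmetic on how much downtime a given SLA percentage actually allows per year — simple math, not a guarantee from any particular provider.

```typescript
// Rough illustration: downtime allowed per year under a given SLA percentage.
const MINUTES_PER_YEAR = 365 * 24 * 60; // 525,600 minutes

function allowedDowntimeMinutesPerYear(slaPercent: number): number {
  return MINUTES_PER_YEAR * (1 - slaPercent / 100);
}

console.log(allowedDowntimeMinutesPerYear(99.9).toFixed(1));  // ~525.6 minutes (~8.8 hours) a year
console.log(allowedDowntimeMinutesPerYear(99.99).toFixed(1)); // ~52.6 minutes a year
```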
No matter the size of your products, you want to know that you can maintain business continuity in the unlikely event that something goes wrong. Ideally, your vendor plans ahead for every foreseeable situation, but server-level outages do occasionally happen to even the biggest providers.
The two key factors here are recovery time objective (RTO) and recovery point objective (RPO). RTO is the time needed to recover from a large-scale outage, and can be as short as a few minutes for multi-region setups. The investment in a shorter RTO should be balanced against its cost and the risk of an outage for that specific application.
RPO reflects the point in time from which your instance can be restored. This should be as recent as possible — your vendor should back up instances in real time and be able to restore to any point in time.
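As a rough illustration of why backup cadence matters, the sketch below assumes snapshots are taken at fixed, evenly spaced intervals and works out the most recent restore point available when an outage hits. Real backup systems are more sophisticated than this, so treat it purely as a picture of the trade-off: the longer the interval, the more recent writes you stand to lose.

```typescript
// Illustrative only: with evenly spaced snapshots, the restore point is the
// last completed backup, so the worst-case RPO equals the backup interval.
function latestRestorePoint(outageTime: Date, backupIntervalMinutes: number): Date {
  const intervalMs = backupIntervalMinutes * 60 * 1000;
  return new Date(Math.floor(outageTime.getTime() / intervalMs) * intervalMs);
}

const outage = new Date("2019-11-29T14:37:00Z");
// Daily snapshots: everything written since midnight UTC is at risk.
console.log(latestRestorePoint(outage, 24 * 60).toISOString()); // 2019-11-29T00:00:00.000Z
// Near-real-time backups shrink that window to almost nothing.
console.log(latestRestorePoint(outage, 5).toISOString());       // 2019-11-29T14:35:00.000Z
```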
Technical resiliency is only half of the story in maintaining service stability — the other half is a high-functioning support team with established processes and a comprehensive system of alerts on application and system metrics. The team should be able to respond to alerts and customer requests within SLA-guaranteed response times, 24/7.
Another aspect to consider is how issues are handled — can the support teams solve issues directly, and are the platform engineers on call? Is there a process for quick escalation in case of an emergency?
Finally, consider how much proactive support you’ll get from customer success managers, engineers and architects. Having access to experts can mean a massive increase in your team productivity and application performance, whether it’s through advice on designing your content models, structuring your content repositories and environments, or caching strategies to slim down response and query times.
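One caching technique that often comes up in that kind of hands-on work is the HTTP conditional request: the client keeps the ETag from a previous response, and the server can answer “not modified” instead of re-sending the whole payload. The sketch below is a generic, framework-free illustration of the idea, not a provider-specific recipe.

```typescript
// Sketch: conditional GETs with ETags. A 304 response means the cached copy
// is still valid, so no response body has to travel over the wire.
const etagCache = new Map<string, { etag: string; body: unknown }>();

async function cachedGet(url: string): Promise<unknown> {
  const cached = etagCache.get(url);
  const headers: Record<string, string> = {};
  if (cached) headers["If-None-Match"] = cached.etag;

  const response = await fetch(url, { headers });
  if (response.status === 304 && cached) {
    return cached.body; // unchanged on the server: serve the local copy
  }

  const body = await response.json();
  const etag = response.headers.get("ETag");
  if (etag) etagCache.set(url, { etag, body });
  return body;
}
```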
It’s not just highly regulated industries that need to be concerned about security, privacy and data protection. When you look at the major data breaches of the last 12 months that have affected businesses across the globe, it’s clear: All companies should know where their data is stored, whether it’s encrypted and who has access.
Compliance certifications and regulations like ISO 27001, PCI DSS and GDPR are more than just checkboxes — they’re required for certain regions and industries. They’re also a signal of how rigorous a provider is about the security principles behind them.
Beyond that, another good signal of a security-conscious vendor is whether certain features are built into the platform itself. Two-factor authentication and solid permissions management mitigate user-level attacks; static webhook IPs and IP whitelisting for API access can offer extra protection against the exploitation of individual apps built on the API.
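A static list of webhook source IPs, for instance, only helps if your application actually enforces it. The sketch below shows that check in its simplest form — the addresses are documentation placeholders, and the list you enforce should be whatever your vendor publishes.

```typescript
// Sketch: accept webhook calls only from the provider's published static IPs.
// 203.0.113.x is a documentation range; substitute your vendor's real list.
const ALLOWED_WEBHOOK_IPS = new Set(["203.0.113.10", "203.0.113.11"]);

function isTrustedWebhookSource(remoteIp: string): boolean {
  return ALLOWED_WEBHOOK_IPS.has(remoteIp);
}

// Inside a request handler (framework-agnostic pseudocode):
// if (!isTrustedWebhookSource(request.socket.remoteAddress)) respond(403);
```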
You may be (understandably) hesitant to give up control of your infrastructure, but if the vendor is security-conscious and trustworthy, it can take a huge burden off your shoulders. Maintaining security patches and upgrades is a large time investment that doesn’t necessarily need to be part of your daily work.
Avoiding complexity in products is not realistic, considering how digital experiences are expanding across platforms and interfaces and becoming more personalized. But by offloading your infrastructure concerns to a provider you trust, you can focus on what matters to you — building a great product.
The keys are scalability, flexibility and trust — can a provider’s infrastructure options grow with you? Is it adaptable to your needs? Do you trust that provider to always protect you from outages, downtime and exploits? The answer to all of these questions must be yes if you’re concerned about providing your customers with experiences at the level they expect.
But enough about logistics — I’m off to get a coffee.
Interested in finding out if Contentful’s content infrastructure is right for your projects? Request a demo here.