The cloud cost trap

As a diligent investor in software firms, you want your portfolio companies to be lean, mean growth machines. You certainly don’t want hundreds of thousands of dollars filling your cloud provider’s coffers when they could be going to your bottom line.

So, beware of the cloud cost trap. You and your executive teams may not be getting the cloud efficiencies you bargained for. Here are three reasons why – and five ways to fix it.

Cloud platforms promise unparalleled speed and ease of application development and deployment. AWS, GCP and Azure offer a dazzling array of services designed to make life easy for their customers’ engineers – whether developers, DevOps or data scientists.

It’s easy to fall into the trap of thinking that, just because you’re using a scalable platform and its latest tools, resource efficiency is a given. This can be a long way from the truth.

To explain why, consider one of the recent innovations in cloud computing to go mainstream: “containerisation”. A container is a package of application code and all its operating dependencies that can run quickly and reliably across different computing environments.

Containers offer a range of benefits: portability between different platforms and clouds, faster software-delivery cycles, easier implementation of modern, microservices-based software architectures and more efficient use of system resources.

This last point is true in theory. Efficiency gains arise because microservices running in containers are more resource-efficient than monolithic applications run on physical or virtual servers: they start and stop more quickly, and they can be packed more densely and flexibly onto their host hardware.

And in practice? That depends entirely on how effectively containers are deployed.

The problem

Containers require tools to manage and orchestrate where and when they run within and across servers – or, in the case of cloud, virtual server instances.

However, this creates another layer of abstraction between DevOps and the underlying hardware that ultimately determines operational cost – and it is contributing to an increasingly common DevOps problem: infrastructure utilisation is no longer being effectively managed.

Looking deeper, Soimplement sees three causes of this in organisations, in varying measures:

  1. Process gap: Resource scheduling carried out by container orchestration is based on developers’ original guesstimates of their applications’ workloads – applied as limits – with no follow-up to check whether those estimates were accurate. Since developers’ estimates will be conservative – who doesn’t want their code to run smoothly? – the absence of a workload-validation cycle introduces inefficiency from the start (see the sketch after this list).
  2. Capability gap: The decoupling of DevOps from physical-hardware planning has created a capability gap. Container orchestration tools often make it harder to trace cause and effect, and engineers who have grown up with cloud computing have never hit hardware limitations or had to delve into the detail of resource tuning. As a result, they can find it harder to identify and interpret causes of low utilisation – or even mistake low utilisation for a goal rather than a problem!
  3. Accountability gap: Maximising development productivity and speed of deployment are now ingrained in DevOps’ psyche. Velocity rules – often overshadowing a latent need, and opportunity, to realise operational efficiencies.
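
To make the process gap concrete, here is a minimal, illustrative sketch in Python – the service names and figures are entirely hypothetical – comparing the CPU each workload reserves (the developer’s estimate, applied as a limit) against what it actually uses:

```python
# Hypothetical workloads: CPU cores reserved at deployment time versus
# the peak actually observed in production. All numbers are made up.
workloads = {
    # name: (cores_reserved, peak_cores_observed)
    "checkout-api":   (4.0, 2.2),
    "search-service": (8.0, 3.8),
    "report-worker":  (2.0, 1.0),
}

total_reserved = sum(reserved for reserved, _ in workloads.values())
total_used = sum(used for _, used in workloads.values())

for name, (reserved, used) in workloads.items():
    print(f"{name}: reserved {reserved:.1f} cores, "
          f"peak {used:.1f} ({used / reserved:.0%} utilised)")

# The orchestrator packs containers by what was reserved, so the bill
# tracks the estimates, not the real demand.
print(f"Fleet: paying for {total_reserved:.1f} cores, using {total_used:.1f} "
      f"– over-provisioned by a factor of {total_reserved / total_used:.1f}")
```

Without a validation cycle, those reservations are set once and never revisited – and the gap between them and real demand is pure waste.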

The impact

The financial implications can be substantial. 

It is not uncommon for us to find companies running cloud-container infrastructure that is over-provisioned by a factor of two – sometimes equating to hundreds of thousands, or even millions, of dollars per year. That is wasted money which could, no doubt, be allocated to far better value-enhancing causes – not least your EBITDA.

The answer

So, how can this be avoided? Here are five actions software firms can take:

  1. Ensure business priorities are aligned: If infrastructure cost matters, ensure corresponding objectives are cascaded to the DevOps team. These should not necessarily be cost-down targets – and certainly not in isolation. Also consider including workload-resource optimisation analysis and recommendations as a deliverable.
  2. Ensure resource-utilisation analysis is part of day-to-day operations: When building new applications, this will need to be run more frequently than with mature code bases, where workloads are more predictable. Like many DevOps tasks, with the right tools and thinking it can be run automatically and with minimal additional effort – a sketch of such a check follows this list.
  3. Address skills deficits: Assess your DevOps engineers’ ability to carry out these tasks and, where necessary, close any skills gaps.
  4. Apply technologies appropriately: While containerisation has benefits in many scenarios, it is not the best approach in all of them, so avoid blanket approaches to deployment. For example, in complex environments the best architecture may be a mix of containerised microservices, instance workloads and serverless approaches.
  5. Get help: If you don’t feel you have the expertise in-house, seek external support. Like many cloud technologies, container orchestration is relatively new and can be difficult to deploy well – certainly for complex applications – without experience and specialist skills.
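
Picking up the second action above, a utilisation check can be scripted against whatever metrics system you already run. The sketch below is a hypothetical illustration – the 50% threshold, 30% headroom and sample data are assumptions, not recommendations or a named tool:

```python
def audit_workload(name, requested_cores, usage_samples,
                   min_utilisation=0.5, headroom=1.3):
    """Flag a workload whose observed usage sits well below its reservation."""
    peak = max(usage_samples)            # recent CPU readings, in cores
    utilisation = peak / requested_cores
    if utilisation < min_utilisation:
        suggested = round(peak * headroom, 1)  # leave some headroom above peak
        return (f"{name}: {utilisation:.0%} utilised – consider lowering "
                f"the request from {requested_cores} to ~{suggested} cores")
    return None  # utilisation is acceptable; nothing to report

# Example run with made-up metrics:
findings = [
    audit_workload("search-service", 8.0, [2.9, 3.4, 3.8, 3.1]),
    audit_workload("checkout-api", 4.0, [2.2, 3.6, 3.9, 3.5]),  # healthy
]
for finding in filter(None, findings):
    print(finding)
```

Run on a schedule, a report like this turns right-sizing from a one-off project into routine hygiene.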

Ask these simple questions

If you are non-technical and this is starting to feel complicated, fear not. 

Cloud is all about matching compute power to demand. If you’re running more than a handful of virtual servers, there are few reasons why you should not be achieving 50% – or, better, 75% – utilisation of what you pay for.
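
For those who do want to see the arithmetic, here is a trivial illustration – the spend and utilisation figures are made up:

```python
annual_compute_spend = 1_200_000   # dollars paid for provisioned capacity
average_utilisation = 0.45         # share of that capacity actually used

productive_spend = annual_compute_spend * average_utilisation
waste = annual_compute_spend - productive_spend
print(f"At {average_utilisation:.0%} utilisation, "
      f"${waste:,.0f} of ${annual_compute_spend:,} is effectively idle")

# The same workload at 75% utilisation needs far less provisioned capacity:
right_sized_spend = productive_spend / 0.75
print(f"Right-sized to 75% utilisation, it costs ~${right_sized_spend:,.0f}")
```

Small shifts in that utilisation figure translate directly into six- or seven-figure swings in cloud spend.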

Ask your engineering executives these questions:

  • Is resource utilisation measured? If so, is it above or below 75%?
  • Is cost efficiency part of your DevOps’ functional and personal objectives?
  • How is this put into effect?

Cost matters

Deploying applications to the cloud has never been easier. Doing it well — efficiently, securely, reliably and at scale — requires solid engineering disciplines.

It is all too easy for the sophistication of today’s cloud services to obscure the basic economics and allow costs to balloon out of control.

Cash is king. Be sure you’re not wasting it.

 
