A cloud-native approach enables an organization to do more with less: to be agile and to build highly available, resilient applications. An organization's decision to adopt cloud-native infrastructure is as much a business strategy as a technical one.
Cloud-native / Microservices: A shift to microservices decomposes an application into loosely coupled, API-first services that work together to deliver functionality to end users. If one component changes, only that component needs to be rebuilt and redeployed, rather than the entire application, as was the case with monolithic applications.
Cloud infrastructure to support the applications: A cloud-native application needs an underlying infrastructure that scales on demand, giving applications the flexibility to acquire resources reliably at run time. On-demand compute provided by cloud technologies (Docker, Kubernetes, hypervisors, etc.) is a critical factor in being truly cloud-native.
Orchestration of the cloud-native applications: In a cloud-native approach, applications share the underlying cloud infrastructure and consume resources from a common pool. Hence, the model calls for an orchestration layer (such as Kubernetes) that can manage application orchestration and the Day 0 / Day 2 lifecycle automatically. This includes the capability to automatically place applications on the right infrastructure to meet their resource requirements, and to manage Day 2 lifecycle operations such as scale-in and scale-out, snapshot, restore, backup, recovery and more.
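The automatic scale-out and scale-in mentioned above can be sketched with the core rule documented for Kubernetes' Horizontal Pod Autoscaler, desired = ceil(current × currentMetric / targetMetric). The function and sample numbers below are illustrative, not taken from the text:

```python
import math

def desired_replicas(current_replicas: int,
                     current_utilization_pct: int,
                     target_utilization_pct: int) -> int:
    """Scaling rule documented for the Kubernetes Horizontal Pod Autoscaler:
    desired = ceil(current * currentMetric / targetMetric).
    Utilizations are expressed as integer percentages."""
    return math.ceil(current_replicas * current_utilization_pct
                     / target_utilization_pct)

# Scale out: 4 replicas running hot at 90% CPU against a 60% target.
print(desired_replicas(4, 90, 60))  # -> 6
# Scale in: 6 replicas idling at 30% CPU against a 60% target.
print(desired_replicas(6, 30, 60))  # -> 3
```

In practice the autoscaler controller runs this loop continuously against live metrics; the sketch only shows the arithmetic behind a single decision.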
Process: Process defines how cloud-native, containerized software is built, tested and deployed. DevOps enables the shift to an agile process of continuous development and continuous build.
A cloud-native strategy, therefore, is a service-oriented, API-driven, containerized application model built and deployed with DevOps and an orchestration layer to automatically manage the different containers. In addition, the underlying cloud needs to scale on demand and deliver on the SLAs and KPIs for business-critical services to run uninterrupted.
With such a fundamental shift in approach, the benefits a business can expect to achieve include:
In technical terms, the performance of a Kubernetes platform is measured by optimal resource utilization and the ability to run workloads on shared infrastructure without network latency or storage bottlenecks. When capacity planning and capacity management are done right, a cloud-native strategy ensures that operational expenses do not grow in proportion to the application footprint.
For example, at any point in time, the cluster utilization dashboard of a well-designed cluster shows an equal amount of spare capacity on every node, with all nodes running at 70%-80% of their capacity, and the performance of the applications remains optimal in spite of the shared infrastructure.
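Those two properties of a healthy cluster can be checked mechanically. A minimal sketch, where the 70%-80% band comes from the text but the 5% spare-capacity tolerance and the sample utilization figures are illustrative assumptions:

```python
def cluster_health(node_utilizations, low=0.70, high=0.80):
    """Check two properties of a well-designed cluster:
    1) every node runs within the target utilization band, and
    2) spare capacity is spread roughly evenly across nodes.
    Returns (in_band, balanced)."""
    in_band = all(low <= u <= high for u in node_utilizations)
    spare = [1.0 - u for u in node_utilizations]
    balanced = max(spare) - min(spare) <= 0.05  # assumed tolerance
    return in_band, balanced

# A balanced four-node cluster (hypothetical readings):
print(cluster_health([0.72, 0.75, 0.76, 0.74]))  # (True, True)
# One hot node and one cold node: neither property holds.
print(cluster_health([0.95, 0.40, 0.75, 0.74]))  # (False, False)
```

A real deployment would read these figures from the cluster's metrics pipeline rather than hard-coded lists; the check itself stays the same.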
Cloud-native technology is an alternative to hypervisor-based virtualization because it promotes microservices. The cloud infrastructure on which cloud-native workloads run is expected to be truly on demand and a lower-cost alternative to proprietary technologies that dedicate servers to individual workloads.
The primary driver for lowering costs is utilization: a cloud-native deployment should be set up for maximum utilization, which means building an infrastructure and an operational methodology that are genuinely on demand.
Therefore, the second focus of a cloud-native strategy should be to beat the upfront capex and running opex of traditional methods by a significant margin (30% at a minimum).
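The link between utilization and cost can be made concrete with back-of-the-envelope arithmetic. All numbers below are hypothetical, chosen only to illustrate the mechanism:

```python
def capacity_needed(total_demand_cores: float, avg_utilization: float) -> float:
    """Capacity that must be provisioned to serve a steady demand at a given
    average utilization (illustrative model, not a sizing tool)."""
    return total_demand_cores / avg_utilization

# Hypothetical example: 200 cores of aggregate application demand.
dedicated = capacity_needed(200, 0.25)  # siloed servers, mostly idle -> 800 cores
shared = capacity_needed(200, 0.75)     # pooled cloud-native cluster -> ~267 cores
savings = 1 - shared / dedicated
print(f"{savings:.0%}")  # -> 67%
```

The exact percentage depends entirely on the utilization levels assumed; the point is that raising average utilization directly shrinks the capacity, and hence the cost, required for the same demand.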
How can we measure if the cloud-native approach is yielding cost benefits?
Another aspect of cost is how automated the cloud-native network is. Hyper-automation of Day 0 and Day 2 capabilities, at scale, is required to control the operational expenses of a highly dynamic, growing cloud-native network.
Availability, in cloud-native terminology, is the uptime of the services running on the cloud. Services are built with sufficient high availability, at both the application and infrastructure levels, to guarantee the desired SLAs.
Service availability is directly related to the availability and uptime of the cloud infrastructure. Typically, the cloud platform is measured against an SLA of 99.99% availability. This uptime indicates how well the business is prepared to host critical services and guarantees that, in the event of a disaster, the company can come back up and continue to serve its customers.
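A 99.99% SLA translates into a concrete downtime budget, which is straightforward to compute:

```python
def downtime_budget_minutes(sla: float, days: int = 365) -> float:
    """Maximum minutes of downtime per period permitted by an availability SLA."""
    return (1.0 - sla) * days * 24 * 60

# "Four nines" allows roughly 52.6 minutes of downtime per year,
# and about 4.3 minutes per 30-day month.
print(round(downtime_budget_minutes(0.9999), 1))      # -> 52.6
print(round(downtime_budget_minutes(0.9999, 30), 2))  # -> 4.32
```

Framing the SLA as a budget of minutes makes it easier to judge whether a given recovery procedure (failover, restore from backup) fits within the promised uptime.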
As an ongoing discourse, we will discuss how to design, deploy and operate a cloud-native network. Then, we will explore what can go wrong in a cloud-native strategy and provide a checklist to support failsafe cloud-native deployments. Finally, we will explore cloud-native application design principles, dimensioning and cloud operations as three main pillars for realizing meaningful ROI from your cloud-native investment.