Edge cloud enables operators, enterprises, application developers and content providers to deploy predictable cloud-computing capabilities at the network’s edge, in the immediate proximity of their businesses and mobile networks. As the industry moves into the subsequent phases of 5G rollout and Industry 4.0, “stateful edge” will be a key enabler for enterprises and operators that need to deliver a range of new services efficiently at the edge, with lower latency, strict Quality of Service (QoS) and high availability.
Before we discuss stateful, let’s first give an example of simpler, stateless applications. “Stateless” applications typically provide a single function: a print server, a basic calculator, or an old-school web search. A stateless transaction does not need to understand or retain any information about a prior transaction to perform the current one. In other words, no preexisting conditions or states impact its function. A “stateful” application, on the other hand, such as a bank transaction, does care about preexisting conditions or states. It therefore needs a “persistent” relationship with its data and users as it scales, migrates, stops, starts and heals.
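The distinction can be sketched in a few lines of code. This is an illustrative example only, with hypothetical names: the stateless function depends solely on its current inputs, while the stateful object must carry state forward from one transaction to the next, and that state must survive wherever the application runs.

```python
def add(a: float, b: float) -> float:
    """Stateless: the result depends only on the current inputs.
    No prior call affects this one."""
    return a + b


class BankAccount:
    """Stateful: every transaction depends on the state left by prior ones."""

    def __init__(self, balance: float = 0.0):
        # Persistent state that must survive scaling, migration and restarts.
        self.balance = balance

    def deposit(self, amount: float) -> float:
        self.balance += amount
        return self.balance

    def withdraw(self, amount: float) -> float:
        # A preexisting condition (the current balance) gates this transaction.
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance
```

Calling `add(2, 3)` twice always yields the same answer; calling `withdraw(30)` twice does not, because each call changes the state the next one sees.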
The list of stateful edge applications is growing as we see innovation in 5G, the Internet of Things (IoT), Industry 4.0, self-driving vehicles, smart cities, video analytics, customized content delivery, security and self-healing networks.
Stateful edge inherits all of the challenges we associate with orchestrating numerous edge clouds at scale.
While all of these challenges are well known, stateful edge applications pose additional ones that stateless applications do not.
It is no secret that the vast majority of applications are being rolled out on Kubernetes (K8s). What is not as well known is that vanilla K8s was not designed for stateful applications; it was originally targeted at simple web-scale workloads. Stateful applications demand more: they need not only persistent relationships between users and applications but also persistent relationships between applications and the data they use.
In the past, relationships between data and applications were simple. Prior to K8s, there was a direct mapping between data and an application running on bare metal or in a virtual machine. But with K8s innovation comes added complexity. K8s applications are broken up into many micro-services and mapped to different containers, each with a different relationship to the data, as they grow, scale, migrate and heal. This means the data has a state, but the condition of your K8s containers and micro-services also has a state that can change the moment they are deployed.
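Kubernetes does provide basic primitives for this persistence. For example, a StatefulSet gives each replica a stable identity (`db-0`, `db-1`, …) and its own PersistentVolumeClaim, so the pod-to-data mapping survives rescheduling. A minimal, illustrative sketch (all names and sizes are hypothetical):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db          # headless service providing stable per-pod DNS names
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:    # each replica gets its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```

Even with this, as the article goes on to explain, the platform itself still knows very little about what is on the other side of that volume.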
To complicate the problem, most clouds are running off their legacy storage, where K8s applications and their data are mapped to one another via a “simple” Container Storage Interface (CSI). On one end is the data, for example a storage array with a CSI driver, which sees only a generic connection to some downstream node. The array controller does not see or even comprehend the application or the K8s container constructs. On the other end, the application sees only a generic volume and has no notion of what kinds of media make up the storage, where they are located, or how they fit together in the system.
It is all very generic, as neither side of the CSI has the visibility to understand the complexity of the other side. This not only severely limits efficiency and performance but also constrains how one can configure the combined solution for data protection, recovery, quality of service, workload-to-storage affinities and lifecycle automation. Each of these affects user experience, platform efficiency, and data protection and recovery automation. The disparity is even more pronounced when one attaches to a volume that spans multiple media types. At the very least, vanilla K8s and a generic CSI hamstring the service designer and the user.
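The generic contract described above is visible in the manifests themselves. In this illustrative fragment (driver name and parameters are hypothetical), the application side can express only a size and a class name; everything about media type, locality and layout is hidden behind the provisioner, and the array sees only an opaque volume request in return:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: csi.example.com   # an opaque driver endpoint, hypothetical name
parameters:
  tier: "ssd"                  # opaque key/values the application never sees
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast
  resources:
    requests:
      storage: 20Gi            # essentially all the application can express
```

Neither side of this exchange carries any notion of the application's topology, its micro-service relationships, or its changing state.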
To sum things up to this point, while cloud-native K8s allows one to scale and enhance performance by leaps and bounds, the relationship between storage and application becomes more complex when it comes to data protection.
All of these components relate to storage in their own way, and those relationships are constantly changing.
Rakuten Cloud-Native Storage (CNS) understands, auto-learns and auto-adapts to all of these application and data permutations. Backups, snapshots, cloning and DR are all application- and K8s-state aware.
Some other vendors claim K8s-application awareness, but they require manually intensive tagging and marking over the lifetime of the application, along with Kubernetes expertise. With CNS, we auto-ingest the application from its Helm chart, YAML file, or operator, then auto-discover it, auto-monitor it and adapt to its changes over its entire lifecycle. Fully automated and far easier to use, CNS needs no Kubernetes expertise.
CNS provides programmable pre- and post-processing policies that auto-adjust to target environments and can even renumber IP addresses when cloning so there are no network clashes. Furthermore, there is automated storage placement based on easy-to-configure policies and IOPS-based storage QoS.
CNS includes industry-leading, software-defined storage that supports a comprehensive set of application-aware services, including snapshots, clone, backup, encryption and business continuity. All data services are application-aware, tracking not only data storage but metadata and the ever-changing Kubernetes application config, protecting a wide range of datasets for “application-consistent” disaster recovery of complex network- and storage-intensive stateful applications.
Rakuten Cloud's best-of-breed Kubernetes-based Cloud-Native Platform combines one-click application onboarding with declarative, context-aware workload placement, pinning your applications and services to automated policies. Just tell CNP what resources and supporting applications your service needs, and it will auto-discover and configure them for you, as per your policy, over the entire automated lifecycle of the service, providing all of the storage and IP address persistency needed. Add, stop, start, heal and migrate with automated ease.
With CNP, resources are modeled on numerous NUMA-aware options, including memory, CPU cores, Huge Pages, overlay/underlay networks, and redundancy, applying affinity and anti-affinity rules as needed. This also extends into the compute and storage placement and locality, with persistent addressing.
Rakuten Cloud-Native Orchestrator orchestrates and manages the lifecycle of any workflow, including bare-metal provisioning, cloud platform instantiation, network function (NF) lifecycle management, network service (NS) lifecycle management, and Methods of Procedure (MOPs). All of these can be triggered through a policy engine. Orchestrator’s automated workflows support cloud-native network functions (CNF), virtual network functions (VNF), and third-party physical network functions (PNF) simultaneously. All of this comes with a full-stack observability suite and planning tools.
Orchestrator provides intuitive, context-aware lifecycle management for your NFs, services, third-party applications and Kubernetes cloud platform. It also integrates those workflows with your physical platforms, like bare-metal servers and third-party appliances.
Rakuten Cloud-Native Platform solves all of the significant stateful edge cloud concerns, unifying the edge with the core.
For more information on our stateful edge solution advantages and other related automation and orchestration solutions, check out our Rakuten Cloud page here.