Spotlight on Tech

Rakuten Cloud white paper explores unleashing the power of AI/ML in the core and edge

by Brooke Frischemeier
Head of Product Management, Unified Cloud
Rakuten Symphony
June 2, 2025
5 minute read

It’s hard to keep up with the rapidly evolving world of AI/ML, so we’re doing our part by sharing a white paper on the use cases and technologies Rakuten Cloud offers for building cloud-native compute infrastructure.

Cloud-native containerization supports applications ranging from complex analytics in centralized data centers to real-time decision-making at the network edge. The ability of AI/ML to support both edge and centralized workflows presents unique challenges and opportunities. While core applications thrive on the robust infrastructure of the cloud, edge AI/ML demands efficient processing and immediate responses closer to the data source.

In the white paper, we explore the key cloud technologies driving AI/ML success, focusing on how the Rakuten Cloud-Native Platform empowers organizations to effectively deploy and manage core and edge workloads.

First, the white paper covers the main use cases that thrive in a cloud-native environment; here are a few of them:

  • Smart cities optimizing traffic flow and enhancing public safety by analyzing data from traffic cameras and sensors.
  • Hospital patients monitored through wearable devices and remote monitoring systems, where edge analytics enable immediate detection of anomalies and timely medical intervention.
  • Predictive maintenance and quality control, using AI/ML to detect equipment malfunctions before they lead to costly downtime.

The white paper explains more about how retailers, the energy sector, logistics companies, telecom and cloud providers, financial services firms, and agriculture can use these technologies to create smarter and more responsive systems.

Edge cloud challenges

Deploying AI/ML at the edge presents unique challenges. Managing a distributed network of edge nodes adds complexity, demanding sophisticated orchestration tools for deployment, monitoring, updates, and scaling.

Data management becomes particularly challenging with the need to distribute and synchronize large datasets across edge devices while ensuring data privacy and security.

Latency and real-time processing requirements further complicate edge deployments. Organizations must strike a careful balance between inference at the edge and training in the core, matching each workload to the available compute resources.

Operational challenges like high availability and fault tolerance are also more pronounced at the edge. Effectively running AI/ML workloads on Kubernetes at the edge requires a comprehensive approach that addresses these challenges through advanced orchestration, optimized AI/ML frameworks, robust data management, and careful infrastructure planning.
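
To make the high-availability point concrete, the sketch below shows one common way to harden an edge service on plain Kubernetes: spreading a deployment's replicas across nodes with a topology spread constraint, so the loss of a single edge node does not take the whole service down. It uses the official Kubernetes Python client; the namespace, application name, and image are hypothetical, and it illustrates the generic Kubernetes mechanism rather than anything specific to Rakuten Cloud.

    # A minimal sketch (assumption: plain Kubernetes, official Python client)
    # of spreading inference replicas across edge nodes for fault tolerance.
    # The namespace, app name, and image are hypothetical.
    from kubernetes import client, config

    def spread_inference_replicas(namespace="edge-ai"):
        config.load_kube_config()  # use load_incluster_config() inside a pod

        labels = {"app": "edge-inference"}
        pod_spec = client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="inference",
                    image="registry.example.com/inference:1.0",  # hypothetical
                )
            ],
            # Keep replicas on distinct nodes so one failed edge node
            # cannot take the whole service down.
            topology_spread_constraints=[
                client.V1TopologySpreadConstraint(
                    max_skew=1,
                    topology_key="kubernetes.io/hostname",
                    when_unsatisfiable="DoNotSchedule",
                    label_selector=client.V1LabelSelector(match_labels=labels),
                )
            ],
        )

        deployment = client.V1Deployment(
            api_version="apps/v1",
            kind="Deployment",
            metadata=client.V1ObjectMeta(name="edge-inference"),
            spec=client.V1DeploymentSpec(
                replicas=3,
                selector=client.V1LabelSelector(match_labels=labels),
                template=client.V1PodTemplateSpec(
                    metadata=client.V1ObjectMeta(labels=labels),
                    spec=pod_spec,
                ),
            ),
        )
        client.AppsV1Api().create_namespaced_deployment(namespace, deployment)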

Cloud-native platform makes a difference

Picking the right cloud-native container platform is key to a successful edge deployment. The platform must support several essential technologies, including awareness of CPU and GPU features, to ensure the right resources are available for the high-performance computing tasks modern AI applications require.

Other requirements include:

  • Automated workload placement, which optimizes resource allocation based on workload needs and streamlines deployment and lifecycle management (see the GPU scheduling sketch after this list).
  • Advanced networking capabilities, which ensure robust and flexible connectivity for AI workloads.
  • Granular multi-tenancy and role-based access control (RBAC), which provide a secure environment for multiple organizations or secure access for network engineers.
  • Comprehensive monitoring and logging, which facilitate efficient management.
  • Highly performant cloud-native storage, which is critical for rapid access to the vast amounts of data needed for training and inference.
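
As one concrete illustration of CPU/GPU awareness and automated placement, the sketch below requests a GPU through Kubernetes' extended-resource mechanism, so the scheduler will only place the pod on a node that advertises a free GPU (typically exposed by a device plugin). It uses the official Kubernetes Python client; the image and node label are hypothetical assumptions, and it shows the generic Kubernetes building block rather than the Rakuten Cloud-Native Platform's own placement logic.

    # A minimal sketch (assumption: plain Kubernetes with a GPU device
    # plugin installed, official Python client). Requesting the extended
    # resource "nvidia.com/gpu" makes the scheduler place the pod only on
    # a node with a free GPU; the image and node label are hypothetical.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        api_version="v1",
        kind="Pod",
        metadata=client.V1ObjectMeta(name="gpu-train"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="trainer",
                    image="registry.example.com/trainer:1.0",  # hypothetical
                    resources=client.V1ResourceRequirements(
                        requests={"cpu": "4", "memory": "16Gi"},
                        limits={"nvidia.com/gpu": "1"},  # GPUs go in limits
                    ),
                )
            ],
            # Optional extra steering toward accelerator nodes; this label
            # is an assumption, not a platform default.
            node_selector={"accelerator": "nvidia-gpu"},
        ),
    )
    client.CoreV1Api().create_namespaced_pod("default", pod)

In practice, a platform that automates this kind of placement lets operators declare intent rather than hand-write resource requests and node labels for every workload.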

A complete cloud-native solution

The Rakuten Cloud software suite addresses these challenges and empowers organizations to realize the full potential of AI/ML, both at the core and the edge.

Rakuten Cloud-Native Platform is an enhanced Kubernetes platform that prioritizes automation and self-service, featuring a user-friendly, intent-driven interface. The platform optimizes workload and storage placement, harmonizes VMs and containers, and offers enhanced networking and storage automation. Its low resource footprint and data optimization capabilities make it ideal for edge deployments. The platform also provides comprehensive observability across cluster, application, service, and infrastructure levels.

Rakuten Cloud-Native Storage provides high-performance block, file, and object storage, supporting any type of AI data. The storage platform is application-aware, managing the entire data lifecycle, including application configuration and metadata.

The Rakuten Cloud-Native Orchestrator offers lifecycle management and observability across multiple operational domains, from bare-metal servers to network functions and applications. It provides a scripting engine that enables large-scale management of any device or appliance, supporting diverse executors and scripting languages. All automation within Rakuten Cloud-Native Orchestrator can be triggered directly or through a policy engine, allowing for the orchestration of complex, multi-domain workflows.

The Rakuten Cloud suite is designed to help organizations overcome the complexities of deploying and managing AI/ML workloads. By focusing on key technology areas like CPU/GPU support, automated resource management, advanced networking, and high-performance storage, the platform empowers businesses to build and deploy sophisticated AI/ML solutions.

The integrated Rakuten Cloud product suite provides a comprehensive and robust platform for managing the entire AI/ML lifecycle, from core to edge.

If you are ready to unlock the transformative power of AI/ML, download our white paper to learn more about the Rakuten Cloud-Native Platform and how it can accelerate your AI/ML initiatives.
