Spotlight on Tech

Simplifying the process of building and managing Edge AI Clouds

by
Brooke Frischemeier
Head of Product Management, Unified Cloud
Rakuten Symphony
February 19, 2025
3-minute read

Some things are worth doing even if they are hard. That might be great advice for your kids, but it's terrible advice when it comes to building out AI edge networks.

Why? Because AI/ML edge networks need to be designed for seamless remote operation with advanced management and orchestration baked in from the start. Without that, you’re setting yourself up for unnecessary complexity and inefficiency.

That was exactly the point I shared during my presentation at KubeCon + CloudNativeCon North America, titled “Accelerating AI innovation with Rakuten Cloud-Native Platform and Cloud-Native Storage.”

The ability to process data at the edge, rather than relying on centralized cloud infrastructure, can significantly enhance AI-driven applications, particularly in industries like retail, healthcare, and telecommunications.

One of the primary enablers of AI deployment at the edge is a complete virtualization infrastructure that ensures seamless operations. Kubernetes plays a vital role in managing containerized applications, but its success relies on the underlying physical and software infrastructure.

In summary, here are some essential functions and capabilities required for edge AI (watch the session replay for more details):

  • Automating edge operations with policies: Policy-driven automation is fundamental to managing complex AI workloads at the edge; policies capture placement constraints such as affinity and anti-affinity rules, as well as distinct database configurations (sketched below).
  • Container backup for AI differs from virtualization: Backing up AI workloads running in Kubernetes means capturing storage state, the entire application state, and the Kubernetes configuration together, not just a disk image (sketched below).
  • Comprehensive remote management: AI deployments need a single management workflow for the entire infrastructure stack, from bare metal servers to networking services.
  • Workload placement beyond NUMA: Placement goes further than aligning a container with the right CPUs and memory on a single server; deploying AI means orchestrating an entire AI pipeline, including supporting network functions and storage.
  • High-performance storage remains a critical factor: AI applications may require block, file, and object storage, depending on the specific workload (sketched below).
  • Every installation needs multi-tenancy: Even organizations that don't share their networks with others need multi-tenancy, for example to trial software or configuration changes in isolation before rolling them out (sketched below).
  • Networking is the unsung hero: AI applications are inherently distributed, requiring robust networking solutions to maintain high-performance communication across multiple nodes.
  • Orchestration battles complexity: A comprehensive orchestration and monitoring framework is required for application deployment and ongoing monitoring, logging, and policy enforcement.
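
To make the first bullet concrete, here is a minimal sketch of a placement policy expressed through the Kubernetes API, using the official Python client: an anti-affinity rule that keeps replicas of an inference service on separate nodes. The names (ai-inference, edge-ai, the image) are illustrative assumptions, not details from the session.

```python
from kubernetes import config, client

config.load_kube_config()  # assumes kubeconfig access to the edge cluster

deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "ai-inference"},
    "spec": {
        "replicas": 3,
        "selector": {"matchLabels": {"app": "ai-inference"}},
        "template": {
            "metadata": {"labels": {"app": "ai-inference"}},
            "spec": {
                "affinity": {
                    "podAntiAffinity": {
                        # Hard rule: never place two replicas on the same node.
                        "requiredDuringSchedulingIgnoredDuringExecution": [{
                            "labelSelector": {"matchLabels": {"app": "ai-inference"}},
                            "topologyKey": "kubernetes.io/hostname",
                        }]
                    }
                },
                "containers": [{
                    "name": "inference",
                    "image": "registry.example.com/ai-inference:latest",  # illustrative
                }],
            },
        },
    },
}

client.AppsV1Api().create_namespaced_deployment(namespace="edge-ai", body=deployment)
```

The same mechanism expresses affinity rules, such as co-locating a model server with the database configuration it depends on.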
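
For the backup bullet, the sketch below shows why Kubernetes-aware backup differs from imaging a VM: a consistent restore needs both the Kubernetes objects and the volume data. It dumps a namespace's Deployment manifests and requests CSI volume snapshots; the namespace name is again an assumption, and the coordination a real backup product adds (quiescing the application so its state is consistent) is deliberately left out.

```python
import yaml
from kubernetes import config, client

config.load_kube_config()
ns = "edge-ai"  # illustrative namespace

# 1. Kubernetes configuration: serialize the workload objects themselves.
apps = client.AppsV1Api()
manifests = [d.to_dict() for d in apps.list_namespaced_deployment(ns).items]
with open("backup-manifests.yaml", "w") as f:
    yaml.safe_dump_all(manifests, f)

# 2. Storage state: request a CSI snapshot of every PVC in the namespace.
core = client.CoreV1Api()
crds = client.CustomObjectsApi()
for pvc in core.list_namespaced_persistent_volume_claim(ns).items:
    snapshot = {
        "apiVersion": "snapshot.storage.k8s.io/v1",
        "kind": "VolumeSnapshot",
        "metadata": {"name": f"{pvc.metadata.name}-snap"},
        "spec": {"source": {"persistentVolumeClaimName": pvc.metadata.name}},
    }
    crds.create_namespaced_custom_object(
        "snapshot.storage.k8s.io", "v1", ns, "volumesnapshots", snapshot
    )
```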
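
For the storage bullet, a minimal sketch of how one workload can ask for raw block storage while another gets a filesystem: the PersistentVolumeClaim API selects between the two with volumeMode. The fast-nvme storage class and the sizes are hypothetical.

```python
from kubernetes import config, client

config.load_kube_config()

# Raw block device for latency-sensitive scratch space; switching
# volumeMode to "Filesystem" (the default) yields file storage instead.
pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "training-scratch"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "volumeMode": "Block",
        "storageClassName": "fast-nvme",  # hypothetical class name
        "resources": {"requests": {"storage": "500Gi"}},
    },
}

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="edge-ai", body=pvc
)
```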
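
And for the multi-tenancy bullet, a minimal sketch of carving out an isolated staging tenant on the same edge cluster for pre-deployment trials: a namespace plus a ResourceQuota that caps what the trial can consume. The names and limits are illustrative.

```python
from kubernetes import config, client

config.load_kube_config()
core = client.CoreV1Api()

# An isolated namespace acts as the trial tenant.
core.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="staging"))
)

# A quota keeps the trial from starving production on the same cluster.
quota = client.V1ResourceQuota(
    metadata=client.V1ObjectMeta(name="staging-quota"),
    spec=client.V1ResourceQuotaSpec(
        hard={
            "requests.nvidia.com/gpu": "1",   # at most one GPU for trials
            "requests.memory": "16Gi",
            "pods": "20",
        }
    ),
)
core.create_namespaced_resource_quota(namespace="staging", body=quota)
```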

Successful AI deployment at the edge requires a holistic approach that encompasses infrastructure, automation, networking, and policy-driven orchestration. With the right tools, enterprises can deploy AI applications efficiently, with fewer errors and minimal manual intervention.

Watch the full booth session video to see how policy-driven automation simplifies edge AI operations and unlocks new possibilities.
