Spotlight on Tech

Google Cloud Next 2025 Partner Talks: Using AI for edge cloud deployment on a massive scale

June 5, 2025 · 5 minute read

Imagine deploying an edge cloud network so vast it touches every corner of the globe, allowing cutting-edge AI to power the future of global organizations, ranging from fast food restaurants to manufacturing sites and everything in between.

That's the reality of Google Distributed Cloud (GDC), and at Google Cloud Next '25, Rakuten Cloud offered an exclusive look under the hood courtesy of Mike Ensor, a Technical Lead at Google who spoke in our partner talk series. In his presentation, he shared how he helped architect GDC for massive-scale deployment.

This is of particular interest to our customers because Rakuten Cloud and Google Cloud work closely together: the two companies are collaborating to bring Rakuten Cloud-Native Storage into GDC.

AI, cloud power massive deployments

Engineering GDC to meet the needs of McDonald's, whose president and CEO was a keynote highlight, took four years. Ensor's partner talk brought insight and real-world experience to the audience, sharing how AI is transforming the deployment of large-scale edge cloud networks.

Challenges of mass deployments

Ensor walked the audience through the challenges and solutions involved in rolling out intelligent systems at this scale. The complexity of managing edge devices across a massive, decentralized network demands not just clever infrastructure, but a fundamental shift in how we treat AI itself.

“AI is just software,” Ensor said. “At the end of the day, it’s just bits and bytes.” With this perspective, AI or machine learning models—whether large language models or classification algorithms—are no different than any other deployable unit of code.

These models require structured release management, CI/CD pipelines, and consistent interfaces. This thinking eliminates much of the mystery around AI deployment and brings it into familiar software development territory.
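To make that concrete, here is a minimal sketch of what a consistent model interface could look like, so that a large language model and a small classifier flow through the same release pipeline. Everything here (the Model protocol, the SentimentClassifier, its toy logic) is illustrative, not part of GDC or Ensor's talk.

```python
from typing import Any, Protocol

class Model(Protocol):
    """Hypothetical contract every deployable model artifact satisfies,
    so different model types share one deployment path."""
    name: str
    version: str  # semantic version of the artifact, e.g. "2.1.0"

    def predict(self, inputs: Any) -> Any:
        """Run inference on one batch of inputs."""
        ...

class SentimentClassifier:
    """Toy classifier that fulfills the same interface an LLM wrapper would."""
    name = "sentiment-classifier"
    version = "1.4.2"

    def predict(self, inputs: list[str]) -> list[str]:
        # Placeholder logic; a real model would load trained weights.
        return ["positive" if "good" in text else "negative" for text in inputs]

model: Model = SentimentClassifier()
print(model.predict(["good service", "long wait"]))  # ['positive', 'negative']
```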

Ensor emphasized the importance of versioning AI models the same way we manage software. Minor updates might include new classifiers; major versions may represent significant architectural changes or deprecations. Even patches for minor defects follow the same path as they would in any software release cycle. This rigorous management becomes indispensable when edge devices need to operate autonomously and reliably—sometimes without a stable internet connection.
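As a rough sketch of that release discipline, an edge agent could map each kind of version bump to a different rollout action. The policy below is invented for illustration, not GDC's actual rules:

```python
# Minimal sketch assuming models carry semantic versions, as Ensor describes.
# The mapping of bump type to action is an invented example policy.

def parse_version(tag: str) -> tuple[int, int, int]:
    major, minor, patch = (int(part) for part in tag.split("."))
    return major, minor, patch

def upgrade_action(current: str, candidate: str) -> str:
    """Mirror a software release cycle: patches auto-apply, minor releases
    roll out in stages, major releases wait for explicit approval."""
    cur, cand = parse_version(current), parse_version(candidate)
    if cand <= cur:
        return "skip"               # nothing newer to deploy
    if cand[0] > cur[0]:
        return "hold-for-approval"  # architectural change or deprecation
    if cand[1] > cur[1]:
        return "staged-rollout"     # e.g. new classifiers added
    return "auto-apply"             # patch for a minor defect

print(upgrade_action("2.3.1", "2.3.2"))  # auto-apply
print(upgrade_action("2.3.1", "3.0.0"))  # hold-for-approval
```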

Human intervention does not work at scale

The key to making this all work at scale is automation. By implementing GitOps-style workflows, configurations can be defined in a central repository, from which edge locations asynchronously pull updates. This enables a pull-based model that avoids the bottlenecks and fragility of push-based configurations.
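The pull side of that workflow can be sketched as a small polling agent. The endpoint, interval, and apply step below are hypothetical stand-ins, not GDC internals:

```python
import hashlib
import time
import urllib.request

CONFIG_URL = "https://config.example.com/fleet/desired.yaml"  # hypothetical endpoint
POLL_INTERVAL_SECONDS = 60

def apply_config(config: bytes) -> None:
    # Stand-in for the local apply step (e.g. writing manifests to disk).
    print(f"applying {len(config)} bytes of configuration")

def pull_loop() -> None:
    """Each edge node pulls configuration on its own schedule, so no
    central server ever has to push to (or even reach) every node."""
    last_digest = None
    while True:
        try:
            with urllib.request.urlopen(CONFIG_URL) as response:
                config = response.read()
            digest = hashlib.sha256(config).hexdigest()
            if digest != last_digest:
                apply_config(config)
                last_digest = digest
        except OSError:
            pass  # offline: keep running on the last applied configuration
        time.sleep(POLL_INTERVAL_SECONDS)
```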

This approach also supports the concept of the desired state, a foundational element for managing distributed systems at scale. Each edge node is designed to self-correct and reconcile to a predefined configuration, even in the face of network outages or drift. Ensor explained how this supports critical operations where systems must remain operational for days during an internet outage.
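Reconciliation itself can be pictured as a loop that compares actual state with desired state and corrects any drift. The state model here is deliberately simplified and invented for illustration:

```python
# Simplified, invented sketch of desired-state reconciliation: the node
# converges back to the declared state regardless of how it drifted.

desired_state = {"model": "classifier:2.1.0", "replicas": 2}

def read_actual_state() -> dict:
    # Stand-in for inspecting the node (running services, loaded models).
    return {"model": "classifier:2.0.3", "replicas": 1}

def reconcile() -> None:
    actual = read_actual_state()
    for key, wanted in desired_state.items():
        if actual.get(key) != wanted:
            # Stand-in for the corrective action (redeploy, scale, restart).
            print(f"correcting drift on {key!r}: {actual.get(key)!r} -> {wanted!r}")

reconcile()
```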

AI models play a unique role in this infrastructure. Ensor said the models are typically trained in the cloud—and he often uses Google’s Vertex AI pipelines. They are versioned and stored in Google Cloud Storage buckets.
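Retrieving such an artifact might look like the following, using the google-cloud-storage client library; the bucket name and object layout are made up, since the talk did not specify them:

```python
# Requires: pip install google-cloud-storage (and configured credentials).
from google.cloud import storage

def download_model(bucket_name: str, model_name: str, version: str, dest: str) -> None:
    """Fetch one immutable, versioned model artifact from a GCS bucket."""
    client = storage.Client()
    blob = client.bucket(bucket_name).blob(f"models/{model_name}/{version}/model.bin")
    blob.download_to_filename(dest)

# Hypothetical bucket and artifact names:
download_model("example-model-registry", "sentiment-classifier", "2.1.0", "/tmp/model.bin")
```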

Edge devices then retrieve and deploy these models locally, integrating them into inference services that continue to operate with minimal external dependencies. Importantly, a fraction of inference data—configurable depending on bandwidth and sensitivity—is sent back to the cloud. This feedback loop enables continual model refinement and monitoring, improving performance over time.
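One simple way to implement that configurable fraction is random sampling at inference time; the fraction and queue below are illustrative, not how GDC does it:

```python
import random

FEEDBACK_FRACTION = 0.01  # tune per site for bandwidth and data sensitivity

def maybe_queue_feedback(inputs: list, prediction: list, upload_queue: list) -> None:
    """Sample a configurable fraction of inference traffic for later upload
    to the cloud, feeding monitoring and future retraining."""
    if random.random() < FEEDBACK_FRACTION:
        upload_queue.append({"inputs": inputs, "prediction": prediction})

queue: list = []
maybe_queue_feedback(["good service"], ["positive"], queue)  # ~1% of calls enqueue
```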

Don’t forget taxonomy and labeling

Beyond infrastructure, Ensor stressed the critical importance of taxonomy and labeling in managing complex systems. From software artifacts to deployment strategies, having a consistent, well-documented naming convention and hierarchy enables better decision-making.

Labels become the pivot points, or facets, for targeting updates, segmenting environments, and managing risk. A thoughtful taxonomy provides clarity at scale, whether categorizing stores by geography or deployment readiness.
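In code, labels as pivot points might look like a simple selector over a labeled fleet; the stores and label keys here are invented for the example:

```python
# Invented example of labels as facets for targeting a rollout.
stores = [
    {"id": "store-001", "labels": {"region": "emea", "ring": "canary"}},
    {"id": "store-002", "labels": {"region": "emea", "ring": "stable"}},
    {"id": "store-003", "labels": {"region": "apac", "ring": "canary"}},
]

def select(fleet: list[dict], **wanted: str) -> list[str]:
    """Return the ids of stores whose labels match every requested facet."""
    return [
        store["id"]
        for store in fleet
        if all(store["labels"].get(key) == value for key, value in wanted.items())
    ]

# Target an update at EMEA canary sites only.
print(select(stores, region="emea", ring="canary"))  # ['store-001']
```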

Conclusion

For organizations looking to deploy AI at the edge, Ensor’s message is clear: treat AI like any other software. Build automation into the foundation. Embrace pull-based configuration and desired state management. And never underestimate the power of a well-structured label. Ensor’s presentation can be watched at this link.
