Taiwania Publishes its cloud edge computing vision at KubeCon, a premier conference on the emerging Kubernetes ecosystem…

KubeCon + CloudNativeCon Europe 2021, the flagship Kubernetes conference held in May, drew more than 27,000 attendees. Cheng Wu, our Techfund GP and a well-known, successful serial entrepreneur, shared his insights on future trends in edge cloud technology. Here’s his article:

 

Emerging Edge Cloud and Computing Infrastructure

Global web services are largely served from public clouds today. As they reach the cloud edge, they are handed over to telco wireline and mobile wireless access networks for delivery to end users.

While this approach has been adequate for best-effort service delivery, mission-critical services — with stringent latency requirements in the range of milliseconds or lower — cannot work this way, since cloud-to-client latency can easily exceed tens of milliseconds on average and may fluctuate wildly depending on network load. New services that require ultra-low latency — such as autonomous driving, massive machine-type communications for smart infrastructure and manufacturing, and the Internet of Things (IoT) — have led to the emergence of the edge cloud and its associated computing infrastructure, which provide accelerated delivery with machine learning intelligence.

 

Although there is a great deal of consensus that the cloud edge is where more intelligent processing is needed, there is very little consensus on where to build these edge infrastructures and whether they are synonymous with 5G Mobile Edge Computing (MEC). Further, while 5G is expected to play a pivotal role in connecting edge and client devices to the cloud, many forms of wireline and wireless radio technologies are expected to play a role in the broad context of the Internet of Things (IoT).

 

Clouds Have Reshaped the Internet

The advent of Web 2.0 resulted in the creation of a consumer-to-service and service-to-service delivery overlay paradigm on top of the underlying node-to-node physical transport internetwork. However, Web 2.0 was constructed largely on public cloud infrastructure, which sports a global footprint within a single administrative domain such as AWS. The role of the public internet has been relegated to that of the new “access networks,” alongside 5G and beyond. This is the starting point of the cloud edge.

 

Separately and independently, 5G — with its low latency, network slicing and Network Function Virtualization (NFV) — is intended to create a new dynamic mobile access network of equal intelligence to the public cloud, with an edge network that is physically closer to the client. The mobile edge plays the role of a content or data cache and functions as a proxy between the client and the original cloud source.

 

Conceptually, the edge cloud can be viewed as a disaggregated virtual Internet compute platform specialized for specific applications, where the edge nodes play the role of a cache for an Internet “backplane” with a cloud “core.” Examples of applications that can benefit from edge nodes include VR/AR, IoT, and healthcare devices.
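To make this cache-and-core relationship concrete, below is a minimal sketch (in Go) of an edge node acting as a cache in front of a hypothetical cloud origin; the origin URL, the in-memory cache, and the handler are illustrative assumptions rather than any particular vendor’s design.

```go
package main

import (
	"io"
	"log"
	"net/http"
	"sync"
)

// originURL is a hypothetical cloud "core" origin; a real edge node would be
// configured with the actual upstream for its application.
const originURL = "https://cloud-origin.example.com"

// edgeCache is a minimal in-memory cache keyed by request path.
// A production edge node would add TTLs, eviction, and size limits.
type edgeCache struct {
	mu    sync.RWMutex
	items map[string][]byte
}

func (c *edgeCache) get(key string) ([]byte, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()
	b, ok := c.items[key]
	return b, ok
}

func (c *edgeCache) put(key string, val []byte) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.items[key] = val
}

func main() {
	cache := &edgeCache{items: make(map[string][]byte)}

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Serve from the edge cache when possible (the "cache" role).
		if body, ok := cache.get(r.URL.Path); ok {
			w.Write(body)
			return
		}
		// Cache miss: fetch from the cloud core over the Internet "backplane".
		resp, err := http.Get(originURL + r.URL.Path)
		if err != nil {
			http.Error(w, "origin unreachable", http.StatusBadGateway)
			return
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		if err != nil {
			http.Error(w, "origin read failed", http.StatusBadGateway)
			return
		}
		cache.put(r.URL.Path, body)
		w.Write(body)
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```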

 

The Rise of Edge Cloud and Edge Inference

Edge clouds are not merely the edge of public clouds. Instead, they are clouds in their own right, built for the purpose of creating all-service edge intelligence. While public clouds have matured to become an extension of enterprise data centers from application and operational perspectives, edge clouds are likely to be constructed as third-party services over mobile operators’ mobile edge infrastructure, or over traditional telcos’ access networks and central offices. Further, edge clouds are conceptually small edge data centers, likely with peer-to-peer connectivity and service roaming (or federation) coordination among them, and they are likely to operate not independently but together — to deliver the complex mesh services of the future, with machine learning inference intelligence.

 

This leads to the profound challenge of integrating two disparate cloud architectures to achieve seamless end-to-end services while optimizing end-to-end traffic flows. From a traffic management perspective, the separation of the edge cloud and the public cloud into two distinctly different autonomous systems makes true end-to-end flow optimization a profound challenge — primarily due to the lack of cross-cloud flow visibility and intelligence within each respective cloud. In addition, traffic characteristics within each cloud also differ significantly. While public clouds can assume quasi-static intra-cloud traffic, immune from edge- and client-related surges, edge clouds must deal with on-demand flash-crowd traffic and service diversity — which requires sophisticated edge inference intelligence with application QoS-awareness.
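As a rough illustration of application QoS-awareness at the edge, the sketch below steers flows to the edge or the cloud core based on an assumed latency class and current edge load; a real edge would infer the class with machine learning rather than take it as a given, and the thresholds here are placeholders.

```go
package main

import "fmt"

// serviceClass is an assumed, simplified QoS label; a real edge would derive
// it from application signatures or learned traffic features.
type serviceClass int

const (
	ultraLowLatency serviceClass = iota // e.g. autonomous driving control
	lowLatency                          // e.g. AR/VR rendering
	bestEffort                          // e.g. bulk content, backups
)

// placement decides where a flow should be terminated, reflecting the idea
// that edge clouds absorb latency-sensitive and flash-crowd traffic while
// quasi-static flows can ride through to the public cloud core.
func placement(class serviceClass, edgeLoad float64) string {
	switch {
	case class == ultraLowLatency:
		return "edge" // must be served locally regardless of load
	case class == lowLatency && edgeLoad < 0.8:
		return "edge" // serve locally while headroom remains
	default:
		return "cloud-core" // shed best-effort and overflow traffic upstream
	}
}

func main() {
	fmt.Println(placement(ultraLowLatency, 0.95)) // edge
	fmt.Println(placement(lowLatency, 0.5))       // edge
	fmt.Println(placement(bestEffort, 0.1))       // cloud-core
}
```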

Decentralized Web 3.0 Services and Multiplicity of Source (Origin)

Unlike Web 2.0, which gave birth to ultra-large centralized websites that can handle billions of users via aggregation, Web 3.0 promises to create a new Internet service order that is decentralized — while balancing the needs for privacy and data monetization. This is made possible by evolving the World Wide Web from an address-based scheme to a name-based binding scheme for services and resources, using new standards such as the InterPlanetary File System (IPFS) (to allow peer-to-peer content transfer), blockchain (to decentralize transactional execution) and Solid (Social Linked Data, a proposed set of conventions and tools for building decentralized social applications based on Linked Data principles).

 

Solid is modular and extensible, and it relies as much as possible on existing W3C standards and protocols. It decouples content from applications, enabling a content-agnostic application infrastructure such as the edge cloud described above. Perhaps the most profound change these new technologies bring is that objects will be accessed and resolved by name and can reside in multiple locations — effectively creating a multiplicity of source.
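The following sketch illustrates the name-based binding idea and the resulting multiplicity of source: an object’s name is derived from its content, so any of several holders can serve it. The plain SHA-256 digest and the example hostnames are stand-ins; real IPFS names are multihash-encoded CIDs resolved through a distributed hash table.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// nameFor derives a content-based name from the bytes themselves, so the
// object can be verified and fetched from any holder. Real IPFS encodes
// this as a multihash CID; a raw SHA-256 digest stands in for it here.
func nameFor(content []byte) string {
	sum := sha256.Sum256(content)
	return hex.EncodeToString(sum[:])
}

// resolver maps a content name to every location currently holding it,
// which is what creates the "multiplicity of source" described above.
type resolver map[string][]string

func main() {
	doc := []byte("decentralized object shared by many peers")
	name := nameFor(doc)

	// Hypothetical locations: the same named object pinned at several edges.
	r := resolver{
		name: {"edge-taipei.example.net", "edge-berlin.example.net", "peer-42.example.org"},
	}

	fmt.Println("name:   ", name)
	fmt.Println("sources:", r[name]) // any source can serve it; the name, not the address, is authoritative
}
```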

 

From a traffic management perspective, Web 3.0 brings a new set of challenges. First, decentralization will reverse the current hub-and-spoke paradigm for content distribution into one in which distributed actors, functioning as spokes, send traffic to the requester as the hub. Second, all members of a distributed transaction must be logically grouped together from an operational perspective, despite their data dissimilarity, because a distributed transaction is not complete until all sub-transactions are complete or have achieved a consensus quorum.
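A minimal sketch of the second point, assuming a simple commit flag per participant and a configurable quorum; real Web 3.0 transactions would of course rely on a proper consensus protocol rather than a counter.

```go
package main

import "fmt"

// subTxState tracks one participant in a distributed transaction.
type subTxState struct {
	participant string
	committed   bool
}

// txGroup logically groups sub-transactions that must be managed together,
// as described above, even though their data may be unrelated.
type txGroup struct {
	subs   []subTxState
	quorum int // minimum commits needed when unanimity is not required
}

// complete reports whether the distributed transaction can be considered
// finished: either every sub-transaction committed, or a consensus quorum did.
func (g txGroup) complete() bool {
	committed := 0
	for _, s := range g.subs {
		if s.committed {
			committed++
		}
	}
	return committed == len(g.subs) || committed >= g.quorum
}

func main() {
	g := txGroup{
		subs: []subTxState{
			{"ledger-node-a", true},
			{"ledger-node-b", true},
			{"ledger-node-c", false},
		},
		quorum: 2,
	}
	fmt.Println("transaction complete:", g.complete()) // true: quorum of 2 reached
}
```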

 

Furthermore, the cloud edge, being closer to the request origins, will become the center of information convergence — instead of the core, as in Web 2.0, with accompanying scaling challenges around TLS/certificates, DNS security, load balancing, and so on.

Real-time 5G applications, such as drones and autonomous vehicles, require that services supported at the mobile edge can roam from one MEC to another without losing stateful information or incurring a detour through edge-to-cloud connectivity. With 5G MEC sites likely located at mid-haul locations, connectivity between MEC sites is assumed — supported by metro Ethernet, DWDM fiber loops or MPLS circuits.
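As a hedged sketch of such a roam, the snippet below serializes a session’s state at the serving MEC and pushes it to the target MEC over the assumed inter-MEC link; the state fields and the transport stand-in are illustrative only.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// sessionState is the stateful information that must survive a roam from
// one MEC site to another (field names are illustrative assumptions).
type sessionState struct {
	SessionID    string     `json:"session_id"`
	VehicleID    string     `json:"vehicle_id"`
	LastWaypoint [2]float64 `json:"last_waypoint"`
	ModelEpoch   int        `json:"model_epoch"`
}

// handoff serializes the session at the serving MEC and delivers it to the
// target MEC over the assumed inter-MEC link (metro Ethernet, DWDM, MPLS),
// so the client never has to detour through the distant cloud core.
func handoff(state sessionState, send func(target string, payload []byte) error, target string) error {
	payload, err := json.Marshal(state)
	if err != nil {
		return err
	}
	return send(target, payload)
}

func main() {
	state := sessionState{
		SessionID:    "s-1001",
		VehicleID:    "av-7",
		LastWaypoint: [2]float64{25.04, 121.56},
		ModelEpoch:   42,
	}

	// Stand-in transport: a real deployment would use the operators'
	// east-west MEC connectivity rather than printing the payload.
	send := func(target string, payload []byte) error {
		fmt.Printf("transfer to %s: %s\n", target, payload)
		return nil
	}

	if err := handoff(state, send, "mec-site-b"); err != nil {
		fmt.Println("handoff failed:", err)
	}
}
```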

 

To maintain maximal flexibility in data and state sharing, virtualization of edge storage across MECs may be unavoidable — in addition to compute node virtualization.

Variety of Edges

While client-to-MEC latency is expected to be kept within the low-millisecond range, certain applications may require far shorter latency, in the range of tens of microseconds, while others can tolerate latency of up to 10 milliseconds or longer. As such, microsecond-level latency can likely only be accommodated within the DU (distributed unit) itself, or directly at an ML-capable RU (radio unit) — while edges that can tolerate longer latency may not require the URLLC support of 5G, making it possible to deploy other wireless and wireline access technologies.
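These latency tiers can be summarized as a simple placement rule; the thresholds in the sketch below are illustrative assumptions, not figures from any 5G specification.

```go
package main

import (
	"fmt"
	"time"
)

// placementTier names the assumed hosting options discussed above, from the
// radio unit itself out to a non-5G edge served by other access technologies.
func placementTier(budget time.Duration) string {
	switch {
	case budget < 100*time.Microsecond:
		return "RU (ML-capable radio unit)"
	case budget < time.Millisecond:
		return "DU (distributed unit)"
	case budget < 10*time.Millisecond:
		return "5G MEC site"
	default:
		return "regional edge over other wireless/wireline access"
	}
}

func main() {
	budgets := []time.Duration{
		50 * time.Microsecond,
		500 * time.Microsecond,
		5 * time.Millisecond,
		40 * time.Millisecond,
	}
	for _, b := range budgets {
		fmt.Printf("%v -> %s\n", b, placementTier(b))
	}
}
```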

 

Regardless of edge connectivity and resource sharing specifics, one thing that can be ascertained is that a variety of edge virtual data center designs and connectivity options will have to co-exist, making a flexible and cost-effective virtual data center architecture a necessity.

Service-Smart Software-Defined Internet 

Inevitably, the intersection of the decentralized Web 3.0 and the modern-day cloud-era internet infrastructure will spawn a new set of web services for consumers and e-commerce in the next decade, at a pace unseen before. This is partly because now, for the first time, the internet will become software-definable — with the granularity of control at a per-service level, thanks to the continued advancements of software-defined networking and on-demand virtualization technologies in the past decade.

 

In the era to follow, we envision self-organizing inter-networks becoming a new service paradigm, where new services can be constructed as a complete overlay over the internet — enabled by layers of software-defined delivery and transport services, each with horizontal edge inferencing intelligence within its respective layer. We envision that it will be possible to construct logical service interconnections based on intents, and to have those intents carried out by the layers underneath, automatically and seamlessly, according to the specified SLA requirements. We envision that public clouds and the mobile edge will bridge into a seamless end-to-end logical network domain, in which encompassing service overlays will be made possible. We envision that SOI overlay networks will be service-specific and dynamic, replacing today’s VPNs, CDNs and other purpose-built value-added networks.
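As a closing illustration, here is a hypothetical intent declaration of the kind such a self-organizing overlay might accept; the schema and the realize step are assumptions meant only to show intent-plus-SLA as the unit of control, not an existing product API.

```go
package main

import "fmt"

// serviceIntent is a hypothetical, illustrative schema for declaring a
// service-specific overlay by intent; none of these fields correspond to an
// existing platform.
type serviceIntent struct {
	Name          string   // logical service name
	Endpoints     []string // participating clouds and edges to be bridged
	MaxLatencyMs  int      // SLA: end-to-end latency bound
	MinThroughput string   // SLA: sustained bandwidth, e.g. "200Mbps"
	Encryption    bool     // whether the overlay must be encrypted end to end
}

// realize stands in for the layers underneath that would translate the
// intent into concrete overlay paths, slices, and virtual functions.
func realize(i serviceIntent) {
	fmt.Printf("provisioning overlay %q across %v (<= %dms, >= %s, encrypted=%v)\n",
		i.Name, i.Endpoints, i.MaxLatencyMs, i.MinThroughput, i.Encryption)
}

func main() {
	realize(serviceIntent{
		Name:          "ar-streaming",
		Endpoints:     []string{"public-cloud-eu", "mec-site-a", "mec-site-b"},
		MaxLatencyMs:  8,
		MinThroughput: "200Mbps",
		Encryption:    true,
	})
}
```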