Edge Computing: Concepts, Differences from Centralized Computing, and Reference Architectures
This article explains edge computing as a new, global computing model that extends cloud computing by bringing compute, storage, and services closer to users. It details how edge computing differs from centralized computing, its two defining traits (resource edge-ization and a global resource pool), and several reference architectures, including ETSI MEC, Intel MEC, ECC, and OpenFog.
Edge computing is a new computing model that follows distributed computing, grid computing, and cloud computing, integrating cloud, network, terminal, and intelligence to optimize resource allocation and enable intelligent, collaborative services.
What differentiates edge computing from centralized computing?
1) Edge computing is a global model covering both central and edge resources.
2) It consists of two main parts:
- Resource edge-ization: compute, storage, cache, bandwidth, and services are distributed toward the demand side to provide high reliability, high efficiency, and low latency.
- Global resource pool: the edge acts as a shared resource pool that collaborates with centralized clouds, supercomputing centers, and other resources, achieving complementary advantages through coordinated scheduling.
3) Edge computing requires global coordination to achieve high‑performance computing, involving collaborative computation, parallel processing, and network optimization between edge nodes and central clouds.
4) It is an intelligent model that dynamically adapts services based on contextual awareness, user demand, and workload size.
5) Edge computing defines clear boundaries and roles: physical boundaries (edge devices near users), logical boundaries (the functional split between edge and centralized cloud), gateway nodes (service provision points), edge nodes (massive intelligent terminals), the edge side (near-user devices), and the cloud side (high-performance data centers).
6) The model extends intelligence from the core network to the front end, enabling intelligent collaboration, extending the range and types of signals AI can perceive, and improving processing speed and latency.
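The global coordination and context-aware adaptation described in points 3) and 4) can be sketched as a placement decision: choose the nearest edge node that satisfies a task's latency and capacity requirements, and fall back to the central cloud otherwise. This is a minimal illustration under assumed conditions; the `Node` and `Task` types and their fields are hypothetical and not part of any architecture discussed here.

```python
from dataclasses import dataclass

# Hypothetical models of a compute node and a task; the field names
# are illustrative, not drawn from any standard.
@dataclass
class Node:
    name: str
    latency_ms: float      # round-trip latency from the user to this node
    free_cpus: int         # currently available compute capacity

@dataclass
class Task:
    cpus_needed: int
    max_latency_ms: float  # the task's latency requirement

def place(task: Task, edge_nodes: list[Node], cloud: Node) -> Node:
    """Prefer the nearest edge node that meets both the latency bound
    and the capacity demand; fall back to the central cloud."""
    candidates = [n for n in edge_nodes
                  if n.latency_ms <= task.max_latency_ms
                  and n.free_cpus >= task.cpus_needed]
    if candidates:
        return min(candidates, key=lambda n: n.latency_ms)
    return cloud
```

A real scheduler would also weigh bandwidth, energy, and data-locality constraints; this sketch only captures the "edge first, cloud as fallback" idea.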
Comparison with traditional models
From the perspective of resource allocation, edge computing mobilizes global resources to achieve overall optimal processing, which requires scheduling resources holistically.
From the perspective of collaboration, edge computing combines central and edge resources: tasks are processed locally when possible and delegated to central or other edge resources when necessary, forming a collaborative computing model.
From the perspective of intelligence, edge computing relies on machine intelligence to configure resources automatically and in real time to meet user needs.
From the perspective of heterogeneity, edge devices are diverse and cannot simply be virtualized; interconnecting them is a key challenge.
From the perspective of the service model, unlike the three-layer IaaS/PaaS/SaaS model of cloud computing, edge computing adapts its services to application requirements and integrates with cloud service models.
From the perspective of architecture, edge computing deploys massive intelligent terminals at the network edge, forming a dynamic, automated resource allocation system that integrates cloud, network, terminal, and intelligence.
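The collaboration mode described above (handle a task locally when possible, otherwise delegate to peer edge nodes or the central cloud) can be sketched as a tiered decision. The function below and its capacity model are illustrative assumptions, not a real scheduler API.

```python
# A minimal sketch of the "local first, then peer edges, then cloud"
# collaboration pattern. Node names and capacities are hypothetical.

def offload(task_load: int, local_capacity: int,
            peer_capacities: dict[str, int], cloud_name: str = "cloud") -> str:
    """Return which tier should run the task: the local node if it has
    capacity, otherwise the peer edge node with the most headroom that
    fits the load, otherwise the central cloud."""
    if task_load <= local_capacity:
        return "local"
    fitting_peers = {name: cap for name, cap in peer_capacities.items()
                     if task_load <= cap}
    if fitting_peers:
        # Pick the peer with the most free capacity to balance load.
        return max(fitting_peers, key=fitting_peers.get)
    return cloud_name
```

In practice the delegation decision would also account for network conditions and data placement, but the fallback chain itself is the essence of the collaborative model.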
Reference Architectures
(1) ETSI Reference Architecture: defined in ETSI GS MEC 003, consisting of the network layer, mobile edge host layer, and mobile edge system layer.
(2) Intel MEC Architecture: positions mobile edge computing between wireless access points and the wired network, comprising a routing subsystem, capability-openness subsystem, platform management subsystem, and edge cloud infrastructure; the first three reside in MEC servers, while the edge cloud infrastructure is built in small data centers at the network edge.
(3) ECC (Edge Computing Consortium) Reference Architecture 1.0: a four-domain hierarchical design covering the Application, Data, Network, and Device domains, providing open interfaces, data optimization services, connectivity, and real-time intelligent interconnection for devices.
(4) ECC Reference Architecture 2.0: adds an Edge Computing Node layer, Connectivity Fabric layer, Business Fabric layer, and Intelligent Service layer, presented through concept, functional, and deployment views.
(5) OpenFog Reference Architecture: published by the OpenFog Consortium to support IoT, 5G, and AI workloads, based on eight core principles (security, scalability, openness, autonomy, RAS, agility, hierarchical architecture, programmability) and organized into horizontal layers (hardware platform, virtualization, node management, application support, application services) and vertical viewpoints (performance, security, management, data analytics, IT business).
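For quick reference, the OpenFog layering summarized above can be captured as plain data. This merely restates the text; it is not an official OpenFog schema.

```python
# Horizontal layers and vertical viewpoints of the OpenFog Reference
# Architecture, as summarized in this article (bottom to top).
OPENFOG_HORIZONTAL_LAYERS = [
    "hardware platform",
    "virtualization",
    "node management",
    "application support",
    "application services",
]

OPENFOG_VERTICAL_VIEWPOINTS = [
    "performance",
    "security",
    "management",
    "data analytics",
    "IT business",
]
```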
These architectures illustrate how edge computing extends traditional cloud models by bringing computation and intelligence closer to users, enabling low‑latency, high‑reliability services across diverse scenarios.
Architects' Tech Alliance