Edge computing has become popular due to its performance, security and cost benefits over traditional cloud architectures. But it’s not always the best fit for distributed workloads.
Edge computing refers to architectures that process data at or near devices that generate and consume data, such as end-user PCs, mobile phones or IoT sensors. This differs from conventional cloud computing, which relies on servers in central data centers to receive data, process it and send it back to client devices. Edge computing can reduce network latency, lower the exposure of data to network security risks and, in some cases, reduce costs by offloading processing to end users’ devices.
Given the edge’s appealing advantages, cloud architects might want to push their workloads there. But before they do, they should weigh factors such as their application’s structure, performance requirements and security needs.
Types of edge computing architecture
When weighing whether an edge computing model is the right fit, the first question to ask is which type of architecture is available. There are several types of edge models:
- Device edge. Data that is normally processed in the cloud is instead processed directly on client devices. For instance, an IoT sensor in an autonomous vehicle could process data locally to avoid delays that might arise if the data had to move to a cloud data center and back.
- Cloud edge/regional edge. Edge hardware processes data within data centers managed by cloud providers; however, these are geographically closer to client devices than traditional cloud data centers. For example, cloud services like AWS Edge Locations provide data center options that might deliver better latency than standard AWS data centers.
- On-premises edge. An edge computing model in which a company operates on-premises servers to host edge workloads. Consider a retailer that uses local servers inside a store to run a payment processing application, eliminating the need for data to travel to the cloud and back during customer checkout.
- Content delivery network edge. CDNs cache data on servers that are closer to end users. Some would argue that CDNs are distinct from edge networks because a CDN typically caches data rather than processing it. Nonetheless, CDN architecture is one way to achieve edge computing benefits like lower latency and reduced exposure of data within the network.
Each type of edge model has pros and cons. For instance, the device-edge model works well if the client devices can uniformly handle the processing burden. Standard PCs or laptops are equipped for this, but low-power IoT sensors can lack the compute and storage resources necessary to process data efficiently.
Also, using a device-edge model can be difficult for organizations that rely on many different types of edge devices and OSes, which can have different capabilities and configurations.
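To make the trade-off concrete, the following minimal sketch shows the device-edge pattern for an autonomous vehicle: the device acts on a sensor reading locally, so the safety-critical decision never waits on the network, and it reports to the cloud on a best-effort basis. The endpoint URL, threshold and actuator function are all hypothetical, and the sketch assumes the third-party requests library is installed.

```python
import requests  # third-party HTTP client; assumed installed on the device

CLOUD_ENDPOINT = "https://example.com/telemetry"  # hypothetical upstream API
BRAKE_THRESHOLD_M = 5.0  # hypothetical obstacle distance, in meters

def apply_brakes() -> None:
    print("braking")  # stand-in for real actuator logic

def on_sensor_reading(distance_m: float) -> None:
    """Act on the reading locally, then report upstream when possible."""
    if distance_m < BRAKE_THRESHOLD_M:
        apply_brakes()  # the decision must never wait on the network
    try:
        # Reporting is best-effort: a dropped connection can't block the device.
        requests.post(CLOUD_ENDPOINT, json={"distance_m": distance_m}, timeout=2)
    except requests.RequestException:
        pass  # connectivity loss is tolerated at the device edge

on_sensor_reading(3.2)
```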
With the cloud-edge model, end-user devices aren’t a major factor in shaping the architecture. Organizations don’t offload data storage or processing from the central cloud to end-user devices; they offload it to servers that run at the edge of the cloud, typically in a data center closer to end users than the central cloud.
A CDN edge model is easy to implement because a variety of CDN providers can replicate data across globally distributed servers. The major drawback, however, is that CDNs typically only cache data rather than process it, so they’re not ideal for hosting applications geographically closer to end users.
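That cache-not-compute distinction is visible in how an origin server marks responses for a CDN. The sketch below, which uses Flask purely as an illustrative framework, sets a standard Cache-Control header that most CDNs honor; the route and max-age value are arbitrary assumptions.

```python
from flask import Flask, make_response  # third-party web framework, for illustration

app = Flask(__name__)

@app.route("/logo.png")
def logo():
    # On a cache hit, the CDN edge serves this response without ever running
    # this Python code. Logic that must execute per request stays at the origin.
    resp = make_response(b"...image bytes...")
    resp.headers["Cache-Control"] = "public, max-age=3600"  # cacheable for an hour
    return resp

if __name__ == "__main__":
    app.run()
```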
Edge computing examples
To illustrate the trade-offs listed above, here are some examples of when edge computing is and isn’t a good fit.
Good examples of edge computing include the following:
- Autonomous vehicles. Self-driving cars collect large amounts of data and need to make decisions in real time for the safety of passengers and others on or near the road. Latency issues that cause delays of just a few milliseconds could have profound impacts, like a crash.
- Smart thermostats. These devices generate relatively little data. In addition, some of the data they collect, such as the times of day people come home and adjust the heat, can have privacy implications. Keeping the data at the edge is practical and can help mitigate security concerns.
- Traffic lights. A traffic light has three characteristics that make it a good candidate for edge computing: the need to react to changes in real time, relatively low data output and occasional internet connectivity losses.
The following are some examples of where edge computing doesn’t work too well:
- Websites. Few websites require the performance or responsiveness of edge infrastructure. It might be nice to shave a few milliseconds off the time it takes a webpage to load, but that improvement is rarely worth the cost. The exception is websites that include large amounts of data or real-time content, such as a site that hosts streaming video.
- Video camera systems. Videos generate a lot of data. Processing and storing that data at the edge isn’t practical because it would require a large and specialized infrastructure. It is much cheaper and simpler to store the data in a centralized cloud facility. An exception is a video system that requires real-time analysis. Consider a system that uses facial recognition to unlock a door. In that case, the ability to process the video locally would be beneficial to prevent delays that might impede users.
- Smart lighting systems. Systems that enable users to control lighting in their home or office over the internet don’t generate a lot of data. But light bulbs — even smart ones — tend to have minimal processing capacity. Lighting systems also lack ultra-low latency requirements; if it takes a second or two for the lights to turn on, it is probably not a big deal.
Edge computing limitations
Admins must evaluate whether they can effectively support an edge model before they move a workload to the edge. Overlooking the following limitations could push teams back to a traditional cloud architecture, wasting valuable time and resources.
Security on the edge
Edge computing reduces some security risks by minimizing the time data spends in transit, but it also introduces more complex challenges.
Businesses cannot guarantee the safety and security of any data that they store or process on end-user devices they don’t control. Attackers could exploit any vulnerabilities on these devices. Even with a cloud-edge model where businesses retain control over the edge infrastructure, having more infrastructure to manage increases the attack surface.
It’s typically easier to secure data in transit over a network — where it can be encrypted — than it is to secure data that is being actively processed on a device. Any number of security vulnerabilities on the device could expose that data. For that reason, the security drawbacks of edge computing might outweigh the advantages.
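That asymmetry is easy to see in code. In the sketch below, which uses the widely available cryptography package’s Fernet recipe, the payload is opaque while it moves between machines, but the moment an edge device processes it, both the key and the plaintext sit in that device’s memory.

```python
from cryptography.fernet import Fernet  # third-party symmetric-encryption recipe

key = Fernet.generate_key()  # in practice, issued by a key-management service
cipher = Fernet(key)

# In transit, the token is opaque: intercepting it reveals nothing without the key.
token = cipher.encrypt(b"card=4111111111111111")

# To process the data, the edge device must decrypt it, putting the plaintext
# and the key in device memory, which is exactly what an attacker targets.
plaintext = cipher.decrypt(token)
print(plaintext)
```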
This makes edge computing less than ideal for workloads with high security requirements. A standard cloud computing model, with its centralized servers, could be less risky for sensitive data or workloads with special compliance requirements.
Latency requirements
Edge computing improves application performance and responsiveness because the data doesn’t have to make a round trip to and from cloud data centers. This is a key advantage for workloads that require instantaneous or real-time communication. Cloud providers continue to add data center locations, but their massive facilities are often in remote areas far from large population centers, so latency is often an issue when workloads run in traditional cloud data centers.
That said, not all workloads require ultra-low latency, and the latency benefits provided by edge architectures might not be worthwhile for all workloads. Compared to a traditional cloud architecture, an edge network might only improve network responsiveness by a few dozen milliseconds. For standard use cases, like website hosting, that difference is unlikely to be noticeable to users. It’s only important for use cases that require real-time performance, like self-driving vehicles or devices that control machines on a factory floor.
Determine whether the latency improvements delivered by the edge are necessary and whether their benefits outweigh factors like the added cost and management burden of an edge architecture.
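One way to ground that decision is to measure the gap rather than guess at it. The sketch below times round trips to two endpoints, one at an edge location and one in a distant central region; both URLs are hypothetical placeholders, and the sketch assumes the requests library is available.

```python
import time
import requests  # third-party HTTP client

# Hypothetical endpoints; substitute your own edge and central-region URLs.
ENDPOINTS = {
    "edge": "https://edge.example.com/ping",
    "central": "https://central.example.com/ping",
}

def median_round_trip_ms(url: str, samples: int = 20) -> float:
    """Return the median round-trip time to url, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(url, timeout=5)
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[len(timings) // 2]

for name, url in ENDPOINTS.items():
    print(f"{name}: {median_round_trip_ms(url):.1f} ms")
```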
Data volume
Businesses must consider how much data their workloads will process and whether their edge infrastructure can handle it efficiently. Workloads that generate large data volumes need expansive infrastructure to analyze and store that data. For processing at that scale, a public cloud data center is likely to be cheaper and, from a management perspective, easier to use than an edge architecture.
On the other hand, workloads that are largely stateless and don’t involve large volumes of data tend to be good candidates for edge computing.
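As a rough illustration of the difference data volume makes, the sketch below collapses a day of per-second sensor readings into a compact summary on the device, so only a few bytes need to travel to the cloud. The readings and field names are invented for the example.

```python
import json
import statistics

def summarize(readings: list[float]) -> dict:
    """Collapse raw readings into a compact on-device summary."""
    return {
        "count": len(readings),
        "mean": round(statistics.mean(readings), 2),
        "min": min(readings),
        "max": max(readings),
    }

raw = [20.1 + 0.01 * (i % 100) for i in range(86_400)]  # one reading per second
summary = summarize(raw)

# Compare payload sizes in bytes; the edge device ships only the small one.
print(len(json.dumps(raw)), "->", len(json.dumps(summary)))
```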
Editor’s note: This article was updated in 2025 to include additional information on edge computing architecture.
Chris Tozzi is a freelance writer, research adviser, and professor of IT and society. He has previously worked as a journalist and Linux systems administrator.