

The use of cloud-based platforms skyrocketed during the 2020 coronavirus outbreak, helping people work collaboratively, shop online, and stay productive. Organizations that were already running on cloud-based infrastructure weathered the storm far more easily than those that were ill-equipped and struggled to onboard the right tools for the job. It was an evolution no one could have predicted, and millions of people around the world have been supported by cloud computing as a result.

One of the greatest advantages of cloud computing is its flexibility: computing power can be scaled up or down as required. Sharing computing resources also gives access to on-demand services such as IaaS (Infrastructure as a Service), PaaS (Platform as a Service), and SaaS (Software as a Service). In addition, cloud computing enables collaboration without physical proximity and the sharing of applications and interfaces, a usefulness the world experienced firsthand in 2020.

Newer concepts like edge computing are often discussed alongside the cloud in various sectors, but using one does not preclude the other. Some people believe that edge computing will eventually replace traditional cloud computing, but this isn't the case: although the two technologies overlap in places, each has an independent and recognizable role. There are use cases where edge computing has advantages over traditional centralized cloud infrastructure, such as in healthcare, where remote medicine was growing rapidly even before the Covid-19 pandemic; those advantages include lower latency, reduced operational strain, and improved security.

The main differentiating factor between cloud computing and edge computing is how and where data processing takes place. With the cloud, data is stored and processed in a central location (usually a data center), whereas edge computing processes data nearer to its source. The nature of the cloud means that information is relayed to the data center, processed, and then sent back to the edge of the network where the device sits. That round trip takes time and can cause lag, or latency.
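The latency difference described above can be sketched with a simple model. All of the numbers below are illustrative assumptions, not measurements from any real network:

```python
# Hypothetical latency model contrasting the cloud and edge processing paths.
# The millisecond figures are illustrative assumptions only.

CLOUD_RTT_MS = 80   # round trip from a device to a distant data center
EDGE_RTT_MS = 5     # round trip to a nearby edge node
PROCESSING_MS = 10  # time to process one reading, same in either location

def end_to_end_latency(rtt_ms: float, processing_ms: float) -> float:
    """Total time for a reading to travel, be processed, and return."""
    return rtt_ms + processing_ms

cloud_latency = end_to_end_latency(CLOUD_RTT_MS, PROCESSING_MS)
edge_latency = end_to_end_latency(EDGE_RTT_MS, PROCESSING_MS)
print(f"cloud path: {cloud_latency} ms, edge path: {edge_latency} ms")
```

Even with identical processing time, the shorter round trip dominates: in this sketch the edge path responds several times faster, which is exactly the property latency-sensitive applications such as remote medicine depend on.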

Another important factor is cost: the edge can be the more economical solution. Data volumes continue to grow with the ever-increasing number of devices, applications, and people that need to stay connected, says Rosa Guntrip, senior principal marketing manager, cloud platforms, at Red Hat. “If all data needs to go back to a central data center for processing, organizations could be faced with needing to scale up their data center infrastructure to meet rising demands, which impacts costs from both a CapEx and OpEx perspective. In addition, if all of that data needs to go back to a central site, organizations are also looking at the costs of backhauling data (i.e., cost of bandwidth).”
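A back-of-envelope calculation shows the backhaul savings Guntrip describes when data is filtered and aggregated at the edge before anything is sent to a central site. The device counts, data volumes, and reduction ratio below are illustrative assumptions:

```python
# Illustrative backhaul comparison: shipping all raw data to a central
# data center vs. sending only edge-aggregated summaries.
# All figures are assumptions for the sake of the sketch.

DEVICES = 1_000
MB_PER_DEVICE_PER_DAY = 500  # raw telemetry produced by each device
EDGE_REDUCTION = 0.95        # fraction summarized away at the edge

raw_backhaul_gb = DEVICES * MB_PER_DEVICE_PER_DAY / 1024
edge_backhaul_gb = raw_backhaul_gb * (1 - EDGE_REDUCTION)

print(f"all raw data to the data center: {raw_backhaul_gb:.1f} GB/day")
print(f"edge-aggregated backhaul:        {edge_backhaul_gb:.1f} GB/day")
```

Under these assumptions, aggregating at the edge cuts daily backhaul by a factor of twenty, which is the kind of reduction that shows up directly in bandwidth bills and in how much central capacity has to be provisioned.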

Edge computing can handle more localized data compute, processing, and analysis, increasing network efficiency and responsiveness while freeing the cloud for more general-purpose business needs. Analysts at Grand View Research predict that the market for this technology will grow from $3.5 billion to $43.4 billion by 2027.

Even with this exponential predicted growth, cloud will remain essential and both will coexist. Some enterprises are adopting edge capabilities to complement existing hybrid cloud strategies and better manage today’s ever-increasing volume of data, says David Williams, managing principal at digital consultancy AHEAD.

Some people may also assume that edge computing is an alternative to the cloud computing model, when in fact, they are complementary. “The two work together to overcome the limitations of any one deployment model,” says Dave McCarthy, research director within IDC’s worldwide infrastructure practice focusing on edge strategies, noting that it is possible to deploy cloud-native approaches in edge locations. A recent IDC survey found that 95% of new edge deployments will be based on cloud-native technology.

“Think of edge as an extension of hybrid architecture,” McCarthy advises. “Historically, hybrid was considered binary: some resources on-premises and some in the public cloud. The definition of hybrid is expanding to include on-premises, multiple public clouds, and a variety of edge locations.”

It is very unlikely that anyone will abandon the cloud in favor of the edge. As the FCC white paper puts it, “Many industry experts are pushing back on the notion that cloud and edge computing are in competition with each other. Instead, forward-looking organizations, and even many public cloud service providers, are beginning to consider how to selectively employ both.”

In other words, functions best handled by splitting computation between the end device and local network resources will run at the edge, while big-data applications that benefit from aggregating data from everywhere, and from analytics and machine-learning algorithms that run economically in hyperscale data centers, will stay in the cloud. The system architects who learn to use all of these options to the advantage of the overall system will get the best out of both technologies.

As noted, each model of computing has its advantages, and the two should therefore be considered complementary. A good network infrastructure will combine both technologies to give the business the flexibility it needs.

Cloud computing has bandwidth limitations; edge computing lets an organization keep its existing infrastructure while expanding its range of possibilities. Edge computing can therefore be seen as one response to the challenge of scalability.

To determine which network architecture to use, an enterprise must ask itself the following questions:

  • What type of data will I process?
  • What is my objective?
  • What is the available budget?
  • Is the data highly confidential?
  • How can I deal with data loss?
  • What is the volume of data to be processed?

The answers to these questions will generally lead to one computing model being preferred over the other.
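The questions above can be sketched as a rough scoring rule. The function below, its weights, and its threshold are entirely hypothetical, intended only to show how answers might tilt the decision; a real assessment would weigh many more factors:

```python
# Hypothetical decision sketch: maps answers to the checklist above onto a
# rough preference for edge or cloud processing. Weights are illustrative.

def preferred_model(latency_sensitive: bool,
                    highly_confidential: bool,
                    large_data_volume: bool,
                    tight_budget: bool) -> str:
    edge_score = 0
    edge_score += 2 if latency_sensitive else 0    # edge cuts round-trip lag
    edge_score += 1 if highly_confidential else 0  # data can stay on-site
    edge_score += 1 if large_data_volume else 0    # avoids backhaul costs
    edge_score -= 1 if tight_budget else 0         # edge hardware adds CapEx
    return "edge" if edge_score >= 2 else "cloud"

# A latency-sensitive, high-volume workload tilts toward the edge:
print(preferred_model(latency_sensitive=True, highly_confidential=False,
                      large_data_volume=True, tight_budget=False))
```

In practice the outcome is rarely binary; as the article argues, most organizations will land on a mix of both models rather than a single winner.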
