Edge computing: The next frontier

You can’t swing a cat these days without hitting concepts like 5G or edge computing. These are fascinating and complex topics, but it can sometimes be difficult to understand exactly what they are, how they are relevant, and how they can be used. 

(Mobile) edge computing will be a main area of focus for the new Rakuten Mobile Autonomous Networking Research & Innovation Department. To further bolster these efforts, we are publishing a series of articles on this topic.

The goal of this series is to educate and inform. Largely we will be discussing relatively high-level concepts and ideas. However, we will dig into some topics more deeply. Also, as this is an ongoing research topic for us, the series is likely to evolve over time as we share the interesting topics we are working on. 

So, without further ado, let’s begin! 

Health Care

There are volumes of things to say about edge computing and the different possibilities that it enables, but to simplify things, let’s start off with an example that I imagine most people are familiar with: health care. 

To make things a little clearer, let’s think about medical facilities. Such locations provide many different services, come in different sizes, and are found in many different physical areas (a distributed service!). Next, let’s consider two health care actions that people might take: getting a health check-up and having brain surgery.

If you want to have a check-up, you can make an appointment at your local clinic to be seen by a doctor or nurse. Based on the results of your check-up you can be given medicine or referred to a hospital. However, if you want to have brain surgery, then you need to go to a hospital.

Clinics are found in many locations and provide many common services, whereas hospitals are far fewer but offer all services. (First aid kits and pharmacies also fit in this analogy but are covered later.) 

In this example, the hospital is like cloud computing: It offers all of the services requested by its users, but it is further to get to and expensive to run. The clinic, on the other hand, is like edge computing: It offers more limited services, but it is cheaper to run, more convenient for its users, and it reduces the load on the hospital. 

Put all this together, and you get a complete health service system. (And fog computing, also to be covered later.)  

Edge computing: The clinics of on-demand computing

Now that we’re all medical service architects, let’s get a little more concrete. Edge computing, like cloud computing, is a type of distributed computing system. Distributed computing is a very large and complex field, but boils down to how one or more connected computing devices can be used to do some work. Sometimes these devices (or nodes) are in the same location, like a hospital with many different treatment rooms, and sometimes they are spread across different geographical locations, like clinics.  

Distributed computing system: How one or more connected computing devices can be used to do some work.
Edge computing is a type of distributed computing system.

Just like the example containing different clinics and hospitals, by having more than one node to do work you can share tasks, have backup nodes in case something goes wrong, and place the nodes in different locations for physical redundancy and convenience. 
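
To make this a little more concrete, here is a tiny Python sketch. It is purely illustrative: the node names and their up/down states are invented, and the dispatcher is as naive as possible, but it shows the basic idea of sharing tasks across nodes and carrying on when one of them fails.

```python
import random

def dispatch(task: str, nodes: dict[str, bool]) -> str:
    """Send a task to a randomly chosen healthy node.

    `nodes` maps a node name to whether it is currently up.
    """
    healthy = [name for name, is_up in nodes.items() if is_up]
    if not healthy:
        raise RuntimeError("no healthy nodes available")
    return random.choice(healthy)

# Hypothetical nodes: two "clinics" and one "hospital".
nodes = {"clinic-east": True, "clinic-west": True, "hospital": True}

print(dispatch("check-up", nodes))  # any of the three nodes
nodes["clinic-east"] = False        # one node fails...
print(dispatch("check-up", nodes))  # ...the work still gets done
```

Real systems are of course far more sophisticated (they consider load, location, capabilities, and so on), but the principle of having more than one place to send work is the same.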

Imagine the health service industry as a big bubble. The concept of taking some services away from the hospitals and pushing them to the edge of this bubble (via clinics), closer to users, is exactly the idea behind edge computing. 

From a computing perspective, this has three primary benefits: 

  1. Reducing the response time (latency) between a user and a service, and in doing so enabling new services (see the sketch after this list) 
  2. Reducing the load on the network 
  3. Reducing the load on the backend servers 
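
To give a feel for the first benefit, here is a minimal back-of-the-envelope sketch in Python. The distances and processing times are invented for illustration, and the ~200 km/ms figure is simply the approximate speed of light in optical fibre; nothing here is measured from a real network.

```python
def round_trip_ms(distance_km: float, processing_ms: float,
                  propagation_km_per_ms: float = 200.0) -> float:
    """Rough response time: two-way propagation delay plus processing.

    ~200 km/ms approximates the speed of light in optical fibre.
    """
    return 2 * distance_km / propagation_km_per_ms + processing_ms

# Hypothetical deployment: an edge site 20 km away vs. a cloud
# region 2,000 km away, each taking 5 ms to process the request.
print(f"edge:  {round_trip_ms(20, 5):.1f} ms")    # ~5.2 ms
print(f"cloud: {round_trip_ms(2000, 5):.1f} ms")  # ~25.0 ms
```

Even in this toy calculation, moving the service from a distant data centre to a nearby edge site cuts the response time by a factor of roughly five, and every request served at the edge is one that never touches the core network or the backend servers (benefits 2 and 3).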

Sounds simple enough, right?

In theory it is, but, as always, the devil is in the detail. In trying to achieve the above, the Autonomous Networking Lab is going to be tackling questions around: 

  • How to place services and resources across these different nodes: Where should the doctors and x-ray machines be placed and with what reasoning? (See the sketch after this list.) 
  • How to program applications for such a distributed system: What is the correct way to describe a service that our clinics and hospitals will be performing? 
  • How to manage failure: What happens when the x-ray machine breaks, or a surgeon does not show up for work? 
  • How to cope with high service demand: What happens if the entire city wants to see a doctor at the same time? 
  • How to monitor the system: As well as knowing when an x-ray machine is broken or a surgeon is absent, how can we tell there is an epidemic based on the patients that we see in the clinics and hospitals? 
  • How to define the “goodness” of our system: How many complaints/sick people are there? 
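
As a small taste of the first of these questions, here is a deliberately naive placement rule sketched in Python: run a service on a node that meets its latency budget and has spare capacity, preferring the node with the most room. The node names, latencies, and capacities are made up, and this is not the Lab’s actual approach, just an illustration of the kind of decision involved.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    latency_ms: float  # typical round-trip time to the user
    free_cpu: int      # remaining CPU cores

def place(latency_budget_ms: float, cpu_needed: int, nodes: list[Node]):
    """Pick a node that meets the latency budget and has capacity,
    preferring whichever has the most spare room."""
    candidates = [n for n in nodes
                  if n.latency_ms <= latency_budget_ms
                  and n.free_cpu >= cpu_needed]
    if not candidates:
        return None  # nowhere suitable: reject, or relax the constraints
    return max(candidates, key=lambda n: n.free_cpu)

# Invented nodes: a small edge site and a big, distant cloud region.
nodes = [Node("edge-site-A", latency_ms=5, free_cpu=4),
         Node("cloud-region", latency_ms=40, free_cpu=512)]

print(place(10, 2, nodes).name)   # -> edge-site-A (tight latency budget)
print(place(100, 2, nodes).name)  # -> cloud-region (plenty of slack)
```

Even this toy rule shows the trade-off: latency-critical services end up at the edge, while anything more tolerant drifts to the larger, cheaper-per-unit cloud. The research questions above are about making such decisions well, at scale, and automatically.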

And the pot of gold at the end of the rainbow:  

How to achieve all of this autonomously with an artificial engineer (hospital administrator) operating on a real network of millions of users? 

I think all of these challenges are extremely interesting and am very excited to be able to do research into these problems. I hope you will enjoy exploring these concepts with me.

Next time, we will dig a little into the history of edge computing in an attempt to explain the context and why it is often discussed alongside, and compared to, cloud computing.  

BY THE WAY!! 

For this specific topic, we are partnering with universities and companies, offering internships, and looking for people to come work on the problem. If this seems like something interesting that fits with your research agenda, then please get in touch here.


About the author: Paul Harvey is leading edge computing research within Rakuten Mobile, and has been a senior scientist in the Rakuten Institute of Technology since 2018. He gained his PhD in adaptive and distributed systems, and has worked in academia on projects including embedded systems, high-performance computing, and programming language design. 

* Images created by Aki Nakazawa
