
3 places edge computing challenges can lurk

Figure in cost, complexity, and legal issues when weighing the potential benefits of edge networking.

How much computing power should we put at the edge of the network?

In the past, when networks weren’t supposed to be very smart, it wasn’t even a question; the answer was none. But now that it’s possible to put substantial amounts of computing equipment right at the edge of the network, the right answer isn’t always so easy.

The arguments in favour are simple. When packets travel shorter distances, response time is faster. With compute, storage and networking deployed at the edge, network lags and latencies don’t slow down each trip between users and resources, and users and applications get better response times.

At the same time, because more work is done at the edge, the need for bandwidth between remote sites and central data centres or the cloud drops: less bandwidth, lower cost.

But for all the promise, some issues won’t go away, and sometimes other factors come into play that make a conventional architecture the better choice. Here are some of those considerations, broken down into three categories: cost, complexity, and legal concerns.

Cost

Many local machines can cost more

The edge-computing model trades one big central cluster for numerous local machines. Sometimes there’s no change in cost because the local hardware reduces the central load by an equal amount. One edge machine replaces one instance in the central cluster.

Often, though, the model creates new redundancies that drive up costs; storage is a good example. Instead of one central copy of each file, the edge network may keep a separate copy at each edge node. If your edge network is small, a few extra copies may be welcome redundancy.

But when you’ve got 200 or more edge nodes, your storage costs could be 200 times greater. This can be limited by storing data only at the nodes each user actively engages, but the multiplication doesn’t completely disappear, and at some point the duplication starts to weigh on overall costs.
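A back-of-the-envelope calculation makes the multiplication concrete. The node count, per-gigabyte price, and placement fraction below are illustrative assumptions, not figures from any particular deployment:

```python
# Rough storage-cost sketch: one central copy vs. a copy at every edge node.
# All numbers are illustrative assumptions.
dataset_gb = 500                  # working set held centrally
price_per_gb_month = 0.02         # assumed storage price, $/GB-month
edge_nodes = 200

central_cost = dataset_gb * price_per_gb_month
full_replication_cost = central_cost * edge_nodes

# Storing only the data each node's users actively touch (say 5% per node)
# softens, but does not remove, the multiplication.
active_fraction = 0.05
partial_cost = central_cost * edge_nodes * active_fraction

print(f"central:           ${central_cost:,.2f}/month")
print(f"200x replication:  ${full_replication_cost:,.2f}/month")
print(f"partial placement: ${partial_cost:,.2f}/month")
```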

Duplication creates complexity for software replication and often increases bandwidth use. It can work well for static content, when the local machines are acting like a CDN and doing little real work. But the more computing that gets added to the mix, the more costs are driven up by synchronising all the copies.

Duplication also drives up bandwidth charges. If n copies are made at the edges, those copies may increase bandwidth costs by a factor of n. In an ideal situation, edge nodes act like smart caches that reduce overall bandwidth.

But many architectures aren’t ideal, and the replication ends up sending multiple copies throughout the network, driving up bandwidth charges along the way.
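The gap between the two regimes is easy to sketch. The traffic volume, node count, and cache hit ratio below are assumptions for illustration only:

```python
# Contrast the two regimes: edge nodes as caches (egress shrinks with the
# hit ratio) vs. eager replication (egress grows with node count).
# All inputs are illustrative assumptions.
origin_gb_per_day = 1_000   # traffic users would otherwise pull centrally
edge_nodes = 200
cache_hit_ratio = 0.8       # assumed fraction served locally on a hit

cache_egress = origin_gb_per_day * (1 - cache_hit_ratio)
replication_egress = origin_gb_per_day * edge_nodes  # n copies pushed out

print(f"smart-cache egress:      {cache_egress:,.0f} GB/day")
print(f"full-replication egress: {replication_egress:,.0f} GB/day")
```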

In other words, the more edge computing becomes like computing and less like caching, the greater the potential for costs to rise.

Complexity

Timing issues can be gnarly

Depending on workloads, syncing databases among multiple edge locations can become an issue. Many applications — such as monitoring the Internet of Things or saving a single user’s notes — don’t need to work hard at synchronisation because they don’t create contention.

Basic tasks like these are ideal for edge computing. But once users start competing for global resources, deployments become trickier. Google, for instance, puts atomic clocks in its data centres around the world and uses them to adjudicate competing writes in its Spanner database. While an enterprise’s needs may not match Google’s, the syncing issue will demand an added layer of infrastructure and expertise.
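A toy example of what that layer has to solve: two edge nodes accept writes to the same key, and something must decide which wins. The last-write-wins merge below is a minimal sketch, not how Spanner works; it is only as trustworthy as the node clocks, which is exactly why Google reaches for atomic clocks.

```python
import time

# Minimal last-write-wins merge of the same record written at two edge
# nodes. A real deployment needs trustworthy timestamps (or vector clocks,
# or consensus); skewed clocks here would silently drop the newer write.
def merge(record_a, record_b):
    """Each record is (timestamp, value); the later timestamp wins."""
    return record_a if record_a[0] >= record_b[0] else record_b

node_sydney = (time.time(), "price=100")
node_london = (time.time() + 0.5, "price=95")  # arrives half a second later

print(merge(node_sydney, node_london))  # the London write wins
```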

Mobile-user challenges

When it comes to edge computing, some users are harder to serve than others, and mobile users can present the biggest problems. As they move from site to site, they may connect to a different edge node each time, again creating syncing problems. Even employees working at home might change location from time to time, because “working at home” really means “working from anywhere”.

Every time this happens, web applications must shift focus, and the edge nodes must resynchronise. If there’s any user state left cached at the old access node, it needs to be moved and recached at the new node. The time and bandwidth that this takes can eat into expected cost and performance benefits.
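A minimal sketch of that handoff, assuming session state lives in a per-node cache keyed by user ID (the node names and cache layout here are hypothetical):

```python
# Sketch of re-homing a mobile user's cached session state when they
# reconnect through a different edge node. Node caches are plain dicts
# here; in practice each transfer costs real time and inter-node bandwidth.
edge_caches = {
    "node-syd": {"user42": {"cart": ["sku-1", "sku-2"], "kb_cached": 512}},
    "node-mel": {},
}

def rehome(user_id, old_node, new_node):
    """Move one user's state between node caches; returns the state moved."""
    state = edge_caches[old_node].pop(user_id, None)
    if state is not None:
        edge_caches[new_node][user_id] = state  # costs bandwidth + latency
    return state

rehome("user42", "node-syd", "node-mel")
print(edge_caches)
```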

Business intelligence requirements

Even with data being processed at the edge, much of it eventually has to migrate to a central server, where it might be used to create daily, weekly, or monthly reports, for example. If that creates periods of high peak-bandwidth demand, it can cut into the projected savings from the edge deployment’s reduced bandwidth needs. Consider this when calculating cost benefits.
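The size of that peak is easy to estimate up front. The node count, data volume, and upload window below are illustrative assumptions:

```python
# If every edge node uploads its day's data in the same nightly window,
# the required central link rate can dwarf the steady-state savings.
# Illustrative assumptions throughout.
nodes = 200
gb_per_node_per_day = 10
window_hours = 2                 # nightly upload window

total_gb = nodes * gb_per_node_per_day
required_gbps = total_gb * 8 / (window_hours * 3600)
print(f"peak central ingest: {required_gbps:.2f} Gb/s for {window_hours} h")
```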

Legal

Tax issues

Some states charge sales tax on online purchases, and some don’t. Some levy excise taxes that apply only within that state. In many cases the applicable taxes depend on the physical location of the hardware where the computing is done.

Edge computing — because one organisation can deploy it in many jurisdictions — can compound the confusion about which laws apply. That’s a line-of-business complexity that internet retailers need to weigh before committing to an edge-computing deployment.

Data-residency regulations

The locations of users and data are both subject to data-protection laws. Some countries are covered by the General Data Protection Regulation (GDPR), and some have laws of their own.

There are also laws like HIPAA that specifically address how medical records are handled. That means enterprises have to analyse which set of rules applies to each edge node and figure out how to meet them, especially if users and servers are in different jurisdictions. Sometimes the best answer is to keep edge nodes in the same jurisdiction as the users they serve, while keeping an eye out for users who migrate.
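One common pattern is to pin each user to an edge node inside their own jurisdiction. A minimal sketch, with a hypothetical region table and node names:

```python
# Route each user to an edge node in their own data-residency jurisdiction.
# The jurisdiction table and node names are hypothetical.
NODES_BY_JURISDICTION = {
    "EU": ["edge-fra-1", "edge-dub-1"],
    "US": ["edge-iad-1"],
    "AU": ["edge-syd-1"],
}

def pick_node(user_jurisdiction):
    """Return a compliant node, or fail loudly rather than route elsewhere."""
    nodes = NODES_BY_JURISDICTION.get(user_jurisdiction)
    if not nodes:
        raise ValueError(f"no compliant node for {user_jurisdiction!r}")
    return nodes[0]  # real routing would also weigh load and proximity

print(pick_node("EU"))  # edge-fra-1
```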