Edge computing is a term that gets used fairly often, but its meaning can vary depending on who you’re talking to. Dominic Romeo, Senior Product Manager at TierPoint, spends a fair amount of time clarifying terms for customers, so we asked him to do the same for us.

Is “Edge” an Industry, Analyst, or Customer Term?

We talk a lot about edge data centers these days, but is that a term your customers use or is it more an invention of those in the data center industry and the analysts?

Romeo: The customers I work with mostly ask for features of an edge data center without necessarily calling it that. In fact, they’ve been talking about edge data centers even before the term was invented. They’d say things like “I need a data center that’s closer to my end users or close to HQ.” Their request was directly related to issues of latency or reduced cost for connectivity. Those are the big features that seem to come up the most, even more so than some of the new use cases for edge that we’re seeing being driven by 5G and containerized data centers.

Varying Context of Edge Computing

Edge is also a term used by hyperscalers like AWS and Azure, but in a somewhat different context. With them, edge computing is often discussed in the same breath as the IoT. Are you seeing much demand for edge computing in this type of scenario?

Romeo: AWS has its new on-prem stack, AWS Outposts; Azure has Azure Stack. These two technologies allow you to run their cloud environments on-premises, which is what customers really mean when they talk about implementing AWS or Azure in their own data centers, potentially closer to their IoT devices. Certainly, in an automated world, that kind of capability is going to become more attractive to advanced users of AWS and Azure who want a consistent interface to manage their environment.

However, for most of our customers who are still interested in traditional data center capabilities, the use case for running Azure Stack or AWS Outposts just doesn’t seem to be there yet. I mean, one of the things everyone likes about the hyperscale clouds is that they have really good reliability. When you sign up with AWS or Azure, you’re buying that massive infrastructure, that repeatability, that steady state they can provide.

If you take their features and functions out of their data centers and put them on-premises, now you’re suddenly back to worrying about keeping the lights on; you have to worry about power and cooling; you have to worry about security. Not having to worry about these things was a big reason you headed to an AWS or Azure cloud in the first place.

So, while we don’t get asked to offer AWS or Azure on-prem, we do get asked to do parallel things. Someone might say, “I want object-based storage at a competitive price, and it should use AWS APIs.” They want some parallel feature sets, but they don’t necessarily want it delivered on an AWS or Azure stack.
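To make that concrete, here’s a minimal sketch of what “parallel feature sets” can look like in practice: the standard AWS SDK for Python (boto3) pointed at a non-AWS, S3-compatible endpoint. The endpoint URL, credentials, and bucket name below are hypothetical placeholders, not a description of any specific provider’s service.

```python
# Minimal sketch: using the standard AWS SDK (boto3) against an
# S3-compatible object store that is not AWS. Endpoint, credentials,
# and bucket name are hypothetical placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://objects.example-edge-dc.net",  # non-AWS, S3-compatible endpoint
    aws_access_key_id="EXAMPLE_KEY_ID",
    aws_secret_access_key="EXAMPLE_SECRET",
)

# The calls are the same ones you'd make against AWS S3 itself.
s3.put_object(Bucket="backups", Key="reports/q4.csv", Body=b"sample,data\n")
print([b["Name"] for b in s3.list_buckets()["Buckets"]])
```

The appeal is that existing tooling and scripts written for the AWS API keep working; only the endpoint (and pricing) changes.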

What Makes a Data Center an Edge Data Center?

Let’s go back to the idea of the edge that is more familiar to your customers: that of bringing the data center closer to the end user. When discussing edge computing, we often talk about the data centers you have located in smaller cities like Spokane, Little Rock, or Nashville as being “edge data centers,” but your data centers in Chicago or just outside of New York City could also function as edge data centers, couldn’t they?

Romeo: Absolutely, if that’s where your users are. Edge computing can be a very overloaded term, and the definition of edge changes depending on who you’re talking to and what problem they’re trying to solve. But for me, edge is always about getting the data center closer to the user, so you can reduce latency, improve experience, and reduce connectivity costs.
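As a rough illustration of the latency point, here’s a small sketch that compares TCP connect times to two candidate data center endpoints; the hostnames are hypothetical placeholders. A facility closer to your users will generally show lower numbers, which is exactly the kind of comparison that informs an edge placement decision.

```python
# Rough sketch: compare average TCP connect latency to two hypothetical
# data center endpoints. Hostnames are placeholders, not real facilities.
import socket
import time

def connect_latency_ms(host: str, port: int = 443, attempts: int = 5) -> float:
    """Average TCP connect time in milliseconds over several attempts."""
    samples = []
    for _ in range(attempts):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=3):
            pass
        samples.append((time.perf_counter() - start) * 1000)
    return sum(samples) / len(samples)

for host in ["dc-nearby.example.net", "dc-faraway.example.net"]:
    print(f"{host}: {connect_latency_ms(host):.1f} ms")
```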

Other than location, what are additional factors that should be considered when choosing an edge data center?

Romeo: Another key factor for an edge data center has to be connectivity. You can have a great edge data center: maybe it’s a container parked underneath a cell tower, with cool technology and a great edge use case, but if it doesn’t have connectivity, it’s useless.

One of the things that allows us to tell a great edge story is that we have lots of connectivity: lots of carriers and network providers in the majority of our data centers. In some data centers we have access to more than two dozen providers; in others we have only five or six, because those are the only carriers in that market.

Carrier Neutrality in Edge Data Centers

You’re talking about the concept of carrier neutrality, right? It’s something that comes up time and time again in our discussions, and I’m wondering how common this data center feature is.

Romeo: It’s more common than it used to be. Data centers came about when telecom companies got into the business. They had to install huge phone-switching equipment, and then they started getting into ATM and packet-switched networks. They needed a place to land all this network gear, so they built all these central office buildings in metropolitan areas and started to create the building blocks of what we consider the modern data center. These first-generation data centers were owned by carriers, so they were the antithesis of carrier-neutral.

Then companies started building their own data center space. These enterprise data centers were the second wave of adoption. Theoretically, these data centers could have more than one carrier, but there was usually no incentive to bring in multiple carriers. They just brought in whichever carrier the company already had contracts with: AT&T or Verizon or someone like that.

The third wave came when organizations like TierPoint decided to offer data centers as a service, because the economics are much better. No enterprise wants to own its own data center if it doesn’t have to, and we can get better economies of scale. With, say, fifty or so customers in a data center, we can afford a better refresh cycle on our chillers, our generators, and all the other systems used to maintain the facility.

The latest wave of data centers is the hyperscalers: the wholesale data centers selling massive amounts of capacity to people who build out their infrastructure tens of millions of dollars at a time.

These different waves of adoption have spawned different types of data centers, but it wasn’t until the third wave that people started realizing the benefits of being carrier-neutral. TierPoint wanted to attract customers away from the telco data centers or get them out of their on-premises data centers, and some of the features we used to win them over were reduced cross-connect fees and access to additional carriers.

Advice on How to Choose an Edge Data Center

So, what else should people look for when choosing an edge data center?

Romeo: Those are the two big factors that specifically apply to edge data centers, although carrier neutrality matters no matter where your data center is located. Beyond location and carrier neutrality, you also want to consider the things you’d normally look at when assessing any data center. That, of course, includes cyber and physical security. It also includes resistance to natural disasters: if the data center is in a seismically active region or a part of the country prone to flooding, is the building built to withstand those types of events?

You probably also want to look at energy consumption. Granted, you’re no longer paying the electric bills directly, but you are paying them indirectly, so you want to choose a data center operator that is as energy-conscious as possible.

Learn More About the Edge

In the next installment of our interview with Dominic Romeo, we’ll dig deeper into the role hyperscalers like AWS and Azure play in the full edge computing story. Learn more about edge computing with our Strategic Guide to Edge Computing.
