When it comes to choosing a cloud environment, data security has always been one of the top concerns of business and healthcare IT leaders. We expect it to remain near the top in 2019 and beyond, but its position as #1 may be challenged by a relative newcomer to the list: latency.

Response Time Becomes a Critical Factor

We don’t foresee a time when security isn’t a key priority. In fact, cyber threats are greater than ever, but so is our ability to combat them. With one of our toughest cloud challenges mitigated (although far from neutralized), we can now put more focus on other priorities. Latency is one issue that keeps surfacing in our conversations with healthcare IT professionals.

We’re sharing some highlights from a Forrester Research report outlining their predictions for healthcare in 2019. At a high level, Forrester predicts that virtual care, AI (artificial intelligence), and new CX (customer experience) metrics will become high priorities. Each of these predictions fuels our belief that latency will be a top challenge for healthcare IT executives to resolve. Let’s look at each in turn:

Virtual care

Just 14% of doctors and 11% of patients feel they have an adequate amount of time together (The Physicians Foundation). Healthcare providers of every category are under constant pressure to see more patients in a day, while still providing the same quality of care and outcomes. Virtual care, the ability to provide care via technology, offers great promise. For example, a physician might monitor the vital signs of a patient with a chronic illness using wearable technologies that transmit data to the physician’s office, replacing the many routine trips to the clinic that patients with chronic diseases often need to make.

Though this will allow the physician to take on a greater patient load, their time will still be at a premium. Waiting for data to load every time they want to review a patient’s vital signs won’t be acceptable. For that matter, if there is a problem, delays can be life-threatening. In the case of an incident like a stroke or atrial fibrillation, the extra thirty seconds it takes an alert to reach the doctor due to network congestion could mean the difference between life and death.

Artificial intelligence

AI is a core part of the virtual-care equation. Let’s say you’re a new father at home with a sick baby. Prior to virtual care, you typically called a nurse’s hotline and spoke to someone with a general understanding of childhood illnesses. It won’t be long, though, before you’ll be able to speak to an automated assistant so human-like that you might not be able to tell the difference.

This automated assistant will be able to analyze additional data that a nurse typically wouldn’t have in front of them, such as knowledge of a recent outbreak of strep at a nearby daycare center or the family’s history of asthma. The assistant will use this data, plus your answers to its questions, to adjust the flow of the conversation.

To be sure, analyzing data like this will take a level of data interoperability that just isn’t there yet, but the industry is working on it. When it is available, it will also require almost instantaneous response times. To have a conversation that feels natural to the caller, this data will need to be gathered and micro-analyzed in a split second. Latency is not an option.

Customer Experience

There’s already been an element of the customer experience in both of our previous examples. In the virtual care example, the patient isn’t going to want to wait around for data to load any more than the physician is. In the AI example, the new father doesn’t want to hear thirty seconds of silence while the bot he’s talking to downloads data.

Beyond that, there’s the issue of day-to-day online interactions through patient portals and other sites. We’ve become an instant-gratification society, and if your site takes more than a few seconds to load, customer satisfaction suffers.

Edge Computing Reduces Latency

Several factors impact latency, including internet and network traffic congestion, and optimization of your network resources. One of the best ways to reduce latency, however, is to deploy your workloads at the edge, i.e., as close to your end users as possible, instead of in a central data center that might be hundreds if not thousands of miles away.
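To make the distance effect concrete, here’s a minimal Python sketch (the hostnames are hypothetical placeholders) that times the TCP handshake to an endpoint. A handshake requires a full network round trip, so an endpoint at the edge, close to the user, should report a noticeably lower number than a central data center hundreds of miles away.

```python
import socket
import time


def tcp_connect_latency(host: str, port: int = 443, samples: int = 5) -> float:
    """Return the median TCP handshake time to a host, in milliseconds.

    The handshake takes one full round trip, so this is a rough proxy
    for the network latency your end users would experience.
    """
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # create_connection completes once the TCP handshake finishes.
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append((time.perf_counter() - start) * 1000)
    timings.sort()
    return timings[len(timings) // 2]


# Hypothetical endpoints -- compare an edge location to a distant region:
# print(f"edge:    {tcp_connect_latency('edge.example.com'):.1f} ms")
# print(f"central: {tcp_connect_latency('central.example.com'):.1f} ms")
```

Running a comparison like this from your users’ locations is a quick way to quantify how much of your latency budget is pure distance, before you look at congestion or resource tuning.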

Dominic Romeo, senior product manager at TierPoint, wrote a comprehensive post last year on edge data centers – Edge Data Centers: Keeping Up With Consumers and IT. TierPoint’s President and CFO, Mary Meduski, also participated in a panel discussion at SXSW called How Connectivity Will Control Everything We Know. The panelists tackled big topics like the future of IT infrastructure and the edge of the network and cloud. Check out the recording of the full session.

Edge computing is one of the reasons we maintain over 40 data centers located in communities across the United States. Sure, we have data centers in major metro areas like Chicago and New York, but we also have data centers in smaller cities like Little Rock, Omaha, and Spokane. You can see the full list and arrange a tour of any of our data centers on our website.

Read the full healthcare trends report

Finally, be sure to download the Forrester report Predictions 2019: Healthcare for insights on the challenges and opportunities healthcare IT professionals may see in 2019.
