Every day, we create another 2.5 exabytes of data. By 2025, total global data is predicted to reach 155 zettabytes – 155 billion terabytes – and a third of it will require real-time processing. That will demand substantial network bandwidth and data center capacity, shaping how cloud infrastructure evolves.
“Ever-increasing demands for computing performance have taxed traditional IT platforms and data centers alike,” noted Dave McKenney, the director of product management at TierPoint.
In TierPoint and Forrester’s January webinar, 2020 Predictions for Cloud Computing, McKenney and Forrester analyst Abhijit Sunil discussed trends in cloud computing, including the challenge of big data in the cloud and the infrastructure developments aimed at improving data and application performance – namely edge computing, high-performance computing (HPC), and hyperconverged infrastructure (HCI).
Download a copy of the Forrester® Predictions 2020: Cloud Computing report
Organizations’ increased need for storage, bandwidth, and computing power is a major driver of cloud adoption. Companies want to offload the cost of maintaining IT infrastructure while ensuring scalability and performance. Cloud migration decisions are often based on the need to cost-effectively process and store growing amounts of data.
What is data gravity?
Why is data expanding so rapidly? One of the factors, according to Forrester, is “data gravity.” Data gravity is the concept that as a large volume of data is used – analyzed, combined with other data, and updated with new intelligence – that volume expands and becomes more useful, which in turn attracts more usage and more data.
The downside of data gravity is that it eventually becomes impossible to manage and store all of that data in one location. That is especially difficult if the data must be accessed by distributed users over a network, where the cost of bandwidth alone may become prohibitive. Putting big data in the cloud makes sense economically, but the infrastructure of the public cloud isn’t ideal for transmitting large amounts of data: latency inevitably creeps in, which can cause critical problems for latency-sensitive applications.
How can our cloud infrastructure manage this data influx?
Leading cloud infrastructure providers are working to overcome data gravity downsides through high-performance computing (HPC), edge computing, and hyperconverged infrastructure (HCI):
Edge computing
Edge computing moves essential data closer to end users, client applications, and devices. It has become especially important with the growth of IoT networks: IoT devices collect and transmit data back and forth over cloud networks, and latency blips can have potentially serious consequences, as in self-driving cars or real-time weather monitoring. Creating edge nodes along the network perimeter reduces the travel time for IoT data.
“Edge computing is driven by the need to take computing to where the data is generated,” explained Sunil.
Edge computing is also useful for other bandwidth-heavy applications or large stores of content that serve a distributed user base. With edge computing, content can be moved to regional data centers closest to various end-user markets. For instance, a content delivery network (CDN) – an increasingly common edge service – speeds the performance of video-heavy websites or websites with large files.
Edge services also reduce bandwidth costs, noted McKenney: “High-performance network transport over long distances can be very costly. Edge computing for us is about getting the data center closer to the devices and people that generate or use the data – to reduce latency, improve performance, and reduce connectivity costs.”
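The latency savings McKenney describes come largely from shortening the physical path the data travels. A back-of-the-envelope sketch makes the point; all distances and the fiber propagation figure below are illustrative assumptions, not measurements of any particular network.

```python
# Back-of-the-envelope latency estimate: round-trip propagation delay
# over fiber, ignoring routing, queuing, and processing overhead.
# All distances are illustrative assumptions, not measurements.

SPEED_IN_FIBER_KM_PER_MS = 200  # light travels roughly 200 km per ms in fiber

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

distant_region_km = 3000   # e.g., a cross-country cloud region (assumed)
edge_node_km = 100         # e.g., a nearby edge data center (assumed)

print(f"Distant region: {round_trip_ms(distant_region_km):.1f} ms")  # 30.0 ms
print(f"Edge node:      {round_trip_ms(edge_node_km):.1f} ms")       # 1.0 ms
```

Even this simplified model shows why a chatty IoT or CDN workload, which may make many round trips per transaction, benefits from an edge node hundreds rather than thousands of kilometers away.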
Cloud-based High-Performance Computing (HPC)
Cloud-based HPC is another key cloud infrastructure development, said Sunil. Traditionally found in research and scientific settings, high-performance computing is the use of parallel processing for compute- and data-intensive applications. AI and machine learning applications, as well as big data analytics and other compute-intensive workloads, have increased the demand for HPC. For companies that can’t afford a private HPC cluster or supercomputer, cloud-based HPC is a valuable alternative. Forrester says that more than 40% of global infrastructure decision makers at enterprises will run HPC workloads in the cloud in 2020.
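The core idea behind HPC – splitting one large job across many processors working in parallel – can be sketched in a few lines. This is a toy illustration only; the workload (summing squares) stands in for a real compute-intensive kernel, and real HPC clusters use schedulers and frameworks such as MPI rather than a single machine’s process pool.

```python
# Toy parallel-processing sketch: partition a compute-intensive job
# across worker processes. The workload here is illustrative only.
from multiprocessing import Pool

def sum_of_squares(bounds):
    """Compute-heavy kernel, run independently on each chunk."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Partition [0, n) into chunks and process them in parallel."""
    step = n // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else n)
              for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(sum_of_squares, chunks))

if __name__ == "__main__":
    serial = sum_of_squares((0, 1_000_000))
    parallel = parallel_sum_of_squares(1_000_000)
    assert serial == parallel  # same answer, work divided across processes
```

Cloud-based HPC applies the same divide-and-conquer pattern at cluster scale, renting the parallel capacity on demand instead of owning it.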
Hyperconverged infrastructure (HCI)
Hyperconverged infrastructure (HCI) is a third infrastructure option offered by many cloud providers and colocation providers. HCI is a software-defined, integrated unit of processing, storage, networking and virtualization resources. Unlike traditional three-tier infrastructure where storage, computing and networking are separately managed, HCI is a “block” of computing resources that can be quickly put into play, scaled up to respond to a market opportunity or challenge, and managed through a single interface.
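The defining trait of HCI is that capacity grows by adding identical nodes that bundle compute, storage, and networking together, rather than by sizing three tiers separately. A minimal sketch of that scaling model follows; the class names and per-node resource figures are hypothetical, chosen only to illustrate the pattern.

```python
# Minimal sketch of the HCI scaling model: the cluster scales out by
# adding identical nodes, so all resources grow together in one step.
# Node sizes below are hypothetical, not any vendor's specification.
from dataclasses import dataclass

@dataclass(frozen=True)
class HciNode:
    cpu_cores: int = 32      # assumed per-node compute
    storage_tb: int = 24     # assumed per-node storage
    network_gbps: int = 25   # assumed per-node network

class HciCluster:
    def __init__(self):
        self.nodes = []

    def scale_out(self, count=1):
        """Add identical nodes; compute, storage, and networking grow as a unit."""
        self.nodes.extend(HciNode() for _ in range(count))

    @property
    def capacity(self):
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
            "network_gbps": sum(n.network_gbps for n in self.nodes),
        }

cluster = HciCluster()
cluster.scale_out(4)
print(cluster.capacity)  # {'cpu_cores': 128, 'storage_tb': 96, 'network_gbps': 100}
```

The contrast with three-tier infrastructure is that in the traditional model each resource is provisioned and managed independently; in the HCI model, the node is the single unit of both scaling and management.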
Major cloud providers such as Amazon and Microsoft offer HPC, edge computing and HCI, as do many regional and national cloud services. TierPoint, for example, provides edge computing services and content delivery networks, cloud platforms that support HPC and HCI, and a range of consulting services. Leading data center and cloud services providers such as TierPoint are best equipped to help customers navigate multiple infrastructure options and develop an appropriate cloud modernization strategy.
“We help organizations to develop cloud strategies and select the most appropriate hybrid cloud architecture for them,” said McKenney.
Learn more about these cloud infrastructure trends
It’s hard to keep up with ever-changing cloud trends. We recently held a webinar with Forrester to make it easier to understand what’s coming. Watch the webinar, 2020 Predictions for Cloud Computing, to hear all of Forrester’s cloud predictions. Want to talk about your cloud strategy? Contact us today to speak with our experts.