Wednesday, January 17, 2018
Sometimes big changes sneak up on you, especially when you're talking about the future of data storage technology. For example, when exactly did full-on cloud adoption become accepted by all those risk-averse organizations, understaffed IT shops and disbelieving business executives? I'm not complaining, but the needle of cloud acceptance tipped over sometime in the recent past without much ado. It seems everyone has let go of their fear of cloud and hybrid operations as risky propositions. Instead, we've all come to accept the cloud as something that's just done.
Sure, cloud was inevitable, but I'd still like to know why it finally happened now. Maybe it's because IT consumers have come to expect information technology to provide whatever they want on demand. Or maybe it's because everything IT implements on premises now comes labeled as private cloud. Influential companies, such as IBM, Microsoft and Oracle, are happy to help ease folks formerly committed to private infrastructure toward hybrid architectures that happen to use their respective cloud services.
In any case, I'm disappointed I didn't get my invitation to the "cloud finally happened" party. But having missed cloud's big moment, I'm not going to let other obvious yet possibly transformative trends sneak past as they go mainstream with enterprises in 2018. So when it comes to the future of data storage technology, I'll be watching the following:
Containers arose out of a long-standing desire to find a better way to package applications. This year we should see enterprise-class container management reach maturity parity with virtual machine management -- without giving up any of the advantages containers hold over VMs. Expect modern software-defined resources, such as storage, to be delivered mostly in containerized form. Combined with dynamic operational APIs, these resources will deliver highly flexible, programmable infrastructures. This approach should let vendors package applications and their required infrastructure as redeployable units -- blueprinted, or specified in editable and versionable manifest files -- enabling full-environment and even data center-level cloud provisioning. Being able to deploy a data center on demand could completely transform disaster recovery, to name one use case.
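To make the blueprint idea concrete, here's a minimal Python sketch of what a versionable environment manifest might look like. The service names, image references and parameters are hypothetical, and real container platforms each define their own manifest formats; this only illustrates the "editable, versionable, redeployable" property.

```python
import json

# Hypothetical, minimal "blueprint" for a containerized environment:
# the application and the software-defined storage it depends on are
# described together in one editable, versionable manifest.
blueprint = {
    "version": "2018.1",
    "environment": "dr-site",               # illustrative name only
    "services": [
        {
            "name": "orders-app",
            "image": "registry.example.com/orders:1.4.2",
            "replicas": 3,
        },
        {
            "name": "sds-volume-service",    # containerized storage layer
            "image": "registry.example.com/sds:2.0.0",
            "parameters": {"capacity_gb": 500, "replication_factor": 2},
        },
    ],
}

def write_manifest(path: str) -> None:
    """Persist the blueprint so it can be diffed, reviewed and redeployed."""
    with open(path, "w") as f:
        json.dump(blueprint, f, indent=2, sort_keys=True)

if __name__ == "__main__":
    write_manifest("dr-site-manifest.json")
    print("Blueprint written; a provisioning API could redeploy it on demand.")
```

Because the whole environment lives in one file, it can be checked into version control and replayed at a recovery site, which is the disaster recovery angle mentioned above.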
Everyone is talking about AI, but it's machine learning that's slowly permeating just about every facet of IT management. Although there's a lot of hype, it's worth figuring out how and where carefully applied machine learning could add significant value. Most machine learning conceptually amounts to advanced forms of pattern recognition, so think about where using the technology to automatically identify complex patterns would reduce time and effort. I expect the increasing availability of machine learning algorithms to give rise to storage management processes that can learn and adjust operations and settings to optimize workload services, quickly identify and fix the root causes of abnormalities, and broker storage infrastructure and manage large-scale data to minimize cost.
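As a toy illustration of that pattern-recognition framing, the sketch below flags storage latency samples that break from their recent pattern. It's a simple statistical stand-in, not any vendor's algorithm, and the sample data is invented; real management services would use far richer models across many metrics.

```python
from statistics import mean, stdev

def flag_anomalies(latencies_ms, window=12, threshold=3.0):
    """Flag latency samples that deviate sharply from the recent pattern.

    A crude stand-in for the kind of pattern recognition a storage
    management service might apply to spot abnormal behavior early.
    """
    anomalies = []
    for i in range(window, len(latencies_ms)):
        history = latencies_ms[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(latencies_ms[i] - mu) > threshold * sigma:
            anomalies.append((i, latencies_ms[i]))
    return anomalies

# Example: steady ~2 ms latency with one spike the detector should catch.
samples = [2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0, 1.8, 2.1, 2.0, 2.2, 1.9, 9.5, 2.0]
print(flag_anomalies(samples))   # -> [(12, 9.5)]
```

The point isn't the statistics; it's that once abnormal patterns are flagged automatically, remediation and tuning can follow without waiting on a human to stare at dashboards.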
Management as a service (MaaS) is gaining traction when it comes to the future of data storage technology. Already, every storage array seemingly comes with built-in call-home support, replete with management analytics and performance optimization. I predict the interval for most remote vendor management services will quickly drop from today's daily batch uploads to five-minute streaming. I also expect cloud-hosted MaaS offerings to become the way most shops manage their increasingly hybrid architectures, with many starting to shift away from the burdens of on-premises management software. All the big -- and even the small -- management vendors seem to be quickly ramping up MaaS versions of their offerings. For example, this fall, VMware rolled out several cloud management services that are basically online versions of familiar on-premises capabilities.
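Here's a rough sketch of what five-minute streaming telemetry to a cloud-hosted MaaS endpoint could look like. The endpoint URL, array identifier and metric values are placeholders, not any vendor's actual API; the point is simply the shift from daily batch uploads to a short, regular streaming cadence.

```python
import json
import time
import urllib.request

MAAS_ENDPOINT = "https://maas.example.com/api/v1/telemetry"  # hypothetical URL

def collect_metrics():
    """Gather a snapshot of array health; values here are placeholders."""
    return {
        "timestamp": int(time.time()),
        "array_id": "array-042",          # illustrative identifier
        "iops": 18500,
        "read_latency_ms": 1.7,
        "capacity_used_pct": 63.2,
    }

def push_metrics(snapshot):
    """Send one snapshot to the cloud-hosted management service."""
    req = urllib.request.Request(
        MAAS_ENDPOINT,
        data=json.dumps(snapshot).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.status

if __name__ == "__main__":
    while True:
        push_metrics(collect_metrics())
        time.sleep(300)   # five-minute streaming interval instead of daily batch
```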
More storage arrays now have in-cloud equivalents that can be easily replicated and failed over to if needed. Hewlett Packard Enterprise Cloud Volumes (Nimble); IBM Spectrum Virtualize; and Oracle cloud storage, which uses Oracle ZFS Storage Appliance internally, are a few notable examples. Requiring in-cloud storage to run the same or a similar storage OS as on-premises storage to achieve reliable hybrid operations seems counterintuitive, though. After all, a main point of a public cloud is that the end user shouldn't have to care, and in most cases can't even know, whether the underlying infrastructure service is a physical machine, a virtual image, a temporary container service or something else.
However, there can be a lot of proprietary technology involved in optimizing complex, distributed storage activities, such as remote replication, delta snapshot syncing, metadata management and indexing, and global policy enforcement. When it comes to hybrid storage operations, there simply are no standards. Even the widely supported Amazon Web Services Simple Storage Service API for object storage isn't actually a standard. I predict cloud-side storage wars will heat up, and we'll see cloud storage sticker shock when organizations realize they have to pay both the storage vendor for an in-cloud instance and the cloud service provider for the underlying platform.
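To see where that sticker shock comes from, here's a back-of-the-envelope sketch of the double payment. Every rate below is hypothetical and real pricing varies widely by vendor, region and discount; the sketch just shows that the vendor license, the provider's storage and the provider's compute all stack on the same bill.

```python
def hybrid_cloud_storage_cost(tb_stored, vendor_license_per_tb,
                              cloud_platform_per_tb,
                              instance_hours, instance_hourly_rate):
    """Rough monthly cost of running a vendor's storage OS in a public cloud:
    you pay the storage vendor for the in-cloud instance *and* the cloud
    provider for the underlying platform. All rates are hypothetical."""
    vendor_cost = tb_stored * vendor_license_per_tb
    platform_storage_cost = tb_stored * cloud_platform_per_tb
    compute_cost = instance_hours * instance_hourly_rate
    return vendor_cost + platform_storage_cost + compute_cost

# Illustrative numbers only -- not real list prices.
print(hybrid_cloud_storage_cost(
    tb_stored=100,
    vendor_license_per_tb=50.0,      # storage vendor's in-cloud license
    cloud_platform_per_tb=23.0,      # cloud provider's block/object storage
    instance_hours=730,              # one instance running all month
    instance_hourly_rate=1.50,       # cloud provider's compute charge
))  # -> 8395.0
```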
Despite the hype, nonvolatile memory express (NVMe) isn't going to rock the storage world, given what I heard at VMworld and other fall shows. Yes, it could provide an incremental performance boost for those critical workloads that can never get enough, but it won't be anywhere near as disruptive to the future of data storage technology as NAND flash was to HDDs. Meanwhile, NVMe support will likely show up in most array lineups in 2018, eliminating any particular storage vendor advantage.
On the other hand, a bit farther out than 2018, expect new computing architectures purpose-built around storage-class memory (SCM). Intel's initial releases of its "storage" type of SCM -- 3D XPoint deployed on PCIe cards and accessed using NVMe -- could deliver a big performance boost. But I expect an even faster "memory" type of SCM, deployed adjacent to dynamic RAM, to be far more disruptive.
How did last year go by so fast? I don't really know, but I've got my seatbelt fastened for what looks to be an even faster year ahead, speeding into the future of data storage technology.
By: DocMemory Copyright © 2023 CST, Inc. All Rights Reserved