8 enterprise storage trends to watch

Advanced storage technologies, including DNA storage and immutable back-ups, are on the horizon, but some are further from mainstream adoption than others.

The data storage industry is experiencing a major transformation driven by multiple factors, including the need for security, speed, efficiency, and lower costs.

IT research firm Gartner recently predicted 23-times growth in shipped petabytes through 2030, a trajectory that promises to radically reshape and redefine current data centre and IT operations. To stay on top of the storage game, keep a close eye on these eight trends.

1. DNA storage

DNA, used as a data storage medium, promises far higher capacity and a more resilient storage environment than traditional storage architectures. It stores data at the molecular level, archiving information directly into DNA molecules.

"The advantages of DNA-based data storage are its density and stability," says Nick Heudecker, a former Gartner analyst. "One gram of DNA can store approximately 215 petabytes of data with a minimum life span of 500 years." Just don't leave the media out in the sun, however, since UV breaks down DNA.

This is a long-term trend, however. While DNA storage research is advancing rapidly, DNA media isn't likely to become mainstream for quite some time. There's currently no firm timeline for commercial availability, although some optimists hope to see products by the decade's end.

"Current DNA sequencing and synthesis technologies are too expensive and slow to compete with traditional [storage] infrastructure," Heudecker says. Access latency remains high, currently measured in minutes to hours with a maximum write throughput of kilobits per second. 

"A DNA drive competitive with tape archival must support a write throughput of gigabits per second," he notes. Achieving such speed would require DNA synthesis, the writing process, to become six orders of magnitude faster. DNA sequencing, the reading process, must become two to three times faster.

Even if access latency and throughput challenges can be successfully resolved, there's still a steep cost obstacle to overcome. "Tape storage media costs roughly between $16 and $20 per terabyte," Heudecker says. DNA synthesis and sequencing costs hover in the area of $800 million per terabyte.
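
To put those figures in perspective, a quick back-of-the-envelope calculation, using only the numbers Heudecker cites above, shows the size of both gaps. A minimal sketch in Python:

    import math

    # Write throughput today is measured in kilobits per second; a
    # tape-competitive archive needs gigabits per second (figures cited above).
    dna_write_bps = 1e3       # ~1 kilobit per second
    target_write_bps = 1e9    # ~1 gigabit per second

    gap = math.log10(target_write_bps / dna_write_bps)
    print(f"Write throughput gap: ~{gap:.0f} orders of magnitude")   # ~6

    # Cost gap: ~$16-$20 per TB for tape vs. ~$800 million per TB for DNA.
    tape_cost_per_tb = 18     # midpoint of the $16-$20 range
    dna_cost_per_tb = 800e6
    print(f"Cost gap: ~{dna_cost_per_tb / tape_cost_per_tb:,.0f}x")  # ~44 million times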

2. Storage security

All enterprises are paying close attention to network security, but many neglect to fully secure their data, both at rest and in motion. 

"Today, many organisations share data stores between their on-premise data centres and public or private cloud environments," says Cindy LaChapelle, principal consultant for technology research and advisory firm ISG. 

"In an age of ransomware, it's important to also invest in creating data backups that are air-gapped so data copies are inaccessible in the event of a major breach." Air-gapping means using a freestanding computer that's not attached to any type of network.

Scott Reder, principal storage specialist at digital transformation consultancy AHEAD, says he sees a growing interest in adding and improving cyber resiliency capabilities. 

Write-once, read-many (WORM) technology, developed years ago to address the needs of financial organisations complying with U.S. Securities and Exchange Commission regulations, is now being adopted by enterprises in healthcare and a variety of other fields to prevent data alteration. As a result, tools such as NetApp SnapLock and Dell OneFS SmartLock have found new life due to growing cyber threats, Reder says.

For primary file/NAS storage protection, real-time analytics capabilities provided by products such as Superna Ransomware Defender for Dell OneFS and NetApp Cloud Insights with Cloud Secure for ONTAP are available, Reder says. For block storage users, multi-factor and/or protected snapshots are available to protect critical data.

As storage security tools mature, organisations are proactively deploying storage products with baked-in security capabilities that complement broader enterprise security initiatives, such as zero-trust network access (ZTNA), to protect enterprise data.

3. SSD data reduction

Data reduction is the process of reducing the amount of capacity needed to store data. The technology can boost storage efficiency and lower costs. Data reduction techniques, such as compression and deduplication, have been applied to many types of storage systems, but aren't yet widely available for solid-state drives (SSDs).

To ensure reliability, compression needs to be lossless, a factor that has challenged SSD makers. 

"Many of the all-flash array storage manufacturers have options for in-line compression, but the technology is often proprietary to the storage vendor," LaChapelle says. This situation is likely to improve in the near future, she notes, as SSD vendors work to deliver maximum capacity at the lowest possible price.

Additionally, beyond compression, SSD vendors are now looking to the PCI Express 4.0 specification, which doubles per-lane bandwidth over PCIe 3.0, for faster read and write speeds.

4. Deeper public cloud insights

Mapping and modelling data usage across the entire enterprise application landscape is critical to understanding how storage in the public cloud will ultimately be leveraged.

Since public cloud storage solutions typically charge for ingress and egress, as well as data transit between zones and regions, being able to predict the degree of data movement is critical to managing public cloud storage costs and effectiveness, LaChapelle notes.

Unforeseen chatter between on-premise data centres and public cloud data storage can create performance issues due to latency. "It's best to fully understand the implications of this before applications with co-dependencies are split between public cloud and on-premise environments," she advises.
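
A simple cost model illustrates LaChapelle's point. The per-gigabyte rates below are hypothetical placeholders, not any provider's actual pricing; the shape of the calculation is what matters.

    # Hypothetical per-GB rates -- check your provider's actual rate card.
    EGRESS_PER_GB = 0.09        # data leaving the cloud
    INTER_REGION_PER_GB = 0.02  # data replicated between regions

    def monthly_transfer_cost(egress_gb: float, inter_region_gb: float) -> float:
        """Estimate monthly data-movement charges for one workload."""
        return egress_gb * EGRESS_PER_GB + inter_region_gb * INTER_REGION_PER_GB

    # A chatty app pulling 5 TB back on-premise and replicating 20 TB across regions:
    print(f"${monthly_transfer_cost(5_000, 20_000):,.2f} per month")  # $850.00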

Storage vendors have been boosting their analytics capabilities. HPE InfoSight, NetApp ActiveIQ, and Pure Storage Pure1 Meta are among the tools that enterprises can use to gain more comprehensive storage insights.

5. Object storage

The storage world is going through a change spurred by cloud-native applications, including databases, analytics, data lakes, artificial intelligence, and machine learning technologies. "These applications are driving the adoption of object storage as their primary storage," says David Boland, vice president of cloud strategy at cloud storage provider Wasabi.

Boland notes that there are three main types of storage: object, block, and file. "Object storage is the only one that delivers low cost and high performance at an exabyte scale," he observes. 

Boland adds that a recent IDC survey showed that 80 per cent of respondents believe that object storage can support their top IT initiatives, including Internet of Things (IoT), reporting, and analytics.

Object storage has been widely available since the early 2000s, but only within the past couple of years has a combination of NVMe SSD performance improvements and significant price declines made it economically feasible to deploy at large scale, Boland says.

Performance is no longer an object storage drawback. Early on, object storage tended to be slower than file or block approaches when locating data. That’s no longer the case. High-performance metadata databases, database engines, and NVMe SSDs deliver the performance needed for very active structured-content applications like databases, Boland explains.
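
Most object stores, including Wasabi and the open-source systems mentioned later, expose an S3-compatible API, so working with objects looks roughly like the boto3 sketch below. The endpoint URL, credentials, and bucket name are illustrative assumptions, not real values.

    import boto3

    # Placeholder endpoint and credentials -- substitute your own.
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.example-object-store.com",
        aws_access_key_id="ACCESS_KEY",
        aws_secret_access_key="SECRET_KEY",
    )

    # Objects are flat key/value blobs plus metadata -- no directory hierarchy.
    s3.put_object(Bucket="analytics-lake",
                  Key="events/2023/01/batch-001.json",
                  Body=b'{"sensor": "a1", "reading": 21.4}')

    obj = s3.get_object(Bucket="analytics-lake",
                        Key="events/2023/01/batch-001.json")
    print(obj["Body"].read())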

6. Immutable back-ups

Immutable backup technology is attracting the interest of a growing number of enterprises, particularly financial and legal organisations, and for a very good reason. 

"Immutable means 'cannot be changed'," says Chris Karounos, SAN administrator at IT reseller SHI International. "An immutable back-up is a way of protecting data that ensures the data is fixed, unchangeable, and can never be deleted, encrypted, or modified," he explains.

Immutable storage can be applied to disk, SSD, and tape media, as well as cloud storage. It is also easy and convenient to use: the user simply creates a back-up file that incorporates the desired immutability policy.
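
As one concrete example of such a policy, S3-compatible object stores offer "object lock", which enforces write-once, read-many semantics for a set retention period. A minimal boto3 sketch, assuming a hypothetical bucket that was created with object lock enabled:

    from datetime import datetime, timedelta, timezone

    import boto3

    s3 = boto3.client("s3")  # endpoint and credentials configured elsewhere

    # In COMPLIANCE mode the copy cannot be modified or deleted until the
    # retention date passes -- not even by an administrator.
    with open("nightly.dump", "rb") as backup:
        s3.put_object(
            Bucket="backups-immutable",            # hypothetical bucket
            Key="db/nightly-2023-06-01.dump",
            Body=backup,
            ObjectLockMode="COMPLIANCE",
            ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=90),
        )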

"Immutable back-ups represent the only way to be 100 per cent protected from any sort of erasure or change in back-ups," Karounos says. "In an increasingly fast-paced business environment, where threats are constantly evolving, immutable back-ups are game savers."

7. Time-series database technology

A time-series database (TSDB) is designed to support high-speed data reads and writes. Newer TSDBs unlock new levels of flexibility by building on existing object storage solutions, says Jesse White, CTO at open source network monitoring and management platform provider OpenNMS Group.

"Specifically, the storage layouts and indices in these TSDBs have been cleverly designed to take advantage of the scalability, resiliency, and cheap costs associated with object storage while mitigating latency impacts."

TSDBs running on object storage are targeted toward enterprises, managed service providers, and other organisations collecting large volumes of time series data for observability and/or analytical purposes.

Stable builds of TSDBs that can take advantage of object storage, such as Cortex, Mimir, and InfluxDB IOx, are readily available. "The object storage solutions they depend on are pervasive across all major cloud providers, and open-source solutions such as MinIO and Ceph provide compatible APIs," White notes.

White reports that while TSDBs leveraging object storage tend to support many APIs, object storage APIs are not yet standardised. "Applications may need to adapt to the solution deployed," he adds.
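
The layout idea White describes can be sketched schematically: samples are buffered, then flushed as compressed, time-partitioned chunks to an object store, so a query fetches only the chunks overlapping the requested window. The following is a simplified illustration of that pattern against an S3-compatible endpoint (the MinIO address is hypothetical), not how Cortex, Mimir, or InfluxDB IOx is actually implemented.

    import json
    import zlib

    import boto3

    # Hypothetical S3-compatible endpoint, e.g. a MinIO deployment.
    s3 = boto3.client("s3", endpoint_url="https://minio.example.internal:9000")

    def flush_chunk(metric: str, day: str, hour: int, samples: list) -> str:
        """Write one hour of samples as a single compressed object.

        Time-partitioned keys mean a query for a given window only has to
        fetch the chunks whose keys fall inside that window."""
        key = f"tsdb/{metric}/{day}/{hour:02d}.chunk"
        s3.put_object(Bucket="tsdb-chunks", Key=key,
                      Body=zlib.compress(json.dumps(samples).encode()))
        return key

    flush_chunk("cpu_usage", "2023-06-01", 14,
                [(1685628000, 0.42), (1685628015, 0.40)])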

8. Storage minimalism

Tong Zhang, a professor in the Electrical, Computer, and Systems Engineering Department at Rensselaer Polytechnic Institute, believes that the hottest trend in storage is the need for less storage.

The idea that storage is cheap, so enterprises can simply keep everything, no longer holds, says Zhang, who's also the chief scientist at storage technology firm ScaleFlux. "The compound costs of storage are now taking their toll," he notes.

Zhang believes that data is accumulating faster than enterprises can deploy the data centre architecture needed to hold it.

"We need to put our energy into becoming efficient, and there are several strategies that can be employed in concert, including metadata processing to reduce the payload, pre-filtering to reduce network congestion, transparent compression features embedded into drives, and increasing capacity density without burdening the CPU," he says.

