Introduction 

Artificial intelligence (AI) and security dominate today's technology conversations, yet enterprise storage remains a critical focal point for businesses striving to manage their growing data needs efficiently and securely.

Modernizing data storage is more crucial than ever: outdated systems can hinder performance, increase costs and expose vulnerabilities. This article explores the current enterprise storage landscape, highlighting key areas such as block storage, unstructured data, software-defined solutions, the role of AI and the impact of the cloud on storage decisions. By understanding these areas, enterprises can better navigate the complexities of data management and leverage innovative storage solutions to remain competitive.

Block storage  

VMware, now 'VMware by Broadcom'  

Love it or hate it, Broadcom's acquisition of VMware and the subsequent price increases were a huge topic in 2024 across our customer base. Even though VMware is the de facto standard for hypervisors and virtual machines (VMs), many customers have started exploring alternative hypervisor platforms and assessing their feasibility. As you can imagine, storage plays a critical part in your applications, data and the virtual machines hosting them. While vSAN was still a strong option going into 2024, uncertainty around rising costs has stalled many hyper-converged designs, and momentum seems to have shifted back to external storage arrays that don't lock you into a single OEM's technology.

Container storage needs are on the rise 

As enterprises continue modernizing their applications and moving to containers, solutions built on the KubeVirt project have seen an influx of customer interest as a stopgap on the way to their end-state goal. While most container workloads don't require persistent storage, running VMs within a Kubernetes environment does. Interest in Container Native Storage (CNS) and Container Storage Interface (CSI) drivers rose in 2024, and we've created a briefing to educate those looking to learn more.
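For teams evaluating CSI-backed storage under KubeVirt, the request ultimately surfaces as a Kubernetes PersistentVolumeClaim. The minimal sketch below, using the Kubernetes Python client, shows what that provisioning call can look like; the StorageClass name, namespace and size are placeholders, and KubeVirt deployments typically layer CDI DataVolumes on top of claims like this.

```python
from kubernetes import client, config

# Load credentials from the local kubeconfig (in-cluster config also works).
config.load_kube_config()
core_v1 = client.CoreV1Api()

# A raw block-mode claim sized for a VM disk; "fast-block-csi" is a
# placeholder StorageClass that would be backed by the array's CSI driver.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="rhel9-vm-disk"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],  # RWX enables VM live migration
        volume_mode="Block",
        storage_class_name="fast-block-csi",
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

core_v1.create_namespaced_persistent_volume_claim(namespace="vms", body=pvc)
```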

Storage media and capacity 

Flash-based media remains the preferred choice, with QLC increasingly selected for workloads that prioritize capacity over performance. Although QLC is gradually replacing spinning drives, it will take considerable time before spinning media is completely phased out, as the majority of the world's data still resides on spinning hard drives. Flash-based media, with per-drive capacities of 60TB, 75TB and 150TB, significantly surpasses spinning drives, which top out around 22TB. Flash also consumes far less power, and together these factors have led to a noticeable decline in spinning drive sales across our customer base.

Security for block storage 

Our clients are evolving their defensive strategies to combat ransomware and other cyber attacks. In the data storage realm, third-party off-array tools have long been used to monitor activity patterns, detect suspicious behavior and either block files from being written or cut off malicious users. Storage OEMs have implemented assistive technologies in two main ways: 

  • On-array behavior monitoring: All processing is performed by the array itself.
  • Online SaaS portal telemetry monitoring: Telemetry is streamed to the OEM's AIOps system for processing.

Each vendor gives its AIOps portal a different name: Pure1 AIOps, NetApp BlueXP and Dell APEX AIOps Observation (formerly CloudIQ), to name a few. The OEM owns, manages and upgrades the portal, which takes in streaming telemetry from deployed systems and provides monitoring, alerting and trending for those systems.   

Both approaches are effective and achieve the same goal. For file data, files must be scanned before being stored; more details can be found in the 'Security for your NAS' section. For block data, the array does not understand what is being written, so AIOps (whether on-array or in the cloud) monitors for unusual write patterns and alerts rather than taking definitive action to stop workloads. Example triggers include a typically read-heavy workload suddenly hitting 100 percent writes, or a workload abruptly writing a large amount of unreducible data.
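As a rough illustration of the kind of heuristic such monitoring might apply (not any vendor's actual algorithm), the sketch below compares a telemetry sample against a workload's baseline and flags both trigger examples; the field names and thresholds are assumptions.

```python
def write_pattern_alerts(sample, baseline, write_jump=0.5, reduction_drop=0.4):
    """Return alerts when a telemetry sample deviates sharply from the
    workload's baseline. Thresholds and field names are illustrative."""
    total_io = max(sample["reads"] + sample["writes"], 1)
    write_ratio = sample["writes"] / total_io

    alerts = []
    if write_ratio - baseline["write_ratio"] > write_jump:
        alerts.append("Read-heavy workload is suddenly write-dominated.")
    if baseline["reduction_ratio"] - sample["reduction_ratio"] > reduction_drop:
        alerts.append("Workload is writing largely unreducible (possibly encrypted) data.")
    return alerts


# Example: a workload that normally reads 80 percent of the time and reduces 3:1.
baseline = {"write_ratio": 0.2, "reduction_ratio": 3.0}
sample = {"reads": 50, "writes": 950, "reduction_ratio": 1.05}
print(write_pattern_alerts(sample, baseline))
```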

Additionally, AIOps portals perform cybersecurity health checks, identifying configuration gaps and providing remediation recommendations. These gaps could include default passwords, missing iSCSI CHAP or unconfigured remote syslog. These recommendations aim to minimize the systems' attack surfaces. 
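A minimal sketch of what such a health check might look for is below; the configuration field names are assumptions rather than any particular OEM's schema.

```python
def cyber_health_check(array_config):
    """Flag common hardening gaps and suggest remediation.
    Field names are illustrative, not a specific OEM's configuration schema."""
    recommendations = []
    if array_config.get("admin_password") in (None, "admin", "password"):
        recommendations.append("Replace the default administrator password.")
    if not array_config.get("iscsi_chap_enabled", False):
        recommendations.append("Enable iSCSI CHAP authentication.")
    if not array_config.get("remote_syslog_target"):
        recommendations.append("Configure a remote syslog target for off-array audit logs.")
    return recommendations
```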

Vendors also continue to release indelible snap features. Most arrays now offer immutable snaps that cannot be changed, unlike traditional snaps that can be deleted by bad actors. Indelible snaps go further: their locked retention periods are immune to NTP clock-skew attacks, and the snaps are not deleted even when array capacity exceeds predefined thresholds. They are typically implemented as part of a schedule or policy that pairs more frequent immutable copies with less frequent but longer-retained indelible snaps.
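As a simple illustration of that pairing (a generic structure, not any array's actual policy language), such a schedule might look like this:

```python
from dataclasses import dataclass

@dataclass
class SnapshotRule:
    interval_hours: int   # how often a copy is taken
    retention_days: int   # how long it is kept
    indelible: bool       # locked retention; cannot be shortened or deleted early

# Frequent immutable copies paired with less frequent, longer-retained indelible snaps.
policy = [
    SnapshotRule(interval_hours=1, retention_days=3, indelible=False),
    SnapshotRule(interval_hours=24, retention_days=30, indelible=True),
]
```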

These capabilities should be components of a defense-in-depth strategy, not the entire strategy by themselves. Other good practices, such as endpoint monitoring, firewall hygiene and SIEM, should be part of the plan. For a robust defense, a security organization should drive this overall plan in cooperation with infrastructure groups like storage.

File storage  

NAS hybrid cloud connectivity 

As enterprises continue to build out their hybrid clouds, many are finding that a NAS platform with cloud connectivity can combine the benefits of traditional on-premises NAS with cloud services. Public clouds are integrated for purposes such as backups, tiering, disaster recovery or extending primary storage. While this type of setup leverages the public cloud and its services, it solves different challenges than a global file system (GFS) solution, which may also leverage cloud services.

Global file system (GFS) solutions are solving problems 

The demands of edge computing, along with the needs of distributed applications and users, are driving the necessity for global file data sharing. For many of our clients, relying on a few consolidated data centers is no longer a viable solution. Instead, they are seeking to modernize their existing "big iron" NAS solutions with enhanced functionality and optimization. 

Both traditional NAS and GFS offerings have unique advantages and challenges. NAS with cloud connectivity is ideal for organizations seeking to take advantage of cloud features offered with their on-premises storage, providing robust support and integration from established vendors. GFS offers a unified and flexible storage solution with potential cost efficiencies and a single view of file data across the environment, making it a great replacement for Microsoft file servers. GFS solutions also fit well into a cybersecurity framework and can eliminate the need for traditional backups or separate disaster recovery solutions by simplifying and optimizing existing data center environments.

Often, both NAS and GFS solutions are deployed side by side, targeting different use cases and performance needs to provide the utmost reliability, availability and scalability.

Security for your NAS 

As mentioned in the 'Security for block storage' section, autonomous ransomware protection has moved ahead of most other file service requirements. While storage-based ransomware protections are essential, they should be viewed as the last line of defense within a comprehensive data protection strategy.

Adopting a multilayered approach to security is crucial. This includes securing identities, physical locations, networks, endpoints and applications, all of which are vital components of your overall security posture. Autonomous ransomware protection can also play a critical role in a holistic data security strategy, helping to protect, detect and recover from ransomware attacks.  

Object storage  

Seeking performance in on-premises object storage 

We have seen a large uptick in requests for all-flash object solution testing, with a focus on both performance and power consumption. In fact, nearly all of the solution "bake-offs" over the last couple of years have aimed to establish a performance profile and compare it against competing solutions.

Many of WWT's Fortune 100 clients are exploring private cloud object solutions for various needs, ranging from archival to performance workloads. However, most clients have a primary use case with specific features and performance requirements that guide their decision-making, with performance often being the key selection criterion. Over the past year, there has been substantial activity in the financial services sector, where performance remains a critical concern for customers.

Object storage is increasingly being considered for integration with AI use cases and High-Performance Computing (HPC) solutions as part of customers' overall AI and HPC strategy. It is expected to play a significant role in constructing enterprise data lakes and leveraging the rich metadata features that can be associated with objects. 

New use cases for object storage 

With the upward tick in global file systems (GFS), object storage works behind the scenes, providing the buckets, policies and versioning that GFS relies on to operate and deliver services to clients and end users. Depending on performance needs, regulations and requirements, GFS can use on-premises or cloud-based S3 providers to house the data.
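A small boto3 sketch of the bucket and versioning setup a GFS back end typically depends on is shown below; the endpoint URL, credentials and bucket name are placeholders, and each GFS vendor documents its own bucket requirements.

```python
import boto3

# Point boto3 at an on-premises, S3-compatible endpoint (values are placeholders).
s3 = boto3.client(
    "s3",
    endpoint_url="https://objectstore.example.internal",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Create the backing bucket and enable versioning so the file system
# can keep prior object versions for rollback and protection.
s3.create_bucket(Bucket="gfs-backend")
s3.put_bucket_versioning(
    Bucket="gfs-backend",
    VersioningConfiguration={"Status": "Enabled"},
)
```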

Another factor driving the growth in object storage sales is the shift of data protection and archive solutions from NAS to object storage as their external target. Moving to the object protocol reduces the need for multiple mount points and load balancers, instead utilizing S3 protocol features that simplify deployment and configuration.

Storage for AI & hybrid cloud use cases 

The old and new in AI storage  

The landscape of storage for AI is evolving rapidly. Initially, clients deploying AI solutions in their data centers relied on the traditional high-performance storage solutions used in HPC environments, and the OEMs that first achieved NVIDIA Enterprise Reference Architecture (RA) status have been leading the way in GPU-based AI solutions for years. However, these traditional high-performance storage solutions are often costly and require advanced skills to deploy and manage. It was inevitable that alternatives would emerge.

Recently, new players have emerged, offering storage solutions designed around AI use cases. For example, multi-protocol access to data (S3, NFS and SMB) allows for more efficient data movement in and out of AI environments, expanding the usability of the data platform. Integrated retrieval-augmented generation (RAG) capabilities are becoming available, helping clients reduce deployment time for RAG-based solutions. Additionally, software-defined storage continues to offer deployment flexibility, enabling clients to design the ideal storage platform both on-premises and in the cloud. For more, check out this article on software-defined storage in 2024.

Customers are also focusing on other criteria, such as rack density, ease of deployment, power per usable terabyte, data tiering capabilities, multitenancy and data governance features. There is no "silver bullet" when it comes to storage for AI, and many factors must be considered when selecting the ideal solution. To learn more, check out this High-Performance Storage for AI Learning Path.

Impact of cloud on storage decisions 

Many users of storage infrastructure are seeking an experience similar to what they have in the public cloud: an operational model that lets them provision and manage storage on demand and in real time, with a selection of storage service levels to choose from. They also want access to storage in various locations, including traditional data centers, cloud-adjacent facilities, public clouds and edge locations. To meet these requirements, many storage systems now expose robust APIs (application programming interfaces) that make them easy to automate and manage.
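As an illustration of that API-driven operational model, the sketch below provisions a volume against a hypothetical storage REST endpoint; the URL, payload fields and authentication scheme are placeholders rather than any vendor's actual API.

```python
import requests

# Hypothetical storage-array REST API; real systems expose similar
# volume-provisioning endpoints, but these values are placeholders.
API = "https://array.example.internal/api/v1"

session = requests.Session()
session.headers.update({"Authorization": "Bearer <token>"})

# Request a 512 GiB volume at a "gold" service level.
resp = session.post(
    f"{API}/volumes",
    json={"name": "app01-data", "size_gib": 512, "service_level": "gold"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```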

With hybrid cloud and multi-cloud storage requirements, enterprises also seek data mobility. One option is global file access (see the section on GFS solutions above). While this is a great solution for certain use cases, it doesn't fit all scenarios; for low-latency, high-volume or write-heavy requirements, it makes more sense to move the data or keep it local to where it is produced. Other options now exist that use RDMA over WAN to accelerate file transfer and meet data movement requirements.

It is important to weigh the cost of data migration against its benefits, especially when moving data to and from public clouds. Another way to mitigate the cost of moving data is to house it in cloud-adjacent facilities, which allows enterprises to access the same data from multiple public clouds at a reduced cost.

Conclusion 

The enterprise storage space is undergoing a dynamic transformation, driven by technological advancements and evolving business needs. Every discipline within the storage space is transforming in some way, and these key areas are shaping the future of data management. By staying abreast of emerging trends and adopting innovative storage solutions, businesses can better manage their data, improve operational efficiency and gain a competitive edge in the digital age.

At WWT, we can help you modernize your data center and data storage solutions by leveraging advanced storage technologies and robust infrastructure. Our approach integrates well with hybrid cloud environments and new IT services, forming the foundation of a modern data center. By focusing on three pillars — storage, cloud and automation — we are helping businesses evolve their storage from merely accruing capacity to capitalizing on opportunities. This includes managing complex environments, efficiently migrating applications to the cloud and optimizing performance through shared storage pools.  

Additionally, our collaboration with partners ensures that organizations can automate their IT infrastructure, draw valuable data insights and achieve business outcomes related to innovation, customer experience, security and reliability. Our experience with validation testing of OEM gear, along with our ability to provide unbiased evaluations based on a modified Kepner-Tregoe decision-making process, is a large draw for our clients. Through tailored solutions and expert guidance, we enable businesses to solve current and future challenges, gain a competitive edge and achieve their goals.

Unlock the 2025 data center priorities roadmap.