
Microsoft Azure Local: Technical deep dive - Storage Spaces Direct (Series)

We are recognized experts in delivering Microsoft Azure Local solutions, combining deep technical knowledge with extensive hands-on experience. Our Azure Local experts consistently guide organizations toward robust, scalable, and high-performing infrastructure solutions tailored to their business needs.

This article is part of an ongoing series designed to offer in-depth insights, practical guidance, and expert advice on Microsoft Azure Local technologies. Stay tuned as we continue to explore best practices, advanced deployment scenarios, and valuable lessons learned from real-world implementations.

(Figure: Azure Local reference design)

Introduction

Storage Spaces Direct in the latest Windows Server releases is a mature, robust technology that encapsulates complex distributed storage logic into a mostly self-managing system. It uses the drives across all nodes in tandem, synchronizing data blocks over SMB3 and RDMA, to present unified volumes that remain available even during hardware failures. Its resiliency options (mirroring, parity, or combinations of both) let administrators balance performance against capacity, while innovations like Local Reconstruction Codes and mirror-accelerated parity ensure that even large clusters can use erasure coding with good performance. S2D’s deep integration with ReFS gives it the ability to detect and correct data corruption transparently, an essential feature for long-lived data integrity. The storage pool architecture, with slabs distributed widely across drives, yields not only high performance (through massive parallelism) but also faster rebuilds and rebalancing when scaling out.

Overview of Storage Spaces Direct (aka S2D)

Microsoft Storage Spaces Direct (S2D) is a software-defined storage feature in Windows Server (Datacenter edition) that groups local drives from multiple servers into a highly available storage cluster. By leveraging Windows Server Failover Clustering and a new virtual storage fabric called the Software Storage Bus, S2D allows all servers in the cluster to see each other’s local drives and use them as a single shared storage pool. This eliminates the need for traditional shared disks or SAN hardware – standard Ethernet networking is used for all storage communication. The figure below illustrates the S2D architecture stack, from physical drives up through the clustered volumes and how they integrate into Hyper-V or file sharing environments.

S2D architecture stack: local drives on each server connect via the Software Storage Bus into a single Storage Pool, on which resilient virtual disks (volumes) are created. These volumes are formatted with ReFS and exposed as Cluster Shared Volumes (accessible as C:\ClusterStorage\... on each node). In converged deployments, volumes can also be served over SMB3 file shares, whereas in hyperconverged deployments, VMs run directly on the cluster’s local CSV volumes.

Under the hood, S2D builds on proven technologies such as SMB3 (with SMB Direct RDMA networking), Failover Clustering, Cluster Shared Volumes (CSVFS), and Storage Spaces, combined with the new Software Storage Bus to glue it all together. A typical S2D cluster can range from 2 up to 16 nodes and can aggregate hundreds of drives (up to 416 per cluster) into storage volumes. If a node or drive fails, the cluster’s built-in fault tolerance keeps data online by using copies on other nodes, and the system will automatically “heal” itself by rebuilding missing data elsewhere. S2D thus enables creation of highly available, scalable storage using only direct-attached disks in each server, with performance accelerated by caching on faster media and network optimizations like RDMA. The following sections provide a deep dive into how S2D uses and organizes drives across nodes, maintains synchronization of data, and ensures resiliency, performance, and integrity.

Storage pool architecture and drive organization across S2D nodes

When S2D is enabled (e.g. via Enable-ClusterS2D), the system automatically creates a single storage pool that includes all eligible drives in all nodes of the cluster. It is recommended to use one pool per cluster, simplifying management. Each server contributes its local drives (SATA, SAS, NVMe, or even persistent memory disks) to this pool, and the Software Storage Bus makes these drives accessible cluster-wide. In essence, the cluster’s drives function as one big disk pool, replacing traditional shared SAS or Fibre Channel connectivity with a software-defined fabric. All physical drives are categorized by S2D into roles: the fastest devices (e.g. NVMe or SSD) may be designated for caching (if a mix of media is present), while the rest provide capacity for the main storage space. S2D requires at least two cache devices (usually SSD/NVMe) per server and additional capacity drives (SSD or HDD), ensuring that each node has a mix of media for performance and capacity. If only a single type of drive is present (all-flash), S2D can disable the cache tier and use all devices for both performance and capacity as needed.
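As a minimal sketch (the pool name is illustrative; Enable-ClusterS2D is an alias of the cmdlet shown), enabling S2D on an existing failover cluster and inspecting how the drives were claimed might look like this:

    # Enable Storage Spaces Direct on the current failover cluster.
    # This claims all eligible local drives and creates a single storage pool.
    Enable-ClusterStorageSpacesDirect -PoolFriendlyName "S2D Pool" -Confirm:$false

    # Inspect the pool that was created automatically.
    Get-StoragePool -FriendlyName "S2D*" |
        Select-Object FriendlyName, Size, AllocatedSize, HealthStatus

    # See how the physical drives were classified: cache drives typically show
    # Usage "Journal", while capacity drives show "Auto-Select".
    Get-StoragePool -FriendlyName "S2D*" | Get-PhysicalDisk |
        Group-Object MediaType, Usage | Select-Object Count, Name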

Within the pool, when you create a virtual disk (an S2D volume), the storage layout is distributed across all nodes. Storage Spaces (the virtualization layer) divides each new volume into many fixed-size chunks called slabs (each 256 MB in size). Each slab is the fundamental unit of allocation and is stored with the chosen resiliency across drives. For example, in a two-way mirror volume, each 256 MB slab is duplicated (two copies), and those copies are placed on different servers’ drives. The placement decision for each slab is made independently, like dealing out cards, to keep the utilization balanced – effectively every drive in every node will host some slabs of the volume. This means the data from a volume is evenly spread across all drives in the cluster. As a result, all drives tend to fill up and load-balance in unison (in ~256 MB increments), rather than one node or disk being overloaded while others sit idle. This wide striping across nodes and disks unlocks high aggregate performance by engaging many disks in parallel for I/O, and also simplifies expansion: when new drives or servers are added, S2D can automatically rebalance by redistributing slabs onto the new capacity to even out usage.

Each S2D volume is typically formatted with the Resilient File System (ReFS), and the volume is added as a Cluster Shared Volume (CSV) so that it is accessible on all nodes simultaneously. To the OS on each server, the CSV appears like a local mounted drive (e.g. C:\ClusterStorage\Volume1\), even though the data is physically distributed cluster-wide. This allows VMs or applications on any node to read/write to the volume directly. A cluster node can directly access slabs that reside on its local drives, and for slabs stored on remote nodes, the reads/writes are transparently carried over the network via the Software Storage Bus using SMB3. The cluster uses CSVFS to coordinate access, ensuring consistency. All of this pooling and distribution happens behind the scenes – from an administrator’s perspective, you simply create a volume of a given size and resiliency, and S2D handles placing the data across nodes and keeping it in sync.
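For illustration (volume name and size are hypothetical), creating a three-way mirrored, ReFS-formatted volume that is automatically added as a CSV and appears under C:\ClusterStorage\ on every node could look like this:

    # Create a 1 TB three-way mirror volume; S2D distributes its 256 MB slabs
    # across all drives in the cluster and adds the volume as a CSV.
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
        -FileSystem CSVFS_ReFS -Size 1TB `
        -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2

    # The volume is now reachable on every node under the same path.
    Get-ClusterSharedVolume | Select-Object Name, SharedVolumeInfo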

Pooling metadata and coordination

Keeping track of which slabs reside on which physical disk is a critical task. S2D maintains detailed pool metadata that maps every slab and its copies to their physical locations. This metadata is stored on the S2D drives themselves (in a small reserved space) and is redundantly replicated to multiple drives (at least 5 of the fastest drives in the cluster hold copies of the metadata). The metadata is updated whenever data placement changes (e.g. when new slabs are written or rebalanced) and is synchronized aggressively across those drives to prevent it from ever becoming a single point of failure. In practice, losing the pool metadata is exceedingly rare – it’s designed such that no single drive failure can jeopardize the mapping of data in the pool. This distributed metadata, combined with Failover Clustering’s node membership management, allows the cluster to consistently know where each piece of data lives and to orchestrate who should access what.

The Software Storage Bus is the component that makes remote drives accessible, and it consists of kernel drivers that virtualize and redirect I/O as needed. Internally, it uses components like ClusPort (a virtual HBA that connects a node to virtual disks on other nodes) and ClusBlft (which virtualizes the disks/enclosures on each node to present them to the cluster). This effectively creates a virtual SAN: every node can issue I/O to a disk in another server as if it were directly attached. The Software Storage Bus also implements intelligent algorithms for I/O routing and prioritization – for example, it prioritizes application I/O (from user workloads) over background system I/O (such as rebalancing or rebuild traffic) to ensure performance for running VMs/apps isn’t degraded by maintenance operations. All coordination of access to volumes (including any necessary locking for metadata updates) is handled by the cluster’s CSV layer and storage stack, so that multiple nodes can safely perform I/O to the mirrored/parity-protected volumes without conflicts.

Resiliency levels: mirroring and erasure coding

To protect against failures, Storage Spaces Direct offers several resiliency options for volumes, analogous to RAID levels but implemented in software across nodes. The two broad techniques are mirroring and parity (erasure coding). Each S2D volume is configured with a resiliency type at creation, determining how many copies of data or parity blocks are maintained and how many failures can be tolerated. S2D’s resiliency is at the virtual disk (slab) level – meaning if a drive or node fails, lost slabs are reconstructed from remaining copies on other nodes. Below we detail the main resiliency modes:

  • Two-way and three-way mirroring: Mirroring keeps multiple full copies of the data on different nodes (akin to RAID 1 across servers). Two-way mirror writes two copies, requiring at least 2 nodes, and can tolerate one server or drive failure. Three-way mirror writes three copies, requiring at least 3 nodes, and can tolerate two simultaneous failures (e.g. two drives or even two servers can be lost without losing data). In a three-way mirror, the volume’s efficiency is about 33% (since three copies mean 3 TB of physical storage for 1 TB of data). Mirroring provides the highest write performance and simple rebuild logic, because writes are duplicated and no parity computation is needed. S2D typically recommends three-way mirrors for most performance-sensitive workloads when you have 3+ nodes, as they offer fast writes and quick recovery at the cost of capacity overhead.

  • Parity (erasure coding): Parity resiliency encodes data with parity blocks (like RAID 5/6) so that the volume can withstand failures with less storage overhead. Single parity (similar to RAID-5) uses one parity symbol per stripe and needs at least 3 nodes, but it only tolerates a single failure, so it’s generally not favored in S2D (three-way mirror is safer at 3 nodes). Dual parity (similar to RAID-6) uses two parity symbols per stripe, requires at least 4 nodes, and can tolerate two failures (like three-way mirror). Dual parity is more capacity-efficient than mirroring, especially as cluster size grows – at 4 nodes it has ~50% efficiency (2 TB of data stored per 4 TB raw), and this efficiency improves toward ~80% as the number of fault domains increases (for example, at 12 nodes S2D switches to a local reconstruction code with 8 data symbols and 3 parity symbols per stripe, achieving ~72.7% efficiency). However, parity incurs computational overhead and write amplification (writes may require reading and updating parity), so random write performance can be lower than mirror. S2D uses advanced algorithms to mitigate this, described next.

  • Local reconstruction codes (LRC): To optimize dual parity for larger clusters, S2D employs Local Reconstruction Codes, a technique from Microsoft Research. LRC breaks a large parity group into smaller local groups for encoding: the data symbols are split into groups that each carry their own local parity, with a small number of global parities on top, which reduces the amount of data involved in each parity update or rebuild. If a failure occurs, reconstruction can usually happen within one local group, requiring far fewer reads across the cluster than rebuilding from a single wide stripe. LRC provides the same fault tolerance but with faster repairs and lower CPU usage, making erasure coding more practical even on wide (many-node) clusters. The result is that parity volumes in S2D achieve much better performance than naive RAID-6 implementations, and Microsoft highlights that their approach avoids the typical “all-flash only” restriction some competitors have for erasure coding.

  • Mirror-accelerated parity: Uniquely, S2D allows combining mirroring and parity within the same volume to get the benefits of both. A mirror-accelerated parity volume is split into two tiers – a smaller fast tier (mirror) and a larger capacity tier (parity) – managed behind the scenes by ReFS. All incoming writes are first written to the mirror tier for speed, and later, as data cools, chunks are moved to the parity tier and encoded for space efficiency. This happens in near real-time (“real-time tiering”), giving the effect of a fast mirrored volume for hot data and a dense parity volume for cold data within one namespace. For example, an S2D volume might be 20% mirror and 80% parity – new writes land in the 20% portion (3-way mirrored on SSDs), then ReFS gradually moves them to the 80% portion (dual parity, perhaps on HDDs) as they age. Mirror-accelerated parity requires at least 4 nodes (to support parity) and is typically used for achieving a balance between performance and capacity. Microsoft recommends using such mixed resiliency mainly for archive or backup scenarios, whereas for heavy random I/O (like running VMs) pure mirroring is often still preferred for best performance.

Regardless of resiliency type, S2D volumes are designed to tolerate at least two concurrent hardware failures when configured with three-way mirror or dual parity (or a mix). This ensures high availability – even if an entire server goes down, or two drives fail in different servers, the data remains accessible. Administrators can choose the resiliency based on their needs: mirroring for highest speed and simpler management, parity for storage efficiency, or a mix to get some of both. Importantly, all these mechanisms are distributed. For instance, in a dual parity volume, the parity stripes are stored across different nodes (no single “parity node”), and in a mirror, each copy of a slab is on a different node. This distribution not only protects against node failures, but also improves performance by spreading out rebuild and I/O load, as we explore next.
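As a hedged sketch (friendly names and sizes are illustrative; the parity volume assumes at least 4 nodes), the resiliency types described above map directly onto New-Volume parameters:

    # Three-way mirror: three copies, tolerates two failures, ~33% efficiency.
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Mirror-Vol" `
        -FileSystem CSVFS_ReFS -Size 2TB `
        -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2

    # Dual parity: also tolerates two failures, with better capacity efficiency
    # but parity computation on writes (requires 4 or more nodes).
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Parity-Vol" `
        -FileSystem CSVFS_ReFS -Size 4TB `
        -ResiliencySettingName Parity -PhysicalDiskRedundancy 2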

Data block replication and synchronization between S2D nodes

Because S2D spreads mirrored or parity data across nodes, it must keep the copies synchronized at the block level. When an application writes to an S2D volume, the write is intercepted by the Storage Spaces layer, which determines the slab and stripe layout for that data. The write is then issued to multiple disks (and servers) in parallel according to the resiliency. For example, in a three-way mirror, if a VM on Node1 writes a block, S2D will write that block to Node1’s local disk and to two other nodes’ disks that hold the other mirror copies. This replication is synchronous: S2D uses the SMB3 protocol over the cluster network (the Software Storage Bus) to send the remote write requests, often using SMB Direct (RDMA) for low latency, and it acknowledges the write as completed only after the required copies or parity updates are successfully stored on the respective nodes. SMB3 features like Multichannel are used to maximize throughput and automatically load-balance network usage across available NICs. In essence, the cluster’s Ethernet network acts as the “backplane” carrying storage I/O between servers, with RDMA providing high bandwidth and low CPU overhead for these transfers.

Read operations can be served by any one of the copies. The storage stack will often choose the copy that is most efficient to read – for example, if one copy of the data is on the local node’s drive, S2D can read locally to avoid network latency. If the local node doesn’t have a copy, data will be fetched from a remote node via SMB, again leveraging RDMA. This optimization means that in many cases each server reads primarily its “local” portion of data (when running workloads like VMs, the working set might be scattered such that much of it ends up local due to the random distribution). The cluster ensures consistency by design: all writes go through the Storage Spaces orchestrator which updates all copies, so reads will never see stale data. There’s no need for a separate cache-coherence mechanism because the writes are coordinated at the time of execution.

During normal operation, S2D keeps the data copies in lockstep. If a transient issue occurs (say a momentary network glitch or a node pause), S2D can resynchronize any out-of-date slabs. The Failover Clustering framework has a built-in mechanism to detect when a node or drive was temporarily unavailable and will trigger a resync job to propagate any missed updates. This is analogous to how traditional RAIDs resync after a disk comes back online. In S2D, this process is efficient thanks to the fine granularity of data: only the particular slabs that need updating will be repaired. The Health Service in S2D monitors data integrity and will kick off repair jobs automatically.

Rebuild after failures: If a drive or entire node fails, S2D immediately starts rebuilding the lost data copies on remaining drives in the cluster (assuming sufficient free space, which is why having some reserve capacity is recommended). Because of the distributed slab design, this repair process is highly parallelized. Rather than one disk having to read all remaining data, all the other drives that had the surviving copies of the lost slabs will participate in reads, and the writes of new copies will be spread out across many disks as well. For example, if Node3’s disk failed, each lost slab might find its mirror copy on a different surviving node’s disk (one slab’s remaining copy is on Node1, another’s on Node2, another on Node4, etc.). Those nodes can all read their portions simultaneously to feed the rebuild. Likewise, the new copies of data can be written to many different drives throughout the cluster (not just one hot-spare disk). This massively parallel repair approach means the cluster can restore full redundancy much faster than a traditional RAID that might be limited by single-disk rebuild speed. Faster repair minimizes the window of vulnerability to a second failure and reduces the performance impact on each disk (since the workload is distributed). Administrators are advised to leave some unallocated space (reserve capacity) in the pool to accommodate rebuilds – for example, reserving the capacity roughly equal to a full disk or more, which acts like a distributed “hot spare” space ready to accept rebuilt data.

If a failed drive is replaced or a node comes back, S2D will automatically integrate it. A replaced disk is detected by the cluster and is added into the pool, then any data that was rebuilt into reserve space can be moved onto the new disk to re-balance. Similarly, if a node was down briefly and rejoined, S2D can automatically catch it up by copying any changes it missed while it was offline. All of this is handled online, with I/O prioritization ensuring user workload I/O remains top priority during resync. The cluster’s Health Service and PowerShell cmdlets (like Get-StorageJob) can be used to monitor these background operations.
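To watch these background operations, the cmdlets mentioned above can be combined roughly as follows (the property selection is illustrative):

    # Show running storage jobs such as Repair, Resync or Rebalance,
    # including their progress and bytes processed.
    Get-StorageJob |
        Select-Object Name, JobState, PercentComplete, BytesProcessed, BytesTotal

    # Ask the Health Service for any outstanding faults (failed drive,
    # lost connectivity, low capacity, and so on) on the clustered subsystem.
    Get-StorageSubSystem -FriendlyName "Clustered*" | Debug-StorageSubSystem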

Data integrity and error detection

S2D’s use of ReFS (Resilient File System) plays a pivotal role in ensuring data integrity. ReFS was designed with resiliency in mind and introduces checksumming and corruption detection features that work hand-in-hand with Storage Spaces Direct. Key integrity features include:

  • Checksums and integrity streams: ReFS always stores checksums for its metadata, and can also checksum file data via integrity streams, which can be enabled per volume, directory, or file. Every time protected data is read, ReFS verifies the checksum to detect whether the data has been corrupted (bit rot, disk errors, etc.). This provides end-to-end validation – from the disk platter up through the file system – that what was written is what is being read. A small example of managing integrity streams follows this list.

  • Automatic corruption repair: When ReFS detects a checksum mismatch (corruption), and the volume is on a mirrored or parity space, it will invoke an online repair process using Storage Spaces. Essentially, because S2D maintains duplicate or parity information, ReFS can fetch the alternate copy of the data from another drive and use it to correct the bad block. For example, if one mirror copy of a file has a bad sector, ReFS will read the other mirror and then overwrite the bad copy with the correct data, all while the volume remains online. In parity volumes, if corruption is found, the system can reconstruct the data from the parity and other data blocks similarly. This integration of ReFS with S2D means the system is self-healing – it not only detects corruption, but also fixes it on the fly using good data from redundant copies.

  • Data scrubbing (integrity scrubber): S2D periodically runs a background integrity scanner (sometimes called a scrubber) that traverses through the stored data to proactively find latent corruption and repair it before it’s ever accessed. This process reads chunks of data, verifies their checksums, and if any issue is found, triggers the repair using the alternate copy as described. The scrubbing is done online and in a manner that minimizes interference with foreground I/O. Over time, this ensures that silent bit-flips or latent disk errors are corrected, maintaining a high level of data integrity for the long term.

  • Cluster (CSV) coherence: The cluster itself ensures that higher-level operations (like file metadata changes) are coordinated. CSV volumes use a distributed locking mechanism for metadata, while user file data stays directly accessible from every node even though one node acts as the coordinator. This is more about availability than data integrity per se, but it means that a node crash during writes won’t leave the file system in an inconsistent state – the combination of ReFS’s robustness and cluster coordination ensures consistency.
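As a minimal sketch (the file path is hypothetical), integrity streams can be inspected and toggled per file or directory with the FileIntegrity cmdlets:

    # Check whether integrity streams are enabled for a file on a CSV volume.
    Get-FileIntegrity -FileName "C:\ClusterStorage\Volume1\Data\test.vhdx"

    # Enable integrity streams (checksumming of file data) for that file; reads
    # are then validated against the checksum and repaired from a healthy copy
    # if a mismatch is detected.
    Set-FileIntegrity -FileName "C:\ClusterStorage\Volume1\Data\test.vhdx" -Enable $true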

In summary, every piece of data in an S2D volume is protected not just by multiple copies or parity, but also by checksums that guard against bit corruption. The system detects bad data (via checksums on read or via scrubbing) and repairs it on the fly using the good copies. This end-to-end integrity approach (sometimes called “bit rot” protection) is a major advantage of ReFS with Storage Spaces Direct, as it requires no manual intervention to handle disk read errors or corrupted sectors – the cluster self-heals and logs an event to alert administrators.

Caching and storage tiering behavior

S2D is designed to take advantage of different types of storage media, combining them for both performance and capacity. There are two key mechanisms at work here: a caching layer at the storage bus level, and storage tiering within volumes (especially for mirror+parity volumes).

Storage Bus Cache: The fastest devices in each server (for example NVMe SSDs or persistent memory) are automatically used as a cache for the slower capacity drives (such as SATA SSDs or HDDs). This feature, called the Storage Bus Layer Cache, binds each cache device to one or more capacity drives on the same server to accelerate I/O. The cache typically operates in a write-back mode for writes and also as a read cache for frequently accessed data. In practical terms, when a write comes into S2D, it will first hit the local node’s cache drive (if the data is destined for a slower disk on that node) and be buffered at high speed. Likewise, reads of recently or frequently accessed data can be served from the SSD/NVMe cache instead of hitting the slower disk. This significantly improves IOPS and latency for workloads on hybrid configurations. The cache is persistent and transparent – it sits below the virtual disk’s resiliency, meaning that any write in the cache is still replicated to other nodes’ caches or drives according to the resiliency setting. If a node or cache device fails, data won’t be lost because the write had been replicated to other nodes (and their caches or disks) before acknowledgement. In effect, the cache gives a speed boost while the actual fault-tolerant data commits are happening across the cluster.

  • Cache in hybrid vs. all-flash: In a hybrid scenario (mix of SSD and HDD), the cache is critical. For example, a common S2D setup is to have a few NVMe or SATA SSDs as cache and a larger number of HDDs as capacity. The cache absorbs random writes and caches hot reads, allowing the slower HDDs to mostly handle sequential large transfers or cold data. This mitigates the typical performance issues of HDDs, even for virtualization workloads that tend to be random I/O heavy. If the cluster is all-flash with a single media type (all SSD or all NVMe), no cache is configured automatically, since every device is already fast; with two classes of flash (e.g. NVMe cache in front of SATA SSD capacity), the cache still helps, but the difference is smaller. When all three media types are present (NVMe + SSD + HDD), the fastest devices (NVMe) act as the cache while SSD and HDD both provide capacity; the Storage Bus Cache itself always forms a single cache tier in front of the capacity devices.
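As a hedged example (property and parameter names as exposed by the Get-/Set-ClusterStorageSpacesDirect cmdlets), the cache configuration can be inspected and, if needed, adjusted:

    # Show the current Storage Bus Cache configuration for the cluster:
    # whether the cache is enabled and the cache mode per capacity media type.
    Get-ClusterStorageSpacesDirect |
        Select-Object CacheState, CacheModeSSD, CacheModeHDD, CachePageSizeKBytes

    # Example adjustment: cache both reads and writes in front of SSD capacity
    # drives (write-only caching is the usual default for SSD capacity).
    Set-ClusterStorageSpacesDirect -CacheModeSSD ReadWrite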

Real-Time Tiering in Volumes: Separately from the physical cache, S2D also implements tiering at the virtual disk level using ReFS’s capabilities (applicable when a volume is configured as mirror-accelerated parity or even all-mirror across different drive types). A volume can have a performance tier and a capacity tier within it. For instance, an S2D volume could be defined such that it spans SSD and HDD, with the SSD portion using mirroring and the HDD portion using parity. ReFS will then automatically direct all new data to the performance tier (SSD mirror) for fast access, and in real-time it will relocate chunks of data that become “cold” to the capacity tier (HDD parity) to free up space for hot data. This movement happens at the sub-file level and in a way that’s largely transparent to the user. As data is accessed frequently, it can even be promoted back to the mirror tier. This is similar to cache, but on a larger granularity and longer time scale, effectively optimizing where data lives based on its activity pattern.

One can think of it this way: in a hybrid S2D cluster, there are two levels of acceleration – a real-time write/read cache at the lowest level (using NVMe/SSD to buffer I/O to HDD), and a slightly higher-level tiering that moves whole 256MB slabs between an SSD-mirrored tier and an HDD-parity tier. Together, these ensure that active working set data is served from the fastest media and with the most optimal (mirror) resiliency, while less-used data is stored on high-capacity media with space-efficient coding. The result is high performance for things like VM OS disks and active databases, combined with excellent storage efficiency for archival or less-used data on the same cluster. Administrators don’t need to manually move data – the system handles it based on heat maps of data usage. It’s worth noting that as of the latest Windows Server versions, ReFS and S2D support data deduplication (introduced in Windows Server 2019 for ReFS) which can further improve effective capacity, though dedupe is typically recommended only on volumes that are all-mirror (since parity + dedupe can be complex).
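As an illustrative sketch (tier names differ between Windows Server versions, so check Get-StorageTier first; the sizes are hypothetical), a mirror-accelerated parity volume is created by specifying both tiers and their sizes:

    # List the tier templates that enabling S2D created in the pool.
    Get-StorageTier | Select-Object FriendlyName, ResiliencySettingName, MediaType

    # Create a volume with a small mirrored tier for hot writes and a larger
    # parity tier for cold data; ReFS moves data between them in real time.
    New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "MAP-Vol" `
        -FileSystem CSVFS_ReFS `
        -StorageTierFriendlyNames Performance, Capacity `
        -StorageTierSizes 200GB, 800GB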

High performance networking and SMB Direct in S2D

The backbone of Storage Spaces Direct’s cluster communication is the SMB3 protocol over standard Ethernet networks. S2D uses SMB for all storage traffic between nodes, including both metadata coordination and, critically, the block data replication. Microsoft strongly recommends high-bandwidth, low-latency networks for S2D: typically at least 10 Gbps Ethernet, with support for RDMA (Remote Direct Memory Access) such as RoCE or iWARP, branded in Windows as SMB Direct. With SMB Direct, S2D nodes can transfer data between each other’s memory and storage with very low CPU overhead and latency – this is crucial when every write might be sent to multiple nodes. RDMA allows the cluster to achieve extremely high IOPS and throughput (Microsoft has demonstrated S2D clusters exceeding 13 million IOPS with all-NVMe configurations). SMB Multichannel is also enabled, meaning if each server has multiple NIC ports, S2D can spread traffic across them or failover between them, further improving reliability and performance.
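To confirm that SMB Multichannel and SMB Direct are actually in play between nodes, a quick check (property selection trimmed for readability) might look like this:

    # Are the physical NICs RDMA-capable and is RDMA enabled on them?
    Get-NetAdapterRdma | Select-Object Name, Enabled

    # Does the SMB client see those interfaces as RDMA-capable?
    Get-SmbClientNetworkInterface | Select-Object FriendlyName, RdmaCapable, LinkSpeed

    # Are active SMB connections between nodes using multiple channels over
    # RDMA-capable interfaces on both ends?
    Get-SmbMultichannelConnection |
        Select-Object ServerName, ClientRdmaCapable, ServerRdmaCapable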

A typical networking setup for S2D uses two RDMA-capable NICs per server, load-balanced via SMB Multichannel (or Switch Embedded Teaming) rather than traditional NIC teaming, and, when RoCE is used, switches configured for Data Center Bridging (DCB with Priority Flow Control) to ensure lossless transport. The cluster network carries different types of traffic: the actual storage data traffic (reads/writes between nodes), as well as cluster heartbeats and CSV coordination traffic. Quality of Service can be configured to prioritize S2D traffic if needed, but with a dedicated or sufficiently provisioned storage network the SMB bandwidth is simply available to S2D. Because this is just TCP/IP or RDMA over Converged Ethernet, it’s much simpler than deploying Fibre Channel – you get a converged network for both compute (VM traffic) and storage, especially in a hyperconverged scenario.
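For RoCE in particular, the hosts need matching DCB settings; a commonly used sketch (adapter names are placeholders, and the physical switches must be configured to match) looks roughly like this:

    # Install DCB support and tag SMB Direct traffic (TCP port 445) with priority 3.
    Install-WindowsFeature -Name Data-Center-Bridging
    New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

    # Enable Priority Flow Control only for the SMB priority, reserve bandwidth
    # for it, and apply the QoS configuration to the RDMA-capable NICs.
    Enable-NetQosFlowControl -Priority 3
    Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
    New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
    Enable-NetAdapterQos -Name "pNIC1","pNIC2"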

SMB Encryption and Signing: SMB3 supports encryption, and in an untrusted network scenario, one could encrypt S2D’s SMB traffic for security. However, in most deployments, the S2D cluster is inside a secure data center network, and encryption is not used by default as it can add overhead. SMB Signing is typically not needed internally either. The focus is on performance: features like SMB Direct and even SMB's compression (introduced in newer SMB versions) could be leveraged if appropriate (though compression is more beneficial for client-server file copy than for this kind of internal replication).

To summarize, the use of SMB3 over RDMA is what allows S2D’s distributed storage to perform like a local high-speed array. It provides a networked “fabric” for the Software Storage Bus, essentially replacing the physical SAS cables or Fibre Channel links with Ethernet cables and switches. This is why consistent, low-latency networking is key to S2D – any bottlenecks in the network would directly impact IO performance and latency across the cluster. With a properly configured RDMA network, nodes can replicate data with minimal latency penalty, enabling features like synchronous mirroring across servers to function efficiently.

Author: Frank Keunen
