Remote Direct Memory Access (RDMA) for Intel® Ethernet Devices

Remote Direct Memory Access, or RDMA, allows a network device to transfer data directly to and from application memory on another system, increasing throughput and lowering latency in certain networking environments.

Intel Ethernet devices that support RDMA implement the iWARP and/or RoCEv2 protocols. The major difference between the two is that iWARP performs RDMA over TCP, while RoCEv2 uses UDP.

On devices with RDMA capabilities, RDMA is supported on the following operating systems:

  • Linux
  • FreeBSD
  • Microsoft Windows

To avoid performance degradation from dropped packets, enable link level flow control or priority flow control on all network interfaces and switches.

NOTES:

  • On systems running a Microsoft Windows Server operating system, enabling *QoS/priority flow control will disable link level flow control.
  • Devices based on the Intel® Ethernet 800 Series do not support RDMA when operating in multiport mode with more than 4 ports.
  • On Linux systems, RDMA and bonding are not compatible. If RDMA is enabled, bonding will not be functional.

RDMA on Linux or FreeBSD

For Intel Ethernet devices that support RDMA on Linux or FreeBSD, use the drivers shown in the following table.

Device                        Linux                   FreeBSD                 Supported
                              Base      RDMA          Base      RDMA          Protocols
                              Driver    Driver        Driver    Driver
Intel® Ethernet 800 Series    ice       irdma         ice       irdma         RoCEv2, iWARP
Intel® Ethernet X722 Series   i40e      irdma         ixl       iw_ixl        iWARP
Basic Installation Instructions

At a high level, installing and configuring RDMA on Linux or FreeBSD consists of the following steps. See the README file inside the appropriate RDMA driver tarball for full details.

  1. Install the base driver.
  2. Install the RDMA driver.
  3. Install and patch any user-mode RDMA libraries. Exact steps will vary by operating system; refer to the RDMA driver readme for details.
  4. Enable flow control on your device. Refer to the base driver README for details and supported modes.
  5. If you are using RoCE, enable flow control (PFC or LFC) on both the device and the switch or endpoint your system is connected to. See your switch documentation and, for Linux, the Intel® Ethernet 800 Series Linux Flow Control Configuration Guide for RDMA Use Cases for details.
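On Linux, the steps above might look like the following sketch, assuming the Intel® Ethernet 800 Series drivers from the table (ice/irdma) and a placeholder interface name eth0; consult the driver READMEs for the authoritative procedure and supported flow control modes:

```shell
# Load the base driver, then the RDMA driver (module names per the table above)
modprobe ice
modprobe irdma

# Enable link-level flow control on the interface
# (eth0 is a placeholder; substitute your interface name)
ethtool -A eth0 rx on tx on

# Confirm an RDMA device is registered (requires the rdma-core user-space tools)
rdma link show
```

These commands require the RDMA-capable hardware and drivers to be present, so they are shown as a configuration sketch rather than a runnable example.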

RDMA for Virtualized Environments in Linux

Devices based on the Intel Ethernet 800 Series support RDMA in a Linux VF on supported Windows or Linux hosts. Refer to the README file inside the Linux RDMA driver tarball for more information on how to load and configure RDMA in a Linux VF.
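As a minimal sketch of the host-side step of creating VFs on a Linux host (the PF name enp1s0f0 and the VF count are placeholders; the README in the Linux RDMA driver tarball remains the authoritative reference):

```shell
# Create 2 VFs on the physical function via the standard sysfs interface
echo 2 > /sys/class/net/enp1s0f0/device/sriov_numvfs

# Confirm the VFs were created
lspci | grep -i "Virtual Function"

# Assign a VF to the VM with your hypervisor tooling, then load irdma in the guest
```

This is a configuration fragment that requires SR-IOV-capable hardware, so it is not directly runnable outside such a system.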

RDMA on Microsoft Windows

RDMA for Network Direct (ND) User-Mode Applications

Network Direct (ND) allows user-mode applications to use RDMA features.

NOTE: User mode applications may have prerequisites such as Microsoft HPC Pack or the Intel MPI Library; refer to your application documentation for more details.

RDMA User Mode Installation

The Intel® Ethernet User Mode RDMA Provider is supported on Microsoft Windows Server 2012 R2 and later.

Follow the steps below to install user-mode Network Direct features.

  1. From the installation media, run Autorun.exe to launch the installer, then choose "Install Drivers and Software" and accept the license agreement.
  2. On the Setup Options screen, select "Intel® Ethernet User Mode RDMA Provider".
  3. On the RDMA Configuration Options screen, select "Enable RDMA routing across IP Subnets" if desired. This option is displayed during base driver installation even if user mode RDMA was not selected, because it also applies to Network Direct Kernel (NDK) functionality.
  4. If Windows Firewall is installed and active, select "Create an Intel® Ethernet RDMA Port Mapping Service rule in Windows Firewall" and the networks to which to apply the rule.

    NOTE: If Windows Firewall is disabled or you are using a third-party firewall, you will need to add this rule manually.

  5. Continue with driver and software installation.

RDMA Network Direct Kernel (NDK)

RDMA Network Direct Kernel (NDK) functionality is included in the Intel base networking drivers and requires no additional features to be installed.

RDMA Routing Across IP Subnets

If you want to allow NDK's RDMA functionality across subnets, you will need to select "Enable RDMA routing across IP Subnets" on the RDMA Configuration Options screen during base driver installation.

Enabling Priority Flow Control (PFC) on a Microsoft Windows Server Operating System

To avoid performance degradation from dropped packets, enable priority flow control (PFC) or link level flow control on all network interfaces and switches.

NOTE: On systems running a Microsoft Windows Server operating system, enabling *QoS/priority flow control will disable link level flow control.

Use the following PowerShell* commands to enable PFC on Microsoft Windows Server:

Install-WindowsFeature -Name Data-Center-Bridging -IncludeManagementTools
New-NetQoSPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 60 -Algorithm ETS
Set-NetQosDcbxSetting -Willing $FALSE
Enable-NetAdapterQos -Name "Slot1 4 2 Port 1"

Verifying RDMA Operation with Microsoft PowerShell

You can check that RDMA is enabled on the network interfaces using the following Microsoft PowerShell command:

Get-NetAdapterRDMA

Use the following PowerShell command to check if the network interfaces are RDMA capable and multichannel is enabled:

Get-SmbClientNetworkInterface

Use the following PowerShell command to check if Network Direct is enabled in the operating system:

Get-NetOffloadGlobalSetting | Select NetworkDirect

Use netstat to make sure each RDMA-capable network interface has a listener at port 445 (Windows Client OSs that support RDMA may not post listeners). For example:

netstat.exe -xan | ? {$_ -match "445"}

RDMA for Virtualized Environments in Windows

To enable RDMA functionality on virtual adapter(s) connected to a VMSwitch, the SR-IOV and VMQ advanced settings must be enabled on the device. Under certain circumstances, these settings may be disabled by default. You can set these options manually in the Adapter Settings panel of Intel PROSet ACU, in the Advanced tab of the adapter properties dialog box, or with the following PowerShell commands:

Set-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword *SRIOV -RegistryValue 1

Set-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword *VMQ -RegistryValue 1

Set-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword RdmaMaxVfsEnabled -RegistryValue <1-32>

Configuring RDMA Guest Support (NDK Mode 3)

NDK Mode 3 allows kernel mode Windows components to use RDMA features inside Hyper-V guest partitions. To enable NDK mode 3 on an Intel Ethernet device, do the following:

  1. Enable SR-IOV in your system's BIOS or uEFI.
  2. Enable the SR-IOV advanced setting on the device.
  3. Enable SR-IOV on the VMSwitch bound to the device by performing the following for all physical functions on the same device:
    New-VMSwitch -Name <switch_name> -NetAdapterName <device_name> -EnableIov $true
  4. Configure the number of RDMA virtual functions (VFs) on the device by setting the "RdmaMaxVfsEnabled" advanced setting. All physical functions must be set to the same value. The value is the maximum number of VFs that can be capable of RDMA at one time for the entire device. Enabling more VFs will restrict RDMA resources from physical functions (PFs) and other VFs.
    Set-NetAdapterAdvancedProperty -Name <device_name> -RegistryKeyword RdmaMaxVfsEnabled -RegistryValue <Value: 0 - 32>
  5. Disable all PF adapters on the host and re-enable them. This is required when the registry keyword "RdmaMaxVfsEnabled" is changed or when creating or destroying a VMSwitch.
    Get-NetAdapterRdma | Disable-NetAdapter
    Get-NetAdapterRdma | Enable-NetAdapter
  6. Create VM Network Adapters for VMs that require RDMA VF support.
    Add-VMNetworkAdapter -VMName <vm_name> -VMNetworkAdapterName <device_name> -SwitchName <switch_name>
  7. If you plan to use Microsoft Windows 10 Creators Update (RS2) or later on a guest partition, set the RDMA weight on the VM Network Adapter by entering the following command on the host:
    Set-VMNetworkAdapterRdma -VMName <vm_name> -VMNetworkAdapterName <device_name> -RdmaWeight 100
  8. Set SR-IOV weight on the VM Network Adapter (Note: SR-IOV weight must be set to 0 before setting the RdmaWeight to 0):
    Set-VMNetworkAdapter -VMName <vm_name> -VMNetworkAdapterName <device_name> -IovWeight 100
  9. Install the VF network adapter in the VM with the PROSet Installer.
  10. Enable RDMA on the VF driver and Hyper-V Network Adapter using PowerShell in the VM:
    Set-NetAdapterAdvancedProperty -Name <device_name> -RegistryKeyword RdmaVfEnabled -RegistryValue 1
    Get-NetAdapterRdma | Enable-NetAdapterRdma

RDMA for NDK Features such as SMB Direct (Server Message Block)

NDK allows Windows components (such as SMB Direct storage) to use RDMA features.

Testing NDK: Microsoft Windows SMB Direct with DiskSpd

This section outlines the recommended way to test RDMA for Intel Ethernet functionality and performance on Microsoft Windows operating systems.

Note that since SMB Direct is a storage workload, the performance of the benchmark may be limited to the speed of the storage device rather than the network interface being tested. Intel recommends using the fastest storage possible in order to test the true capabilities of the network device(s) under test.

Test instructions:

  1. Set up and connect at least two servers running a supported Microsoft Windows Server operating system, with at least one RDMA-capable Intel® Ethernet device per server.
  2. On the system designated as the SMB server, set up an SMB share. Storage setup is outside the scope of this document. You can use the following PowerShell command:
    New-SmbShare -Name <SMBsharename> -Path <SMBsharefilepath> -FullAccess <domainname>\Administrator,Everyone

    For Example:
    New-SmbShare -Name RAMDISKShare -Path R:\RAMDISK -FullAccess group\Administrator,Everyone

  3. Download and install the Microsoft DiskSpd utility from here: https://gallery.technet.microsoft.com/DiskSpd-a-robust-storage-6cd2f223
  4. Using CMD or PowerShell, change to the DiskSpd folder and run tests. (Refer to the DiskSpd documentation for more details on parameters.)

    For example, the following command sets the block size to 4K, runs the test for 60 seconds, disables all hardware and software caching, measures and displays latency statistics, uses 16 overlapped IOs and 16 threads per target, performs 100% random reads (0% writes), and creates a 10GB test file at "\\<SMBserverTestIP>\<SMBsharename>\test.dat":
    .\diskspd.exe -b4K -d60 -h -L -o16 -t16 -r -w0 -c10G \\<SMBserverTestIP>\<SMBsharename>\test.dat

  5. Verify that RDMA traffic is running using perfmon counters such as "RDMA Activity" and "SMB Direct Connection". Refer to Microsoft documentation for more details.

Customer Support

Legal / Disclaimers

Copyright (C) 2019 - 2022, Intel Corporation. All rights reserved.

Intel Corporation assumes no responsibility for errors or omissions in this document. Nor does Intel make any commitment to update the information contained herein.

Intel is a trademark of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

This software is furnished under license and may only be used or copied in accordance with the terms of the license. The information in this manual is furnished for informational use only, is subject to change without notice, and should not be construed as a commitment by Intel Corporation. Intel Corporation assumes no responsibility or liability for any errors or inaccuracies that may appear in this document or any software that may be provided in association with this document. Except as permitted by such license, no part of this document may be reproduced, stored in a retrieval system, or transmitted in any form or by any means without the express written consent of Intel Corporation.