Remote Direct Memory Access, or RDMA, allows a network device to transfer data directly to and from application memory on another system, increasing throughput and lowering latency in certain networking environments.
Intel Ethernet devices that support RDMA implement one or both of two protocols, iWARP and RoCEv2. The major difference is that iWARP performs RDMA over TCP, while RoCEv2 uses UDP.
To avoid performance degradation from dropped packets, enable link level flow control or priority flow control on all network interfaces and switches.
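As an illustration only (the interface name <ethX> is a placeholder, and the attached switch must be configured to match), link-level flow control can typically be enabled and then verified on a Linux interface with ethtool:
# ethtool -A <ethX> rx on tx on
# ethtool -a <ethX>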
These basic Linux RDMA installation instructions apply to Intel Ethernet devices that support RDMA. For detailed installation and configuration information, see the Linux RDMA driver README file in the driver tarball.
This example is specific to Red Hat* Enterprise Linux*. Your operating system specifics may be different.
# tar zxf irdma-<x.x.x>.tar.gz
# cd irdma-<x.x.x>
# ./build.sh
# modprobe irdma
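To confirm that the module loaded, a quick check such as the following is usually sufficient (not part of the official procedure):
# lsmod | grep irdma
# dmesg | grep -i irdma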
NOTE: By default, the irdma driver is loaded in iWARP mode. It uses the devlink interface to enable RoCEv2 per port. To load all irdma ports in RoCEv2 mode, use the following:
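The exact command is not reproduced in this copy of the document. As a sketch, Intel's out-of-tree irdma driver has historically accepted a roce_ena module parameter for this purpose; treat the parameter name as an assumption and confirm it against the README in the driver tarball:
# modprobe irdma roce_ena=1   # roce_ena is an assumed parameter name; verify in the irdma README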
# yum erase rdma-core
# wget https://github.com/linux-rdma/rdma-core/releases/download/v27.0/rdma-core-27.0.tar.gz
NOTE: Download the rdma-core version that matches the version of the libirdma patch file included with the driver. For example, libirdma-27.0.patch requires rdma-core-27.0.tar.gz (the version downloaded in the wget command above).
# tar -xzvf rdma-core-<version>.tar.gz
# cd rdma-core-<version>
# patch -p2 < <path-to-component-build>/libirdma-<version>.patch
# cd ..
# chgrp -R root <path-to-rdma-core>/redhat
# tar -zcvf rdma-core-<version>.tgz rdma-core-<version>
# mkdir -p ~/rpmbuild/SOURCES
# mkdir -p ~/rpmbuild/SPECS
# cp rdma-core-<version>.tgz ~/rpmbuild/SOURCES/
# cd ~/rpmbuild/SOURCES
# tar -xzvf rdma-core-<version>.tgz
# cp ~/rpmbuild/SOURCES/rdma-core-<version>/redhat/rdma-core.spec ~/rpmbuild/SPECS/
# cd ~/rpmbuild/SPECS/
# rpmbuild -ba rdma-core.spec
# cd ~/rpmbuild/RPMS/x86_64
# yum install *<version>*.rpm
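Once the rdma-core packages are installed and the irdma driver is loaded, the standard rdma-core utilities give a quick sanity check that an RDMA device is present (device names vary by system):
# ibv_devices
# ibv_devinfo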
Devices based on the Intel Ethernet 800 Series support RDMA in a Linux VF on supported Windows or Linux hosts. Refer to the README file inside the Linux RDMA driver tarball for more information on how to load and configure RDMA in a Linux VF.
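As a minimal sketch only (the README contains the authoritative steps), VFs are commonly created on a Linux host through the sriov_numvfs sysfs interface before RDMA is configured inside the VF; <ethX> and the VF count shown here are placeholders:
# echo 4 > /sys/class/net/<ethX>/device/sriov_numvfs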
FreeBSD RDMA drivers are available for the following device series:
Device | Base Driver Name | RDMA Driver Name | Supported Protocols |
---|---|---|---|
Intel® Ethernet 800 Series | ice | irdma | RoCEv2, iWARP |
Intel® Ethernet X722 Series | ixl | iw_ixl | iWARP |
The following instructions describe basic FreeBSD RDMA installation for each device series. For detailed installation and configuration information, refer to the README file in the FreeBSD RDMA driver tarball.
Intel® Ethernet 800 Series:
# tar -xf ice-<version>.tar.gz
# tar -xf irdma-<version>.tar.gz
# cd ice-<version>/
# make
# make install
# cd irdma-<version>/src
# make clean
# make ICE_DIR=$PATH_TO_ICE/ice-<version>/
# make install
NOTE: By default, the irdma driver loads in RoCEv2 mode. To load irdma ports in iWARP mode, add the tunable described in the FreeBSD RDMA driver README to your system configuration.
NOTE: Link-level flow control and priority flow control are mutually exclusive. Refer to the base driver README file for more information.
Intel® Ethernet X722 Series:
# tar -xf ixl-<version>.tar.gz
# tar -xf iw_ixl-<version>.tar.gz
# cd ixl-<version>/src
# make
# make install
# cd iw_ixl-<version>/src
# make clean
# make IXL_DIR=$PATH_TO_IXL/ixl-<version>/src
# make install
# sysctl dev.ixl.<interface_num>.fc=3
Network Direct (ND) allows user-mode applications to use RDMA features.
NOTE: User-mode applications may have prerequisites such as Microsoft HPC Pack or Intel MPI Library; refer to your application documentation for more details.
The Intel® Ethernet User Mode RDMA Provider is supported on Microsoft Windows Server 2012 R2 and later.
Follow the steps below to install user-mode Network Direct features.
NOTE: If Windows Firewall is disabled or you are using a third-party firewall, you will need to add this rule manually.
RDMA Network Direct Kernel (NDK) functionality is included in the Intel base networking drivers and requires no additional features to be installed.
If you want to allow NDK's RDMA functionality across subnets, you will need to select "Enable RDMA routing across IP Subnets" on the RDMA Configuration Options screen during base driver installation.
To avoid performance degradation from dropped packets, enable priority flow control (PFC) or link level flow control on all network interfaces and switches.
NOTE: On systems running a Microsoft* Windows Server* operating system, enabling *QoS/priority flow control will disable link level flow control.
Use the following PowerShell* commands to enable PFC on Microsoft Windows Server:
# Install the Data Center Bridging (DCB) feature required for PFC
Install-WindowsFeature -Name Data-Center-Bridging -IncludeManagementTools
# Tag SMB Direct traffic (NetworkDirect port 445) with 802.1p priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
# Enable flow control for priority 3 only
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
# Reserve bandwidth for the SMB traffic class using ETS
New-NetQosTrafficClass -Name "SMB" -Priority 3 -BandwidthPercentage 60 -Algorithm ETS
# Use local QoS settings instead of those advertised by the switch via DCBX
Set-NetQosDcbxSetting -Willing $FALSE
# Apply QoS on the target adapter (the adapter name is an example)
Enable-NetAdapterQos -Name "Slot1 4 2 Port 1"
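To confirm the resulting DCB/QoS configuration, the following standard cmdlets can be used as a quick check (they are not part of the original procedure):
Get-NetQosFlowControl
Get-NetQosTrafficClass
Get-NetAdapterQos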
You can check that RDMA is enabled on the network interfaces using the following Microsoft PowerShell command:
Get-NetAdapterRDMA
Use the following PowerShell command to check if the network interfaces are RDMA capable and multichannel is enabled:
Get-SmbClientNetworkInterface
Use the following PowerShell command to check if Network Direct is enabled in the operating system:
Get-NetOffloadGlobalSetting | Select NetworkDirect
Use netstat to make sure each RDMA-capable network interface has a listener at port 445 (Windows Client OSs that support RDMA may not post listeners). For example:
netstat.exe -xan | ? {$_ -match "445"}
To enable RDMA functionality on virtual adapter(s) connected to a VMSwitch, the SRIOV (Single Root I/O Virtualization) and VMQ (Virtual Machine Queues) advanced properties must be enabled on each port. Under certain circumstances, these settings may be disabled by default. These options can be set manually in the Adapter Settings panel of Intel® PROSet Adapter Configuration Utility (ACU), in the Advanced tab of the adapter properties dialog box, or with the following PowerShell commands:
Set-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword *SRIOV -RegistryValue 1
Set-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword *VMQ -RegistryValue 1
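The current values of these advanced properties can be read back with the same keyword names, for example:
Get-NetAdapterAdvancedProperty -Name <nic_name> -RegistryKeyword "*SRIOV","*VMQ"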
NDK Mode 3 allows kernel mode Windows components to use RDMA features inside Hyper-V guest partitions. To enable NDK mode 3 on an Intel Ethernet device, do the following:
New-VMSwitch -Name <switch_name> -NetAdapterName <device_name> -EnableIov $true
Set-NetAdapterAdvancedProperty -Name <device_name> -RegistryKeyword RdmaMaxVfsEnabled -RegistryValue <Value: 0 - 32>
Get-NetAdapterRdma | Disable-NetAdapter
Get-NetAdapterRdma | Enable-NetAdapter
Add-VMNetworkAdapter -VMName <vm_name> -VMNetworkAdapterName <device_name> -SwitchName <switch_name>
Set-VMNetworkAdapterRdma -VMName <vm_name> -VMNetworkAdapterName <device_name> -RdmaWeight 100
Set-VMNetworkAdapter -VMName <vm_name> -VMNetworkAdapterName <device_name> -IovWeight 100
Set-NetAdapterAdvancedProperty -Name <device_name> -RegistryKeyword RdmaVfEnabled -RegistryValue 1
Get-NetAdapterRdma | Enable-NetAdapterRdma
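Inside the guest partition, the virtual adapter's RDMA capability can then be checked with the same cmdlet used on the host, for example:
Get-NetAdapterRdma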
NDK allows Windows components (such as SMB Direct storage) to use RDMA features.
This section outlines the recommended way to test RDMA for Intel Ethernet functionality and performance on Microsoft Windows operating systems.
Note that since SMB Direct is a storage workload, the performance of the benchmark may be limited to the speed of the storage device rather than the network interface being tested. Intel recommends using the fastest storage possible in order to test the true capabilities of the network device(s) under test.
Test instructions:
New-SmbShare -Name <SMBsharename> -Path <SMBsharefilepath> -FullAccess <domainname>\Administrator,Everyone
New-SmbShare -Name RAMDISKShare -Path R:\RAMDISK -FullAccess group\Administrator,Everyone
.\diskspd.exe -b4K -d60 -h -L -o16 -t16 -r -w0 -c10G \\<SMBserverTestIP>\<SMBsharename>\test.dat
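While diskspd is running, you can confirm from the client that the SMB multichannel connections are actually RDMA capable (if they are not, traffic silently falls back to TCP):
Get-SmbMultichannelConnection
The perfmon "RDMA Activity" counters give another view of RDMA traffic during the run.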
Copyright (C) 2019 - 2022, Intel Corporation. All rights reserved.
Intel Corporation assumes no responsibility for errors or omissions in this document. Nor does Intel make any commitment to update the information contained herein.
Intel is a trademark of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
This software is furnished under license and may only be used or copied in accordance with the terms of the license. The information in this manual is furnished for informational use only, is subject to change without notice, and should not be construed as a commitment by Intel Corporation. Intel Corporation assumes no responsibility or liability for any errors or inaccuracies that may appear in this document or any software that may be provided in association with this document. Except as permitted by such license, no part of this document may be reproduced, stored in a retrieval system, or transmitted in any form or by any means without the express written consent of Intel Corporation.