If you’re an Unraid user aiming to get the most out of your SMB (Server Message Block) shares, you’ve landed in the right place. This guide delves deep into Unraid’s SMB shares, offering detailed tips and tricks to enhance your network storage performance. Whether you’re dealing with large file transfers or aiming to maximize your high-speed network, we’ve got you covered. So, let’s embark on this journey to supercharge your Unraid experience!


1. Introduction to SMB Shares in Unraid

Unraid is a powerful Network Attached Storage (NAS) solution that utilizes the SMB protocol for file sharing across Windows networks. While it’s known for its user-friendly interface and flexibility, you might encounter situations where SMB shares underperform, especially when dealing with large files or high-speed networks. Understanding the underlying mechanisms of SMB shares in Unraid is crucial for diagnosing issues and implementing effective optimizations.

Analogy: Think of your Unraid server as a library and the SMB shares as librarians handing out books. Sometimes, the process can be slow due to various factors like librarian availability or organization methods. With a few optimizations, we can streamline the process, making these librarians (SMB shares) more efficient and speeding up the entire borrowing (file transfer) process.


2. Direct Disk Shares vs. User Shares

Understanding the Options

In Unraid, you have the choice between user shares and direct disk shares:

  • User Shares: These are logical shares that can span multiple disks in your Unraid array. They provide a convenient way to organize files without worrying about the physical disk location. However, they rely on a filesystem layer called FUSE (Filesystem in Userspace), which can introduce overhead and potentially slow down file operations.
  • Direct Disk Shares: These give you direct access to individual disks in the array, bypassing the FUSE layer. By eliminating this extra layer, you can reduce overhead and potentially increase data transfer speeds.

Why Direct Disk Shares Can Be Faster

When you use user shares, Unraid employs FUSE to manage files across multiple disks transparently. While this offers flexibility, it adds an extra processing layer that can slow down file operations, especially with large files or high-speed networks like 10 Gigabit Ethernet (10GbE). By accessing disks directly, you remove this layer, resulting in faster data transfers.

Real-World Example: If you’re transferring large files over a 10GbE connection, copying directly to a cache share or a specific disk share rather than a user share can significantly improve transfer speeds. Users have reported being able to fully saturate their 10GbE links, achieving transfer speeds of roughly 1 gigabyte per second (close to the practical limit of a 10GbE link) when bypassing the FUSE layer.

How to Enable Direct Disk Shares

Step-by-Step Instructions:

  1. Access the Unraid WebGUI:
  • Open your web browser and navigate to your Unraid server’s IP address or hostname.
  2. Navigate to SMB Settings:
  • Click on Settings in the top menu.
  • Under Network Services, select SMB.
  3. Enable Disk Shares (this option may appear under Global Share Settings):
  • Find the Enable disk shares option.
  • Set it to Yes (hidden) to prevent the shares from appearing in network browsing but still allow direct access.
  4. Apply Changes:
  • Click Apply and then Done to save your settings.
  5. Access the Disk Shares:
  • On your client machine, you can now map network drives directly to the disk shares using paths like \\YourUnraidServer\disk1.
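
For example, you can map disk1 to a drive letter from a Windows Command Prompt; the server name, share name, and drive letter below are placeholders, so adjust them to match your setup:

   rem Map disk1 to drive Z: and keep the mapping across reboots
   net use Z: \\YourUnraidServer\disk1 /persistent:yes

   rem Remove the mapping later if it is no longer needed
   net use Z: /delete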

Security Note: Be cautious when enabling disk shares. Direct disk access can bypass some of the safeguards provided by user shares, potentially leading to accidental data loss if files are moved or deleted incorrectly. Ensure you have proper permissions set to prevent unauthorized access.

Keep in mind that your speeds do depend on whether you have 5,400 rpm or 7,200 rpm drives. I was only getting 60 MB/s on my 5400s and about 80 MB/s on my 7200s. The “diskspeed” community app plugin may help determine bottlenecks.


3. Optimizing Windows Network Interface Card (NIC) Settings

Your Windows NIC settings play a crucial role in network performance. By tweaking these settings, you can reduce latency, increase throughput, and resolve speed issues that might be hindering your Unraid SMB share performance.

Interrupt Moderation

  • What It Is: Interrupt moderation is a feature that controls how frequently the NIC interrupts the CPU to process incoming packets. By grouping multiple packets together before sending an interrupt, it reduces CPU overhead. However, this can introduce latency and reduce network throughput, especially in high-speed networks.
  • Why Disable It: Disabling interrupt moderation allows the NIC to process packets more immediately, reducing latency and potentially increasing data transfer speeds. This is particularly beneficial when transferring large files over fast networks like 10GbE.

How to Disable Interrupt Moderation

Step-by-Step Instructions:

  1. Open Device Manager:
  • Press Win + X and select Device Manager from the menu.
  2. Locate Your NIC:
  • In Device Manager, expand Network adapters.
  • Find your network adapter (e.g., Intel Ethernet Connection X722).
  3. Adjust NIC Settings:
  • Right-click on your NIC and select Properties.
  • Go to the Advanced tab.
  4. Disable Interrupt Moderation:
  • Scroll through the list of properties until you find Interrupt Moderation.
  • Set it to Disabled.
  5. Apply Changes:
  • Click OK to save the settings.
  6. Restart if Necessary:
  • Some changes may require a system restart to take effect.

Analogy: Imagine a receptionist (your CPU) who handles phone calls (network packets). With interrupt moderation enabled, the receptionist waits until several calls have come in before addressing them, which can cause delays. Disabling it allows the receptionist to handle each call as it comes in, improving responsiveness.
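
If you prefer to script this change, the same Advanced-tab property can usually be set from an elevated PowerShell prompt. Treat this as a sketch: the adapter name and the exact property DisplayName vary by NIC driver, so list the properties first.

   # List the advanced properties your driver exposes (names vary by vendor)
   Get-NetAdapterAdvancedProperty -Name "Ethernet"

   # Disable interrupt moderation if the driver exposes it under this display name
   Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation" -DisplayValue "Disabled"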

PowerShell

As a related Windows-side tweak, you can make sure SMB Multichannel (covered in more detail in Section 9) is enabled by running the following in an elevated PowerShell prompt:

Set-SmbServerConfiguration -EnableMultiChannel $true

Note that this cmdlet configures the Windows SMB server service; the client-side equivalent is Set-SmbClientConfiguration -EnableMultiChannel $true, and both are enabled by default on Windows 8 and later.


4. Unraid-Side Network Optimizations

Optimizing network settings on the Unraid server itself can have a significant impact on performance. By adjusting parameters like interrupt coalescing, you can fine-tune how the server handles network traffic, leading to better throughput and lower latency.

Adjusting Interrupt Coalescing

  • What It Is: Interrupt coalescing is similar to interrupt moderation but occurs on the Unraid server’s network interface. It determines how frequently the network card interrupts the CPU to process incoming (rx) and outgoing (tx) packets.
  • Why Adjust It: Increasing the rx-usecs value can help the NIC handle bursts of incoming packets more efficiently, reducing CPU overhead and potentially increasing network throughput.

How to Adjust Using ethtool

Step-by-Step Instructions:

  1. Access the Terminal:
  • Open the Unraid WebGUI.
  • Click on the Terminal icon (usually found in the upper-right corner).
  2. Check Current Settings:
  • Run the following command to view the current interrupt coalescing settings:
    ethtool -c eth0
    • Replace eth0 with your network interface if different.
  3. Set rx-usecs:
  • To adjust the receive interrupt delay, run:
    ethtool -C eth0 rx-usecs 200
    • This sets the NIC to wait 200 microseconds before generating an interrupt.
  4. Verify Changes:
  • Run the initial command again to ensure the settings have been applied:
    ethtool -c eth0
  5. Make Changes Persistent:
  • Changes made with ethtool are not persistent across reboots.
  • To make them permanent, add the command to your Unraid go script (see the example go file after these steps):
    • Open the go script:
      nano /boot/config/go
    • Add the ethtool command before the line that starts emhttp:
      ethtool -C eth0 rx-usecs 200
    • Save and exit (Ctrl + X, then Y, then Enter).
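
For reference, a minimal /boot/config/go file with the coalescing tweak added might look like the sketch below (assuming eth0; the emhttp line is part of the stock go file):

   #!/bin/bash
   # Tune NIC interrupt coalescing before the web GUI starts
   ethtool -C eth0 rx-usecs 200
   # Start the Management Utility
   /usr/local/sbin/emhttp &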

Note: Adjusting interrupt coalescing can be a balancing act. Setting rx-usecs too high may reduce CPU load but increase latency, while setting it too low may cause CPU spikes. It’s advisable to experiment with different values to find the optimal setting for your workload.


5. Exploring Advanced Network Settings

Advanced network settings like jumbo frames can significantly improve network performance by reducing overhead and increasing efficiency. However, they require careful configuration to ensure compatibility across your network devices.

Jumbo Frames

  • What They Are: Jumbo frames are Ethernet frames that carry more than the standard 1500 bytes of payload, typically up to 9000 bytes. By sending more data per packet, they reduce the number of packets needed for large data transfers, decreasing overhead and improving throughput.
  • Benefits: Using jumbo frames can lead to higher effective data transfer rates, particularly in high-speed networks. They are beneficial when transferring large files or streaming high-bandwidth content.
  • Considerations: All devices in the network path (NICs, switches, routers) must support jumbo frames. If any device doesn’t support them, it can cause communication issues or dropped packets.

How to Enable Jumbo Frames

On Unraid:

  1. Access Network Settings:
  • In the Unraid WebGUI, go to Settings > Network Settings.
  2. Set MTU (Maximum Transmission Unit):
  • Under your network interface (e.g., eth0), find the MTU setting.
  • Change it from the default 1500 to 9000.
  3. Apply Changes:
  • Click Apply at the bottom of the page.

On Windows:

  1. Open Device Manager:
  • Press Win + X and select Device Manager.
  2. Locate Your NIC:
  • Expand Network adapters and right-click your NIC.
  • Select Properties.
  3. Adjust NIC Settings:
  • Go to the Advanced tab.
  • Find Jumbo Packet, Jumbo Frames, or MTU in the list of properties.
  • Set it to 9014 bytes or the highest value available.
  4. Apply Changes:
  • Click OK to save settings.

Verify Configuration:

  • Ping Test:
  • Open Command Prompt and run:
    ping [Unraid_IP] -f -l 8972
  • You should receive replies without fragmentation (-f forbids fragmentation; 8972 bytes of payload plus 28 bytes of IP/ICMP headers equals the 9000-byte MTU).

Important: Before enabling jumbo frames, ensure that your network switches and routers support them. If you’re using managed switches, you may need to enable jumbo frames in their settings.
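
You can also confirm the new MTU from the Unraid terminal and send an unfragmented jumbo frame toward a client. The interface name and client IP below are assumptions; adjust them for your network:

   # The interface should now report mtu 9000
   ip link show eth0 | grep mtu

   # Send full-size frames with fragmentation forbidden (8972 bytes of payload + 28 bytes of headers = 9000)
   ping -M do -s 8972 -c 4 192.168.1.100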


6. Enhancing SMB Throughput on Windows

Optimizing SMB performance on the Windows client side can lead to significant improvements in data transfer speeds. By tweaking settings and employing certain tricks, you can overcome limitations imposed by the SMB protocol.

Hosts File Trick

  • Purpose: The SMB protocol sometimes limits the number of simultaneous connections to a single server name. By adding multiple entries in the Windows hosts file pointing to the same IP address but with different hostnames, you can trick Windows into opening more connections, potentially increasing throughput.
  • How It Works: By accessing the Unraid server using different hostnames, Windows treats each as a separate server, allowing more simultaneous connections and potentially improving transfer speeds.

How to Edit the Hosts File

Step-by-Step Instructions:

  1. Open Notepad as Administrator:
  • Press Win + S, type Notepad, right-click the Notepad app, and select Run as administrator.
  2. Open the Hosts File:
  • In Notepad, click File > Open.
  • Navigate to C:\Windows\System32\drivers\etc.
  • In the file type dropdown, select All Files to view the hosts file.
  • Open the hosts file.
  3. Add Entries:
  • At the end of the file, add entries like the following:
    10.0.0.2 tower
    10.0.0.2 tower2
    10.0.0.2 tower3
    • Replace 10.0.0.2 with your Unraid server’s IP address.
    • Use different hostnames for each entry.
  4. Save the File:
  • Click File > Save to save your changes.
  5. Access the Shares Using Different Hostnames:
  • In Windows Explorer, you can now map network drives or access shares using the different hostnames:
    • \\tower\share
    • \\tower2\share
    • \\tower3\share
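
To actually benefit from the extra connections, run transfers against the different hostnames at the same time. A rough sketch from Command Prompt, reusing the placeholder hostnames and share name above (the source folders are placeholders too):

   net use Y: \\tower\share
   net use Z: \\tower2\share

   rem Start each copy in its own Command Prompt window so the transfers overlap
   robocopy D:\SourceFolderA Y:\ /E
   robocopy D:\SourceFolderB Z:\ /E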

Caution: Editing the hosts file can affect network name resolution. Ensure you only add necessary entries and double-check for typos.

Analogy: Think of this as setting up multiple doors (hostnames) to the same room (server). By using different doors, more people (connections) can enter simultaneously without overcrowding a single entrance.


7. Enabling Disk Shares in Unraid

For advanced users, enabling disk shares provides direct access to individual array and cache disks as SMB shares. This can improve performance by bypassing the user share system, but it comes with increased responsibility regarding data management.

Benefits of Enabling Disk Shares

  • Performance Improvement: Direct disk access can reduce overhead, leading to faster file transfers.
  • Fine-Grained Control: Allows you to manage files on specific disks, which can be useful for troubleshooting or optimizing storage usage.

How to Enable Disk Shares

Step-by-Step Instructions:

  1. Access SMB Settings:
  • In the Unraid WebGUI, go to Settings > SMB.
  2. Enable Disk Shares:
  • Find the Enable disk shares option.
  • Set it to Yes (hidden) to prevent the shares from appearing in network browsing but still allow direct access.
  3. Configure Share Settings:
  • Go to the Shares tab.
  • Click on each disk (e.g., disk1, cache).
  • Adjust the Export setting to Yes (hidden) or Yes, depending on your preference.
  • Set the Security to Private and configure user access permissions.
  4. Apply Changes:
  • Click Apply and Done to save your settings.
  5. Access Disk Shares:
  • On your client machine, access the shares using paths like \\YourUnraidServer\disk1.

Security Considerations:

  • Risk of Data Loss: Directly manipulating files on disk shares can lead to data loss if not done carefully.
  • Permissions: Ensure that only trusted users have access to disk shares.
  • User Shares Conflict: Moving or copying files between disk shares and user shares can cause file system inconsistencies.

Analogy: Enabling disk shares is like gaining backstage access at a theater. You have more control and can see how everything works behind the scenes, but you need to be careful not to disrupt the performance.


8. Network and Hardware Adjustments

Optimizing your network hardware and configurations can eliminate bottlenecks and significantly improve SMB share performance. This involves not just software tweaks but also physical connections and device settings.

Direct Connection

  • What It Is: Connecting your PC directly to the Unraid server using an Ethernet cable, effectively creating a private network between the two devices.
  • Benefits: Bypasses network devices like switches and routers that could introduce latency or limit bandwidth. This direct path can maximize the available bandwidth, especially beneficial for high-speed networks like 10GbE.

How to Set Up a Direct Connection

Step-by-Step Instructions:

  1. Connect the Ethernet Cable:
  • Use a high-quality Ethernet cable (Cat6a or Cat7 for 10GbE) to connect your PC’s NIC directly to the Unraid server’s NIC.
  2. Assign Static IP Addresses:
  • On Unraid:
    • In the WebGUI, go to Settings > Network Settings.
    • Under eth0 (or the appropriate interface), set a static IP address (e.g., 192.168.2.1).
  • On Windows:
    • Go to Control Panel > Network and Internet > Network Connections.
    • Right-click your NIC and select Properties.
    • Select Internet Protocol Version 4 (TCP/IPv4) and click Properties.
    • Set a static IP address (e.g., 192.168.2.2) and subnet mask 255.255.255.0.
  3. Disable Other Network Interfaces (Optional):
  • To ensure traffic goes over the direct connection, disable other NICs or adjust the network metric.
  4. Test the Connection:
  • Ping the Unraid server from your PC to confirm connectivity:
    ping 192.168.2.1
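
On the Windows side, the same static address can also be assigned from an elevated PowerShell prompt instead of the Control Panel. The interface alias below is an assumption; check yours with Get-NetAdapter first:

   # Find the alias of the NIC that is cabled directly to the server
   Get-NetAdapter

   # Assign 192.168.2.2/24 to that interface (no gateway is needed for a point-to-point link)
   New-NetIPAddress -InterfaceAlias "Ethernet 2" -IPAddress 192.168.2.2 -PrefixLength 24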

Note: This setup creates a dedicated network between your PC and the Unraid server, which can improve security and performance but may limit access from other devices.

Antivirus and Firewall

  • Impact on Performance: Security software can interfere with network performance by scanning network traffic and files in real-time, adding latency and reducing throughput.
  • Troubleshooting Steps:
  1. Temporarily Disable Security Software:
  • Turn off antivirus and firewall software on your PC to see if transfer speeds improve.
  2. Monitor Performance:
  • Perform file transfers and observe if there’s a noticeable speed increase.
  3. Adjust Settings:
  • If performance improves, consider adding exceptions in your security software for the Unraid server’s IP address or specific network protocols.
  4. Re-enable Security Software:
  • Always re-enable your antivirus and firewall to maintain protection.

Caution: Disabling security software exposes your system to potential threats. Only do so temporarily for testing purposes and ensure you have other protections in place.

Layer 3 Switches and VLANs

  • Using VLANs:
  • Virtual LANs (VLANs) can segment network traffic, improve security, and in some cases, allow for higher MTU settings (jumbo frames) on specific network segments.
  • Benefits:
  • Isolate high-bandwidth traffic between your Unraid server and clients.
  • Reduce network congestion on other segments.
  • Potentially improve performance by dedicating network resources.
  • How to Configure:
  1. Access Switch Management Interface:
  • Log into your Layer 3 switch’s web interface or command-line interface.
  2. Create a New VLAN:
  • Add a new VLAN (e.g., VLAN 10) and assign it an ID.
  3. Assign Ports:
  • Assign the ports connected to your Unraid server and client PCs to the new VLAN.
  4. Configure MTU Settings:
  • Ensure the VLAN allows for jumbo frames by setting the MTU to 9000.
  5. Adjust Network Settings on Devices:
  • Set the appropriate VLAN ID on the NICs of your Unraid server and clients.

Note: VLAN configuration varies by switch manufacturer and model. Consult your switch’s documentation for specific instructions.


9. Advanced Unraid Settings

Beyond basic configurations, Unraid offers advanced settings that can significantly enhance SMB share performance. Features like Turbo Write and SMB Multichannel can optimize how data is written to disks and how network traffic is handled.

Enabling Turbo Writes

  • What It Is: Turbo Write (also known as reconstruct write) changes how Unraid writes data to the array. With the default read/modify/write method, Unraid reads the target disk and the parity disk before each write; with reconstruct write, it reads the remaining data disks and writes the new data and parity in a single pass, which requires all drives to be spun up.
  • Benefits:
  • Significantly increases write speeds to the array.
  • Reduces the time it takes to write large amounts of data.
  • Considerations:
  • Increased power consumption due to all drives being active.
  • More wear on drives from being constantly spun up.

How to Enable Turbo Writes

Step-by-Step Instructions:

  1. Access Disk Settings:
  • In the Unraid WebGUI, go to Settings > Disk Settings.
  2. Adjust Write Method:
  • Find Tunable (md_write_method).
  • Change the setting from Auto or Read/Modify/Write to Reconstruct Write.
  3. Apply Changes:
  • Click Apply and Done to save your settings.
  4. Verify Performance:
  • Perform a file transfer and monitor write speeds to confirm improvement.
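
Some users also toggle this tunable from the terminal with Unraid’s mdcmd helper, which is handy in scheduled scripts. The value-to-mode mapping below reflects community usage rather than official documentation, so treat it as an assumption and confirm the result in Settings > Disk Settings afterwards:

   # Reportedly switches the array to reconstruct write (turbo write)
   mdcmd set md_write_method 1

   # Reportedly switches back to the default read/modify/write method
   mdcmd set md_write_method 0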

Analogy: Think of Turbo Write as having all workers (disks) actively participating in the task simultaneously, rather than some waiting while others work. This collective effort speeds up the job but requires more resources.

SMB Multichannel

  • What It Is: SMB Multichannel allows SMB to use multiple network connections simultaneously for increased throughput and network fault tolerance.
  • Benefits:
  • Increased data transfer speeds by aggregating bandwidth.
  • Improved resilience in case one network path fails.
  • Requirements:
  • Multiple network interfaces on both the Unraid server and the client.
  • Compatible NICs and drivers that support SMB Multichannel.

How to Enable SMB Multichannel

On Unraid:

  1. Edit SMB Extra Configuration:
  • In the WebGUI, go to Settings > SMB.
  • Under Samba extra configuration, add the following lines:
    server multi channel support = yes
    interfaces = 10.0.0.2;capability=RSS,speed=10000000000
    • Replace 10.0.0.2 with your Unraid server’s IP address.
    • Adjust speed to match your NIC’s speed in bits per second.
  2. Apply Changes:
  • Click Apply and Done.
  3. Restart Samba:
  • Open the terminal and run:
    /etc/rc.d/rc.samba restart
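
If shares misbehave after the change, you can check that Samba parsed the extra configuration cleanly with testparm, which ships with Samba:

   # Validate the Samba configuration and print the effective settings
   testparm -s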

On Windows:

  • SMB Multichannel is enabled by default on Windows 8 and later.

Verify SMB Multichannel:

  1. Open PowerShell as Administrator:
  • Press Win + X and select Windows PowerShell (Admin) or Terminal (Admin).
  2. Check SMB Multichannel Status:
  • Run Get-SmbMultichannelConnection.
  3. Monitor Connections:
  • The command should list multiple connections (one per channel) if SMB Multichannel is working. If you only see a single line, RSS is likely not working properly.
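
A short PowerShell check that goes with these steps (run it in the elevated window from step 1):

   # One row per SMB channel; multiple rows indicate Multichannel is active
   Get-SmbMultichannelConnection

   # Shows whether the client NIC advertises RSS capability and its link speed
   Get-SmbClientNetworkInterface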

Note: For SMB Multichannel to function correctly, NICs should be of the same type and speed. Mixing different NICs may cause issues.


10. Performance Testing and Fine-Tuning

Testing and fine-tuning are essential to ensure that your optimizations are effective. Tools like iperf can help you measure network performance, while updating drivers and adjusting NIC settings can resolve hardware-related bottlenecks.

Run Network Tests with iperf

  • Purpose: iperf is a network performance testing tool that measures the maximum achievable bandwidth on IP networks. It helps identify network bottlenecks and verify that your network can handle the desired data rates.

How to Use iperf

On Unraid:

  1. Install iperf:
  • Option 1: Using Nerd Tools Plugin
    • In the Unraid WebGUI, go to the Plugins tab.
    • Install the Nerd Tools plugin if not already installed.
    • In Nerd Tools, find and install iperf3.
  • Option 2: Using Docker Container
    • Install an iperf Docker container from the Community Applications.
  2. Start iperf Server:
   iperf3 -s

On Windows:

  1. Download iperf:
  • Download the Windows build of iperf3 and extract it to a folder on your PC.
  2. Open Command Prompt:
  • Navigate to the folder where iperf3.exe is located.
  3. Run iperf Client:
   iperf3 -c [Unraid_IP] -P 10 -f m
  • Replace [Unraid_IP] with the IP address of your Unraid server.
  • -P 10 specifies 10 parallel connections.
  • -f m formats the output in megabits per second.

Example Output:

[SUM]   0.00-10.00  sec  9.45 GBytes  8.12 Gbits/sec  0             sender
[SUM]   0.00-10.04  sec  9.45 GBytes  8.10 Gbits/sec                  receiver

Analyze Results

  • Throughput: The output shows the total data transferred and the average bandwidth. Compare these numbers to your network’s expected performance.
  • Packet Loss and Retransmissions: Look for any indications of packet loss or retransmissions, which can impact performance.
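
It is also worth testing the opposite direction, since read and write paths can bottleneck differently. iperf3’s -R flag reverses the test so the Unraid server sends and the Windows client receives:

   iperf3 -c [Unraid_IP] -P 10 -f m -R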

Update NIC Drivers

  • Why It Matters: Outdated or generic drivers may not fully utilize the capabilities of your NIC, leading to suboptimal performance.
  • How to Update:
  1. Identify Your NIC:
  • In Device Manager, note the exact model of your NIC.
  2. Download Drivers:
  • Visit the manufacturer’s website (e.g., Intel, Broadcom) to download the latest drivers.
  3. Install Drivers:
  • Follow the installation instructions provided by the manufacturer.
  4. Restart System:
  • Reboot your PC to ensure the new drivers are loaded.

Optimize NIC Settings

Key Settings to Adjust:

  • Receive Side Scaling (RSS):
  • Enable: Distributes network processing across multiple CPU cores.
  • How to Enable:
    • In NIC properties, find RSS and set it to Enabled (a PowerShell alternative is sketched after this list).
  • Power Management:
  • Disable Power Saving Features:
    • In NIC properties, uncheck options like Allow the computer to turn off this device to save power.
  • Set Power Plan to High Performance:
    • In Windows, go to Control Panel > Power Options and select High Performance.
  • Offload Features:
  • Enable Offloading:
    • Enable features like TCP Checksum Offload, Large Send Offload, and Receive Segment Coalescing.
    • These allow the NIC to handle certain tasks, reducing CPU load.
  • Interrupt Moderation:
  • Disable: As discussed earlier, this can improve latency and throughput.
  • Jumbo Frames:
  • Enable: Ensure MTU is set to match other devices on the network.
  • TCP Window Auto-Tuning:
  • Set to Normal:
    • Open Command Prompt as Administrator and run:
    netsh int tcp set global autotuninglevel=normal
  • In my case, it actually slowed down transfer speeds, so I disabled it with: netsh int tcp set global autotuninglevel=disabled
  • PCIe Slot Considerations:
  • Ensure Adequate Bandwidth:
    • For add-in NICs, install them in PCIe slots with sufficient lanes (preferably x8 or higher) to prevent bottlenecks.
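
Several of the settings above (RSS and power management in particular) can also be applied from an elevated PowerShell prompt, which is convenient after a driver update resets them. A sketch, assuming the adapter is named "Ethernet"; verify the name with Get-NetAdapter first:

   # Enable Receive Side Scaling on the adapter
   Enable-NetAdapterRss -Name "Ethernet"

   # Stop Windows from powering the NIC down to save energy
   Disable-NetAdapterPowerManagement -Name "Ethernet"

   # Review the remaining advanced properties (offloads, jumbo packet, interrupt moderation)
   Get-NetAdapterAdvancedProperty -Name "Ethernet"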

Test After Changes:

  • After making adjustments, use iperf and file transfers to test the impact of your changes.

11. Understanding ZFS Pools and Their Benefits

ZFS is a robust file system and logical volume manager that offers advanced features like data integrity verification, redundancy, and efficient storage management. In Unraid 6.12 and later, you can create ZFS pools, providing an alternative to the traditional Unraid array.

Benefits of ZFS

  • Data Integrity and Bit Rot Prevention:
  • ZFS uses checksums to detect and correct data corruption (bit rot).
  • It can automatically repair damaged data using redundant copies.
  • Advanced Redundancy Options:
  • RAIDZ Levels:
    • RAIDZ1: Can withstand one drive failure.
    • RAIDZ2: Can withstand two drive failures.
    • RAIDZ3: Can withstand three drive failures.
  • Mirroring: Data is duplicated across multiple disks for redundancy.
  • Performance Improvements:
  • Exclusive Shares Mode:
    • By enabling exclusive shares, you bypass the FUSE layer, potentially doubling transfer speeds.
  • Efficient Caching:
    • ZFS can utilize RAM and SSDs for read and write caching, improving performance.

Implementing ZFS Pools

Step-by-Step Instructions:

  1. Prepare Disks:
  • Use disks of the same size for optimal storage efficiency.
  • Ensure data is backed up, as creating a ZFS pool will erase existing data.
  2. Create a ZFS Pool:
  • In the Unraid WebGUI, go to Pools and click Add Pool.
  • Name your pool and select ZFS as the file system.
  3. Choose RAIDZ Level:
  • Select the desired RAIDZ level based on your redundancy requirements.
  4. Add Drives:
  • Add the disks to the pool.
  • Confirm the configuration and create the pool.
  5. Enable Exclusive Shares:
  • When creating a share, enable Exclusive Access to bypass the FUSE layer.
  6. Configure the Pool:
  • Set up datasets, quotas, and other ZFS features as needed.

Note: ZFS pools are less flexible than the traditional Unraid array when it comes to mixing drive sizes and expanding storage. Plan your storage needs accordingly.

How ZFS Prevents Bit Rot

  • Checksums for Data Integrity:
  • Every block of data has a checksum stored in the metadata.
  • On read, ZFS verifies data integrity by recalculating checksums.
  • Copy-on-Write (CoW) Model:
  • Data is written to new blocks before pointers are updated, preventing corruption during writes.
  • Self-Healing Data:
  • If corruption is detected, ZFS automatically repairs it using redundant data from mirrors or parity.
  • Scrubbing:
  • A scheduled process that systematically checks and repairs data integrity.
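
A scrub can also be started and monitored from the Unraid terminal; "tank" below is a placeholder for your pool name:

   # Start a scrub of the pool
   zpool scrub tank

   # Check scrub progress and any repaired or unrecoverable errors
   zpool status tank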

Analogy: ZFS is like a vigilant librarian who not only keeps track of every book (data block) but also checks each one regularly for damage and replaces it if necessary, ensuring the library’s collection remains pristine.


12. NVMe Destination Drives

1. Write Cache Saturation (SLC Cache Behavior)

Most consumer-grade NVMe SSDs use a small SLC cache (Single-Level Cell) to achieve high write speeds. Initially, data is written to this cache at very high speeds. Once the cache is full (typically after writing a certain amount of data, often between 10 GB and 30 GB), the drive has to start writing directly to the TLC/QLC NAND, which is significantly slower.

  • Symptoms: The transfer speed drops after writing 10 GB to 30 GB, depending on the size of the SLC cache.
  • Solution:
    • Check the specifications of the NVMe drives, particularly the SLC cache size and sustained write speed beyond the cache.
    • Enterprise-grade NVMe drives often have larger caches or are optimized for sustained writes, so if you need better performance for sustained workloads, these might be worth considering.
    • If you’re using consumer-grade drives, this behavior is common, and there is no way to avoid it other than managing the size of the files you’re writing.

2. Thermal Throttling

NVMe drives generate a lot of heat during sustained writes, especially when transferring large amounts of data. Once the drive reaches a certain temperature, it will throttle (reduce its speed) to prevent damage.

  • Symptoms: After an initial fast transfer (15 GB to 30 GB), the speed drops due to thermal throttling.
  • Solution:
    • Use an NVMe monitoring tool like Samsung Magician, CrystalDiskInfo, or any tool that supports SMART data to check the drive’s temperature.
    • Ensure your NVMe drives have proper cooling. Many motherboards include heatsinks for NVMe drives, but if you don’t have these, consider adding aftermarket heatsinks or improving airflow in the case.
    • Monitor drive temperature during the transfer to see if it’s throttling due to heat.

3. RAID Controller Overhead (For RAID Configurations)

When using RAID 0 or RAID 5, the RAID controller (either hardware or software) has to manage the distribution of data across multiple drives. With RAID 5, there’s additional overhead due to parity calculations.

  • Symptoms: A slight difference in the amount of data transferred before speed drops between RAID 0, RAID 5, and non-RAID setups. The slowdown occurs when the RAID controller can no longer handle sustained writes efficiently.
  • Solution:
    • If you’re using software RAID, ensure that your CPU is not being maxed out by the RAID operations.
    • Monitor CPU usage with Task Manager (Windows) or htop (Linux) during the transfer. If CPU usage spikes during sustained transfers, the software RAID might be struggling.
    • Hardware RAID controllers typically have cache memory, which helps buffer the writes. If you’re using a hardware RAID controller, ensure that its cache is not full or being overwhelmed.
    • RAID 5 generally has slower write speeds due to the need for parity calculations. If sustained write performance is critical, RAID 0 or RAID 10 (if redundancy is required) might be better suited.

4. Windows Write Cache and File System Limits

Windows might have its own write caching behavior, and if the write cache fills up or the filesystem starts encountering limits, transfer speeds can drop.

  • Symptoms: Initial fast speeds due to write caching, followed by a drop as Windows offloads data to the disk.
  • Solution:
    • Check your drive’s write caching settings and toggle them to see if it improves performance. You can do this in Device Manager > Disk Drives > Right-click on the drive > Properties > Policies tab.
    • If the target drive is formatted with NTFS, ensure that the filesystem isn’t experiencing fragmentation or inefficiencies due to large file transfers. You can check the fragmentation status and defragment the drive if necessary, although this is less of an issue with modern SSDs.
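
On an SSD or NVMe target, confirming that TRIM is active is usually more useful than defragmenting. A quick check and a manual retrim from an elevated PowerShell prompt (the drive letter is a placeholder):

   # 0 means TRIM/delete notifications are enabled
   fsutil behavior query DisableDeleteNotify

   # Send a retrim pass to the volume rather than defragmenting it
   Optimize-Volume -DriveLetter D -ReTrim -Verbose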

5. NVMe Drive Overprovisioning and Endurance

Some NVMe drives can also see a drop in performance when nearing their capacity limits. Many consumer-grade drives maintain performance by using overprovisioning (keeping some NAND space free to act as a buffer), but if the drive is nearing capacity or has been used heavily (affecting wear leveling), performance can degrade.

  • Symptoms: Drives nearing capacity see significant drops in write performance, especially after sustained write activity.
  • Solution:
    • Check the available free space on the NVMe drives. If they’re nearing capacity, try freeing up space to see if performance improves.
    • Consider overprovisioning the drive by leaving more free space unallocated (this helps the SSD controller manage wear leveling more effectively).



13. Final Thoughts: Balancing Performance and Usability

Optimizing SMB shares on Unraid is a balancing act between achieving maximum performance and maintaining a user-friendly, stable system. While the optimizations discussed can significantly improve network speeds and data integrity, they may introduce complexity or require additional resources.

Considerations:

  • Test Incrementally:
  • Implement one change at a time and monitor its effects.
  • This approach helps identify which optimizations provide the most benefit.
  • Backup Data:
  • Always have a reliable backup strategy in place.
  • Changes to file systems and network configurations carry risks.
  • Understand Trade-offs:
  • Higher performance settings may increase power consumption or hardware wear.
  • Advanced configurations can complicate system management.
  • Stay Informed:
  • Keep your Unraid system and plugins updated.
  • Participate in Unraid community forums to stay abreast of best practices and emerging issues.

Embrace the Journey:

Optimizing your Unraid server is an ongoing process. As technology evolves and your needs change, revisiting and adjusting your configurations is essential. Enjoy the enhanced performance and capabilities that come with a well-tuned system!


By following this comprehensive guide, you’re well on your way to unlocking the full potential of your Unraid server’s SMB shares. Happy optimizing!