Status: Completed · Budget: €30.00

Designing, implementing, and evaluating a functional Storage Area Network (SAN) using consumer-grade hardware and open-source tools.

01. Context & Problem Statement

During my first personal-project meeting, one of my mentors explicitly stated that a standard NAS (Network Attached Storage) project would not be challenging enough for Semester 6. A NAS operates at the file level, introducing protocol overhead that makes it suboptimal for enterprise virtualization workloads.

"A NAS is fine for previous semesters, but it is a toy compared to, let's say, a SAN. If you want to show something better, build a SAN." — First mentor feedback.

The Challenge: How can I design, implement, and validate a functional SAN using only hardware I have at home, while demonstrating its performance advantage over a NAS?

02. Main Hardware Components

The environment was built to simulate a realistic enterprise scenario at zero cost, reusing devices I already owned.

Figure 1: The Physical Build (NUC + Netgear Switch + Seagate HDD)
| Device | Role | Configuration |
| --- | --- | --- |
| Intel NUC | iSCSI Target | i5, 8GB RAM, Ubuntu Server 24.04 |
| 1TB HDD | Physical Storage | USB 3.0 expansion drive |
| Netgear GS105E | Managed Switch | VLAN 10 (ports 2-4) / VLAN 1 (ports 1, 5) |
| Uni Laptop | Client (Initiator) | Windows 11, Microsoft iSCSI Initiator |
Figure 2: Network topology with VLAN isolation.

03. The Setup: Starting clean

I started by restoring my Intel NUC to factory defaults. It had been used as a home lab before, but I needed to start from zero to document the process accurately.

Hardware Upgrade

I swapped my old Linksys PoE+ switch for a Netgear GS105E. The upgrade was worthwhile because the GS105E supports 802.1Q VLAN tagging, which would let me segment the network properly later.

Fresh Install & Mounting

I installed Ubuntu Server 24.04.3 LTS via a bootable USB created with Rufus. During installation, I set the static IP to 192.168.2.200. After booting, I plugged in the 1TB drive.

I ran sudo fdisk -l /dev/sda and saw it had an old NTFS partition. I decided to wipe it.

# Checking the current disk state (fdisk output)
# Wiping the old NTFS partition (mkfs)
# Creating the mount point and mounting the drive
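For reference, the commands behind those screenshots looked roughly like this (a sketch: the device name `/dev/sda` and mount point `/san` match my setup, but the filesystem label is illustrative — always re-check `fdisk -l` before wiping a disk):

```shell
# Inspect the drive and confirm it is the USB HDD (it held an old NTFS partition)
sudo fdisk -l /dev/sda

# Wipe the old partition signatures and create a fresh ext4 filesystem
sudo wipefs -a /dev/sda
sudo mkfs.ext4 -L san-backing /dev/sda

# Create a mount point and mount the drive
sudo mkdir -p /san
sudo mount /dev/sda /san
```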

04. iSCSI Implementation

Installing the iSCSI Target

Next, I installed the iSCSI target software on the NUC. targetcli persists its configuration in /etc/target/saveconfig.json.

sudo apt install targetcli-fb -y
sudo systemctl enable --now target

With this, the service runs at boot and configuration changes survive reboots.

Creating the 100 GB LUN

After that, we create the 100 GB LUN: the virtual disk that the SAN will serve to other devices. A LUN (Logical Unit Number) identifies the block device that the iSCSI target presents to initiators.

# Creating the 100GB file-backed LUN with dd (Figure 6)

Here we verify that it is working:

# Verifying the LUN file (Figure 7)
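The two figures correspond to commands along these lines (a sketch: the file name `disk0.img` is an assumption; the `/san` path matches the mount point used earlier):

```shell
# Create a 100GB file to back the LUN.
# seek=100 with count=0 makes it sparse, so it does not consume 100GB up front.
sudo dd if=/dev/zero of=/san/disk0.img bs=1G count=0 seek=100

# Verify: the apparent size should show 100G
ls -lh /san/disk0.img
du -h /san/disk0.img   # actual allocated size (near zero for a sparse file)
```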

Target Configuration

I utilized the Linux-IO (LIO) stack via targetcli-fb to map this new file as a storage object.

# Creating the backstore (the virtual disk)
# Creating the target IQN (the identity)
# Mapping the LUN to the target
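Inside the targetcli shell, the three steps above look roughly like this (a sketch: the backstore name `san_disk`, the image path `/san/disk0.img`, and the IQN are illustrative placeholders, not copied from my configuration):

```shell
sudo targetcli

# 1. Create the file-backed backstore (the virtual disk)
/backstores/fileio create san_disk /san/disk0.img 100G

# 2. Create the target IQN (the identity)
/iscsi create iqn.2003-01.org.linux-iscsi.nuc:san1

# 3. Map the backstore to the target as LUN 0
/iscsi/iqn.2003-01.org.linux-iscsi.nuc:san1/tpg1/luns create /backstores/fileio/san_disk

# Persist the configuration to /etc/target/saveconfig.json
saveconfig
exit
```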
Safety Tip: Before configuring the target, I created a clean spare image with `sudo dd if=/dev/zero of=/san/snap_clean.img bs=1G count=0 seek=100`. This allowed me to roll back instantly if I broke the configuration.

05. CHAP Problems & Solutions

This was the hardest part of the implementation. Enterprise storage requires authentication, so I attempted to enable Mutual CHAP (where both client and server verify each other).

The Failure: My initial attempts failed. Windows 11 refused to connect. I discovered two issues:

  1. My password was 21 characters long (too long for some initiators).
  2. Windows 11 has a known bug with Mutual CHAP in the Microsoft Initiator.

The Fix: I switched to Unidirectional CHAP and shortened the password to 12 characters.

Phase 1: Configuration on the NUC

What is an ACL? An Access Control List (ACL) acts like a whitelist. Before authentication even begins, iSCSI checks if your laptop's specific IQN is allowed to see the target.
Prerequisite: Before creating the ACL, we have to find the correct IQN for the Windows machine. Press Win + R, type iscsicpl, and check the Configuration tab to copy the exact Initiator Name.
# Checking the Windows Initiator Name (Configuration tab of iscsicpl)
# Create the ACL for the laptop using the name found in iscsicpl
/iscsi/.../tpg1/acls create iqn.1991-05.com.microsoft:kevin-s-lap
# Set authentication (unidirectional CHAP only)
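Setting unidirectional CHAP on that ACL looks roughly like this in targetcli (a sketch: the target IQN in the path and the `sanuser`/`12charsecret` credentials are placeholders, not my real values; only `userid`/`password` are set, no `mutual_*` attributes):

```shell
sudo targetcli

# Enter the ACL created for the Windows initiator
cd /iscsi/iqn.2003-01.org.linux-iscsi.nuc:san1/tpg1/acls/iqn.1991-05.com.microsoft:kevin-s-lap

# Unidirectional CHAP: the client must prove its identity to the target
set auth userid=sanuser
set auth password=12charsecret

# Enforce authentication on the portal group
cd ../..
set attribute authentication=1

saveconfig
exit
```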

Phase 2: Connect the Windows Client

With the Target ready, I went back to the iSCSI Initiator (`iscsicpl`). I navigated to the Discovery tab, clicked Discover Portal, and added the IP address of the NUC (`192.168.2.200`), leaving the port at its default (3260).

# Adding the Discovery Portal (Discovery tab)

After discovery, I went to the Targets tab, selected my SAN, and clicked Connect. I opened the Advanced settings and checked Enable CHAP log on to input my credentials.

# Enabling CHAP in the Advanced connection settings
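Once the client connects, the session can also be verified from the NUC side; targetcli-fb ships a `sessions` command for this (a quick sanity check, assuming the target service is running):

```shell
# List active iSCSI sessions and the connected initiator IQNs
sudo targetcli sessions detail
```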

Phase 3: Initializing the Disk

Once the connection was successful, I pressed Win + R and typed diskmgmt.msc to open Disk Management. The unformatted disk appeared immediately as "Disk 1".

# Disk 1 appears as Unallocated (Disk Management)

I initialized it as GPT and formatted it as NTFS (Volume Label: SAN-100GB).

# Disk 1 initialized and formatted as NTFS (Disk Management)
# Mounted as the S: drive in Explorer

06. VLAN Security

Authentication was not enough for me; I also wanted network isolation. I accessed the Netgear switch interface (at 192.168.2.4) and configured 802.1Q Advanced VLANs.

  • VLAN 1 (Home): Ports 1 & 5 (Router Uplink).
  • VLAN 10 (SAN): Ports 2, 3, 4 (NUC & Laptops).
# Step 1: Defining VLAN membership (tagging)

However, setting membership isn't enough. I also had to configure the PVID (Port VLAN ID). This ensures that any traffic coming from my laptop (which doesn't send VLAN tags) is automatically tagged as VLAN 10 by the switch.

# Step 2: Setting the PVID to force ports into VLAN 10

Result: When my laptop is on Wi-Fi, the SAN IP 192.168.2.200 is unreachable. It only becomes visible when physically plugged into ports 2-4. This effectively "air-gaps" the storage traffic from the home network.
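The isolation claim can be checked with a quick reachability test (a sketch using Linux-style `ping` flags; on Windows the equivalent is `ping -n 3`):

```shell
# On Wi-Fi (VLAN 1): should time out, the SAN is unreachable
ping -c 3 -W 2 192.168.2.200

# Plugged into ports 2-4 (VLAN 10): should get replies
ping -c 3 192.168.2.200
```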

07. Benchmarks

I ran CrystalDiskMark 9.0.1 (4GiB test file, 5 runs) to compare the iSCSI SAN against a traditional NAS setup running on a separate NUC with an internal SATA HDD.

SAN Results (iSCSI - USB Backend)
NAS Results (SMB - SATA Backend)
| Metric | iSCSI SAN (Block) | NAS (File) | Analysis |
| --- | --- | --- | --- |
| Seq. Read (Q8T1) | 117.41 MB/s | 117.52 MB/s | Tie: both saturated the Gigabit Ethernet link. |
| Random Read (4K) | 101.51 MB/s | 11.25 MB/s | SAN wins (~9x): massive advantage for VM booting. |
| Random Write | 0.16 MB/s | 9.05 MB/s | NAS wins: internal SATA vs USB overhead. |

Detailed Analysis

The NAS looks better on writes because it uses an internal SATA drive. The SAN's USB HDD sits behind a USB-to-SATA translation layer, which slows its write path considerably.

The good news: random read is roughly 9 times faster on the SAN. That is the block-level advantage: Windows can optimize I/O far better over iSCSI than over SMB/NFS, making it superior for latency-sensitive workloads like virtualization in Proxmox, Hyper-V, or VMware.
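A Linux-side cross-check of the 4K random-read figure could be sketched with fio (illustrative: the test file path is an assumption; the parameters mirror CrystalDiskMark's 4GiB random-read workload):

```shell
# 4K random read, 4GiB working set, direct I/O to bypass the page cache
fio --name=randread4k --rw=randread --bs=4k --size=4G \
    --filename=/mnt/san/fio.test --direct=1 \
    --ioengine=libaio --iodepth=32 --runtime=60 --time_based
```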

08. Future Roadmap

The current implementation is functional and secure for Windows clients. To fully validate the SAN's versatility as an enterprise solution, the following steps are planned as a private continuation of this project:

Planned Improvements:

  1. Linux Client Verification:
     - Configure 'open-iscsi' initiator on Ubuntu.
     - Perform comparative benchmarks (EXT4/XFS over iSCSI).
  2. Internal SSD Performance Test:
     - Allocate a 100GB LUN on the internal 256GB NVMe SSD.
     - Compare IOPS/throughput against the USB HDD to quantify the mechanical bottleneck.
  3. IoT Integration:
     - Finalize the ESP32 MQTT data ingestion pipeline.
     - Log sensor data directly to the SAN volume.
  4. Advanced Virtualization (for a medium/large home lab setup):
     - Use the SAN as shared storage for a Proxmox cluster.
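For the Linux client item, the initiator side is typically a handful of open-iscsi commands (a sketch: the portal IP matches the NUC, while the target IQN and CHAP credentials are placeholders):

```shell
sudo apt install open-iscsi -y

# Discover the targets advertised by the NUC
sudo iscsiadm -m discovery -t sendtargets -p 192.168.2.200

# Configure CHAP for the discovered node, then log in
sudo iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.nuc:san1 \
     -o update -n node.session.auth.authmethod -v CHAP
sudo iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.nuc:san1 \
     -o update -n node.session.auth.username -v sanuser
sudo iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.nuc:san1 \
     -o update -n node.session.auth.password -v 12charsecret
sudo iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.nuc:san1 \
     -p 192.168.2.200 --login

# The new block device appears in lsblk, ready for mkfs.ext4 / mkfs.xfs
lsblk
```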