If you’re setting up iSCSI boot from SAN for your ESXi hosts, there’s an important detail to keep in mind right from the start: each ESXi host gets its own dedicated LUN. That means there’s a one-to-one relationship between LUNs and hosts—these boot LUNs can’t be shared across multiple hosts.
Once your hosts are up and running, you’ll notice something interesting: the host already “sees” an iSCSI configuration. You won’t find any additional virtual switches, but there will be a custom port group for iSCSI boot, called iScsiBootPG, with its own VMkernel adapter.
That said, everything you’d normally configure after installing ESXi on local storage still applies. You’ll still need to set up two separate subnets and two VMkernel adapters for proper iSCSI multipathing to shared storage. If you don’t have a hardware iSCSI adapter, you’ll need to configure the Software iSCSI adapter. After that, you can configure your datastore; if it’s freshly provisioned, go ahead and format it as VMFS.
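As a sketch, those post-install steps can also be done from the ESXi shell with esxcli. The adapter name (vmhba64), the VMkernel adapters (vmk2/vmk3), and the target address are all assumptions for illustration; substitute the values from your own environment.

```shell
# Enable the Software iSCSI adapter (skip if a hardware adapter is present).
esxcli iscsi software set --enabled=true

# Find the iSCSI adapter name (often vmhba64 for the software adapter; assumption).
esxcli iscsi adapter list

# Bind one VMkernel adapter per iSCSI subnet to the adapter for multipathing.
# vmk2 and vmk3 are hypothetical adapters, one on each iSCSI subnet.
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk2
esxcli iscsi networkportal add --adapter=vmhba64 --nic=vmk3

# Point the adapter at the shared-storage target (hypothetical address).
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba64 --address=192.168.10.10:3260

# Rescan so the new LUNs show up before creating the VMFS datastore.
esxcli storage core adapter rescan --adapter=vmhba64
```

These commands have to run on the host itself (or over SSH), so treat them as a reference rather than a copy-paste script.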
Configure Firmware – Legacy Boot
Everything in the System Setup feels buried in the Dell iDRAC! I will do my best to illustrate it. Every manufacturer has its own method for configuring this; some expose it directly in the NIC firmware.
For this, I am starting out in the System Setup. Select Device Settings.

I am only going to configure the first integrated NIC.

Select iSCSI Configuration.

Select iSCSI Initiator Parameters.

Add in the iSCSI initiator details. Provide a unique iSCSI Qualified Name (IQN) per node.
IQN Format: iqn.yyyy-mm.naming-authority:unique-name
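As a sketch of that format, a small shell loop can stamp out one unique IQN per node; the date, domain, and hostnames below are hypothetical placeholders.

```shell
# iqn.yyyy-mm.naming-authority:unique-name
# yyyy-mm          = a date when the naming authority owned its domain (hypothetical)
# naming-authority = the authority's DNS domain, reversed (hypothetical)
year_month="2024-01"
authority="com.example"

# unique-name: one per ESXi node (hypothetical hostnames).
for host in esxi01 esxi02 esxi03; do
  echo "iqn.${year_month}.${authority}:${host}"
done
```

The loop prints one IQN per host, e.g. `iqn.2024-01.com.example:esxi01`, which is the value you paste into the firmware's initiator name field.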

Add in the iSCSI target details. These may have to be obtained from the storage administrator, especially the iSCSI target name (IQN).

For my particular host, I set the Legacy Boot Protocol as well. Mileage will likely vary for other hosts and manufacturers.

This option will not appear if the host is configured for UEFI boot, as mine is. In that case, a different method is required.
Configure Firmware – UEFI
From System Setup, select Network Settings.

Scroll down to UEFI ISCSI Settings. Configure the initiator name and enable the device.

Enable the first connection and select Connection 1 Settings.

Configure the settings appropriate for your environment.
- Interface to use
- Protocol
- VLAN
- Initiator settings
- Target settings
- CHAP settings
Note: I had to add an Initiator Gateway even though I do not have an SVI (switched virtual interface) built.


Exit out of all the screens and reboot the host. If all was configured correctly, then the host should boot to the iSCSI target.
Installing and Reviewing the ESXi Configuration
When installing ESXi, there should now be a remote disk available to install to.

After installing ESXi, go ahead and reboot the host. Wait for the DCUI (Direct Console User Interface) to load, then log in and take care of the basic identity settings—like setting the hostname, DNS, and so on.
Once that’s done, you can log into the host using a web browser.
As mentioned earlier in this post, you’ll probably notice something a bit unusual: the iSCSI Software Adapter is already enabled and configured. That’s not the default behavior—it’s specific to boot-from-SAN setups. If you take a look under the Static Targets section, you’ll see that the boot target has already been defined there.
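If you prefer the shell over the web UI, the same state can be checked with a few esxcli commands; vmhba64 is an assumed adapter name and may differ on your host.

```shell
# Confirm the Software iSCSI adapter is enabled (true for boot-from-SAN hosts).
esxcli iscsi software get

# List the iSCSI adapters and note the boot adapter's name (assumed vmhba64).
esxcli iscsi adapter list

# Show the target portals; the boot target defined in firmware should be listed.
esxcli iscsi adapter target portal list --adapter=vmhba64
```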

Next up, let’s take a look at the networking side of things. You’ll notice a few extra components here that you wouldn’t normally see in a typical setup.
One of the key additions is a port group called iScsiBootPG. This port group is part of a dedicated network I’ve set up specifically for iSCSI boot traffic—completely separate from the rest of my environment. It’s configured with the appropriate VLAN to make sure that traffic flows correctly between the host and the storage target.

You’ll also see vmk1 listed—this is a VMkernel adapter that’s been automatically created to handle the iSCSI boot traffic. It’s what allows the host to communicate with the SAN over the network during the boot process. Without this adapter, the host wouldn’t be able to reach its boot LUN.
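The auto-created networking can be inspected from the shell as well. vmk1 and iScsiBootPG match what my host created, but the names may vary on other hardware.

```shell
# List VMkernel adapters; vmk1 should appear attached to iScsiBootPG.
esxcli network ip interface list

# Show the IPv4 configuration vmk1 uses to reach the boot LUN.
esxcli network ip interface ipv4 get --interface-name=vmk1

# Confirm the iScsiBootPG port group and its VLAN on the standard vSwitch.
esxcli network vswitch standard portgroup list
```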

Wrapping Up
Booting ESXi from an iSCSI SAN might seem a bit intimidating at first, but once you understand the flow, it all clicks into place. The key things to remember are: each host needs its own dedicated LUN, and while the environment may look slightly different—like the presence of the iScsiBootPG port group and the vmk1 adapter—it still follows many of the same configuration steps you’d use with local storage.
Setting up the firmware correctly (whether using Legacy or UEFI boot) is a critical part of the process, and the steps vary depending on the hardware vendor. Once that’s squared away, installing ESXi and reviewing the post-boot configuration becomes a pretty familiar experience.
In the end, iSCSI boot from SAN offers flexibility and centralized storage management, especially for stateless or diskless hosts. As long as your networking is clean, your firmware is configured correctly, and your storage team has your back with the right target details, you’ll be up and running in no time.