Storage vMotion with RDM Disks
Summary: How to do a storage vMotion with RDM disks on vSphere.
Date: Around 2014
Refactor: 29 March 2025: Checked links and formatting.
I had a use case where some VMs with Microsoft Clustering had to be moved to different datastores. That is, only the normal VMDK OS disks, not the shared disks, which were presented to the VMs as RDMs. I first read a few articles:

- VMware KB article on RDM storage vMotions
- An article from Scott Lowe
- An article from Joep Piscaer, a former colleague of mine; definitely recommended reading!
- VMware KB article on Microsoft clustering support
Now, all of these articles are great, but none of them specifically described my situation. For Microsoft Clustering you need a separate SCSI controller for your RDM disks, with physical SCSI bus sharing, so that the disks can be shared between VMs on different hosts:
| SCSI Bus Sharing policy | Description |
|---|---|
| None | Virtual disks cannot be shared between virtual machines |
| Virtual | Virtual disks can be shared between virtual machines on the same server |
| Physical | Virtual disks can be shared between virtual machines on any server |
Also note which SCSI controller types are supported. I'm a big fan of the paravirtual SCSI controller, but it is not supported for Microsoft Clustering:

> The shared storage SCSI adapter for Windows Server 2008 must be the LSI Logic SAS type, while earlier Windows versions must use the LSI Logic Parallel type.
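Because this controller configuration comes back in every test below, here is a minimal pyVmomi sketch of adding such a controller through the API. This is a sketch, not the procedure I actually used (I used the vSphere client): the vCenter address and credentials are placeholders, and the VM name is the test VM from the setup below.

```python
# Sketch: add an LSI Logic Parallel SCSI controller with physical bus
# sharing to a VM. vCenter host and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "SjoerdTestVM")

ctrl = vim.vm.device.VirtualLsiLogicController()
ctrl.busNumber = 1      # second SCSI bus, so the RDM lands on SCSI (1:x)
ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.physicalSharing
ctrl.key = -101         # temporary negative key for a device being added

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        device=ctrl)])
vm.ReconfigVM_Task(spec)
```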
So I had to test what happens to an RDM disk in both virtual and physical compatibility mode, to determine the best migration method.
Test Setup
VM
VM configuration: SjoerdTestVM, no active network. Two disks, aligned: C (OS) and I (SWAP), 10 and 3 GB. Administrator login. Windows Server 2003.
LUN (NetApp LUN)
LUN configured and mapped with LUN ID 176 to the specific cluster.
Add RDM to VM
Select the VM and follow these steps to connect the LUN directly to the VM: VM → Edit Settings → Add → Hard Disk → Raw Device Mappings → Select Created LUN 176 → Store with Virtual Machine → Virtual Compatibility → New SCSI adapter (SCSI 1:1) → Finish. Then set the SCSI controller to type LSI Logic Parallel → Physical SCSI Bus Sharing.
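If you prefer to script this attach step, here is a rough pyVmomi sketch, reusing si and vm from the controller sketch above. The naa device path for LUN 176 is a made-up placeholder, and details like the mapping-file placement may need adjusting for your array.

```python
# Sketch: attach LUN 176 as an RDM in virtual compatibility mode on the
# shared controller (SCSI 1:1). Reuses "vm" from the previous sketch.
from pyVmomi import vim

# Find the shared SCSI controller on bus 1 that was added earlier.
ctrl = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualSCSIController)
            and d.busNumber == 1)

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo()
backing.deviceName = "/vmfs/devices/disks/naa.600a0980..."  # placeholder path for LUN 176
backing.compatibilityMode = "virtualMode"  # "physicalMode" for physical compatibility
backing.diskMode = "persistent"
backing.fileName = ""  # empty string: store the mapping file with the VM

disk = vim.vm.device.VirtualDisk()
disk.backing = backing
disk.controllerKey = ctrl.key
disk.unitNumber = 1    # SCSI (1:1)
disk.key = -102        # temporary negative key for a device being added

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
        device=disk)])
vm.ReconfigVM_Task(spec)
```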
Note that the VM may no longer power on afterwards; check the VM's boot order in the BIOS.
After powering on the VM you can initialize and format the disk using diskmgmt.msc.
Storage vMotion - Virtual Compatibility
Start the Storage vMotion like this: VM powered off → Migrate → Change Datastore (ACC_01 → ACC_02) → Keep disks in same format as source → Finish.
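The same migration can also be triggered through the API. A minimal sketch, assuming the content and vm objects from the earlier snippets:

```python
# Sketch: Storage vMotion the whole VM from ACC_01 to ACC_02, keeping the
# source disk format (the API default). Reuses "content" and "vm".
from pyVmomi import vim

ds_view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.Datastore], True)
target_ds = next(d for d in ds_view.view if d.name == "ACC_02")

relocate = vim.vm.RelocateSpec(datastore=target_ds)
vm.RelocateVM_Task(relocate)
```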
Result
The RDM was converted to a VMDK file and the RDM mapping file disappeared completely. When powering on the VM I got this error:

> Reason: Thin/TBZ disks cannot be opened in multiwriter mode.. Cannot open the disk '/vmfs/volumes/4f101f0f-6b0e2383-3fab-00215e08162a/SjoerdRDMTest/SjoerdRDMTest_2.vmdk' or one of the snapshot disks it depends on. VMware ESX cannot open the virtual disk "/vmfs/volumes/4f101f0f-6b0e2383-3fab-00215e08162a/SjoerdRDMTest/SjoerdRDMTest_2.vmdk" for clustering. Verify that the virtual disk was created using the thick option.
Solution: changed the SCSI bus sharing mode from physical → none. Now the VM starts and the data is readable.
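That bus-sharing change is a one-property device edit through the API. A sketch, under the same assumptions as the earlier snippets:

```python
# Sketch: flip the shared controller's SCSI bus sharing from physical back
# to none, so the converted (now VMDK) disk can be opened again.
from pyVmomi import vim

ctrl = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualSCSIController)
            and d.busNumber == 1)
ctrl.sharedBus = vim.vm.device.VirtualSCSIController.Sharing.noSharing

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.edit,
        device=ctrl)])
vm.ReconfigVM_Task(spec)
```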
Storage vMotion - Virtual Compatibility - Remove RDMs
I restored the VM to exactly the same configuration as follows (a scripted sketch of the removal step follows this list):

- VM powered off → removed the newly created VMDK disk.
- VM → Edit Settings → Add → Hard Disk → Raw Device Mappings → Select Created LUN 176 → Store with Virtual Machine → Virtual Compatibility → New SCSI adapter (SCSI 1:1) → Finish.
- Set the SCSI controller to type LSI Logic Parallel → Physical SCSI Bus Sharing.
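The removal step could look roughly like the following in pyVmomi; the disk label is a hypothetical example, so select the right device from vm.config.hardware.device in a real run. Re-adding the RDM then works as in the attach sketch earlier.

```python
# Sketch: remove the converted VMDK from the VM. The label "Hard disk 2"
# is a hypothetical example; check vm.config.hardware.device for the real one.
from pyVmomi import vim

disk = next(d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk)
            and d.deviceInfo.label == "Hard disk 2")

spec = vim.vm.ConfigSpec(deviceChange=[
    vim.vm.device.VirtualDeviceSpec(
        operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
        # "destroy" also deletes the VMDK file; omit fileOperation to
        # detach the disk but keep the file on the datastore.
        fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.destroy,
        device=disk)])
vm.ReconfigVM_Task(spec)
```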
I tested the RDM disk and data was readable without a problem.
Now the test without RDMs: remove them before the migration and re-add them afterwards:
VM powered off → Remove RDM disk → Remove from virtual machine. Migrate → Change Datastore (ACC_02 → ACC_01) → Keep disks in same format as source → Finish.
After the migration → VM → Edit Settings → Add → Hard Disk → Raw Device Mappings → Select Created LUN 176 → Store with Virtual Machine → Virtual Compatibility → New SCSI adapter (SCSI 1:1) → Finish. Set the SCSI controller to type LSI Logic Parallel → Physical SCSI Bus Sharing.
Result
Power on the VM → the disk is available immediately, as well as the data on it. The RDM is still an RDM (of course).
Storage vMotion - Physical Compatibility
I restored the VM again, only now with the RDM attached in physical compatibility mode:
- VM → Edit Settings → Add → Hard Disk → Raw Device Mappings → Select Created LUN 176 → Store with Virtual Machine → Physical Compatibility → New SCSI adapter (SCSI 1:1) → Finish.
- Set the SCSI controller to type LSI Logic Parallel → Physical SCSI Bus Sharing.
- Power on the VM → the disk is available immediately, as well as the data on it.
Then the Storage vMotion: VM powered off → Migrate → Change Datastore (ACC_01 → ACC_02) → Keep disks in same format as source → Finish.
Result: the RDM was converted to a VMDK file and the RDM mapping file disappeared completely. When powering on the VM I got the same error:

> Reason: Thin/TBZ disks cannot be opened in multiwriter mode.. Cannot open the disk '/vmfs/volumes/4f101f0f-6b0e2383-3fab-00215e08162a/SjoerdRDMTest/SjoerdRDMTest_2.vmdk' or one of the snapshot disks it depends on. VMware ESX cannot open the virtual disk "/vmfs/volumes/4f101f0f-6b0e2383-3fab-00215e08162a/SjoerdRDMTest/SjoerdRDMTest_2.vmdk" for clustering. Verify that the virtual disk was created using the thick option.
Solution: changed the SCSI bus sharing mode from physical → none. Now the VM starts and the data is readable.
I tried again, this time moving only the RDM disk, but that gave this error:

> Incompatible device backing specified for device '0'.
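For completeness: a per-disk move like the one that failed here would be expressed with a disk locator in the relocate spec. A sketch, reusing vm and target_ds from the earlier snippets:

```python
# Sketch: the per-disk move that failed, expressed as a disk locator in the
# relocate spec. Reuses "vm" and "target_ds" from the earlier sketches.
from pyVmomi import vim

rdm = next(d for d in vm.config.hardware.device
           if isinstance(d, vim.vm.device.VirtualDisk)
           and isinstance(d.backing,
                          vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo))

locator = vim.vm.RelocateSpec.DiskLocator(diskId=rdm.key, datastore=target_ds)
relocate = vim.vm.RelocateSpec(disk=[locator])
vm.RelocateVM_Task(relocate)  # in this setup: "Incompatible device backing..."
```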
Conclusion
To successfully migrate a VM with RDMs used in Microsoft Clustering, the only option is to remove the RDMs from the VM, migrate the VM, and then reattach the RDMs.
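Scripted, the whole procedure could look roughly like the sketch below, stitched together from the earlier snippets. attach_rdm is a hypothetical helper wrapping the attach sketch shown earlier, and each task is awaited before the next step starts.

```python
# Sketch: the full detach → migrate → reattach workflow from the conclusion.
from pyVim.task import WaitForTask
from pyVmomi import vim

def migrate_clustered_vm(vm, target_ds, attach_rdm):
    # 1. Detach every RDM; a plain remove (no fileOperation) does not
    #    touch the data on the LUN itself.
    rdms = [d for d in vm.config.hardware.device
            if isinstance(d, vim.vm.device.VirtualDisk)
            and isinstance(d.backing,
                           vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo)]
    removals = [vim.vm.device.VirtualDeviceSpec(
                    operation=vim.vm.device.VirtualDeviceSpec.Operation.remove,
                    device=d) for d in rdms]
    WaitForTask(vm.ReconfigVM_Task(vim.vm.ConfigSpec(deviceChange=removals)))

    # 2. Storage vMotion the remaining VMDK disks to the target datastore.
    WaitForTask(vm.RelocateVM_Task(vim.vm.RelocateSpec(datastore=target_ds)))

    # 3. Reattach each RDM on the shared controller (see the attach
    #    sketch earlier for what attach_rdm would contain).
    for old in rdms:
        attach_rdm(vm, old.backing.deviceName, old.unitNumber)
```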