Summary: This post explains how protection groups and recovery plans are made in VMware Site Recovery Manager (SRM) 5.1.
Date: Around 2014
Refactor: 13 February 2025: Checked links and formatting.
Notice that the protection group is created at the protected site and the recovery plan is created at the recovery site.
SnapMirror replication is required to sync your data between the protected and recovery sites. This was already discussed in the NetApp SnapMirror post, but I wanted to do a quick and dirty setup here anyway, based purely on command-line interaction:
On the source filer:
filer02B> vol create R_01B_FC_PRD_02B_01 aggr0 30g
filer02B> vol autosize R_01B_FC_PRD_02B_01 -m 50g -i 5g on
filer02B> vol options R_01B_FC_PRD_02B_01 nosnap on
filer02B> vol options R_01B_FC_PRD_02B_01 no_atime_update on
filer02B> vol options R_01B_FC_PRD_02B_01 fractional_reserve 0
filer02B> snap reserve R_01B_FC_PRD_02B_01 0
On the target filer:
filer02b> vol create R_02B_FC_PRD_01B_01 -s none aggr0 50g
filer02b> vol options R_02B_FC_PRD_01B_01 nosnap on
filer02b> vol options R_02B_FC_PRD_01B_01 no_atime_update on
filer02b> vol options R_02B_FC_PRD_01B_01 fractional_reserve 0
filer02b> snap reserve R_02B_FC_PRD_01B_01 0
filer02b> vol restrict R_02B_FC_PRD_01B_01
filer02b> snapmirror initialize -S 10.10.18.72:R_01B_FC_PRD_02B_01 filer02b:R_02B_FC_PRD_01B_01
filer02b> rdfile /etc/snapmirror.conf
filer02b> wrfile /etc/snapmirror.conf
(re-enter the entire output of the rdfile command and add an extra line for the new SnapMirror relationship)
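For reference, a snapmirror.conf line has the format source:volume destination:volume arguments schedule, where the schedule fields are minute, hour, day-of-month and day-of-week. A scheduled entry for this relationship could look roughly like the sketch below; the 15-minute schedule is just an illustration, not what I actually used, and snapmirror status lets you verify the transfer:

filer02B:R_01B_FC_PRD_02B_01 filer02b:R_02B_FC_PRD_01B_01 - 0,15,30,45 * * *
filer02b> snapmirror status R_02B_FC_PRD_01B_01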
Now create a LUN and a datastore in the new volume and place a VM on it.
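The exact LUN layout depends on your environment, but a minimal sketch of the steps on the protected filer could look like this; the LUN size, the igroup name and the WWPN placeholder are made up for illustration, and after mapping you still rescan the HBAs and create the VMFS datastore from the vSphere Client:

filer02B> lun create -s 20g -t vmware /vol/R_01B_FC_PRD_02B_01/lun0
filer02B> igroup create -f -t vmware esx_cluster01 <WWPN of each ESXi host>
filer02B> lun map /vol/R_01B_FC_PRD_02B_01/lun0 esx_cluster01 0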
Note: After adding a new volume, you have to include the volume in both array managers before it is picked up for replication.
To create a protection group, log in to the vSphere Client and open the Site Recovery view. At the protected site, click the “Protection Groups” button in the bottom-left corner, then click the “Create Protection Group” button in the top-left corner:
In the Create Protection Group wizard, select the protected site and the array manager pair under which the datastore/VM you want to protect lives:
Now select the datastore group on which the VM is located. As you can see, all datastores and RDMs that belong to the VMs on these datastores are automatically selected:
Now name and describe the group:
Review the settings and finish the wizard. This might take a while because the VMs are being protected:
Note that due to inventory issues your VM might display a warning. After double-clicking the VM and selecting a destination VM folder, the protection state of my VM changed to OK:
Also, a placeholder VM is created in the destination cluster on the placeholder datastore:
If you create a protection group and get the error “Unable to protect VM XXX due to unresolved devices”, this is mostly caused by missing inventory mappings:
At the recovery site, click the “Recovery Plans” button in the bottom-left corner, then click the “Create Recovery Plan” button in the top-left corner:
Select the site where the VMs will be recovered, which is The Hague. Since I'm logged in at the recovery site, this is also the local site:
Select the protection group you'll use:
Select the test network:
Name and describe the recovery plan:
Review the settings and finish the wizard:
You can now review the VMs that are part of the recovery plan, before you run a test:
You can now run a test of your recovery plan by pressing the Test button. Do not press the Recovery button unless you really mean it: part of an actual recovery is shutting down your production VMs, while a test leaves your production environment untouched:
After clicking Test, a general warning pops up telling you what you're about to do. You can choose to replicate recent changes to the recovery site; I chose not to, to save time:
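As far as I know, the replicate-recent-changes option essentially triggers a SnapMirror update through the SRA. If you want to push the latest changes by hand before a test instead, something like this on the destination filer should do it (just a sketch based on the volumes above):

filer02b> snapmirror update -S 10.10.18.72:R_01B_FC_PRD_02B_01 filer02b:R_02B_FC_PRD_01B_01
filer02b> snapmirror status R_02B_FC_PRD_01B_01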
A final warning appears, click Start to run the test:
Now the test is actually running:
When the test is done you should perform a Cleanup operation to return to your protected state.
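After the cleanup it doesn't hurt to verify on the destination filer that the replica is back in its normal state; a quick sanity check could be:

filer02b> snapmirror status R_02B_FC_PRD_01B_01
filer02b> vol status R_02B_FC_PRD_01B_01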
If logging in does not work as expected after running a test, take a look at AD DC Role Seizing; we have to do this sometimes. We also ran into problems a few times with DNS not fully starting.