Volume migration using SnapMirror
ONTAP SnapMirror is designed to be a simple, reliable, and inexpensive tool for disaster recovery of business-critical applications. It ships with ONTAP by default but must be licensed before use.
Apart from DR, SnapMirror is extremely useful in situations such as:
1. An aggregate or volume has reached its maximum size limit.
2. You need to change the volume's disk type (tiering).
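Before starting the prep work, confirm that SnapMirror is actually licensed on the filer; as noted above, the feature ships with ONTAP but needs a license. A minimal check, assuming 7-Mode syntax (the license code below is a placeholder):
# license
# license add <snapmirror_license_code>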
Prep work
Build a new aggregate from free disks
1. List the spares in the system
# vol status -s
Spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 7a.18 7a 1 2 FC:B - FCAL 10000 372000/761856000 560879/1148681096
spare 7a.19 7a 1 3 FC:B - FCAL 10000 372000/761856000 560879/1148681096
spare 7a.20 7a 1 4 FC:B - FCAL 10000 372000/761856000 560879/1148681096
spare 7a.21 7a 1 5 FC:B - FCAL 10000 372000/761856000 560879/1148681096
spare 7a.22 7a 1 6 FC:B - FCAL 10000 372000/761856000 560879/1148681096
spare 7a.23 7a 1 7 FC:B - FCAL 10000 372000/761856000 560879/1148681096
spare 7a.24 7a 1 8 FC:B - FCAL 10000 372000/761856000 560879/1148681096
spare 7a.25 7a 1 9 FC:B - FCAL 10000 372000/761856000 560879/1148681096
spare 7a.26 7a 1 10 FC:B - FCAL 10000 372000/761856000 560879/1148681096
2. Create new aggregate
Add the new disks. Make sure you add enough disks to build complete RAID groups; otherwise, when you later add disks to the aggregate, all new writes go to the newly added disks until they fill up to the level of the other disks in the RAID group. That creates a disk bottleneck on the filer, because all writes are then handled by a limited number of spindles. (A quick way to check the resulting RAID-group layout is shown after step 3.)
# aggr add aggr_new 7a.18,7a.19,7a.20,7a.21,7a.22,7a.23,7a.24,7a.25,7a.26,7a.27
3. Verify the aggregate is online
# aggr status aggr_new
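To confirm that the disks you just added form complete RAID groups, as discussed in step 2, you can also look at the RAID layout of the new aggregate. A quick check, assuming 7-Mode syntax:
# aggr status -r aggr_new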
4. Create a new volume named vol_new with size 1500g on aggr_new
# vol create vol_new aggr_new 1500g
5. Verify the volume is online
# vol status vol_new
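SnapMirror requires the destination volume to be at least as large as the source, so it is worth comparing the two before going further. A quick comparison in blocks, assuming 7-Mode syntax and that the source volume is named volume_name:
# vol status -b volume_name
# vol status -b vol_new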
6. Set up SnapMirror between the old and new volumes
First, restrict the destination volume:
# vol restrict vol_new
a. Initialize the baseline transfer:
# snapmirror initialize -S filername:volume_name filername:vol_new
b. Also add an entry for this SnapMirror relationship to the /etc/snapmirror.conf file:
filername:/vol/volume filername:/vol/vol_new kbs=1000 0 0-23 * *
Note: kbs=1000 throttles the SnapMirror transfer to 1000 KB/s.
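If you want the cutover updates to run at full speed, you can remove or raise the throttle by editing the same line; in snapmirror.conf a dash in the arguments field means no options (and therefore no throttle). A sketch of the unthrottled entry, under the same naming assumptions as above:
filername:/vol/volume filername:/vol/vol_new - 0 0-23 * *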
On the day of cutover
Update the SnapMirror session:
# snapmirror update vol_new
Transfer started.
Monitor progress with ‘snapmirror status’ or the snapmirror log.
# snapmirror status vol_new
Snapmirror is on.
Source Destination State Lag Status
filername:volume_name filername:vol_new Snapmirrored 00:00:38 Idle
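For more detail on a running transfer (base snapshot, lag, transfer progress), the status command also takes a -l flag:
# snapmirror status -l vol_new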
Quiesce the relationship. This finishes any in-flight transfers and then halts further updates from the SnapMirror source to the destination. Quiesce the destination:
# snapmirror quiesce vol_new
snapmirror quiesce: in progress
This can be a long-running operation. Use Control - C (^C) to interrupt.
snapmirror quiesce: dbdump_pb : Successfully quiesced
Break the relationship. This makes the destination volume writable:
# snapmirror break vol_new
snapmirror break: Destination vol_new is now writable.
Volume size is being retained for potential snapmirror resync. If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off
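If you intend to grow the volume and do not expect to resync back to the old source, you can turn off the option mentioned in the output above. A minimal example; at this point the destination is still named vol_new:
# vol options vol_new fs_size_fixed off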
Enable quotas:
# quota on volname
Rename volumes
Once the SnapMirror session is terminated, we can rename the volumes:
# vol rename volume_name volume_name_temp
# vol rename vol_new volume_name
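A quick way to confirm both renames took effect before touching the shares:
# vol status volume_name
# vol status volume_name_temp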
Remember, shares move with the volume name: if the volume hosting a share is renamed, the share's path must be updated to match. This requires deleting the old share and recreating it with the correct volume name. The file cifsconfig_share.cfg under the etc$ share has a listing of the commands run to create the shares; use it as a reference.
cifs shares -add "test_share$" "/vol/volume_name" "Admin Share Server Admins"
cifs access "test_share$" S-1-5-32-544 Full Control
Use a -f at the end of the cifs shares -add line to eliminate the y or n prompt.
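To verify that the recreated share now points at the renamed volume, you can list it by name; this assumes the example share name used above:
# cifs shares test_share$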
Start quotas on the new volume
# quota on volume_name
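As a final sanity check, a quota report should now include the quotas on the renamed volume; assuming 7-Mode syntax:
# quota report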
You are done. The shares and qtrees now refer to the new volume on the new aggregate. Test the shares by mapping them on a Windows host.