Tuesday, September 2, 2014

NetApp SnapMirror for Migrations


Volume migration using SnapMirror
ONTAP SnapMirror is designed to be a simple, reliable, and inexpensive tool for disaster recovery of business-critical applications. It ships with ONTAP but must be licensed before use.
Apart from DR, SnapMirror is extremely useful in situations like:
1. Aggregates or volumes have reached their maximum size limit.
2. You need to change the volume's disk type (tiering).
Prep work: build a new aggregate from free disks
1. List the spares in the system
# vol status -s
Spare disks
RAID Disk       Device  HA  SHELF BAY CHAN Pool Type  RPM   Used (MB/blks)    Phys (MB/blks)
---------       ------  --  ----- --- ---- ---- ----  ---   --------------    --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare           7a.18   7a  1     2   FC:B  -    FCAL 10000 372000/761856000  560879/1148681096
spare           7a.19   7a  1     3   FC:B  -    FCAL 10000 372000/761856000  560879/1148681096
spare           7a.20   7a  1     4   FC:B  -    FCAL 10000 372000/761856000  560879/1148681096
spare           7a.21   7a  1     5   FC:B  -    FCAL 10000 372000/761856000  560879/1148681096
spare           7a.22   7a  1     6   FC:B  -    FCAL 10000 372000/761856000  560879/1148681096
spare           7a.23   7a  1     7   FC:B  -    FCAL 10000 372000/761856000  560879/1148681096
spare           7a.24   7a  1     8   FC:B  -    FCAL 10000 372000/761856000  560879/1148681096
spare           7a.25   7a  1     9   FC:B  -    FCAL 10000 372000/761856000  560879/1148681096
spare           7a.26   7a  1     10  FC:B  -    FCAL 10000 372000/761856000  560879/1148681096
2. Create the new aggregate
Add the new disks. Make sure you add enough disks to create complete raid groups. Otherwise, when you later add disks to the aggregate, all new writes will go to the newly added disks until they fill up to the level of the other disks in the raid group. This creates a disk bottleneck on the filer, as all writes are then handled by a limited number of spindles.
# aggr create aggr_new -d 7a.18 7a.19 7a.20 7a.21 7a.22 7a.23 7a.24 7a.25 7a.26
3. Verify the aggregate is online
# aggr status aggr_new
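To confirm the disks landed in complete raid groups rather than partially filling an existing one, you can also inspect the raid layout (a standard 7-mode check; aggr_new as above):
# aggr status -r aggr_new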
4. Create a new volume named vol_new with size 1500g on aggr_new
# vol create vol_new aggr_new 1500g
5. Verify the volume is online
# vol status vol_new
6. Set up SnapMirror between the old and new volumes
First, restrict the destination volume:
# vol restrict vol_new
a. Initialize the mirror:
# snapmirror initialize -S filername:volume_name filername:vol_new
b. Also make an entry in /etc/snapmirror.conf for this SnapMirror session:
filername:volume_name filername:vol_new kbs=1000 0 0-23 * *
Note: kbs=1000 throttles the SnapMirror transfer rate to 1000 KB/s, and the four schedule fields (minute hour day-of-month day-of-week) trigger an update at the top of every hour.
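If you want the destination to lag less going into cutover, the same entry can run more often and with a higher throttle; for example (illustrative values, same four schedule fields):
filername:volume_name filername:vol_new kbs=5000 0,15,30,45 * * *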
On the day of cutover
Update the SnapMirror session:
# snapmirror update vol_new
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
# snapmirror status vol_new
Snapmirror is on.
Source Destination State Lag Status
filername:volume_name filername:vol_new   Snapmirrored 00:00:38 Idle
Quiesce the relationship. This finishes any in-progress transfers and then halts further updates from the SnapMirror source to the destination:
# snapmirror quiesce vol_new
snapmirror quiesce: in progress
This can be a long-running operation. Use Control-C (^C) to interrupt.
snapmirror quiesce: vol_new : Successfully quiesced
Break the relationship. This causes the destination volume to become writable.
# snapmirror break vol_new
snapmirror break: Destination vol_new is now writable.
Volume size is being retained for potential snapmirror resync. If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off.
Quotas are enabled after the volumes are renamed; see below.
Rename volumes
Once the SnapMirror session is broken, we can rename the volumes:
# vol rename volume_name volume_name_temp
# vol rename vol_new volume_name
Remember, shares move with the volume name: if the volume hosting a share is renamed, the path of the share changes with it. This requires us to delete the old share and recreate it with the correct volume name. The file cifsconfig_share.cfg under etc$ has a listing of the commands run to create the shares; use this file as a reference.
cifs shares -add "test_share$" "/vol/volume_name" -comment "Admin Share Server Admins"
cifs access "test_share$" S-1-5-32-544 "Full Control"
Use a -f at the end of the cifs shares -add line to eliminate the y or n prompt.
Start quotas on the new volume
# quota on volume_name
You are done. The shares and qtrees now refer to the new volume on the new aggregate. Test the shares by mapping them on a Windows host.
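For example, from a Windows command prompt (the drive letter is arbitrary; share and filer names follow the examples above):
C:\> net use Z: \\filername\test_share$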

How to copy files within NetApp



It always comes up: how can I copy single files, or large areas, directly from the NetApp console? Generally the answer comes back "you can't; use RoboCopy or rsync or another file migration tool." However, there are definitely ways of copying files around directly from the filer itself, and often this is the most efficient way of doing it! They just aren't the most intuitive or well-documented commands.
There may be other methods, and if you have something you have used in the past or know of, please feel free to share! Not all methods are suitable for all tasks, but each has its own individual uses.
ndmpcopy
This is often overlooked as a file/folder copy command and is often just used to migrate entire volumes around. In fact it can be used to copy individual folders or files around, and even better, can be used to copy data to other filers! Make sure NDMP is enabled first (ndmpd on). The syntax is quite simple…
ndmpcopy /vol/vol_source_name/folder/file /vol/vol_dest_name/file
Just to break this down, we are copying a file from "/vol/vol_source_name/folder" into "/vol/vol_dest_name". This isn't too restrictive: we don't have to keep the same path, and we can even copy things around within the same volume (such as copying into qtrees if you need). You can copy anything from an entire volume, to a single qtree, down to single folders deep in the directory tree. The only real restrictions are that you cannot use wildcards and you cannot select multiple files to copy.
If you want to copy files from one filer to another, we simply extend this syntax…
ndmpcopy -sa username:password -da username:password source_filer:/vol/vol_source_name/folder/file destination_filer:/vol/vol_dest_name/file
Replace username:password with the source filer (-sa) and destination filer (-da) logins. Here we copy a single file from one location on one filer to another location on another filer!
We can also define the incremental level of the transfer. By default the system will do a level 0 transfer, but you can request a single level 1 or level 2 incremental transfer. If the data has changed too much, or too much time has passed since the last copy, this may fail or may take longer than a clean level 0.
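For example, after an initial level 0 baseline, a follow-up incremental might look like this (in 7-mode the -l flag selects the level; paths here are placeholders):
ndmpcopy -l 1 /vol/vol_source_name/folder /vol/vol_dest_name/folder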
This can be very useful, and as the filer is doing this at block level, all ACLs are completely preserved. Take care to ensure the security style is the same on the destination, however, to prevent ACLs from being converted.
mv
This is a "priv set advanced" command, and so apparently reserved for "Network Appliance personnel". "mv" is very straightforward: give it a source and a destination, and a single file will get moved. Remember this is a move, so it is not technically a file copy at all.
mv <file1> <file2>
flex clone
This is a real cheat, but a great cheat! You clone an entire volume based on a snapshot, then you split this clone off from the snapshot. This is a great way of getting an entire volume copied with minimal disruption. The clone is created almost immediately and can then be brought online and used live. The clone split operation happens in the background, so you can move things and be live at the new location in very little time at all.
vol clone create new_vol -s volume -b source_vol source_snap
Where "new_vol" is the new volume you want to create, "-s volume" is the space reservation, "-b source_vol" is the parent volume that the clone will be based on, and "source_snap" is the snapshot you want to base the clone on.
vol clone split start new_vol
will then start the split operation on "new_vol".
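You can watch the progress of the background split with:
vol clone split status new_vol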
vol copy
Rather than a flex clone, if you haven't got that licensed, you can do a full vol copy. This is effectively the same as a vol clone, except the entire operation must complete before the volume is online and available. You need to create the destination volume first and then restrict it so that it is ready for the copy. Then you start the copy process.
vol copy start -s snap_name source_vol dest_vol
"-s snap_name" defines the snapshot you want to base the copy on, and "source_vol" and "dest_vol" define the source and destination for the copy. "-S" can be used instead to copy across all the snapshots contained in the volume. This can be very useful if you need to copy all backups within a volume as well as just the volume data.
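Putting it together, a minimal sequence might look like this (volume names and size are illustrative; vol copy status reports progress, and the destination must be brought online afterwards):
vol create dest_vol aggr_new 1500g
vol restrict dest_vol
vol copy start -S source_vol dest_vol
vol copy status
vol online dest_vol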
lun clone
If you need to copy an entire LUN, and again you haven't got flex clone licensed, you can do a direct lun clone and lun clone split. This is only really useful if you need a duplicate of the LUN in the same volume. It will create a clone based on a snapshot that already exists.
lun clone create clone_path -b parent_path parent_snap
"clone_path" being the new LUN you want to create, "parent_path" being the source LUN you want to clone from, and "parent_snap" being a snapshot that already exists of the parent LUN. Then you need to split the clone so the LUN becomes independent:
lun clone split start clone_path
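As with volume clones, the split runs in the background; you can check on it with:
lun clone split status clone_path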
SnapMirror / SnapVault
You can also use SnapMirror or SnapVault to copy data around. SnapMirror can be useful if you need to copy a large amount of data that will keep changing. You can set up a replication schedule, then during a small window of downtime do a final update and bring the new destination online.
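Condensed from the migration walk-through at the top of this post, the cutover boils down to (dst_vol being the destination volume):
snapmirror update dst_vol
snapmirror quiesce dst_vol
snapmirror break dst_vol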
dump and restore
This isn't really a good way of copying files around, but it is certainly a method. If you attach a tape device directly to the filer, you can do a dump, then a restore to a new location or filer. This can be the only option if you have a large amount of data to move to a new site, and no bandwidth and no way of having the two systems side by side temporarily.
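A rough sketch, assuming a locally attached tape device named rst0a (check the dump and restore man pages on your release for the exact flag ordering):
dump 0f rst0a /vol/vol_source_name
restore rfD rst0a /vol/vol_dest_name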

HOWTO Secure iSCSI LUNs Between Debian Linux 7.1 and NetApp Storage with Mutual CHAP


This post demonstrates how to enable two-way (mutual) CHAP on iSCSI LUNs between Debian Linux 7.1 and NetApp storage. The aggregate, LUN, and disk sizes in this HOWTO are small to keep it simple.

1) Install open-iscsi on your server.
> apt-get install open-iscsi
> reboot (don’t argue with me, just do it!)

2) Display your server's new iSCSI initiator (IQN) nodename.
> cat /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.1993-08.org.debian:01:e6d4ee61d916

3) On the NetApp filer, create the volume that will hold the iSCSI LUNs. This command assumes you already have aggregate aggr1 created. If not, use an aggregate that has enough room for your volume.
netapp> vol create MCHAPVOL aggr1 10g

4) Create the LUN in the volume.
netapp> lun create -s 5g -t linux /vol/MCHAPVOL/DEB71_iSCSI_MCHAP_01

5) Create an igroup and add the Linux iSCSI nodename (IQN) from step 2 above to it.
netapp> igroup create -i -t linux ISCSI_MCHAP_DEB71
netapp> igroup add ISCSI_MCHAP_DEB71 iqn.1993-08.org.debian:01:e6d4ee61d916
netapp> igroup show

ISCSI_MCHAP_DEB71 (iSCSI) (ostype: linux):
iqn.1993-08.org.debian:01:e6d4ee61d916 (not logged in)

6) Map the LUN to the igroup and give it LUN ID 1.
netapp> lun map /vol/MCHAPVOL/DEB71_iSCSI_MCHAP_01 ISCSI_MCHAP_DEB71 01

7) Obtain the NetApp target nodename.
netapp> iscsi nodename
iqn.1992-08.com.netapp:sn.84167939

8) Set the CHAP secret on the NetApp controller.
netapp> iscsi security add -i iqn.1993-08.org.debian:01:e6d4ee61d916 -s chap -p MCHAPDEB71 -n iqn.1993-08.org.debian:01:e6d4ee61d916 -o NETAPPMCHAP -m iqn.1992-08.com.netapp:sn.84167939

netapp> iscsi security show

init: iqn.1993-08.org.debian:01:e6d4ee61d916 auth: CHAP Inbound password: **** Inbound username: iqn.1993-08.org.debian:01:e6d4ee61d916 Outbound password: **** Outbound username: iqn.1992-08.com.netapp:sn.84167939

9) On the server, edit your /etc/iscsi/iscsid.conf file and set the parameters below.
> vi /etc/iscsi/iscsid.conf
node.startup = automatic
node.session.auth.authmethod = CHAP
node.session.auth.username = iqn.1993-08.org.debian:01:e6d4ee61d916
node.session.auth.password = MCHAPDEB71
node.session.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
node.session.auth.password_in = NETAPPMCHAP
discovery.sendtargets.auth.authmethod = CHAP
discovery.sendtargets.auth.username = iqn.1993-08.org.debian:01:e6d4ee61d916
discovery.sendtargets.auth.password = MCHAPDEB71
discovery.sendtargets.auth.username_in = iqn.1992-08.com.netapp:sn.84167939
discovery.sendtargets.auth.password_in = NETAPPMCHAP
> wq!

10) On the server, discover your iSCSI target (your storage system).
> iscsiadm -m discovery -t st -p 10.10.10.11
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

> iscsiadm -m node  (this should display the same as above)
10.10.10.11:3260,1000 iqn.1992-08.com.netapp:sn.84167939

11) On the server, manually log in to the iSCSI target (your storage array).
> iscsiadm -m node --targetname "iqn.1992-08.com.netapp:sn.84167939" --login

Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] successful.

On the NetApp storage console you should see the iSCSI sessions:
[iscsi.notice:notice]: ISCSI: New session from initiator iqn.1993-08.org.debian:01:e6d4ee61d916 at IP addr 10.10.10.203
[iscsi.notice:notice]: ISCSI: New session from initiator iqn.1993-08.org.debian:01:e6d4ee61d916 at IP addr 10.10.10.203

Verify the iSCSI session on the filer:
netapp> iscsi session show
Session 49
Initiator Information
Initiator Name: iqn.1993-08.org.debian:01:e6d4ee61d916
ISID: 00:02:3d:01:00:00
Initiator Alias: deb71

12) Stop and start the iSCSI service on the server.
> service open-iscsi stop
Pause for 10 seconds and then run the next command.
> service open-iscsi start

[ ok ] Starting iSCSI initiator service: iscsid.
[....] Setting up iSCSI targets:
Logging in to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] (multiple)
Login to [iface: default, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260] successful.
. ok
[ ok ] Mounting network filesystems:.

13) From the server, check your session.
> iscsiadm -m session -P 1
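Print level 3 also shows which SCSI device (e.g. /dev/sdb) the LUN attached as, which helps with the steps below:
> iscsiadm -m session -P 3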

14) From the server, check the NetApp iSCSI details.
> iscsiadm --mode node --targetname iqn.1992-08.com.netapp:sn.84167939 --portal 10.10.10.11:3260

15) From the server, find and format the new LUN (new disk).
> cat /var/log/messages | grep "unknown partition table"
deb71 kernel: [ 1856.751777]  sdb: unknown partition table

> fdisk /dev/sdb

Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0x07f6c360.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won’t be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Command (m for help): n
Partition type:
p   primary (0 primary, 0 extended, 4 free)
e   extended
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-10485759, default 2048): press enter
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): press enter
Using default value 10485759

Command (m for help): p
Disk /dev/sdb: 5368 MB, 5368709120 bytes
166 heads, 62 sectors/track, 1018 cylinders, total 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x07f6c360

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1         2048    10485759     5241856   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.


16) On the server, create the Linux file system on the new partition.
> mkfs -t ext4 /dev/sdb1
mke2fs 1.42.5 (29-Jul-2012)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
327680 inodes, 1310464 blocks
65523 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1342177280
40 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

17) Verify the partition.
> blkid /dev/sdb1
/dev/sdb1: UUID="afba2daf-1de8-4ab1-b93e-e7c99c82c054" TYPE="ext4"

18) Create the mount point and manually mount the directory.
> mkdir /newiscsilun
> mount /dev/sdb1 /newiscsilun
> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1 5.0G   10M  4.7G   1% /newiscsilun

19) Add the new mount point to /etc/fstab.
> vi /etc/fstab
/dev/sdb1 /newiscsilun ext4 _netdev 0 0
> wq!

Note: the _netdev option is important so that it doesn’t try mounting the target before the network is available.
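Since /dev/sdX names can change between boots, you may prefer to mount by the UUID reported by blkid in step 17 (equivalent entry, same options):
UUID=afba2daf-1de8-4ab1-b93e-e7c99c82c054 /newiscsilun ext4 _netdev 0 0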

20) Test that it survives a reboot by rebooting the server. With _netdev set, iSCSI starts and your CHAP logins take place before the system attempts to mount. After the reboot, log in and verify it is mounted.

> df -h | grep newiscsilun
Filesystem Size  Used Avail Use% Mounted on
/dev/sdb1 5.0G   10M  4.7G   1% /newiscsilun

21) On the server you can check session stats.
> iscsiadm -m session -s
Stats for session [sid: 1, target: iqn.1992-08.com.netapp:sn.84167939, portal: 10.10.10.11,3260]
iSCSI SNMP:
txdata_octets: 69421020
rxdata_octets: 765756
noptx_pdus: 0
scsicmd_pdus: 365
tmfcmd_pdus: 0
login_pdus: 0
text_pdus: 0
dataout_pdus: 924
logout_pdus: 0
snack_pdus: 0
noprx_pdus: 0
scsirsp_pdus: 365
tmfrsp_pdus: 0
textrsp_pdus: 0
datain_pdus: 193
logoutrsp_pdus: 0
r2t_pdus: 924
async_pdus: 0
rjt_pdus: 0
digest_err: 0
timeout_err: 0
iSCSI Extended:
tx_sendpage_failures: 0
rx_discontiguous_hdr: 0
eh_abort_cnt: 0

22) As root, change permissions on /etc/iscsi/iscsid.conf. I'm not sure why they haven't fixed this clear-text CHAP password in a file issue, so just make sure only root can read/write the file.
> chmod 600 /etc/iscsi/iscsid.conf

23) On the NetApp storage you can verify the LUN and the server's session.
netapp> lun show -v /vol/MCHAPVOL/DEB71_iSCSI_MCHAP_01
/vol/MCHAPVOL/DEB71_iSCSI_MCHAP_01      5g (5368709120)    (r/w, online, mapped)
Serial#: hoagPJtrPZCi
Share: none
Space Reservation: enabled
Multiprotocol Type: linux
Maps: ISCSI_MCHAP_DEB71=1

netapp> iscsi session show -v
Session 55
Initiator Information
Initiator Name: iqn.1993-08.org.debian:01:e6d4ee61d916
ISID: 00:02:3d:01:00:00
Initiator Alias: deb71

Session Parameters
SessionType=Normal
TargetPortalGroupTag=1000
MaxConnections=1
ErrorRecoveryLevel=0
AuthMethod=CHAP
HeaderDigest=None
DataDigest=None
ImmediateData=Yes
InitialR2T=No
FirstBurstLength=65536
MaxBurstLength=65536
Initiator MaxRecvDataSegmentLength=65536
Target MaxRecvDataSegmentLength=65536
DefaultTime2Wait=2
DefaultTime2Retain=0
MaxOutstandingR2T=1
DataPDUInOrder=Yes
DataSequenceInOrder=Yes
Command Window Size: 32

Connection Information
Connection 0
Remote Endpoint: 10.10.10.203:57127
Local Endpoint: 10.10.10.11:3260
Local Interface: e0a
TCP recv window size: 131400