Tuesday, August 26, 2014

Netapp HC checklist

How to find the SnapMirror transfer size

Many of us have wanted to know how much data will be transferred when we initiate a SnapMirror update. Here is a real-world example; I hope it helps.

vol1 is the volume we use for the transfer (I have kept the same volume name on the destination as on the source for easy understanding).

sourcefiler> snap list vol1
Volume vol1
working

%/used %/total date name
---------- ---------- ------------ --------
2% ( 2%) 1% ( 1%) Oct 23 04:15 destinationfiler(0123456789)_vol1.745 (snapmirror)
5% ( 4%) 2% ( 1%) Oct 22 23:01 hourly.0
9% ( 4%) 3% ( 1%) Oct 21 23:01 hourly.1
12% ( 4%) 4% ( 1%) Oct 20 23:01 hourly.2
18% ( 7%) 6% ( 2%) Oct 19 23:04 hourly.3
21% ( 5%) 7% ( 1%) Oct 18 23:01 hourly.4
25% ( 6%) 9% ( 2%) Oct 17 23:02 hourly.5
28% ( 5%) 11% ( 1%) Oct 16 23:01 hourly.6

Above, we run snap list for the volume on the source; it lists the SnapMirror base snapshot on the source.

destinationfiler*> snap list vol1
Volume vol1
working

%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Oct 23 04:15 destinationfiler(0123456789)_vol1.745
4% ( 4%) 1% ( 1%) Oct 22 23:01 hourly.0
8% ( 4%) 2% ( 1%) Oct 22 04:15 destinationfiler(0123456789)_vol1.744
8% ( 1%) 2% ( 0%) Oct 21 23:01 hourly.1
11% ( 4%) 3% ( 1%) Oct 20 23:01 hourly.2
17% ( 7%) 6% ( 2%) Oct 19 23:04 hourly.3
20% ( 5%) 7% ( 1%) Oct 18 23:01 hourly.4
25% ( 6%) 9% ( 2%) Oct 17 23:02 hourly.5
27% ( 5%) 10% ( 1%) Oct 16 23:01 hourly.6

Here we check which snapshot was last used for the SnapMirror transfer. Note the naming convention: the base snapshot is named destinationfiler(sysid)_volname.N, where N increments with each transfer (.744, .745, .746, and so on).

sourcefiler*> snapmirror destinations -s
Path Snapshot Destination

vol1 destinationfiler(0123456789)_vol1.745 destinationfiler:vol1 (this command tells us which snapshot was used for the SnapMirror relationship; in this case it is .745)

destinationfiler*> snapmirror update vol1
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log. (We have started a SnapMirror update for vol1.)

sourcefiler*> snap list vol1 (once the SnapMirror update is initiated, we see that a new snapshot is created, .746 in our case)
Volume vol1
working

%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Oct 23 16:46 destinationfiler(0123456789)_vol1.746 (busy,snapmirror)
2% ( 2%) 1% ( 1%) Oct 23 04:15 destinationfiler(0123456789)_vol1.745 (busy,snapmirror)
5% ( 4%) 2% ( 1%) Oct 22 23:01 hourly.0
9% ( 4%) 3% ( 1%) Oct 21 23:01 hourly.1
12% ( 4%) 4% ( 1%) Oct 20 23:01 hourly.2
18% ( 7%) 6% ( 2%) Oct 19 23:04 hourly.3
21% ( 5%) 7% ( 1%) Oct 18 23:01 hourly.4
25% ( 6%) 9% ( 2%) Oct 17 23:02 hourly.5
28% ( 5%) 11% ( 1%) Oct 16 23:01 hourly.6

destinationfiler*> snap list vol1 (on the destination side we see that .744 has been deleted and the relationship now references .745 only)
Volume vol1
working

%/used %/total date name
---------- ---------- ------------ --------
0% ( 0%) 0% ( 0%) Oct 23 04:15 destinationfiler(0123456789)_vol1.745
4% ( 4%) 1% ( 1%) Oct 22 23:01 hourly.0
4% ( 1%) 1% ( 0%) Oct 21 23:01 hourly.1
7% ( 4%) 2% ( 1%) Oct 20 23:01 hourly.2
14% ( 7%) 4% ( 2%) Oct 19 23:04 hourly.3
17% ( 5%) 6% ( 1%) Oct 18 23:01 hourly.4
22% ( 6%) 8% ( 2%) Oct 17 23:02 hourly.5
25% ( 5%) 9% ( 1%) Oct 16 23:01 hourly.6

We run snap delta to check the difference between the two snapshots currently in use (.745 and .746 in our case), which shows 25268240 KB changed, as below.

sourcefiler*> snap delta -V vol1 destinationfiler(0123456789)_vol1.745 destinationfiler(0123456789)_vol1.746

Volume vol1
working

From Snapshot To KB changed Time Rate (KB/hour)
--------------- -------------------- ----------- ------------ ---------------
destinationfiler(0123456789)_vol1.745 destinationfiler(0123456789)_vol1.746 25268240 0d 12:31 2016440.503

This is the amount of data we will transfer during this SnapMirror update.
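To put that number in perspective, a quick conversion using binary units (1 MB = 1024 KB):

25268240 KB / 1024 ≈ 24676 MB
24676 MB / 1024 ≈ 24.1 GB

This lines up with the progress counter shown in MB in the status output below (9505 MB done is about 39% of the transfer).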

destinationfiler*> snapmirror status
Snapmirror is on.
Source Destination State Lag Status
sourcefiler-my_vif-103:vol1 destinationfiler:vol1 Snapmirrored 00:46:56 Idle
sourcefiler-my_vif-103:vol1 destinationfiler:vol1 Snapmirrored 12:34:40 Transferring (9505 MB done)

After the update completes, we can verify the transfer size with the snapmirror status -l command on the destination.

destinationfiler*> snapmirror status -l vol1
Snapmirror is on.

Source: sourcefiler-my_vif-13:vol1
Destination: destinationfiler:vol1
Status: Idle
Progress: -
State: Snapmirrored
Lag: 00:08:53
Mirror Timestamp: Tue Oct 23 16:46:55 IST 2012
Base Snapshot: destinationfiler(0123456789)_vol1.746
Current Transfer Type: -
Current Transfer Error: -
Contents: Replica
Last Transfer Type: Update
Last Transfer Size: 25268248 KB (this matches the value we got from the snap delta command described earlier; the extra 8 KB is most likely a small amount of replication metadata sent along with the data, though I am not certain)
Last Transfer Duration: 00:07:49
Last Transfer From: sourcefiler-my_vif-13:vol1

SnapMirror

http://keepingitclassless.net/2012/04/a-quick-and-dirty-netapp-snapmirror/

To enable quotas


1. We need to add quota entries to the /etc/quotas file (see the note on quota resize after this list for updating entries later).
Ex: wrfile -a /etc/quotas *    tree@/vol/BUENO_WIN1    -       -       -       -       -
2. Now enable quotas on the volume.
Ex: quota on -w BUENO_WIN1
3. Now check the quota status.
Ex: quota status BUENO_WIN1
4. Once quotas are on, check the quota report.
Ex: quota report
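A note on later changes: if you edit limits for entries that already exist in /etc/quotas, as far as I recall 7-Mode does not require turning quotas off and on again; quota resize re-reads the file and applies the changed limits (brand-new quota targets still need a full quota off/on). A minimal sketch, using the same volume as above:

Ex: rdfile /etc/quotas        (verify the entries were written)
Ex: quota resize BUENO_WIN1   (re-read /etc/quotas and apply changed limits)
Ex: quota report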

Monday, August 25, 2014

Difference between SnapVault software and qtree-based SnapMirror




The following are some of the key differences between SnapVault and the qtree-based SnapMirror feature.
SnapMirror uses the same software and licensing on the source appliance and the destination
server.

SnapVault software has SnapVault primary systems and SnapVault secondary systems, which
provide different functionality. The SnapVault primaries are the sources for data that is to be backed up.
The SnapVault secondary is the destination for these backups.
Note: As of Data ONTAP 7.2.1, SnapVault primary and SnapVault secondary can be installed on
different heads of the same cluster. Data ONTAP 7.3 supports installing both the primary and
secondary on a standalone system.
SnapVault destinations are typically read-only. Unlike SnapMirror destinations, they cannot be made
into read-write copies of the data. This means that backup copies of data stored on the SnapVault
server can be trusted to be true, unmodified versions of the original data.
Note: A SnapVault destination can be made into read-write with the SnapMirror and SnapVault bundle.
SnapMirror transfers can be scheduled every few minutes; SnapVault transfers can be scheduled at
most once per hour.

Multiple qtrees within the same source volume consume one Snapshot copy each (on the source system) when qtree-based SnapMirror software is used, but consume only one Snapshot copy in total when SnapVault software is used.

The SnapMirror software deletes SnapMirror Snapshot copies when they are no longer needed for replication purposes. SnapVault retains or deletes its Snapshot copies on a specified schedule.
SnapMirror relationships can be reversed, allowing the source to be resynchronized with changes made
at the destination. SnapVault provides the ability to transfer data from the secondary to the primary only
for restore purposes. The direction of replication cannot be reversed.
SnapMirror can be used to replicate data only between NetApp storage systems running Data ONTAP.
SnapVault can be used to back up both NetApp and open systems primary storage, although the
secondary storage system must be a FAS system or a NearStore system.
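As an illustration of the SnapVault scheduling model mentioned above (a sketch only; vol1 and sv_nightly are hypothetical names), schedules are defined per Snapshot copy name, and on the secondary the -x option tells it to request a transfer from the primary before creating the copy:

secondary> snapvault snap sched -x vol1 sv_nightly 7@23

This would transfer and retain 7 nightly copies, created at 23:00.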

Qtree SnapMirror versus Volume SnapMirror


Qtree SnapMirror (QSM) compared with volume SnapMirror (VSM):

QSM: Unaffected by disk size or disk checksum differences between the source and destination, irrespective of the type of volumes used (traditional or flexible)
VSM: Unaffected by disk size or disk checksum differences between the source and destination if flexible volumes are used; affected by disk size or disk checksum differences if traditional volumes are used
QSM: Destination volume must have free space available equal to approximately 105% of the data being replicated
VSM: Destination volume must be equal to or larger than the source volume
QSM: Sensitive to the number of files in a qtree due to the nature of the qtree replication process; the initial phase of scanning the inode file may take longer with a larger number of files
VSM: Not sensitive to the number of files in a volume
QSM: Qtree SnapMirror destinations can be placed on the root volume of the destination storage system
VSM: The root volume cannot be used as a destination for volume SnapMirror
QSM: Replicates only one Snapshot copy of the source volume where the qtree resides (the copy created by the SnapMirror software at the time of the transfer) to the destination qtree; therefore, qtree SnapMirror allows independent Snapshot copies on the source and destination
VSM: Replicates all Snapshot copies on the source volume to the destination volume. Similarly, if a Snapshot copy is deleted on the source system, volume SnapMirror deletes the Snapshot copy at the next update. Therefore, volume SnapMirror is typically recommended for disaster recovery scenarios, because the same data exists on both source and destination. Note that the volume SnapMirror destination always keeps an extra SnapMirror Snapshot copy
QSM: A qtree SnapMirror destination volume might contain replicated qtrees from multiple source volumes on one or more systems and might also contain qtrees or non-qtree data not managed by SnapMirror software
VSM: A volume SnapMirror destination volume is always a replica of a single source volume
QSM: Multiple relationships would have to be created to replicate all qtrees in a given volume by using qtree-based replication
VSM: Volume-based replication can take care of this in one relationship (as long as the one volume contains all relevant qtrees)
QSM: For low-bandwidth wide area networks, qtree SnapMirror can be initialized using the LREP tool
VSM: Volume SnapMirror can be initialized using a tape device (SnapMirror to Tape) by using the snapmirror store and snapmirror retrieve commands
QSM: Qtree SnapMirror can occur only in a single hop; cascading of mirrors (replicating from a qtree SnapMirror destination to another qtree SnapMirror source) is not supported
VSM: Cascading of mirrors is supported for volume SnapMirror
QSM: Qtree SnapMirror updates are not affected by backup operations. This allows a strategy called continuous backup, in which traditional backup windows are eliminated and tape library investments are fully used
VSM: Volume SnapMirror updates can occur concurrently with a dump operation of the destination volume to tape by using the dump command or NDMP-based backup tools. However, if the volume SnapMirror update involves the deletion of the Snapshot copy that the dump operation is currently writing to tape, the SnapMirror update is delayed until the dump operation is complete
QSM: The latest Snapshot copy is used by qtree SnapMirror for future updates if the -s flag is not used (see the example after this list)
VSM: Volume SnapMirror can use any common Snapshot copy for future updates
QSM: Qtrees in source deduplicated volumes that are replicated with qtree SnapMirror are full size at the destination; even though the source volume is deduplicated, qtree SnapMirror expands the data and sends the entire data set to the destination
VSM: Source deduplicated volumes that are replicated with volume SnapMirror remain deduplicated at the destination; the deduplication savings also extend to bandwidth savings, because volume SnapMirror transfers only unique blocks
QSM: Source and destination volumes can be independently deduplicated
VSM: The destination volume is read-only and therefore cannot be independently deduplicated; if deduplication savings are desired on the destination volume, the source volume must be deduplicated
QSM: The files in the file system gain a new identity (inode numbers, etc.) on the destination system; therefore, file handles cannot be migrated to the destination system
VSM: The files in the file system have the same identity on both the source and destination systems
QSM: LUN clones can be created on the destination volume, but not in the destination qtree
VSM: LUN clones cannot be created on the destination volume because the volume is read-only; however, LUN clones can be created on a FlexClone volume because the FlexClone volume is writable
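For example, to pin a qtree SnapMirror update to a specific source Snapshot copy with the -s flag mentioned above, something like the following should work in 7-Mode (the filer, path, and snapshot names here are hypothetical):

destfiler> snapmirror update -s my_snap -S srcfiler:/vol/vol1/qtree1 destfiler:/vol/vol1/qtree1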

Snapmirror steps

1. You must license SnapMirror on both filers. (This is mandatory.)

2. Enable Snapmirror on both filers.
pri> options snapmirror.enable on
dr> options snapmirror.enable on

3. Turn on the Snapmirror log
pri> options snapmirror.log.enable on
dr> options snapmirror.log.enable on

4. Allow the destination filer access to the source filer. This is done by adding the destination's host name or IP address to /etc/snapmirror.allow on the source filer. (Note that plain wrfile overwrites a file; use wrfile -a to append.)
pri> wrfile -a /etc/snapmirror.allow dr

5. To prepare a volume for replication, first create the volume on the destination and then restrict it (vol restrict) so that it can receive the baseline transfer.

6. Now initialize a volume-based replication. This is performed on the destination filer (see the schedule example after these steps for keeping the mirror updated).
ctrldr> snapmirror initialize -S ctrlpri:vol1 ctrldr:volDR
Monitor with snapmirror status at the destination.
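Once the baseline is done, ongoing updates are driven by /etc/snapmirror.conf on the destination. A minimal sketch using the same filers as step 6; the four schedule fields are minute, hour, day-of-month, and day-of-week:

ctrldr> wrfile -a /etc/snapmirror.conf ctrlpri:vol1 ctrldr:volDR - 0 23 * *

This entry runs an update every day at 23:00. The dash is the arguments field, where options such as a kbs throttle could go.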

CIFS share creation

Create a volume (assume we have created a volume named vol1).

Import the volume to the vfiler

vfiler add vfiler1 /vol/vol1

vfiler1 is the vfiler where we are hosting the CIFS

qtree creation :

qtree create <complete qtree path>

qtree create /vol/vol1/qtree1

change the security style of the qtree

qtree security <complete qtree path> ntfs

qtree security /vol/vol1/qtree1 ntfs

cifs shares -add <cifsname> <complete qtree path>

cifs shares -add cifs1 /vol/vol1/qtree1 ( where cifs1 is the cifs share name)

Deleting the Everyone / Full Control entry (by default, when a CIFS share is created, it grants Everyone full control):

cifs access -delete cifs1 everyone (this deletes the Everyone access entry for the share)

Adding user groups to the CIFS share:

cifs access <sharename> domain\groupname "Full Control"

cifs access cifs1 domain1\group1 "Full Control" (a quick verification follows below)
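To confirm the resulting access list, listing the share by name should now show only the domain group (cifs1 is the share created above):

filer> cifs shares cifs1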

volume creation template

vol create <vol name> -s none <aggr name> 2t          (creates a 2 TB volume with no space guarantee)

snap reserve <vol name> 0                             (no space is reserved for snapshots)

vol autosize <vol name> -m 4044g -i 50g on            (volume grows up to 4 TB in increments of 50 GB)

vol options <vol name> fractional_reserve 0           (fractional reserve is set to 0, as against the default of 100)

sis off /vol/<vol name>                               (no dedupe)

vol options <vol name> nosnap on                      (disables automatic Snapshot copies)

snap sched <vol name> 0 0 0                           (no snapshots have been scheduled)

_________________________________________________________

sis config -s sun-sat@1 /vol/<vol name>               (dedupe scheduled daily at 1 a.m.)

snap sched <vol name> 0 0 7@20                        (one Snapshot copy taken daily at 20:00; the 7 most recent are retained)

(A filled-in example of the template follows below.)
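As a worked instance of the template with the snapshot schedule enabled (vol_data1 and aggr1 are hypothetical names):

filer> vol create vol_data1 -s none aggr1 2t
filer> snap reserve vol_data1 0
filer> vol autosize vol_data1 -m 4044g -i 50g on
filer> vol options vol_data1 fractional_reserve 0
filer> snap sched vol_data1 0 0 7@20
filer> sis config -s sun-sat@1 /vol/vol_data1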

Mapping an iSCSI host on NetApp

Install the iSCSI initiator on the host end.

Add the target by clicking the Add button.

Click on the Targets tab and log in to the NetApp.

Once the host is able to log in to the NetApp, we see a new session on the NetApp console from an initiator whose name begins with iqn.1991-05.com.microsoft.

Filer*> igroup add igroup_iscsi iqn.1991-05.com.microsoft:servername.net

This adds the IQN of the host initiator to the igroup (note that the igroup name comes first, then the initiator node name).

Then map the LUN to the igroup:

filer*> lun map /vol/ISCSI_Volume/q_ISCSI_Volume/q_ISCSI_Volume.lun igroup_iscsi 1

Once mapped, rescan for disks in Computer Management on the host. (A filer-side check follows below.)
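On the filer side, a quick way to confirm the mapping is lun show -m, which lists each LUN path with the igroup and LUN ID it is mapped to:

filer*> lun show -m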

Igroup creation and mapping to a LUN

qtree create /vol/vol1/qtree

igroup create -f -t vmware host1 50:01:43:90:00:c4:ae:b6

lun create -s 500g -t vmware -o noreserve /vol/vol1/qtree/lun1.lun (thin-provisioned LUN)

lun create -s 500g -t vmware /vol/vol1/qtree/lun1.lun (thick-provisioned LUN; space reservation is on by default)

lun map /vol/vol1/qtree/lun1.lun host1 1 (where host1 is the igroup of the host and 1 is the LUN ID; the LUN ID must be unique within the igroup). A verification sketch follows below.
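To double-check what was created and mapped, these read-only commands can be used (host1 is the igroup created above):

filer> igroup show host1     (lists the initiators and OS type of the igroup)
filer> lun show -m           (confirms the LUN path, mapped igroup, and LUN ID)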
