Thursday, March 20, 2014

LUN creation on NetApp and mapping to Windows Server 2008 / 2008 R2


Before creating a LUN on a NetApp filer, you need to check the license for the related protocol. If there is no license, purchase one and add it using the license add command. You can see the current license details using the license command.
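
For example, for FC LUNs you would check for and, if needed, add the FCP license and start the service (XXXXXXX below is a placeholder, not a real license code):

filer1> license
filer1> license add XXXXXXX
filer1> fcp start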


Steps for the LUN creation process

1.     LUN creation
filer1> lun create -s 10g -t windows_gpt /vol/vol0/windows/windows2008R2
lun create: created a LUN of size:   10.0g (10742215680)

2.     Igroup creation (-f for FCP, -t for the host OS type, followed by the initiator's WWPN)
filer1> igroup create -f -t windows windows2008 21:00:00:e0:8b:94:c5:0a

3.     LUN mapping (LUN path, igroup name, then the LUN ID)
filer1> lun map /vol/vol0/windows/windows2008R2 windows2008 18

4.     To check that the LUN is mapped
filer1> lun show -m
LUN path                            Mapped to          LUN ID  Protocol
-----------------------------------------------------------------------
/vol/vol0/windows/windows2008R2     windows2008            18       FCP

5.     To see the LUN Info

filer1> lun show -v
        /vol/vol0/windows/windows2008R2   10.0g (10742215680)   (r/w, online, mapped)
                Serial#: 
                Share: none
                Space Reservation: enabled
                Multiprotocol Type: windows_gpt
                Maps: windows2008=18
                Occupied Size:   75.1m (78753792)
                Creation Time: Thu Sep 22 03:12:17 EDT 2011
                Cluster Shared Volume Information: 0x0
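
Note that the WWPN used in step 2 has to come from the host. Assuming the host's HBA is already logged in to the fabric, you can list the initiators the filer sees with:

filer1> fcp show initiators

The 21:00:00:e0:8b:94:c5:0a used above would be taken from this output.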

Provisioning the NetApp LUN on the Windows server

Log in to Windows. To scan for the new LUN: My Computer --> Manage --> Device Manager, then scan for hardware changes.


After the scan you can find the LUN listed under Disk drives.

Next open Disk Management; the new disk will be available there. If the same LUN shows up as more than one disk, you need to install multipathing software, the NetApp DSM (you can download it from the NetApp support site). Any server with more than one connection or more than one HBA needs the DSM installed, because of ALUA.

Before you can use the disk, you need to initialize it. After initialization it can be formatted with a Windows file system.

When you initialize the disk you choose the partition style: MBR (Master Boot Record) or GPT. If you plan to boot from the LUN using MBR, your HBA has to support booting from it as well. After that, format the disk and it is ready to use.
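
If you prefer the command line to Disk Management, the same initialize-and-format steps can be done with diskpart. A sketch, assuming the new LUN shows up as Disk 1 and you want drive letter D (check the disk number with list disk first; convert gpt matches the windows_gpt LUN type used earlier, use convert mbr if you need MBR):

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> convert gpt
DISKPART> create partition primary
DISKPART> format fs=ntfs quick
DISKPART> assign letter=D
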
The new volume (D:) now appears as shown above. This is block-level access. To find the LUN number and other details, right-click the disk and select Properties.

Vfiler Description

The vfiler command controls the configuration of Virtual Filers (vfilers) on a filer.
The vfiler command is available only if your filer has the vfiler license.


vfiler create vfilername [-n] [-s ipspace ] -i ipaddr [-i ipaddr ]… path [ path ...]

There are two ways to create a vfiler. The first uses the -i option to specify configuration information on the command line. Use this form when creating a vfiler for the first time.


ex: fas3170> vfiler create test -s ipspace1 -i 192.168.42.142 /vol/fas6080a_vfiler1_root

vfiler create vfilername -r path

The second form uses the -r option to re-create a vfiler from configuration information stored in the specified data set. Use this form when creating a vfiler from a data set that has been SnapMirrored between filers.

Note: When re-creating a SnapMirrored vfiler using the vfiler create vfilername -r path form of the command, the specified vfilername parameter must match the name of the original vfiler exactly, and the path must match the first path that was specified in the vfiler create command that originally created the vfiler.
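
For example, to re-create the vfiler from the earlier example on the SnapMirror destination filer (the destination prompt fas6080 is hypothetical; the name and path reuse the create example above):

fas6080> vfiler create test -r /vol/fas6080a_vfiler1_root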

vfiler destroy [-f] vfilername
vfiler rename old_vfilername new_vfilername
vfiler add vfilername [-f] [-i ipaddr [-i ipaddr]…] [ path [ path ...]]

The add subcommand adds the specified IP addresses and/or paths to an existing vfiler. The arguments have the same rules as those specified during the initial create. The -f option skips the confirmation and warnings.

Ex:  fas3170> vfiler add fas3170_vfiler2 /vol/fas6080b_vfiler1_nas_clone

vfiler remove vfilername [-f] [-i ipaddr [-i ipaddr]…] [ path [path ...]]
vfiler limit [ max_vfilers ]

NOTE: To increase this value, you must reboot for it to take effect. Limits are based on memory. In failover mode, a node can handle double the vfilers so it can take over its cluster partner's. The simulator is limited to 5 and defaults to 5.
–  FAS Controllers with <1GB RAM   11 max vFilers
–  FAS Controllers with >=1GB RAM 26 max vFilers
–  FAS Controllers with >=2GB Ram 65 max vFilers
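
For example, on a controller with 2GB or more of RAM you could raise the limit to the maximum (the change only takes effect after a reboot):

fas3170> vfiler limit 65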

vfiler move vfiler_from vfiler_to [-f] [-i ipaddr [-i ipaddr]…] [path [path ...]]
vfiler start vfilertemplate
vfiler stop vfilertemplate
vfiler status [-r|-a] [ vfilertemplate]
vfiler run [-q] vfilertemplate command [args]
vfiler allow vfilertemplate [proto=cifs] [proto=nfs] [proto=rsh] [proto=iscsi] [proto=ftp] [proto=http]
vfiler disallow vfilertemplate [proto=cifs] [proto=nfs] [proto=rsh] [proto=iscsi] [proto=ftp] [proto=http]
vfiler context vfilername
vfiler dr configure [-l user:password ] [-e ifname:IP address:netmask, ... ] [-d dns_server_ip:... ] [-n nis_server_ip:... ] [-s ] remote_vfiler@remote_filer
vfiler dr status remote_vfiler@remote_filer
vfiler dr delete [-f] remote_vfiler@remote_filer
vfiler dr activate remote_vfiler@remote_filer
vfiler dr resync [-l remote_login:remote_passwd ] [-a alt_src,alt_dst ] [-s ] vfilername@destination_filer
vfiler migrate [-m nocopy [-f]] [-l user:password ] [-e ifname:IP address:netmask, ... ] remote_vfiler@remote_filer
vfiler migrate start [-l user:password ] [-e ifname:IP address:netmask, ... ] remote_vfiler@remote_filer
vfiler migrate status remote_vfiler@remote_filer
vfiler migrate cancel remote_vfiler@remote_filer
vfiler migrate complete remote_vfiler@remote_filer
vfiler help
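
To tie a few of these together, some usage sketches based on the hypothetical vfiler test created earlier (all names and credentials are placeholders):

fas3170> vfiler status -a
fas3170> vfiler allow test proto=cifs proto=nfs
fas3170> vfiler run test exportfs

And on a disaster-recovery filer, pointing back at the source vfiler:

fas6080> vfiler dr configure -l root:password test@fas3170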

Wednesday, March 19, 2014

Thin Provisioning in NetApp 7-Mode


Thin Provisioning is an efficient way to provision storage, because the storage is not all pre-allocated up front. In other words, when a volume or LUN is created using thin provisioning, no space on the storage system is used. The space remains unused until data is written to the LUN or the volume, at which time only enough space to store the data will be used.

Thin provisioning requires enabling a few options, listed below. You can set them with the commands that follow, or through System Manager.
·         Aggregate
-         Snapshot Reserve
·         Volume
-         Autosize
-         Snapshot Autodelete
-         SNAP reserve
-         Space Guarantee
·         LUN
-         Overwrite Reserve
-         LUN Reservation

vol create <volume_name> -s none <aggr_name> <size>
vol options <volume_name> fractional_reserve 0
vol autosize <volume_name> on
vol autosize <volume_name> -m <max_size>
vol options <volume_name> try_first volume_grow

snap reserve <volume_name> 0
snap autodelete <volume_name> on

lun set reservation /vol/<volume_name>/<qtree_name>/<lun_name> disable

With all of the above options set, the volume and its LUNs become fully thin provisioned.
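
For concreteness, here is how the full sequence might look for a hypothetical 100g volume thinvol on aggregate aggr1 containing a LUN lun0 (all names and sizes are placeholders):

filer1> vol create thinvol -s none aggr1 100g
filer1> vol options thinvol fractional_reserve 0
filer1> vol autosize thinvol -m 200g on
filer1> vol options thinvol try_first volume_grow
filer1> snap reserve thinvol 0
filer1> snap autodelete thinvol on
filer1> lun set reservation /vol/thinvol/lun0 disable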


If you are using System Manager, select the volume and click Edit; you will find the same options there.

Thin Provisioning Overview

NetApp Storage Overcommitment for LUNs

Most modern storage arrays allow for some sort of thin provisioning.  If you are currently using thin provisioning on your system, you open up the possibility to overcommit your storage.  This technology is relatively new to the storage industry, but almost all vendors provide some means to overcommit their arrays.  Network guys have known about this for years; they just refer to it as oversubscription.  Cisco UCS systems, for example, are 2:1 oversubscribed: a fully loaded chassis (8 blades) is capable of 160Gb/s of throughput, but there is only 80Gb/s of bandwidth available from the FEX (uplinks).  The common thought is that not all systems will demand all their bandwidth at the same time, and the same theory holds true for storage.  We can drive up utilization of our storage arrays by overcommitting them.

Take, for example, the following scenario: creating a 100GB LUN using the "Create LUN Wizard" in System Manager.  If you check the new volume created in this process, you will see it is now nearly full.  This is because the LUN is space-reserved by default.  If we write 10GB of data to the LUN, the used space won't change, since the system has already set aside space for the LUN.  A look at the picture below shows how this might look.
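
You can see the same thing from the CLI. With a hypothetical volume lunvol holding the 100GB LUN, df will report the volume as nearly full even before any data is written:

filer1> df -h /vol/lunvol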



If you browse to your newly created LUN and uncheck "Space Reserved" you have just made your LUN "thin".  As you can see from the diagram below you now have more space available in the volume.  This is a safe option for thin provisioning since your LUN still has the volume space guaranteed and it can grow to the fully provisioned size if necessary.  This will give you the ability to store more snapshots, which is a good thing, but it doesn't let us overcommit our storage unless we build more LUNs in the same volume.
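
The CLI equivalent of unchecking "Space Reserved", again with hypothetical names, is:

filer1> lun set reservation /vol/lunvol/lun0 disable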





In order to get the best use of your NetApp array you can also thin provision your volumes.  The way you do that is by setting your volume guarantee to "none".  By looking at the diagram below we now see the only space consumed in our aggregate is the actual data written to our LUN.
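
From the CLI, the same change for a hypothetical volume lunvol is:

filer1> vol options lunvol guarantee none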



There are a few caveats with this design.  The first is that you need to keep a close eye on your storage utilization.  I recommend Operations Manager from NetApp, as it will let you create alerts for things you need to watch, like aggregate free space and aggregate overcommitment.  Almost as importantly, Operations Manager will track historical data growth.  This will let you plan ahead for extra capacity when needed.  If you do not actively monitor your space, you could run into a situation where your LUNs go offline and are no longer available to the systems that need them.  The other thing to note with this design is how to handle volume auto-growth.

When you create a volume via System Manager, the volume will only auto-grow to 20% above its creation size, so a 100GB volume will only auto-grow to 120GB.  This results in a good deal of management overhead if you ever start growing thin provisioned LUNs.  You run the risk of having the volume run out of space unless you grow the volume or adjust the autosize parameter (something you can't do from System Manager).  I recommend configuring the volume autosize equal to the size of the containing aggregate on non-space-reserved volumes.  You could create the volume equal to the size of the aggregate and get basically the same result, but you won't be able to use the aggregate overcommitment alarm from Operations Manager.
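
For example, to let a hypothetical volume lunvol auto-grow up to the size of its 10TB containing aggregate:

filer1> vol autosize lunvol -m 10240g on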

If you have a large number of arrays, volumes, and LUNs, changing all of them would be daunting were it not for the recently released Data ONTAP PowerShell Toolkit.  I put together this script so you can get a good idea of how much space you could be saving on your storage arrays.  By default it will look only at LUNs that are already thin provisioned (space-reserved LUNs are ignored), volumes that have a space guarantee set, and volumes that are not in a SnapMirror relationship.  The script doesn't modify anything on your system; it just reports on how much space you could save.  If you want to take full advantage of the space savings, uncomment these lines in the script:


Set-NaSnapshotAutodelete $volume.name state on | Out-Null           # turn on snapshot autodelete for the volume
Set-NaVolOption $volume.name fractional_reserve 100 | Out-Null      # set fractional reserve to 100
Set-NaVolOption $volume.name guarantee none | Out-Null              # remove the volume space guarantee (thin volume)
Set-NaVolOption $volume.name try_first volume_grow | Out-Null       # grow the volume before deleting snapshots
Set-NaVolAutosize $volume.name -Enabled -MaximumSize $aggrSize.SizeNominal | Out-Null  # allow auto-grow up to the aggregate size


Once you uncomment those lines and re-run the script, it will set all your volumes with thin provisioned LUNs to have a fractional reserve of 100, no volume space guarantee, snapshot auto-delete, volume auto-grow, and a volume autosize equal to the size of the containing volume's aggregate.  The autosize parameter will give you a set-it-and-forget-it approach to managing thin provisioned LUNs.  Pick up the script over at the NetApp forums.

How to add more disks to the NetApp simulator

Steps to follow to add more disks to the NetApp simulator:

priv set advanced (switch to advanced privilege mode)

disk show (list the disks the simulator currently has)

useradmin diaguser unlock (unlock the diag user, which is needed for the system shell)

useradmin diaguser password (set a password for the diag user)

systemshell (log in to the system shell as the diag user)

cd /sim/dev

vsim_makedisks -h (list the available disk types and options)

sudo vsim_makedisks -n 14 -t 36 -a 2 (create 14 disks of type 36 on adapter 2)

sudo vsim_makedisks -n 14 -t 36 -a 3 (create 14 more disks of type 36 on adapter 3)

ls ,disks (the simulator keeps its disk files in the ",disks" directory)

exit

priv set admin

reboot (the new disks are detected during the reboot)
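
One follow-up step that is easy to miss: on most simulator versions the new disks come up unowned after the reboot, so before you can add them to an aggregate you still need to assign them.

disk assign all (take ownership of all unowned disks)

aggr status -s (verify the new disks now show up as spares)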