This article covers the options to configure NetApp ONTAP storage. In this instance the NetApp ONTAP Simulator is used and the environment is running in a VMware vSphere homelab. This is the third step after the initial deployment and installation of the NetApp ONTAP virtual appliance.
Before proceeding with the setup and configuration of NetApp ONTAP storage, a number of preliminary steps are required for the correct provisioning of the storage. In this instance the storage provided by NetApp ONTAP will be used in VMware vCenter to create VMFS Datastores. The vSphere Hosts will then connect to this newly created Datastore using the iSCSI protocol. NetApp also provides an additional appliance, the Virtual Storage Console (VSC). The VSC appliance provides and manages access to the storage available in the NetApp Cluster for the VMware vSphere environment. Moreover, the VSC appliance includes support for VMware VASA (vStorage APIs for Storage Awareness) and an SRA (Storage Replication Adapter) compatible with VMware SRM (Site Recovery Manager). A dedicated article in this series on NetApp ONTAP will cover these configurations in more detail. At the time of writing, the latest version of NetApp VSC (7.2) supports up to VMware 6.5u2. The next release of NetApp VSC (7.2.1 or 8.0) will include support for VMware 6.7u1.
The process to configure NetApp ONTAP storage to provide VMware Datastores is straightforward. Since this is the first configuration of its kind, the setup takes a bit longer than strictly required, just to make sure all components are in place. This is a good exercise to learn more about the NetApp ONTAP solution and tweak settings where required to meet expectations.
The most important parts include:
- Add licenses to NetApp ONTAP
- Configure the NTP client
- Create Disk Aggregates
- Configure a NetApp Storage Virtual Machine (SVM)
- Set up iSCSI
- Create Network Interfaces
- Create a Volume
- Create a LUN
Configure NetApp ONTAP Licenses
After downloading and installing the NetApp ONTAP Cluster, it is a good idea to import the licenses shipped with the Simulator release. This unlocks the various configurations and features that will be used. From the main Dashboard > Configuration > Licenses, a summary page shows the status of the licenses and the option to add new ones.
The wizard allows adding multiple licenses for different features at once, either by separating the license codes with a comma or by pointing to a license file when available.
After importing the license files available with the NetApp ONTAP Simulator, the page should look something like the following.
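For those preferring the command line, the same result can be achieved through the ONTAP CLI. The license codes below are placeholders for the ones shipped with the Simulator:

```shell
# Add one or more license codes at once (comma-separated)
system license add -license-code AAAAAAAAAAAAAAAAAAAAAAAAAAAA,BBBBBBBBBBBBBBBBBBBBBBBBBBBB

# Verify which features are now licensed
system license show
```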
Configure NetApp ONTAP NTP
The gear icon in the top-right corner opens the general settings for the Cluster and the Node. This menu includes the Date and Time section to set up the NTP client. The suggestion is to use the same NTP server used by the vSphere Host where the NetApp ONTAP appliance is running. When this field is empty, the VMware Tools installed in the appliance will try to get the time from the vSphere Host.
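The same NTP configuration can be applied from the ONTAP CLI; the server name below is an assumption and should match the NTP server used by the vSphere Host:

```shell
# Point the Cluster at the same NTP server used by the vSphere Host
cluster time-service ntp server create -server ntp.homelab.local

# Check the configured NTP servers and the current cluster time
cluster time-service ntp server show
cluster date show
```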
Configure NetApp ONTAP Disk Aggregates
Disk aggregates are groups of disks (even from different disk shelves) logically grouped together, which can be configured into large RAID groups. When assigning disks, this process also gives ownership to a specific Node as part of a Cluster. In the case of the NetApp ONTAP Simulator, the disks are already created during the initial setup by a built-in script when creating the NetApp Cluster.
All available disks can be accessed from Dashboard > Storage > Aggregates & Disks. This also shows the main details and types of disks presented to the NetApp Cluster or Node.
The Storage > Aggregates section provides the option to review existing disk groups, create new ones, and choose the RAID protection to use.
A nice feature built into the OnCommand System Manager console is a wizard that suggests an aggregate configuration based on the available resources.
In this case the disk aggregate will be created manually. As the wizard shows, it provides the option to specify a name for the aggregate, the disk types associated with the RAID group, the number of disks to use, and other features like the SnapLock type and the mirroring feature. Whereas the former provides Write Once Read Many (WORM) capabilities, the latter includes the option to duplicate data to an independent RAID group, achieving data redundancy. For the moment these features will not be used.
For the RAID configuration, NetApp ONTAP offers three types of protection: RAID-TEC, RAID-DP and RAID4. More on this in a dedicated article. Upon selection of the RAID type, the wizard updates the number and type of disks used, including the effective usable capacity.
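As a CLI alternative, an aggregate similar to the one built by the wizard can be sketched as follows. The aggregate and node names are assumptions for this homelab:

```shell
# List spare disks available to the Node
storage disk show -container-type spare

# Create a RAID-DP aggregate with 10 disks on the first Node
storage aggregate create -aggregate aggr1_node1 -node ontap9-01 -diskcount 10 -raidtype raid_dp

# Review the resulting aggregate and its usable capacity
storage aggregate show -aggregate aggr1_node1
```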
Configure NetApp ONTAP SVM (Storage Virtual Machine)
Once the physical disks have been grouped into an aggregate, the next step is to create a NetApp Storage Virtual Machine (SVM). SVMs provide access to the storage and combine several logical settings, such as the Data Protocol, the LIFs and more.
The Storage > SVMs section provides the wizard to create a new NetApp ONTAP SVM. In this instance the LUN for the VMware Datastore will be accessed through iSCSI. It is important to select this protocol, and the wizard will automatically set the security style and default language. It is also important to specify the DNS suffix name and DNS server IP address for correct name resolution and FQDN queries.
In this step everything can be left blank except the LUN OS Type where VMware needs to be selected.
The final part of the SVM creation is to specify the password for the vsadmin account (local to the NetApp Cluster) and create a new LIF. A LIF (Logical Interface) is essentially the network binding for this particular Data traffic, and it will be separated from the Node Management and Cluster Management traffic. Each NetApp ONTAP Node ships with at least four network cards, from e0a to e0d. Using the browse button it is possible to specify a free one not already dedicated to Node or Cluster Management. Each SVM can have up to four LIFs for redundancy. For example, two LIFs can be configured to point at two separate switches for a fail-over configuration using a Round-Robin (RR) technique. It is also a good idea to create DNS (A) and (PTR) records for the LIFs. In the case of the NetApp ONTAP Simulator this means at least four records: two for the first Node and two for the second Node. The Simulator license allows booting up to two Nodes in the same Cluster!
And a final page showing the main settings for the NetApp SVM creation.
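The equivalent SVM creation from the CLI might look like the following sketch, assuming the aggregate created earlier and an SVM named svm_iscsi:

```shell
# Create the SVM with its root volume on the existing aggregate
vserver create -vserver svm_iscsi -rootvolume svm_iscsi_root -aggregate aggr1_node1 -rootvolume-security-style unix

# Review the new SVM
vserver show -vserver svm_iscsi
```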
Configure NetApp ONTAP Network Interfaces
Along with the creation of the NetApp SVM there is an automatic creation of the Network Interface associated with the SVM. Pretty neat. The problem is that the Data Protocol is set to “none” instead of “iSCSI”. As the screenshot shows, the other two interfaces are already associated with the Cluster and Node Management traffic types.
Since the existing interface cannot be modified, it is possible to delete it and create a new one specifying iSCSI and the Data type of traffic. This is also the occasion to create a second Network Interface to provide redundancy for the iSCSI Data traffic.
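Deleting the auto-created interface and recreating it with the iSCSI Data protocol can also be done from the CLI. The LIF names, ports and IP addresses below are assumptions for this homelab:

```shell
# Remove the auto-created interface whose Data Protocol is "none"
network interface delete -vserver svm_iscsi -lif svm_iscsi_data_lif1

# Recreate it as an iSCSI Data LIF on a free port (e0b here)
network interface create -vserver svm_iscsi -lif svm_iscsi_iscsi1 -role data -data-protocol iscsi -home-node ontap9-01 -home-port e0b -address 192.168.1.50 -netmask 255.255.255.0

# A second LIF on another port/switch provides redundancy
network interface create -vserver svm_iscsi -lif svm_iscsi_iscsi2 -role data -data-protocol iscsi -home-node ontap9-01 -home-port e0c -address 192.168.1.51 -netmask 255.255.255.0
```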
Configure NetApp ONTAP SVM settings
One last configuration step for the SVM is to associate it with the iSCSI traffic and the Disk Aggregate group. From the Storage > SVMs view, edit the current SVM and add iSCSI to the Data Protocols.
The next step is to associate this particular SVM with the desired disk Aggregate from the previous configuration step.
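Both associations can be sketched from the CLI as well, using the names assumed above:

```shell
# Make sure iSCSI is listed among the SVM data protocols
vserver add-protocols -vserver svm_iscsi -protocols iscsi

# Allow the SVM to provision volumes on the chosen aggregate
vserver add-aggregates -vserver svm_iscsi -aggregates aggr1_node1
```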
Configure NetApp ONTAP iSCSI settings
Now that the disk groups and the SVM accessing them have been created, the next step is to configure the NetApp ONTAP iSCSI server settings to let the Initiators connect to the iSCSI Target running on the NetApp Cluster, or rather the Node in this case. From Storage > SVMs > SVM Settings, the iSCSI section shows that the service is not running and that no interfaces are associated yet.
From this section, clicking Start and Stop service generates the iSCSI Target name on the NetApp Node and associates it with the newly created SVM.
When editing the SVM settings, the General tab provides the option to indicate which iSCSI Initiators will be allowed to connect and with which protocol. Very importantly, it is also necessary to specify the “Operating System”: since these Initiators will be connecting to a “VMware LUN”, select the type accordingly.
In the Initiators field it is now a matter of pasting the iSCSI Initiator name as it appears on the software iSCSI interface of the VMware vSphere Host.
This operation should be repeated for all VMware Hosts intended to connect to the NetApp LUN.
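The iSCSI Initiator name (IQN) of each host can be read from the vSphere Client (Host > Configure > Storage Adapters) or from an SSH session on the ESXi host:

```shell
# List the iSCSI adapters; the UID column shows the initiator IQN
esxcli iscsi adapter list
```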
When starting the iSCSI service, it should now show the iSCSI interface as connected. Something similar to the screenshot below, where the Data traffic connection is using exactly the intended interface “e0b”.
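For reference, the iSCSI service and the initiator group holding the VMware IQNs can be sketched from the ONTAP CLI. The igroup name and the example IQN are assumptions:

```shell
# Create and start the iSCSI service on the SVM (generates the Target name)
vserver iscsi create -vserver svm_iscsi
vserver iscsi show -vserver svm_iscsi

# Group the ESXi initiators together with the VMware OS type
lun igroup create -vserver svm_iscsi -igroup esxi_hosts -protocol iscsi -ostype vmware
lun igroup add -vserver svm_iscsi -igroup esxi_hosts -initiator iqn.1998-01.com.vmware:esxi01-1a2b3c4d
```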
Configure NetApp ONTAP Volume
The next step is to create the NetApp Volume where the NetApp LUN will be configured. If all the license keys available with the NetApp ONTAP Simulator have been added, several nice features become available for the NetApp storage Volumes.
From the Storage > Volumes section, a wizard creates the Volume for the NetApp LUN. The settings include:
- Name: is simply the desired name for the Netapp Volume.
- Aggregate: is the group of disks configured with RAID protection.
- Tiering Policy: defines which data in the volume is eligible for tiering, for example snapshot copies only.
- Size: defines the desired size of the volume based on the size of the aggregate. In theory this should not be overprovisioned, but since an aggregate can dynamically expand, this configuration remains flexible.
- Space Reserve: gives the option to use thin or thick provisioning for the volume allocation. Useful when the same aggregate is used for multiple volumes.
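A CLI sketch of an equivalent volume, thin provisioned on the aggregate created earlier (names and size are assumptions):

```shell
# Thin-provisioned 100 GB volume that will host the LUN
volume create -vserver svm_iscsi -volume vol_vmware -aggregate aggr1_node1 -size 100g -space-guarantee none

# Review the new volume
volume show -vserver svm_iscsi -volume vol_vmware
```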
The creation of the NetApp volume is pretty much instantaneous and also provides the option to review or change settings. More on these in a dedicated article.
Configure NetApp ONTAP LUN
At this point everything is ready to configure the NetApp ONTAP LUN storage which will be presented to VMware vCenter to create a VMFS Datastore. The Storage > LUNs section provides the wizard to create a LUN on the selected SVM.
This wizard collects the settings for the Name, description, Type and size of the LUN, including the Space Reservation feature, with the same thin or thick provisioning choice as before. Should there not be enough space to create the LUN, the wizard will report this at the end. As a workaround to know roughly how big the LUN can be, open another tab in the web browser, log into another NetApp ONTAP session, and check the size of the selected Aggregate.
Next is to select an existing Volume, or create a new one on the fly, where this LUN will be created.
Very importantly, the wizard then offers the option to select which VMware vSphere Hosts will be able to connect to the LUN Target as iSCSI Initiators.
NetApp ONTAP also includes the option to run Quality of Service (QoS) routines with customizable policies to prevent the storage traffic from exceeding predefined limits. Very useful with older or mixed storage types by vendor, speed, class and so on.
The wizard has now all the information to create the LUN storage.
And in a matter of a few moments the LUN is configured and ready to be used by VMware vCenter to provision an additional Datastore.
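The same LUN creation and mapping to the ESXi initiator group can be sketched from the CLI. The path, size and igroup name follow the assumptions used above:

```shell
# Thin-provisioned LUN with the VMware OS type inside the volume
lun create -vserver svm_iscsi -path /vol/vol_vmware/lun_vmware01 -size 80g -ostype vmware -space-reserve disabled

# Map the LUN to the initiator group containing the ESXi hosts
lun map -vserver svm_iscsi -path /vol/vol_vmware/lun_vmware01 -igroup esxi_hosts

# Review the LUN and its mapping
lun show -vserver svm_iscsi
```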
The next step involves the NetApp VSC, which installs as an appliance in the VMware vSphere environment and, by means of its plugins, configures and provisions storage directly in VMware vCenter. More to be covered in the next steps.