

The configure-network utility is used to perform network configuration on VMware virtual appliances, including the vCenter Server Appliance (VCSA) and the vRealize Suite appliances.

Log in to the virtual appliance with the 'root' account and run the command: /opt/vmware/share/vami/vami_config_net

image

0) Show current Configuration

1) Exit

2) Default Gateway: Allows configuration of the default gateway for the network

3) Hostname: Sets the hostname of the virtual appliance.

4) DNS: Two DNS servers can be configured as needed.

5) Proxy Server: Proxy server address and port number (the http:// prefix is added by the utility).

6) IP Address for eth0: This can be either static (specifying the IP address and Subnet Mask) or dynamic (via DHCP).

Ref: Using the configure-network command-line utility in VMware vCenter Support Assistant (2042462)


vRealize Suite Lifecycle Manager allows you to download vRealize product OVAs through a registered “My VMware” account. If the appliance has no direct internet connection, follow the steps below to configure a proxy server.

Procedure:

1. Log in to the vRSLCM appliance using the ‘root’ account.

2. Run the following command to configure the network/proxy settings manually:

/opt/vmware/share/vami/vami_config_net

clip_image001

3. Configure the Proxy Server settings by entering menu option (5).

Press (Y) and enter the IP address/FQDN and port number of the proxy server.

clip_image002

vRealize Suite Lifecycle Manager provides a single installation and management platform for all products in the vRealize Suite.

Once the vRSLCM OVA is deployed, you need to configure the following basic settings before downloading and installing the rest of the vRealize products.

Procedure:

1. Log in to the vRealize Suite Lifecycle Manager appliance at the URL https://IPaddress/vrlcm using the default admin account “admin@localhost” and the password “vmware”.

Note: If you are logging in to Lifecycle Manager for the first time, reset the root password.

clip_image002

2. Configure the settings:

a. Common Configuration: Allows you to modify common settings such as passwords, SSH settings, and the configuration drift interval.

clip_image004

Drift interval: The interval of time vRealize Suite Lifecycle Manager uses to collect data for configuration drift reports.

b. OVA Configuration: Allows you to select the OVA source location (an NFS share or the local /data/source directory) or download the OVAs directly from VMware using a registered “My VMware” account.

clip_image006

c. My VMware: Enter your “My VMware” credentials to download product OVAs through My VMware. (You may need to configure the proxy on LCM first.)

clip_image008

Once registered, you can start downloading the product OVAs.

clip_image010

clip_image012

d. Logs: Select the level of information to collect and the number of log files to keep.

clip_image014

You can also trigger a download of the logs.

clip_image015

e. Update: Allows you to install updates to the vRealize Suite Lifecycle Manager appliance.

clip_image017

f. Generate Certificate: Allows you to generate a new SAN certificate.

clip_image019

References:

· Release Notes: https://docs.vmware.com/en/vRealize-Suite/2017/rn/vrealize-lifecycle-manager-1-release-notes.html

· Documentation: https://docs.vmware.com/en/vRealize-Suite/2017/com.vmware.vrsuite.lcm.doc/GUID-5E1CB756-CE86-430D-89C0-DE3831C33738.html

VMware Workstation 14 Pro allows you to quickly scan for virtual machines on local folders as well as network shared storage and USB drives.

Procedure:

1. Select File > Scan for Virtual Machines.

clip_image001

2. In the Select a location to scan text box, enter or browse for a location, then click Next.

clip_image002

3. Select the virtual machines and the library node, then click Finish.

clip_image003

Notes:

1. To use the same folder hierarchy in the library, click Match the file system folder hierarchy in the library.

2. If the location of the virtual machines you are adding to the library is on a remote server or a removable storage device, select the options in the Copy to local disk options dialog box that meet your needs.

Reference:

VMware Workstation Pro 14.0

vRealize Suite Lifecycle Manager is designed to streamline and simplify the deployment and ongoing management of the vRealize product portfolio throughout the product life cycle. It relieves IT teams of day-to-day administrative tasks by accelerating product installation and deployment, simplifying ongoing management and configuration, and enabling best-practice-based implementation.

image

Key Benefits:

  • Rapid Installation:

Simple and flexible deployment model with product- and solution-based installation supported, plus an automated environment replication and validation process.

  • Easy Ongoing Management:

Automated configuration management and drift management with health monitoring capabilities.

  • One-Click Upgrade:

Simplified upgrade and patching process with environment snapshot function.

  • Best Practice Implementation:

Easy alignment with VMware recommended reference architecture and validated design (VVD) through pre-defined settings.

Software Requirements

  • vCenter Server 6.0/6.5
  • ESXi version 6.0/6.5

Hardware Requirements

  • 2 vCPUs
  • 16 GB memory
  • 127 GB storage

Supported vRealize Suite Products

  1. vRealize Automation 7.2 or 7.3
  2. vRealize Orchestrator 7.2 or 7.3 (embedded with vRealize Automation)
  3. vRealize Business for Cloud 7.2.1, 7.3, or 7.3.1
  4. vRealize Operations Manager 6.5 or 6.6.1
  5. vRealize Log Insight 4.3 or 4.5

Deployment:

1. Log in to the vSphere Web Client (Flash) and select Deploy OVF Template.

2. Browse to the path of the vRealize Suite Lifecycle Manager appliance OVA file.

clip_image002

3. Enter an appliance name and select a deployment location.

clip_image004

4. Select the host and cluster.

clip_image006

5. Review the template details.

clip_image008

6. Read and accept the end-user license agreement.

clip_image010

7. Select the storage and virtual disk format (thick provisioning is recommended for production environments).

clip_image012

8. From the drop-down menu, select a Destination Network and IP Protocol (IPv4)

clip_image014

9. Define the Host Name, Certificate configurations and IP settings.

clip_image016

10. Verify the settings and click Finish to complete the deployment.

clip_image018

Note: The following command can be used to modify the network configuration after deployment:

/opt/vmware/share/vami/vami_config_net

Login to vRealize Suite Lifecycle Manager

1. Log in to the vRealize Suite Lifecycle Manager appliance at the URL https://IPaddress/vrlcm using the default admin account “admin@localhost” and the password “vmware”.

clip_image020

References:

· vRealize Suite Lifecycle Manager Solution Brief

· Blog: https://blogs.vmware.com/management/2017/09/vrealize-suite-lifecycle-management.html

· Release Notes: https://docs.vmware.com/en/vRealize-Suite/2017/rn/vrealize-lifecycle-manager-1-release-notes.html

· Documentation: https://docs.vmware.com/en/vRealize-Suite/2017/com.vmware.vrsuite.lcm.doc/GUID-5E1CB756-CE86-430D-89C0-DE3831C33738.html

· Download: https://my.vmware.com/group/vmware/details?downloadGroup=VRSLCM-10&productId=675&rPId=18268

· Use Case Deployment Using vRealize Suite Lifecycle Manager: https://pubs.vmware.com/vmware-validated-design-41/index.jsp#com.vmware.vvd.usecases-lcm-deploy.doc/GUID-4F7F9BCA-86B6-4E01-80C6-B4C83E6F35AA.html

VMware PowerCLI is a command-line and scripting tool built on Windows PowerShell and provides more than 600 cmdlets for managing and automating VMware products and features.

Online Installation using the PowerShell Gallery:

1- Determine Current PowerShell Version:

PS C:\> $PSVersionTable.PSVersion

2- Find the VMware PowerCLI module in the PowerShell Gallery

PS C:\> Find-Module -Name "VMware.PowerCLI" -AllVersions | Format-List Name,Version

3- Install the VMware PowerCLI module from the PowerShell Gallery

PS C:\> Install-Module -Name VMware.PowerCLI -RequiredVersion 6.5.2.6268016 -Scope CurrentUser

4- Verify that the VMware PowerCLI module has been installed

PS C:\> Get-Module -ListAvailable | where {$_.Name -like "VMware.PowerCLI"} | Format-List Name,Version

5- Create a desktop shortcut for the VMware PowerCLI module using the following target:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -noe -c "Import-Module VMware.PowerCLI"

6- Update VMware PowerCLI to the latest version

PS C:\> Update-Module -Name VMware.PowerCLI
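
Once the module is installed, a quick connectivity test confirms the cmdlets load and can reach a vCenter Server. This is a minimal sketch; the vCenter name and credentials below are placeholders for your own environment.

# Import the module, connect to vCenter, and list the VMs it manages
PS C:\> Import-Module VMware.PowerCLI
PS C:\> Connect-VIServer -Server vcsa01.lab.local -User administrator@vsphere.local -Password 'VMware1!'
PS C:\> Get-VM | Select-Object Name,PowerState | Format-Table -AutoSize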

Offline Installation:

1- Download the VMware PowerCLI module using a computer that has internet access

PS C:\> Save-Module -Name VMware.PowerCLI -Path C:\PowerCLI

2- Copy and replace the individual VMware PowerCLI module folders in one of the following locations on the target computer (see the example after the list):

  • Current user usage: %userprofile%\Documents\WindowsPowerShell\Modules
  • All users usage: %SystemRoot%\System32\WindowsPowerShell\v1.0\Modules
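
For the copy itself, a short PowerShell sketch (assuming the module was saved to C:\PowerCLI as in step 1 and is being installed for the current user):

# Copy the saved module folders into the current user's module path
PS C:\> Copy-Item -Path "C:\PowerCLI\*" -Destination "$env:USERPROFILE\Documents\WindowsPowerShell\Modules" -Recurse -Force
# Confirm the module is now discoverable on the target computer
PS C:\> Get-Module -ListAvailable VMware.PowerCLI | Format-List Name,Version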


vRealize Operations Manager 6.5 focuses on enhancing product scalability limits and troubleshooting capabilities.

  • Additional monitoring capabilities: Adds ability to increase memory and increase the scope of monitoring within the same environment.
  • Automatic upgrade of Endpoint Operations Agents
  • Enhanced troubleshooting capabilities: Quickly correlates logs and metrics in context for any monitored object using Log Insight within vRealize Operations Manager.
  • Improved collaboration: Simplifies export and import of custom groups and eases dashboard sharing between different vRealize Operations Manager installations.

Install Software Upgrade

Prerequisites

· Create a snapshot of each node in your cluster.

· Obtain the PAK file for your cluster.

To update your vRealize Operations Manager environment, you need to download from VMware the correct PAK file for the clusters you wish to upgrade.

Note: Only virtual appliance clusters use an OS update PAK file.

image

Procedure

1. Log into the master node vRealize Operations Manager Administrator interface of your cluster at https://master-node-FQDN-or-IP-address/admin.

2. Click Software Update in the left panel.

3. Click Install a Software Update in the main panel.

image

4. Follow the steps in the wizard to locate and install your PAK file.

a. If you are updating a Virtual Appliance deployment, perform the OS update.

image

This updates the OS on the virtual appliance and restarts each virtual machine.

image

b. Install the product update PAK file.

image

Wait for the software update to complete. When it does, the Administrator interface logs you out.

5. Log back into the master node Administrator interface.

The main Cluster Status page appears and the cluster goes online automatically. The status page also displays the Bring Online button, but do not click it.

image

6. Clear the browser caches and, if the browser page does not refresh automatically, refresh the page.

The cluster status changes to Going Online. When the cluster status changes to Online, the upgrade is complete.

Note: If a cluster fails and the status changes to offline during the installation of a PAK file update, some nodes become unavailable. To fix this, access the Administrator interface, manually take the cluster offline, and click Finish Installation to continue the installation process.

7. Click Software Update to check that the update is done.

A message indicating that the update completed successfully appears in the main pane.

image

Ref:

vRealize Operations Manager 6.5 Release Notes

https://pubs.vmware.com/Release_Notes/en/vrops/65/vrops-65-release-notes.html

Install a Software Update

https://pubs.vmware.com/vrealizeoperationsmanager-65/index.jsp#com.vmware.vcom.core.doc/GUID-0F212458-40BD-4C0E-9319-DB40DD7BC671.html

VCAP6-DCV Deployment Study Guide

This study guide covers all exam blueprint sections with additional Topics, KB and Articles to help prepare you for the VCAP6-DCV Deployment Exam.

Section 1 – Create and Deploy vSphere 6.x Infrastructure Components

Section 2 – Deploy and Manage a vSphere 6.x Storage Infrastructure

Section 3 – Deploy and Manage a vSphere 6.x Network Infrastructure

Section 4 – Configure a vSphere Deployment for Availability and Scalability

Section 5 – Configure a vSphere Deployment for Manageability

Section 6 – Configure a vSphere Deployment for Performance

Section 7 – Configure a vSphere 6.x Environment for Recoverability

Section 8 – Configure a vSphere 6.x Environment for Security

Download: VCAP6-DCV Deployment Study Guide.pdf

Contents:

  • EMC UnityVSA.
  • About Storage Pools.
  • About Storage tiers.
  • Storage Tier Descriptions.
  • About FAST VP.
  • FAST VP Tiering Policy.
  • VMware-Aware Unisphere.
  • Add a VMware vCenter server or ESXi host.
  • VMware Datastores.
  • Virtual Machine File System (VMFS) Datastores.
  • Create a VMware VMFS Datastore.
  • VMware Network File System (NFS) Datastores.
  • Create a VMware NFS datastore.
  • VMware vStorage API for Array Integration (VAAI)
  • VMware vStorage API for Storage Awareness (VASA)
  • Add the system as a VASA provider (Register Storage Providers)
  • VMware Virtual Volumes (VVOLS)
  • Capability Profiles.
  • VMware Storage Policies and Rules.
  • Check Compliance for a VM Storage Policy.
  • Connectivity.
  • VVOL Datastores.
  • Create a VMware VVol datastore.
  • Add VVOL Datastores to vCenter.
  • Protocol Endpoints.
  • VM Storage Policies.
  • Virtual Volumes.
  • Data Services.
  • VVOL Metrics.

EMC UnityVSA

EMC UnityVSA (Unity Virtual Storage Appliance) is a unified Software Defined Storage (SDS) solution that runs atop the VMware ESXi platform.

UnityVSA provides a flexible storage option for environments that do not require dedicated storage systems such as test/development or remote office/branch office (ROBO) environments.

About Storage Pools

A pool is a set of disks that provide specific storage characteristics for the resources that use them.

For example, the pool configuration defines the types and capacities of the disks in the pool. For physical deployments, the pool configuration also defines the RAID configurations (RAID types and stripe widths) for these disks.

You choose which pool to use when you create a new storage resource.
Note: Before you create storage resources, you must configure at least one pool. You cannot shrink a pool or change its storage characteristics without deleting the storage resources configured in the pool and the pool itself. However, you can add disks to expand the pool.

Pools generally provide optimized storage for a particular set of applications or conditions.

When you create a storage resource for hosts to use, you must choose a pool with which to associate the storage resource. The storage that the storage resource uses is drawn from the specified pool.

If there are multiple disk types on the system, you can define multiple tiers for the pool.

In physical deployments, each tier can be associated with a different RAID type.

About Storage tiers

The storage tiers available for both physical and virtual deployments are described in the table below.

· For physical deployments, the storage tier is associated with the physical disk type.

· For virtual deployments, the storage tier is associated with the virtual disk’s underlying characteristics and must be manually assigned.

· For both types of deployments, if FAST VP is installed on the system, you can create tiered pools to optimize disk utilization. A tiered pool consists of multiple disk types, such as SAS Flash 2 disks and SAS disks.

Note: SAS Flash 3 disks must be used in a homogeneous pool.

Storage Tier Descriptions

· Extreme Performance tier

Disk types: Solid-state extreme performance disks. The following types are supported: SAS Flash 2 and SAS Flash 3.

Description: Provides very fast access times for resources subject to variable workloads. For example, databases can achieve their best performance when using SAS Flash disks. SAS Flash disks are more expensive than SAS disks per GB of storage. Only SAS Flash 2 disks can be used in the FAST Cache and with FAST VP.

Default RAID configuration (physical deployments only): RAID 5 (4 + 1).

· Performance tier

Disk types: SAS – rotating performance disk.

Description: Provides high, all-around performance with consistent response times, high throughput, and good bandwidth at a mid-level price point. Performance tier storage is appropriate for database resources accessed centrally through a network.

Default RAID configuration (physical deployments only): RAID 5 (4 + 1).

· Capacity tier

Disk types: NL-SAS – rotating capacity disk.

Description: Provides the highest storage capacity with generally lower performance. Capacity storage is appropriate for storing large amounts of data that is primarily static (such as video files, audio files, and images) for users and applications that do not have strict performance requirements. For data that changes or is accessed frequently, capacity tier storage has significantly lower performance.

Default RAID configuration (physical deployments only): RAID 6 (6 + 2).

About FAST VP

Fully Automated Storage Tiering for Virtual Pools (FAST VP) enables the system to retain the most frequently accessed or important data on fast, high-performance disks and move the less frequently accessed and less important data to lower-performance, cost-effective disks. FAST VP does the following for storage pools:

• Monitors the usage of the data in a tiered pool. Tiered pools are heterogeneous pools that are configured with multiple classes of disks (SAS Flash 2 plus SAS and/or NL-SAS).

• Depending on the tiering policy, uses the monitoring statistics to automatically relocate data chunks, at 256 MB granularity, to other tiers within the pool. For example, the Start High then Auto-Tier policy relocates data to the storage tier that is best suited for that data, based on relative activity.

• Performs load balancing across the disks in tiered and non-tiered pools.

FAST VP is an automated feature that optimizes disk utilization. It requires very little manual intervention.

To configure and use the FAST VP feature, the FAST VP license must be installed on the system. FAST VP can use all supported disk types except for SAS Flash 3 disks. Not all models support FAST VP.

The data relocation performed by FAST VP can help you achieve the following benefits:

· Increased performance

In some cases, you can double performance throughput by adding less than 10 percent of a pool’s total capacity in SAS Flash 2 disks.

· Reduced Total Cost of Ownership (TCO)

Using a combination of NL-SAS, SAS, and SAS Flash 2 disks instead of all SAS disks enables you to address performance requirements and still reduce the disk count. In some cases, you can achieve up to a two-thirds reduction in disk count by using FAST VP.

FAST VP Tiering Policy

The FAST VP tiering policy settings, which are defined at the data-resource level, are described below. This policy defines both the initial tier placement and the ongoing automated tiering of data during data relocation operations.

· Start High then Auto-Tier (default) – Initial tier placement: highest available tier. Recommended setting. Sets the initial data placement to the highest-performing disks with available space, and then relocates portions of the storage resource’s data based on I/O activity.

· Auto-Tier – Initial tier placement: optimized for pool performance. Sets the initial data placement to an optimum, system-determined setting, and then relocates portions of the storage resource’s data among tiers according to I/O activity, based on the storage resource’s performance statistics.

· Highest Available Tier – Initial tier placement: highest available tier. Sets the initial data placement and subsequent data relocation (if applicable) to the highest-performing disks with available space.

· Lowest Available Tier – Initial tier placement: lowest available tier. Sets the initial data placement and subsequent data relocation (if applicable) to the most cost-effective disks with available space.

 

VMware-Aware Unisphere

EMC UnityVSA provides VMware discovery capabilities to collect virtual machine and datastore storage details from vSphere and display them in the context of the storage system.

This automates the iSCSI target discovery for ESXi hosts to access the storage. In Unisphere, you can provision storage for a VMware datastore and configure access to the relevant ESXi host.

The storage system then automatically connects to the ESXi host and configures the relevant datastore access. When you modify or delete a datastore in Unisphere, the storage system automatically updates the ESXi host to include the change or remove the datastore.

Add a VMware vCenter server or ESXi host

Procedure

1. Under Access, select VMware > vCenters.

2. Select Add.

3. On the Add vCenter or ESXi Host window, enter the relevant details, and click Find.

clip_image002

4. From the list of discovered entries, select the relevant ESXi hosts, and click Next.

clip_image003

5. On the Summary page, review the ESXi hosts, and click Finish.

clip_image005

6. On the Result page, review the overall status and click Close.

clip_image007

If a vCenter is entered, all of the ESXi hosts managed by that vCenter are discovered and are eligible for import. You can select all or just a subset of the discovered ESXi hosts to be imported. Any Fibre Channel or iSCSI initiators on these ESXi hosts are also imported for host registration purposes.

Once imported, the following information is populated in each tab:

· vCenters: The name and software version of each vCenter are displayed on the vCenters page.

· ESXi Hosts: This page lists the ESXi hosts along with the vCenter that manages each one, the code version, and the number of initiators.

· Virtual Machines: This page lists the virtual machines along with the ESXi host each is hosted on and the size of the VM.

· Virtual Disks: This page lists the virtual disks provided from this Unity system along with the VM each is assigned to, the size of the virtual disk, and the datastore it came from.

clip_image009

VMware Datastores

When provisioning a datastore, a LUN or file system is created first and then access is granted to an ESXi host.

Then, the VMware administrator performs a rescan and builds a Virtual Machine File System (VMFS) on the LUN or mounts the NFS export.

Unisphere allows for creation of VMFS and NFS datastores that are optimized for VMware.

Unity simplifies datastore provisioning by automating the tasks that are normally performed by the VMware administrator.

When a datastore is created and access is provided to an ESXi host, it is automatically rescanned and made available as a datastore in vSphere.

These datastores can take advantage of the same data services that are available to LUNs and File Systems, such as snapshots and replication.

Virtual Machine File System (VMFS) Datastores

VMFS datastores are accessed through block protocols so iSCSI or Fibre Channel connectivity is required.

Once the communication path has been established, ensure the VMware ESXi hosts for these datastores are registered.

This process can be automated by importing the VMware vSphere information in to Unisphere. Once this is complete, VMFS datastores can be created.

In the VMFS datastore creation wizard, host access can be configured to ESXi hosts. For any ESXi hosts that are provided access to this datastore, the new storage is automatically rescanned and made available to the ESXi host.

Hosts can be given access to the datastore on a LUN, Snapshot, or LUN and Snapshot level. After the creation of a VMFS datastore, the capacity can be increased, but not reduced.

Create a VMware VMFS Datastore

Procedure

1. Under Storage, select VMware > Datastores.

clip_image011

2. Click the Add icon.

3. On the Type screen, select Block for VMFS.

clip_image013

4. Enter a Name and optionally a Description for the datastore.

clip_image015

5. Select the storage configuration details for the datastore.

clip_image017

6. Select the Host that Can Access the Storage

clip_image019

7. Enable the Snapshot Schedule if required.

clip_image021

8. Enable the Replication if required.

clip_image023

9. Review the summary and then click Finish.

clip_image025

10. The new storage is automatically rescanned and made available to the ESXi host.

clip_image027
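
Although Unity triggers the rescan automatically for registered hosts, you can verify (or force) it from PowerCLI if needed. A minimal sketch, assuming a connected vCenter session; the host name below is a placeholder:

# Force a storage rescan on the host and confirm the new datastore is visible
Get-VMHost -Name "esxi01.lab.local" | Get-VMHostStorage -RescanAllHba -RescanVmfs
Get-VMHost -Name "esxi01.lab.local" | Get-Datastore | Select-Object Name,Type,CapacityGB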

VMware Network File System (NFS) Datastores

NFS datastores leverage UnityFS, a 64-bit file system architecture, which includes several advantages. UnityFS offers 64TB file system sizes, file system shrink, replication, snapshots, increased limits, and more.

NFS datastores require a NFS-enabled NAS server to be created first. In the NFS datastore creation wizard, access can be configured to ESXi hosts. For any ESXi hosts that are provided access to this datastore, the new storage is automatically rescanned and made available to the ESXi host. The wizard also allows the Host IO Size to be selected.

clip_image029

The Host IO Size specifies the smallest guaranteed physical mapping within the file system. 8K (default), 16K, 32K, 64K or a specific application can be selected from the dropdown list. Matching this to the application’s block size provides benefits such as eliminating the overhead and performance impact of unnecessarily granular mappings.

If you’re unsure about this setting, or if the datastore is for general purpose use, use the default of 8K since this setting cannot be changed after the datastore is created. Configuring this to be larger than the actual host IO size could result in increased overhead, reduced performance, and higher flash wear.

However, configuring this to be too small does not allow the datastore to fully take advantage of the performance optimizations when the IO size is matched. It is recommended to leave this at the default value of 8K for general purpose datastores, or if you are not certain which application or IO size is used on this datastore.

If the system detects that the majority of I/Os are different from the configured Host IO Size, a warning is generated in Unisphere to alert the administrator.

Create a VMware NFS datastore

Procedure

1. Under Storage, select VMware > Datastores.

clip_image030

2. Click the Add icon.

3. On the Type screen, select File for NFS.

4. Select the NAS Server the datastore will be based on.

clip_image032

5. Enter a Name and optionally a Description for the datastore.

clip_image034

6. Select the storage configuration details for the datastore.

clip_image036

7. Select the Host that Can Access the Storage and the required Permission

clip_image038

8. Enable the Snapshot Schedule if required.

clip_image039

9. Enable the Replication if required.

clip_image040

10. Review the summary and then click Finish.

clip_image042

11. The new storage is automatically rescanned and made available to the ESXi host.

clip_image044

VMware vStorage API for Array Integration (VAAI)

vStorage API for Array Integration (VAAI) improves ESXi host utilization by offloading storage-related tasks to the Unity system. Since these tasks are processed by the array, the ESXi host’s CPU, memory, and network utilization is reduced.

For example, an operation such as provisioning full clones from a template VM can be offloaded to Unity. Unity processes these requests internally, performs the write operations, and returns an update to the ESXi host once the requests are complete.

The following primitives are supported with Unity:

· Block

o Atomic Test and Set (ATS) – Enables arrays to perform locking at a block level of a LUN, instead of the whole LUN. Also known as Hardware-Assisted Locking.

o Block Zero – Enables arrays to zero out a large number of blocks to speed up virtual machine provisioning. Also known as Hardware-Assisted Zeroing.

o Full Copy – Enables arrays to make full copies of data within the array without the need for the ESXi host to read and write the data. Also known as Hardware-Assisted Move.

o Thin Provisioning – Enables arrays to reclaim unused blocks on a thin LUN. Also known as Dead Space Reclamation.

· File

o Fast File Clone – Enables the creation of virtual machine snapshots to be offloaded to the array.

o Full File Clone – Enables the offloading of virtual disk cloning to the array.

o Reserve Space – Enables provisioning virtual disks using the Thick Lazy and Eager Zeroed options over NFS.

What do I need to know about the Hardware Acceleration Support Status?

If you go to ESXi Host > Configuration > Storage, you can see the Hardware Acceleration Status in the panel on the right side.
For each storage device and datastore, the vSphere Client displays the hardware acceleration support status in the Hardware Acceleration column of the Devices view and the Datastores view.
The status values are Unknown, Supported, and Not Supported. The initial value is Unknown. The status changes to Supported after the host successfully performs the basic offload operations. If the offload operation fails, the status changes to Not Supported.

clip_image046

To determine if your storage device supports VAAI, test the Full Copy VAAI primitive:

  1. Using the vSphere Client, browse the datastore and locate a virtual disk (VMDK) of at least 4 MB that is not in use.
  2. Copy the virtual disk to a new file.
  3. Check the Hardware Acceleration status to verify that it changes from Unknown to either Supported or Not Supported.

Note: VAAI primitives can also be tested by creating a virtual machine with at least one new virtual disk, or cloning a virtual machine.
Can I check the VAAI status from the command line?

  • On ESXi 6, to check the VAAI status, run this command:
    # esxcli storage core device vaai status get
    You see output similar to the following:

clip_image048
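
The same check can be run remotely through PowerCLI's esxcli wrapper. A minimal sketch, assuming a connected vCenter session; the host name is a placeholder and the property names may vary slightly between ESXi builds:

# Query per-device VAAI primitive status via the esxcli v2 interface
$esxcli = Get-EsxCli -VMHost (Get-VMHost "esxi01.lab.local") -V2
$esxcli.storage.core.device.vaai.status.get.Invoke() | Format-Table Device,ATSStatus,CloneStatus,ZeroStatus,DeleteStatus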

VAAI hardware offload cannot be used when:

  • The source and destination VMFS volumes have different block sizes
  • The source file type is RDM and the destination file type is non-RDM (regular file)
  • The source VMDK type is eagerzeroedthick and the destination VMDK type is thin
  • The source or destination VMDK is any kind of sparse or hosted format
  • Cloning a virtual machine that has snapshots because this process involves consolidating the snapshots into the virtual disks of the target virtual machine.
  • The logical address and/or transfer length in the requested operation is not aligned to the minimum alignment required by the storage device (all datastores created with the vSphere Client are aligned automatically)
  • The VMFS datastore has multiple LUNs/extents spread across different arrays

VMware vStorage API for Storage Awareness (VASA)

vStorage API for Storage Awareness (VASA) is a VMware-defined and vendor neutral API that enables vSphere to determine the capabilities of a storage system. The API requests basic storage information from the Unity system, which is used for monitoring and reporting storage details to the user.

For example, if a datastore has FAST Cache, thin provisioning, and auto-tier capabilities, this information is displayed and is also used to monitor whether or not the datastore is compliant with defined policies.

Unity has a native VASA provider which supports both the VASA 1.0 and 2.0 (Virtual Volumes) protocols, so no external plugins or add-ons are required.

In order to leverage VASA, the Unity system must be added as a Vendor Provider in vSphere.

clip_image050

If a policy is selected while provisioning a VM, the available datastores’ capabilities are checked and categorized as either compatible or incompatible. After deployment, the VM is continuously monitored for compliance with the selected policy. If a VM becomes uncompliant, an alert is provided to the administrator.

Add the system as a VASA provider (Register Storage Providers)

For the vCenter server to communicate with the system, add the system as a storage provider in the vSphere client.

Use the following information:

· Name – Name of the storage provider that will appear in the vSphere client. You can choose to use any name you want.

· URL – The VASA Provider service URL. The URL must be in the following format: https://<management IP address>:8443/vasa/version.xml

· Login – Unisphere user name with the Administrator or VM Administrator role. It is recommended that you specify a user account with the VM Administrator role. Note the following syntax:

o For local users: local/<user name>

o For LDAP users: <domain>/<user name>

· Password – The password associated with the user account.

Procedure

1. Browse to the vCenter Server in the vSphere Web Client navigator.

2. Click the Manage tab, and click Storage Providers.

clip_image052

3. Click the Register a new storage provider icon.

4. Type connection information for the storage provider, including the name, URL, and credentials.

clip_image054

5. (Optional) To direct the vCenter Server to the storage provider certificate, select the Use Storage Provider Certificate option and specify the certificate’s location.

If you do not select this option, a thumbprint of the certificate is displayed. You can check the thumbprint and approve it.

This step is not valid for Virtual SAN and is optional for all other types of storage providers.

6. Click OK to complete the registration.

The vCenter Server has registered the storage provider and established a secure SSL connection with it.

clip_image055
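
Registration can also be scripted with PowerCLI instead of the Web Client. A minimal sketch, assuming a connected vCenter session; the provider name, management IP, and Unisphere account are placeholders for your own values:

# Register the Unity VASA provider and confirm it comes online
New-VasaProvider -Name "Unity-VASA" -Url "https://10.0.0.50:8443/vasa/version.xml" -Username "local/vmadmin" -Password "MyUnispherePassword"
Get-VasaProvider | Select-Object Name,Status,Url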

VMware Virtual Volumes (VVOLS)

Virtual Volumes (VVols) is a storage framework introduced in VMware vSphere 6.0 that is based on the VASA 2.0 protocol. VVols enables VM-granular data services and Storage Policy Based Management (SPBM).

In order to configure VVols, you must have:

· VMware vCenter 6.0 or newer

· VMware ESXi 6.0 or newer

· VMware vSphere Web Client

In traditional storage environments, LUNs formatted with VMFS or NFS mount points are used as datastores for virtual machines. Data services are applied at the LUN or file system level, which means all virtual machines that reside on that particular datastore are also affected.

VVols enables storing VM data on individual virtual volumes, which reside on a VVol datastore.

Data services, such as snapshots and clones, can be applied at a VM-level granularity and are offloaded to the Unity system.

Also, policies and profiles can be leveraged to ensure VMs are stored on compliant storage. Any VMs that become noncompliant result in an alert to the administrator.

The workflow for provisioning a VVol datastore differs from traditional NFS or VMFS datastore provisioning.

clip_image057

Capability Profiles

Capability Profiles are used to advertise the available characteristics of a storage pool as part of SPBM. These capabilities include service level, usage tags, FAST Cache, drive type, and so on.

When creating a new storage pool, there is an option to also create a Capability Profile for that pool.

clip_image059

Capability Profiles include the capabilities below.

Note that on UnityVSA, certain capabilities such as FAST Cache and RAID Type are not available so they are omitted.

Usage Tags

User-defined tags that storage administrators create to designate what this Capability Profile should be used for. Since these tags are also propagated to vSphere, they can be used as a communication mechanism between the storage and VMware administrators.

For example, if this Capability Profile is tagged with “Database”, the VMware administrator can place database VMs on this Capability Profile.

Service Level

A level such as Platinum, Gold, Silver, or Bronze that is calculated based on the tier and RAID types that comprise the pool.

Note that due to the virtualized nature of UnityVSA, RAID level is not included in the service level calculation.

UnityVSA service levels by pool type:

· Single-tier pool – Gold: Extreme Performance; Silver: Performance; Bronze: Capacity.

· Two-tier pool – Gold: Extreme Performance + Performance; Silver: Performance + Capacity; Bronze: Extreme Performance + Capacity.

· Three-tier pool – Gold: Extreme Performance + Performance + Capacity.

Storage Properties

The detailed storage properties below are also displayed here for visibility. It is recommended that when creating a VM storage policy, you use either Usage Tags or Service Level.

These low-level storage properties are designed for advanced VMware administrators who are familiar with storage capabilities.

· Tiering Policies – Shows the available tiering policies for this Capability Profile. This is only available for pools that include multiple tiers.

· FAST Cache – Shows whether or not FAST Cache is enabled on this pool. This is not available on UnityVSA.

· Drive Type – Shows the drives that are in this pool. This can be:

– Capacity Tier – Pool with only NL-SAS drives.

– Extreme Multitier – Multitiered pool, including Flash drives.

– Extreme Performance Tier – Pool with only Flash drives.

– Multitier – Multitiered pool, without any Flash drives.

– Performance Tier – Pool with only SAS drives.

· RAID Level – Shows the RAID level used for this pool. If there are different RAID levels for each tier, this displays “Mixed”. This is not available on UnityVSA.

· Space Efficiency – A standard VMware-defined capability that shows the thin and thick capabilities of this pool.

If the Capability Profile was not created during pool creation, it can also be created on an existing pool in the Capability Profiles tab of the VMware page.

You should create one Capability Profile for each storage pool that is used for VVols. This page displays all of the existing Capability Profiles and their associated pools.

You can also click the Edit button to see and update the details of the Capability Profile.

clip_image061

VMware Storage Policies and Rules

Virtual machine storage policies are essential to virtual machine provisioning. These policies help you define storage requirements for the virtual machine and control which type of storage is provided for the virtual machine, how the virtual machine is placed within the storage, and which data services are offered for the virtual machine.

When you define a storage policy, you specify storage requirements for applications that run on virtual machines. After you apply this storage policy to a virtual machine, the virtual machine is placed in a specific datastore that can satisfy the storage requirements.

In software-defined storage environments, such as Virtual SAN and Virtual Volumes, the storage policy also determines how the virtual machine storage objects are provisioned and allocated within the storage resource to guarantee the required level of service. In environments with third-party I/O filters installed, you can use storage policies to enable an additional level of data services, such as caching and replication, for virtual disks.

Rules that you include in a storage policy can be based on storage-specific data services and tags, or the rules can be common.

· Common Rules

Common rules are based on data services that are generic for all types of storage and do not depend on a datastore. These additional services become available in the VM Storage Policies interface when you install third-party I/O filters developed through vSphere APIs for I/O Filtering. You can reference these data services in a VM storage policy.

Unlike storage-specific rules, common rules do not define storage placement and storage requirements for a virtual machine, but ensure that additional data services, such as I/O filters, become enabled for the virtual machine. No matter which datastore the virtual machine runs on, the enabled filters can provide the following services:

o Caching. Configures a cache for virtual disk data. The filter can use a local cache or a flash storage device to cache the data and increase the Input/Output Operations Per Second and hardware utilization rates for the virtual disk.

o Replication. Replicates virtual machine or virtual disks to external targets such as another host or cluster.

· Rules Based on Storage-Specific Data Services

These rules are based on data services that storage entities such as Virtual SAN and Virtual Volumes advertise.

To supply information about underlying storage to vCenter Server, Virtual SAN and Virtual Volumes use storage providers, also called VASA providers. Storage information and datastore characteristics appear in the VM Storage Policies interface of the vSphere Web Client as data services offered by the specific datastore type.

A single datastore can offer multiple services. The data services are grouped in a datastore profile that outlines the quality of service that the datastore can deliver.

When you create rules for a VM storage policy, you reference data services that a specific datastore advertises. To the virtual machine that uses this policy, the datastore guarantees that it can satisfy the storage requirements of the virtual machine. The datastore also can provide the virtual machine with a specific set of characteristics for capacity, performance, availability, redundancy, and so on.

· Rules Based on Tags

Rules based on tags reference datastore tags that you associate with specific datastores. You can apply more than one tag to a datastore.

Typically, tags serve the following purposes:

o Attach a broad storage-level definition to datastores that are not represented by any storage providers, for example, VMFS and NFS datastores.

o Encode policy-relevant information that is not advertised through vSphere API for Storage Awareness (VASA), such as geographical location or administrative group.

Similar to storage-specific services, all tags associated with datastores appear in the VM Storage Policies interface. You can use the tags when you define rules for the storage policies.

Procedure

1. From the vSphere Web Client Home, click Policies and Profiles > VM Storage Policies.

clip_image063

2. Click the Create a New VM Storage Policy icon.

3. Select the vCenter Server instance.

4. Type a name and a description for the storage policy.

5. On the Rule Set page, select a storage provider, for example, EMC.VASA10 or EMC.UNITY.VVOL from the Rules based on data services drop-down menu.

clip_image065

The page expands to show data services provided by the storage resource.

6. Select a data service to include and specify its value.

clip_image067

Verify that the values you provide are within the range of values that the data services profile of the storage resource advertises.

7. (Optional) Add tag-based rules.

8. Click Next.

9. On the Storage Compatibility page, review the list of datastores that match this policy and click Next.

To be eligible, the datastore must satisfy at least one rule set and all rules within this set.

clip_image069

10. Review the storage policy settings and make changes by clicking Back to go back to the relevant page.

11. Click Finish.
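
The equivalent policy can also be authored with the PowerCLI SPBM cmdlets. A minimal sketch, assuming the Unity VASA provider is already registered; the policy name and the capability/value used here are placeholders, since the exact capability names advertised depend on the provider version:

# Build a rule set from a provider-advertised capability and create the policy
$cap  = Get-SpbmCapability | Where-Object { $_.Name -like "*ServiceLevel*" } | Select-Object -First 1
$rule = New-SpbmRule -Capability $cap -Value "Platinum"
$set  = New-SpbmRuleSet -AllOfRules $rule
New-SpbmStoragePolicy -Name "Unity-Platinum" -Description "Platinum service level on Unity VVols" -AnyOfRuleSets $set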

Check Compliance for a VM Storage Policy

You can check whether a virtual machine uses a datastore that is compatible with the storage requirements specified in the VM storage policy.

Prerequisites

Verify that the virtual machine has a storage policy that is associated with it.

Procedure

1. In the vSphere Web Client, browse to the virtual machine.

2. From the right-click menu, select VM Policies > Check VM Storage Policy Compliance.

3. Click the Summary tab for the virtual machine.

4. View the compliance status in the VM Storage Policies pane.

clip_image071
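
The same compliance check can be run from PowerCLI. A minimal sketch, assuming a connected vCenter session; the VM name is a placeholder:

# Returns the VM's assigned storage policy and its current compliance status
Get-VM -Name "DB-VM01" | Get-SpbmEntityConfiguration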

EMC System-Defined Storage Capabilities

clip_image072

Connectivity

Unity supports access to VVol datastores over block or file protocols.

Fibre Channel or iSCSI can be used for block access and NFS can be used for file access.

For Fibre Channel, ensure the ESXi host is zoned to the FC ports on both SPs for multipathing purposes. Alternatively, the ESXi host can be directly connected to both SPs.

Once the connection and zoning is complete, the Fibre Channel ports log in and the process is complete as soon as the World Wide Names (WWNs) are displayed in the Initiator Paths page.

If iSCSI is used, iSCSI interfaces must be created on both SPs. In the vSphere Web Client, add one of the iSCSI interfaces that was just created as a Dynamic Discovery target.

The iSCSI interface on the other SP is discovered automatically. Afterwards, run a rescan of the storage adapter and confirm the Initiator IQNs are displayed in the Initiator Paths page.

File protocol access requires the creation of a NAS Server, which holds file protocol configuration information such as networking, protocols, and DNS settings.

It is highly advisable to create at least one VVols-enabled NAS server on each SP for load balancing and high availability purposes.

In order to access VVol datastores, an NFS and VVols-enabled NAS server must be created.

clip_image074

VVOL Datastores

VVols can only be stored on VVol Datastores; they cannot be stored on traditional NFS or VMFS datastores. In VMware terms, VVol datastores on Unity are also known as storage containers.

Unity supports both block and file VVol datastores, accessed using iSCSI/Fibre Channel or NFS, respectively.

When creating a new VVol datastore, choose the appropriate option depending on the connectivity method that is configured.

clip_image076

All of the capability profiles that have been created on the system are displayed in the datastore creation wizard. This allows you to select which Capability Profiles to include in this datastore.

Note that selecting multiple Capability Profiles creates a VVol datastore that spans across multiple storage pools, which is a feature that is unique to Unity VVol datastores.

Doing this enables the datastore to be compatible with multiple VMware storage policies so VMs can be easily migrated by updating the storage policy.

If a datastore only contains a single Capability Profile, VMs can still be migrated to it using Storage vMotion.

In addition to selecting the Capability Profiles for this datastore, you can also configure how much storage to thinly allocate from each Capability Profile.

clip_image078

Regardless of which protocol is used for VVol datastore access, the ESXi hosts should be registered on the Unity system so they can be granted access to the datastore.

The registration process can be automated by importing the VMware environment.

ESXi hosts that attempt to mount datastores that they do not have access to show up as inaccessible.

Host access can be added during creation or configured on an existing VVol datastore.

Create a VMware VVol datastore

Before you begin

You must create capability profiles before creating a VVol datastore.

Procedure

1. Under Storage, select VMware > Datastores.

clip_image079

2. Click the Add icon.

3. On the Type page, select VVOL (File) or VVOL (Block).

clip_image076

4. Enter a Name and optionally a Description for the VVol datastore.

clip_image081

5. Select one or more capability profiles that will be used by the VVols datastore.

a. Optionally, click on the current size or Edit in the Datastore Size (GB) column to adjust the space allocated from the pool to each selected capability profile.

b. Adjust the size and/or unit of measure (TBs or GBs) of the capability profile.

c. Click OK.

clip_image082

6. Select the hosts that will have Access to the datastore.

clip_image084

7. Review the Summary and then click Finish.

clip_image086

8. Review the Results and then click Close.

clip_image088

Add VVOL Datastores to vCenter

Once the VASA Vendor Provider has been added to the vCenter, the VVol-related information on the Unity system can be passed to vSphere. This enables VVol datastores to be mounted to ESXi hosts for use.

Note that unlike NFS and VMFS datastores, VVol datastores are not mounted automatically to the ESXi hosts that are granted access.

Instead, the administrator needs to add it using the New Datastore wizard in the vSphere Web Client. When adding the datastore, select Type: VVOL.

clip_image090

This displays a list of VVol datastores that are currently available on the Unity system.

Select the appropriate datastore and provide a name to mount the VVol datastore to an ESXi host.

Once the datastore has been mounted on one ESXi host, you can right-click it to easily mount the same datastore to other ESXi hosts.

clip_image092
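
From PowerCLI, a quick way to confirm which VVol datastores are mounted is to filter on the datastore type. A minimal sketch, assuming a connected vCenter session:

# List mounted VVol datastores and their capacity
Get-Datastore | Where-Object { $_.Type -eq "VVOL" } | Select-Object Name,Type,CapacityGB,FreeSpaceGB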

Protocol Endpoints

Protocol Endpoints (PE) are used as IO access points from the ESXi host to the Unity system.

PEs are automatically created when host access is configured for VVol datastores.

Unisphere includes a page that lists all of the PEs that currently exist on the system.

The behavior of PEs depends on whether block or file VVol datastores are used.

For file VVol datastores, the NAS PE looks similar to an NFS mount point.

For each file VVol datastore, a single Protocol Endpoint is created on each VVols-enabled NAS Server, regardless of how many ESXi hosts have access to the datastore.

All of the ESXi hosts share the same PE as a single IO access point.

It is highly advisable to create at least one VVols-enabled NAS server on each SP for load balancing and high availability purposes.

 clip_image094

For block VVol datastores, the SCSI PE looks similar to a proxy LUN. However, unlike NAS PEs, each ESXi host that has access to the datastore has its own set of dedicated SCSI PEs.

For each ESXi host that is granted access to a datastore, two SCSI PEs are automatically created for multipathing purposes. If the same host is granted access to another datastore, two additional SCSI PEs are also created.

clip_image096

VM Storage Policies

VM Storage Policies, used for Storage Policy Based Management (SPBM), are authored by the VMware administrator to describe the desired capabilities when provisioning a VM.

The storage and virtualization administrators can discuss policies in advance so that the Unity system has Capability Profiles that are compliant with the configured VM Storage Policies.

Any of the capabilities that are included in the Capability Profiles can be used to create policies.

For example, the virtualization admin can create a Usage Tag: Database policy for when database VMs are being deployed. This ensures the database VMs are deployed on the storage pool that the storage administrator has designated for this purpose.

Another example is creating a Service Level: Platinum policy for when the best performance is required for a VM. When creating a VM Storage Policy, select EMC.UNITY.VVOL and then choose the desired capabilities.

clip_image098

Note that low-level Storage Properties can also be selected when creating VM Storage Policies. These include characteristics such as Drive Type, FAST Cache, RAID Type, and Tiering Policy.

These are designed to enable advanced VMware administrators who are familiar with storage capabilities, to customize their VM Storage Policies.

The next screen in the wizard displays a list of compatible and incompatible datastores based on the selected capabilities.

clip_image100

After VM Storage Policies are configured, VMs can be deployed on VVol datastores similar to NFS and VMFS datastores.

Note that on the Select Storage step of the new VM wizard, there is a dropdown that displays all of the VM Storage Policies that were created in vSphere.

When a policy is selected, its capabilities are compared with the capabilities of the available datastores.

These datastores are categorized as compatible or incompatible to allow the administrator to easily identify datastores that are appropriate for that VM.

clip_image102

After a VM is deployed using a VM Storage Policy, vSphere continues to periodically monitor the datastore to ensure continued compliance.

The current compliance status and the last checked date are displayed on the VM’s Summary page or the VM Storage Policy’s Monitor page.

You can also initiate an on-demand compliance check on either of these pages.

If the datastore falls out of compliance with the specified policy, this is displayed in vSphere Web Client to warn the administrator about the status.

clip_image104

It is possible to migrate VMs that reside on a VVol datastore by updating its VM Storage Policy or using Storage vMotion.

This provides the ability to move VMs to the datastores with the appropriate capabilities if the requirements change. Note that VMs that have snapshots or fast clones cannot be migrated.

Migrating the VVol by updating the VM Storage Policy can only be done if the new policy that you want to use is also available on the same VVol datastore.

For example, if a single datastore contains two Capability Profiles for the Platinum and Bronze service levels, a VM Storage Profile update can be used to automatically migrate the VM’s VVols from one storage pool to the other.

To edit the VM Storage Policy assigned to a VM, right-click on the VM to open the Manage VM Storage Policies page.

clip_image106

Since each VM hard disk is stored on individual VVols, you also have the ability to apply a different VM Storage Policy to each individual hard disk.

For example, for a database VM, you could put the database hard disk on the platinum service level and put the log hard disk on the gold service level.

If the current datastore does not have the required capabilities for the new VM Storage Policy, a VM Storage Policy update cannot be made. Instead, use Storage vMotion to migrate the VM to a different VVol datastore that has the appropriate capabilities.

To do this, right-click on the VM and click Migrate to move the VM to different storage. On the select storage page, note there is a dropdown for the VM Storage Policy.

You can choose to keep the existing VM Storage Policy or select a new one on the new datastore. Just like when deploying a new VM, vSphere automatically categorizes the available datastores to help identify which datastores are compatible with this policy.

clip_image108
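
Both migration paths can also be driven from PowerCLI. A minimal sketch, assuming a connected vCenter session; the VM, policy, and datastore names are placeholders:

# Re-assign the VM Storage Policy in place (only works if the current VVol datastore also offers it)
$policy = Get-SpbmStoragePolicy -Name "Unity-Platinum"
Get-VM -Name "DB-VM01" | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy

# Or Storage vMotion the VM to a different VVol datastore with the required capabilities
Move-VM -VM (Get-VM -Name "DB-VM01") -Datastore (Get-Datastore -Name "VVolDS-02")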

Virtual Volumes

Traditionally, a VM’s data is stored as a collection of files on a VMFS or NFS datastore, which is a LUN or file system on the storage system. With VVol datastores, each file is stored on a dedicated storage object, called a Virtual Volume, on the storage system. Unity keeps track of all of the different types of VVols and maps them to the VM to which they belong.

This enables data services to be applied only to the VVols that are associated with a particular VM, instead of to all of the VMs on the entire datastore.

Depending on the type of data that is being stored on the VVol, a certain type of VVol is provisioned:

VMDK (Data) VVol

The VMDK VVol, displayed as Data VVol in Unisphere, contains the vDisk file, or the hard disk drive, for the VM.

Config VVol

The Config VVol contains settings, configuration, and state information for the VM. This includes .vmx, nvram, and log files.

Memory VVol

The Memory VVol contains a complete copy of the VM memory as part of a with-memory VM snapshot.

Swap VVol

The Swap VVol is created when a VM is powered on and contains copies of the VM memory pages that are not retained in memory.

At a minimum, three VVols are required for each powered on VM – Data for the hard disk, Config for the configuration, and Swap for the memory pages.

Unisphere provides the ability to view a list of the VVols that exist on the system.

This is only for visibility and troubleshooting purposes as the management of VVols is handled automatically.

Unity uses the VASA 2.0 protocol to communicate with vSphere to create, bind, unbind, and delete VVols as needed.

clip_image110

You also have the ability to view more details of each type of VVol by opening the properties page.

clip_image112

VMs could potentially utilize several more VVols, depending on the configuration. For example, if additional hard disks are added to a VM, an additional Data VVol is created for each hard disk. Another example is when a snapshot is taken of a VM, a new Data VVol is created to store the snapshot.

If the VM is powered on and its memory is also included in the snapshot, a Memory VVol is created to store the contents of the VM’s memory.

As additional VVols are created, the Virtual Volumes page in Unisphere is updated with the latest information.

Data Services

Data services, such as snapshots and clones, can be applied at a VM-level granularity by applying them only to the VVols that are related to a specific VM. Also, VASA 2.0 allows vSphere to communicate with the Unity system to facilitate offloading of these storage-related tasks to the Unity system.

Since these tasks are processed by the array, the ESXi host’s CPU, memory, and network utilization is reduced.

When a snapshot of a VM is initiated using the vSphere Web Client, it is offloaded to the Unity system.

Unity creates a snapshot of the associated VVols for that particular VM, leaving other VMs on the same datastore unaffected as they are stored on different VVols.

After the snapshot is created, a new Data VVol is created to store the snapshot's contents.

clip_image114
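
As a rough illustration of the offload, a with-memory snapshot can also be triggered from the ESXi shell with vim-cmd; the VM ID (12) and snapshot name below are placeholders. On a VVol datastore backed by Unity, this results in a new Data VVol for the snapshot contents and a Memory VVol for the memory image:

# Find the ID of the target VM
vim-cmd vmsvc/getallvms
# Create a snapshot including memory (arguments: vmid, name, description, includeMemory, quiesced)
vim-cmd vmsvc/snapshot.create 12 "pre-change" "Snapshot before maintenance" 1 0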

Fast Clones allow you to leverage snapshots to create a clone of a VM, and these clone operations are offloaded to the Unity system. Since snapshots are used and a full copy is not required, space efficiency is increased and the clones can be deployed very quickly.

Note that these clones are linked to the parent VM so they must exist on the same VM Storage Policy as the parent VM.

Any changes made on the parent VM or on the snapshot do not affect each other.

VM full cloning operations are also offloaded to the Unity system.

This significantly reduces ESXi host and network utilization, as the data does not need to travel from the Unity system to the ESXi host for reading and then back again for writing.

The VM Storage Policy can be selected when creating a clone, which allows for placement of the clone on to a different policy. After the clone is created, it can be managed as an independent VM.

Note that any snapshots on the source VM do not get propagated to the clone. Unity automatically creates a set of VVols for the clone.

VVOL Metrics

Real-time VVol metrics are available using Unisphere CLI (UEMCLI). When viewing VVol metrics, VVols are identified using their UUID, which is a unique ID that’s assigned to each VVol by VMware vSphere.

Real-Time View:

uemcli /metrics/metric -availability real-time show | grep vvol

Historical View:

uemcli /metrics/metric -availability historical show | grep vvol

clip_image115
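
The metrics commands above assume an existing UEMCLI session. A complete invocation typically includes the destination system and credentials, for example (management IP and password are placeholders):

uemcli -d <mgmt_ip> -u admin -p <password> -sslPolicy accept /metrics/metric -availability real-time show | grep vvol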

UnityVSA (Virtual Storage Appliance) is a software-defined storage platform that provides users with greater agility and flexibility. UnityVSA is deployed on a VMware ESXi host.

UnityVSA is available in two editions – Professional Edition (PE) and Community Edition (CE). Professional Edition is a licensed product available at capacity levels of 10 TB, 25 TB, and 50 TB. Community Edition is a free downloadable 4 TB solution recommended for non-production use.

 

clip_image001

Key features:

  • Provides a software-defined version of EMC Unity storage
  • Sets up for NAS and SAN in minutes
  • Unifies block, file, and VMware VVols support
  • Enables VMware administrators to manage storage from VMware vCenter

System Requirements:

  • vCenter version: 5.5 Update 2
  • ESXi: 5.x, 6.x

Virtual Requirements:

  • RAM for each UnityVSA VM: 12 GB
  • vCPU for each UnityVSA VM: 2 vCPUs
  • Maximum Usable Capacity: 4 TB (Community Edition); 10 TB, 25 TB, and 50 TB licenses available

VMware Integration:

  • VMware vStorage APIs for Array Integration (VAAI) for File and Block: improves performance by leveraging more efficient, array-based operations.
  • vStorage APIs for Storage Awareness (VASA): provides storage awareness for VMware administrators.

Connectivity:

  • UnityVSA provides flexible NAS or SAN connectivity options through Ethernet and supports a wide range of protocols including CIFS (SMB1, SMB2, and SMB3), NFSv3, and iSCSI.

 

Deploy UnityVSA directly to an ESXi host:

1. Download UnityVSA Community Edition https://www.emc.com/products-solutions/trial-software-download/unity-vsa.htm

2. Launch VMware vSphere Client to access your ESXi host.

3. In the vSphere Client, select File > Deploy OVF Template.

4. Specify the source location and click Next

clip_image003

5. View the OVF Template Details page and click Next.

clip_image005

6. (Optional) Edit the name and click Next.

clip_image007

7. Select a datastore to store the deployed OVF, and click Next.

For optimal performance, EMC recommends that you deploy the UnityVSA VM on a datastore that resides on different physical disks from the datastore in which you will later create the virtual disks used to provide user data storage to UnityVSA.

clip_image009

8. Select the disk format to store the disks, and click Next.

EMC recommends using Thick Provision Eager Zeroed.

clip_image011

9. Select the networks the deployed VM should use, making sure that:

·         The management network is on a network accessible by the workstation used to access Unisphere.

·         The data networks are on networks accessible by the host that will attach to the UnityVSA.

clip_image013

10. Confirm the settings and then select Finish to deploy the OVF template.

clip_image015
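
As an alternative to the interactive wizard above, the same deployment can be scripted with VMware's ovftool. This is only a sketch: the OVA file name, VM name, datastore, network mapping, and host address are placeholders, and the OVA's source network labels should be verified first by probing it with ovftool <path_to_ova>:

ovftool --acceptAllEulas --name=UnityVSA01 --datastore=datastore1 --diskMode=eagerZeroedThick --net:"Management Network=VM Network" --powerOn UnityVSA.ova vi://root@esxi-host.example.com/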

Note: When you create the UnityVSA VM, three virtual disks (vmdks) are automatically created for the VM’s system data. (These are always the virtual disks identified as 1-3.) Do not modify or delete these disks.

You must add at least one virtual disk for user data. You can add more virtual disks, up to the system limit (16), when additional storage for user data is needed. Please allow up to sixty seconds for UnityVSA to recognize and display the newly attached vdisks.

11. Open the VM properties and add three hard disks to be used as storage tiers (Gold, Silver, and Bronze), then click OK and power on the VM. A command-line alternative is sketched after the screenshot below.

Note: The minimum virtual disk size is 10 GB. A disk appears as faulted if it is smaller than 10 GB or larger than the storage size allowed by the UnityVSA edition and version.

clip_image017
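
If you prefer to prepare the user data disks from the ESXi shell instead of the VM Properties dialog, a rough sketch is shown below; the datastore path, disk size, VM ID, and SCSI unit number are placeholders:

# Create an eager-zeroed 100 GB virtual disk on the chosen datastore
vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/datastore1/UnityVSA01/UnityVSA01_userdata_gold.vmdk
# Attach the existing disk to the UnityVSA VM (arguments: vmid, disk path, controller number, unit number)
vim-cmd vmsvc/device.diskaddexisting 12 /vmfs/volumes/datastore1/UnityVSA01/UnityVSA01_userdata_gold.vmdk 0 3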

12. To determine when the UnityVSA VM is fully up and running, monitor the DNS Name field on the Summary tab. When the DNS Name field displays a system name, the UnityVSA VM is ready.

13. If you are not running the UnityVSA VM on a dynamic network using DHCP or SLAAC, and you did not configure the management interface when you deployed the OVF template, open the vSphere console, log in with the service account username "service" and password "service", and run the svc_initial_config command to assign an IP address.

For an IPv4 address, enter: svc_initial_config -4 “<ipv4_address> <ipv4_netmask> <ipv4_gateway>”

e.g. svc_initial_config -4 “192.168.1.140 255.255.255.0 192.168.1.1”

 

Configure UnityVSA (initial settings)

1. Launch Unisphere. To do so, enter the management IP address in your browser.

2. Log in with the default username “admin” and password, “Password123#”.

clip_image018

3. The Configuration Wizard runs when you log in to Unisphere for the first time.

clip_image020

4. Accept the license agreement, then click Next.

clip_image022

5. You must change the administrator password during this first login. You can also change the service password at this time, either making it the same as the administrator password or creating a separate service password.

 

clip_image024

6. Obtain your Unisphere license file online at http://www.emc.com/auth/elmeval.htm using the System UUID, then upload and install your license file.

clip_image026

 

clip_image028

clip_image030
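
The System UUID requested by the licensing portal is displayed in the Unisphere wizard. Assuming the standard UEMCLI namespace, it should also be visible in the general system details from the command line (management IP and password are placeholders):

uemcli -d <mgmt_ip> -u admin -p <password> /sys/general show -detail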

7. Add DNS IP Address, then Click Next.

clip_image032

8. Add NTP IP Address, then Click Next.

If you do not configure NTP, UnityVSA gets its time from the ESXi host.

clip_image034

9. To create the storage tiers, click Create Pools.

Unisphere scans for the virtual disks available to the UnityVSA VM that can be used in a pool. You add virtual disks to the UnityVSA VM using vSphere. When you create a pool, you must specify pool tiering information (Capacity, Performance, or Extreme Performance).

clip_image036

a. Add a name and description, then click Next.

b. Assign a Tier for the Storage Pool.

clip_image038

c. Select the Storage Tier, then click Next.

clip_image040

d. Select the available virtual disks and then click Next.

clip_image042

e. Create a Capability Profile to be used by VMware.

clip_image044

f. Create a Tag for the storage pool, then click Next.

clip_image046

g. Review the Settings, then Click Finish.

clip_image048

 

clip_image050
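
Once the wizard finishes, the new pool can be double-checked from UEMCLI as a quick sanity check (management IP and password are placeholders):

uemcli -d <mgmt_ip> -u admin -p <password> /stor/config/pool show -detail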

10. On the Alert Settings page, add the SMTP server IP address and the recipient email addresses.

clip_image052

11. On the iSCSI Interfaces page, click Add.

When you add an iSCSI interface on the storage system, you associate it with one or both Storage Processors (SPs), so at any given point there can be multiple iSCSI interfaces on each SP. These become the available paths that hosts with the relevant access privileges can use to access the storage resources.

clip_image054

12. On the NAS Servers page, click Next to complete the configuration.

clip_image056

13. Review the results and then click Close.

clip_image058

 

System Management (Service Tasks)

The Service Tasks page provides tools for servicing your storage system, including repairing or troubleshooting the system and Storage Processors (SPs), collecting system and configuration information for assisting your service provider with a service request, and changing the Service password.

1. Under System, select Service > Service Tasks.

clip_image060

 

Change Unisphere Name/IP addresses

1.       To change the management IP address or system name, select the Settings icon, and then select Management > Unisphere IPs.

2.       Change the Name and then Click Apply.

clip_image062

3.       On the Notification Window, Click Yes to restart and apply the new settings.

clip_image063

 

Manage System Time and NTP

1.       Select the Settings icon, and then select Management > System Time and NTP.

2.       To synchronize the storage system time with other nodes in the domain or network, select Enable NTP synchronization, select Add and specify the IP address of the NTP server.

clip_image065

 

Configure LDAP server credentials

1.       Select the Settings icon, and then select Users and Groups > Directory Services.

2.       Specify the LDAP server credentials. The following information is required:

·         LDAP Domain: Domain name of the LDAP authentication server

·         Distinguished name (DN): Distinguished name of the service account used for LDAP authentication.

·         LDAP server: Network name of the LDAP server

·         LDAP password: Password the system uses for authentication with the LDAP server

clip_image067

 

Manage users and groups

1.    Select the Settings icon, and then select Users and Groups > User Management.

2.    Select the Add icon.

clip_image069

3.    Select the type of user or group to add. e.g. LDAP Group.

clip_image071

4.  Specify account/Group information. e.g. “Domain Admins”

clip_image073

5. Specify the privileges that will be available to the user/group.

clip_image075

6.  Review the Selections and then click Finish.

clip_image077

 

clip_image079

When users log in to Unisphere with an LDAP account, they must specify their username in the following format: domain.example.com/Administrator.

clip_image080

 

Add a VMware vCenter server or ESXi host

1.       Under Access, select VMware > vCenters.

2.       Select Add.

3.       On the Add vCenter or ESXi Host window, enter the relevant details, and click Find.

clip_image082

4.       From the list of discovered entries, select the relevant ESXi hosts, and click Next.

 

5.       On the Summary page, review the ESXi hosts, and click Finish.

clip_image084

6.       Review the results and then click Close.

   clip_image086
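
The discovered ESXi hosts are registered as host configurations on the Unity system. Assuming the standard UEMCLI namespace, they can be listed from the command line (management IP and password are placeholders):

uemcli -d <mgmt_ip> -u admin -p <password> /remote/host show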

Create a LUN (block storage)

1. Under Storage, select Block > LUNs.

2. Select the Add icon.

3. Add a name, then Click Next.

clip_image088

4. Select the Pool, Tiering Policy and the size.

UnityVSA supports the Fully Automated Storage Tiering for Virtual Pool (FAST VP) feature for both block and file data. FAST VP optimizes storage utilization by automatically moving data between and within the storage tiers.

clip_image090

5. Click Add to define the access for the ESXi Host.

clip_image092

6. Skip the Snapshot by clicking Next.

clip_image094

7. Skip the Replication by clicking Next.

clip_image096

8. Review the Summary, then click Finish.

clip_image098

9. Review the results, then click Close.

clip_image100

10. Double-click the newly created LUN and check the access details.

clip_image102
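
The same LUN could also be created from UEMCLI instead of the wizard. This is only a sketch under the assumption that the /stor/prov/luns/lun namespace and the -name, -pool, and -size options apply to your Unity OE version; host access can still be granted afterwards in Unisphere:

uemcli -d <mgmt_ip> -u admin -p <password> /stor/prov/luns/lun create -name esx-lun01 -pool pool_1 -size 100G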

11. On the ESXi side, add the Unity iSCSI interface IPs to the iSCSI initiator under Dynamic Discovery.

clip_image103

12. Run an iSCSI rescan to connect to the new LUN. (A shell-based sketch of steps 11 and 12 follows the screenshot below.)

clip_image105
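
Steps 11 and 12 can also be performed from the ESXi shell. This is a minimal sketch assuming the software iSCSI adapter is vmhba64; the Unity iSCSI interface IPs are placeholders:

# Identify the software iSCSI adapter name (e.g. vmhba64)
esxcli iscsi adapter list
# Add the Unity iSCSI interface IPs as Send Targets (dynamic discovery) addresses
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.1.50:3260
esxcli iscsi adapter discovery sendtarget add -A vmhba64 -a 192.168.1.51:3260
# Rescan the adapter so the new LUN is detected
esxcli storage core adapter rescan --adapter vmhba64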

13. Under the ESXi storage settings, add and format the new LUN.

clip_image107

 

clip_image109

 

 

References:

·         Unity All-Flash & Hybrid Info Hub  https://community.emc.com/docs/DOC-51785

·         EMC Software License http://www.emc.com/auth/elmeval.htm

·         Installation Guide http://www.emc.com/collateral/TechnicalDocument/docu69318.pdf

·         Configuration Worksheet http://www.emc.com/collateral/TechnicalDocument/docu69357.pdf

·         EMC Unity With Native Virtual Volumes Support https://blogs.vmware.com/virtualblocks/2016/05/04/emc-launches-unity-with-native-virtual-volumes-support/