Nutanix Cluster Commands

  • Manual Cluster Creation

nutanix@cvm$ cluster --cluster_name= --cluster_external_ip= --dns_servers= --ntp_servers= --redundancy_factor=2 -s cvm1_IP,cvm2_IP,cvm3_IP create
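As a worked example (all names and IPs below are hypothetical placeholders, not values from any real deployment), the snippet assembles and echoes the create command so it can be reviewed before running it on a CVM:

```shell
# Hypothetical placeholder values -- substitute your own environment's.
CLUSTER_NAME="NTNX-Demo"
EXTERNAL_IP="10.0.0.50"
DNS_SERVERS="10.0.0.10"
NTP_SERVERS="10.0.0.11"
CVM_IPS="10.0.0.51,10.0.0.52,10.0.0.53"

# Build and display the create command rather than executing it,
# so it can be checked for typos first.
echo "cluster --cluster_name=${CLUSTER_NAME}" \
     "--cluster_external_ip=${EXTERNAL_IP}" \
     "--dns_servers=${DNS_SERVERS}" \
     "--ntp_servers=${NTP_SERVERS}" \
     "--redundancy_factor=2 -s ${CVM_IPS} create"
```

Paste the echoed line into the CVM shell once you are happy with the values.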

  • Change Cluster TimeZone

nutanix@cvm$ ncli cluster set-timezone timezone=choose_timezone

  • Run Health Checks

nutanix@cvm$ ncc health_checks run_all

  • To check Controller VM (CVM) IP addresses on the Nutanix cluster

nutanix@cvm$ svmips

  • To check hypervisor IP addresses on the Nutanix cluster

nutanix@cvm$ hostips

  • To check IPMI IP addresses on the Nutanix cluster

nutanix@cvm$ ipmiips

AHV Load Balancing

View the bond mode and active interface with the following command, run from any CVM:

nutanix@CVM$ hostssh ovs-appctl bond/show

In the default active-backup configuration, the output will be similar to the following, where eth2 is the active interface and eth3 the backup:

Note!! Your bond name may be bond0, br0_up, or br0-up

---- br0_up ----
bond_mode: active-backup
bond-hash-basis: 0
updelay: 0 ms
downdelay: 0 ms
lacp_status: off

slave eth2: enabled
active slave
may_enable: true

slave eth3: enabled
may_enable: true

  • Active-Backup

nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=active-backup

  • Balance-SLB

nutanix@CVM$ hostssh ovs-vsctl set port br0_up bond_mode=balance-slb

nutanix@CVM$ hostssh ovs-vsctl set port br0_up other_config:bond-rebalance-interval=60000

  • LACP & Link Aggregation. NB!! LACP must also be configured on the upstream switches, and you may want to run these commands from each host's own SSH session

root@AHV-HOST1$ ovs-vsctl set port br0_up lacp=active

root@AHV-HOST1$ ovs-vsctl set port br0_up bond_mode=balance-tcp

root@AHV-HOST1$ ovs-vsctl set port br0_up other_config:lacp-fallback-ab=true
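Once LACP is enabled, negotiation with the upstream switch can be checked from any CVM with the standard OVS show commands (a sketch; br0_up is assumed as the bond name, so substitute yours):

```shell
# LACP status should show "negotiated" once the switch side is up.
hostssh ovs-appctl lacp/show br0_up
hostssh ovs-appctl bond/show br0_up
```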

Reset/Clear Nutanix Licensing violation

Too many times I have gone to apply licensing in a newly created cluster (or logged onto a client's Nutanix deployment) only to be faced with a licensing violation warning

The below command, run from any CVM in the cluster or from the Prism Central CLI, will clear it

–> ssh to the Cluster or Prism Central IP (depending on which one shows the message)

–> ncli license reset-license

Log out of the GUI and back in again
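If you prefer to do it in one shot from your workstation, the reset can be wrapped in a single SSH invocation (a sketch; the IP below is a placeholder for your cluster VIP or Prism Central IP):

```shell
# Placeholder address -- use your cluster virtual IP or Prism Central IP.
ssh nutanix@10.0.0.50 'ncli license reset-license'
```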

Nutanix vSphere Cluster Settings

The following list provides an overview of the settings to configure as best practice on a Nutanix ESXi deployment:

vSphere HA Settings

  • Enable host monitoring
  • Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster
  • Set the VM Restart Priority of all Controller VMs to Disabled
  • Set the Host Isolation Response of the cluster to Power Off
  • Set the Host Isolation Response of all Controller VMs to Disabled
  • Set the VM Monitoring for all Controller VMs to Disabled
  • Enable Datastore Heartbeating by clicking Select only from my preferred datastores and choosing the Nutanix NFS datastore. I normally create two small heartbeat datastores (advertised as 10 GB capacity) and use these. However, if the cluster has only one datastore, add an advanced option named das.ignoreInsufficientHbDatastore with a Value of true

vSphere DRS Settings

  • Set the Automation Level on all Controller VMs to Disabled
  • Leave power management disabled (set to Off)

Other Cluster Settings

  • Store VM swapfiles in the same directory as the virtual machine
  • Enable EVC in the cluster

Disable SIOC on a Container

It is recommended to disable SIOC on Nutanix because this setting can cause the following issues:

  • If SIOC (or SIOC in statistics mode) is enabled, storage might become unavailable.
  • If SIOC is enabled and you are using the Metro Availability feature, you may face issues with activate and restore operations.
  • If SIOC in statistics mode is enabled, all hosts might repeatedly create and delete the access and .lck-XXXXXXXX files in the .iorm.sf directory in the root directory of the container.

Perform the following procedure to disable storage I/O statistics collection.

  1. Log into the vSphere Web Client.
  2. Click Storage.
  3. Navigate to the container for your cluster.
  4. Right-click the container and select Configure Storage I/O Controller.
    The properties for the container are displayed. The Disable Storage I/O statistics collection option is unchecked, which means that SIOC is enabled by default.
  5. Select the Disable Storage I/O statistics collection option to disable SIOC, and click OK.
  6. Select the Exclude I/O Statistics from SDRS option, and click OK.

Nutanix Cluster Re-IP

Warning: This procedure affects the operation of the Nutanix cluster. Schedule downtime before performing it

1. Log on to the hypervisor with SSH (vSphere or AHV), a remote desktop connection (Hyper-V), or the IPMI remote console

2. Log on to any Controller VM
→ vSphere or AHV root@host# ssh nutanix@
→ Hyper-V > ssh nutanix@

3. Change CVM IP
Stop the Nutanix cluster
→ nutanix@cvm$ cluster stop
→ nutanix@cvm$ cluster reconfig
→ nutanix@cvm$ external_ip_reconfig
Follow the prompts to type the new netmask, gateway, and external IP addresses

4. Shutdown each Controller VM in the cluster
→ nutanix@cvm$ cvm_shutdown -P now

5. Change hypervisor IP

6. Change IPMI IP

7. Reboot Hypervisor

8. Log on to any Controller VM and start the cluster
→ nutanix@cvm$ cluster start
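Once the cluster is back up, it is worth confirming that the new addressing took effect by re-running the IP listing commands from earlier in this document:

```shell
# Both should now report the new addresses.
svmips
hostips
```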

Configure AHV 1 GbE Network Links

If you want to configure 1 GbE connectivity for guest VMs, you can aggregate the 1 GbE interfaces (eth0 and eth1) to a bond on a separate OVS bridge, create a VLAN network on the bridge, and then assign guest VM interfaces to the network.

To configure 1 GbE connectivity for guest VMs, do the following:

  1. Log on to the AHV host with SSH.
  2. Log on to the Controller VM.
    root@host# ssh nutanix@

    Accept the host authenticity warning if prompted, and enter the Controller VM nutanix password.

  3. Determine the uplinks configured on the host.
    nutanix@cvm$ allssh manage_ovs show_uplinks

    Output similar to the following is displayed:

    Executing manage_ovs show_uplinks on the cluster
    ================== =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    ================== =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    ================== =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
  4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample output in the previous step, dissociate the 1 GbE interfaces from the bond. Assume that the bridge name and bond name are br0 and br0-up, respectively.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up update_uplinks'

    The command removes the bond and then re-creates the bond with only the 10 GbE interfaces.

  5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge called br1 (bridge names must not exceed 10 characters).
    nutanix@cvm$ allssh 'ssh root@ /usr/bin/ovs-vsctl add-br br1'
  6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example, aggregate them to a bond named br1-up.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up update_uplinks'
  7. Log on to any Controller VM in the cluster, create a network on a separate VLAN for the guest VMs, and associate the new bridge with the network. For example, create a network named vlan10.br1 on VLAN 10.
    nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1
  8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign interfaces on the guest VMs to the network. For information about assigning guest VM interfaces to a network, see “Creating a VM” in the Prism Web Console Guide.

Reset Prism stuck/incomplete task

1. SSH to the Cluster or CVM IP
2. Enter the following to list running/failed tasks

ecli task.list include_completed=false

3. Enter the following to mark the required task as succeeded and clear it

ergon_update_task --task_uuid='ENTER_TASK_UUID_HERE' --task_status=succeeded

4. Enter “Y”. Remember, you do so at your own risk

5. Enter the following to confirm the task has been completed/failed

ecli task.list include_completed=false
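When several tasks are stuck, the UUIDs can be pulled out of the task list with ordinary shell text processing. The sample output below is hypothetical and the real ecli column layout may differ, so adjust the awk field number to match:

```shell
# Hypothetical sample of ecli task.list output; first column = task UUID.
TASK_LIST='b5bf8a23-1234-5678-9abc-def012345678  kRunning  VmMigrate
c6cf9b34-2345-6789-abcd-ef0123456789  kRunning  VmClone'

# Print just the UUID column, ready to paste into ergon_update_task.
echo "$TASK_LIST" | awk '{print $1}'
```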

Change CVM name in AHV

You may need to modify the names of the CVMs displayed in Prism to match the block, serial, or hostname, or however you reference the node

The CVM name should always start with NTNX- and end with -CVM

1. SSH into the CVM and shut it down

cvm_shutdown -h now

2. Enter the following from the AHV host

virsh list --all
virsh domrename NTNX-Old_Name_Here-CVM NTNX-New_Name_Here-CVM

3. Start the CVM

virsh start NEW_NAME

4. Configure autostart in KVM so the CVM boots up with the host

virsh autostart NTNX-New_name_here-CVM

Change Nutanix CVM Memory – AHV

Log onto the AHV hosting the CVM and issue the following command

[root@NTNX-1 ~]# virsh list --all
Id Name State
8 NTNX-1-CVM running

[root@NTNX-1 ~]# virsh shutdown NTNX-1-CVM

Wait until the CVM is shut down, then set the maximum first (setmem cannot exceed the configured maximum):

[root@NTNX-1 ~]# virsh setmaxmem NTNX-1-CVM 20G --config
[root@NTNX-1 ~]# virsh setmem NTNX-1-CVM 20G --config
[root@NTNX-1 ~]# virsh start NTNX-1-CVM
Domain NTNX-1-CVM started

Repeat on the other hosts and CVMs, one at a time.