Configure AHV 1 GbE Network Links

If you want to configure 1 GbE connectivity for guest VMs, you can aggregate the 1 GbE interfaces (eth0 and eth1) to a bond on a separate OVS bridge, create a VLAN network on the bridge, and then assign guest VM interfaces to the network.

To configure 1 GbE connectivity for guest VMs, do the following:

  1. Log on to the AHV host with SSH.
  2. Log on to the Controller VM.
    root@host# ssh nutanix@192.168.5.254

    Accept the host authenticity warning if prompted, and enter the password for the Controller VM nutanix user.

  3. Determine the uplinks configured on the host.
    nutanix@cvm$ allssh manage_ovs show_uplinks

    Output similar to the following is displayed:

    Executing manage_ovs show_uplinks on the cluster
    ================== 192.0.2.49 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    ================== 192.0.2.50 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    ================== 192.0.2.51 =================
    Bridge br0:
      Uplink ports: br0-up
      Uplink ifaces: eth3 eth2 eth1 eth0
    
    
  4. If the 1 GbE interfaces are in a bond with the 10 GbE interfaces, as shown in the sample output in the previous step, dissociate the 1 GbE interfaces from the bond. Assume that the bridge name and bond name are br0 and br0-up, respectively.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br0 --interfaces 10g --bond_name br0-up update_uplinks'

    The command removes the bond and then re-creates it with only the 10 GbE interfaces. (A verification example follows this procedure.)

  5. Create a separate OVS bridge for 1 GbE connectivity. For example, create an OVS bridge called br1 (bridge names must not exceed 10 characters).
    nutanix@cvm$ allssh 'ssh root@192.168.5.1 /usr/bin/ovs-vsctl add-br br1'
  6. Aggregate the 1 GbE interfaces to a separate bond on the new bridge. For example, aggregate them to a bond named br1-up.
    nutanix@cvm$ allssh 'manage_ovs --bridge_name br1 --interfaces 1g --bond_name br1-up update_uplinks'
  7. Log on to any Controller VM in the cluster, create a network on a separate VLAN for the guest VMs, and associate the new bridge with the network. For example, create a network named vlan10.br1 on VLAN 10.
    nutanix@cvm$ acli net.create vlan10.br1 vlan=10 vswitch_name=br1
  8. To enable guest VMs to use the 1 GbE interfaces, log on to the web console and assign interfaces on the guest VMs to the network. For information about assigning guest VM interfaces to a network, see “Creating a VM” in the Prism Web Console Guide.
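
To confirm the new layout, you can re-run the listings used above; br0 should now show only the 10 GbE interfaces, br1 should show eth0 and eth1, and the new network should appear in the network list:

nutanix@cvm$ allssh manage_ovs show_uplinks
nutanix@cvm$ acli net.list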

Reset Prism stuck/incomplete task

1. SSH to the cluster virtual IP or a CVM IP
2. Enter the following to list running/failed tasks

ecli task.list include_completed=false

3. Enter the following to clear the required task by marking it as succeeded

ergon_update_task --task_uuid='ENTER_TASK_UUID_HERE' --task_status=succeeded

4. Enter “Y” – remember, you do so at your own risk

5. Enter the following to confirm the task no longer appears in the list

ecli task.list include_completed=false
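
Optionally, you can inspect a single task before updating it, to make sure you have the right UUID. A minimal example; the exact argument form is an assumption and may vary between AOS versions:

ecli task.get ENTER_TASK_UUID_HERE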

Rename AHV Hosts

1. Log on to the AHV host with SSH
2. sudo nano /etc/sysconfig/network
3. Edit the following line

HOSTNAME=new_hostname_here.mycompany.com

Tip: use the FQDN that matches your DNS entries to avoid the NCC “FQDN lookup error”
4. Restart the AHV host
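
After the host comes back up, a quick check from the AHV host confirms the new name and FQDN took effect:

hostname -f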

Change CVM name in AHV

You may need to modify the names of the CVMs displayed in Prism to match the block, serial, or hostname – however you reference the node.

Note:
The CVM name should always start with NTNX- and end with -CVM

1. SSH into the CVM and shut it down

cvm_shutdown -h now

2. Enter the following from the AHV host

virsh list --all
virsh domrename NTNX-Old_Name_Here-CVM NTNX-New_Name_Here-CVM

3. Start the CVM

virsh start NTNX-New_Name_Here-CVM

4. Configure autostart in KVM so the CVM boots up with the host

virsh autostart NTNX-New_name_here-CVM
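
To confirm autostart is enabled, check the domain info; the Autostart line should read enable:

virsh dominfo NTNX-New_Name_Here-CVM | grep -i autostart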

Change Nutanix CVM Memory – AHV

Log on to the AHV host running the CVM and confirm the CVM's name and state:

[root@NTNX-1 ~]# virsh list --all
 Id    Name          State
----------------------------------
 8     NTNX-1-CVM    running

Shut down the CVM and wait until it is reported as shut off, then set the new memory values (setmaxmem first, so the new allocation is not rejected) and start the CVM again:

[root@NTNX-1 ~]# virsh shutdown NTNX-1-CVM
[root@NTNX-1 ~]# virsh setmaxmem NTNX-1-CVM 20G --config
[root@NTNX-1 ~]# virsh setmem NTNX-1-CVM 20G --config
[root@NTNX-1 ~]# virsh start NTNX-1-CVM
Domain NTNX-1-CVM started
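
Once the CVM is back up, you can confirm the new allocation took effect:

[root@NTNX-1 ~]# virsh dominfo NTNX-1-CVM | grep -i memory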

Proceed to the other hosts and CVMs, one CVM at a time.

Create Nutanix Phoenix boot ISO

Step 1. Grab the hypervisor (AHV) ISO

To generate an ISO image from a particular version of AHV, do the following:

  1. Log into the latest Foundation VM
  2. Download the latest AHV bundle
  3. Kill the Foundation service running on the Foundation VM.
    $ sudo pkill -9 foundation
  4. Create an AHV ISO file
    $ cd /home/nutanix/foundation/bin
    $ ./generate_iso kvm ahv_tar_archive

    Replace ahv_tar_archive with the full path to the AHV tar archive, host_bundle_el6.nutanix.version#.tar.gz. The command generates an AHV ISO file named kvm.version#.iso in the current directory.
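
    For example, assuming the bundle was downloaded to /home/nutanix (the path and file name below are illustrative):

    $ ./generate_iso kvm /home/nutanix/host_bundle_el6.nutanix.version#.tar.gz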

Step 2. Create the Phoenix Boot ISO

To generate a Phoenix ISO image, do the following:

  1. Log into the latest Foundation VM
  2. Download the latest AOS bundle
  3. Kill the Foundation service running on the Foundation VM.
    $ sudo pkill -9 foundation
  4. Navigate to the /home/nutanix/foundation/nos directory and unpack the compressed AOS tar archive.
    $ gunzip nutanix_installer_package-version#.tar.gz
  5. Generate the Phoenix ISO file.
    $ cd /home/nutanix/foundation/bin
    $ ./generate_iso phoenix --aos-package AOS_PACKAGE --kvm KVM --skip-space-check

    Replace AOS_PACKAGE with the full path to the AOS tar archive, nutanix_installer_package-version#.tar.

    Replace KVM with the full path to the AHV ISO image or bundle.

    If you are generating a Phoenix ISO for ESXi or Hyper-V instead, replace ESX with the full path to the ESXi ISO image or HYPERV with the full path to the Hyper-V ISO image in the corresponding hypervisor option.

  6. Check /home/nutanix/foundation/tmp/ for the generated Phoenix ISO.
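
For example, to embed AHV using the paths from the steps above (both file paths are illustrative):

    $ cd /home/nutanix/foundation/bin
    $ ./generate_iso phoenix --aos-package /home/nutanix/foundation/nos/nutanix_installer_package-version#.tar --kvm /home/nutanix/foundation/bin/kvm.version#.iso --skip-space-check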

Mount a Samba share on Ubuntu

sudo apt-get install samba cifs-utils
sudo nano /home/<USERNAME>/.smbcredentials

username=guest
password=guest

Replace guest/guest with whatever account credentials the share requires, keeping each value on its own line with nothing after it (trailing text can be read as part of the value).
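
Because this file holds a password in plain text, it is worth restricting its permissions:

sudo chmod 600 /home/<USERNAME>/.smbcredentials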

sudo nano /etc/fstab

//<SERVER_IP>/<SHARE1>/ /mnt/nas/<SHARE1>/ cifs credentials=/home/<USERNAME>/.smbcredentials,iocharset=utf8,gid=1000,uid=1000,file_mode=0777,dir_mode=0777 0 0

//<SERVER_IP>/<SHARE2>/ /mnt/nas/<SHARE2>/ cifs credentials=/home/<USERNAME>/.smbcredentials,iocharset=utf8,gid=1000,uid=1000,file_mode=0777,dir_mode=0777 0 0

sudo mkdir -p /mnt/nas/<SHARE1>
sudo mkdir -p /mnt/nas/<SHARE2>
sudo mount -a
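
If mount -a returns silently, the shares mounted successfully; you can confirm by listing the active CIFS mounts:

mount -t cifs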