Install NGT using the CLI

Install NGT on a Windows VM through the CLI

Connect to a cluster CVM over SSH and start ncli: ncli
List all VMs to identify the vm-uuid: vm list
Enable NGT: ngt enable vm-id=<vm-uuid>
Mount NGT: ngt mount vm-id=<vm-uuid>
Validate this with the following command: ngt list

Connect to VM console
Open a command prompt: cmd
Run the following to install NGT: d:\setup.exe /quiet ACCEPTEULA=yes

Run the command to unmount the NGT image on ncli: ngt unmount vm-id=<vm-uuid>

Enable SSR on the VM with the following command: ngt enable-applications vm-id=<vm-uuid> application-names="file level restore"
Verify NGT: ngt list
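The ncli-side steps above can be collected into one sketch. This is a dry run that only prints the commands (VM_UUID is a placeholder for the UUID from `vm list`); paste the printed lines into an ncli session on a CVM to run them for real:

```shell
#!/bin/sh
# Dry-run sketch of the ncli-side NGT flow above (prints only).
# VM_UUID is a placeholder -- substitute the UUID from `vm list`.
VM_UUID="<vm-uuid>"
ENABLE_CMD="ngt enable vm-id=$VM_UUID"
MOUNT_CMD="ngt mount vm-id=$VM_UUID"
UNMOUNT_CMD="ngt unmount vm-id=$VM_UUID"
SSR_CMD="ngt enable-applications vm-id=$VM_UUID application-names=\"file level restore\""
echo "$ENABLE_CMD"     # 1. enable NGT for the VM
echo "$MOUNT_CMD"      # 2. mount the NGT ISO
echo "(install NGT inside the guest, then:)"
echo "$UNMOUNT_CMD"    # 3. unmount after the in-guest install
echo "$SSR_CMD"        # 4. enable Self-Service Restore
echo "ngt list"        # 5. verify
```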

Install NGT on a Linux VM through the CLI

Connect to a cluster CVM over SSH and start ncli: ncli
List all VMs to identify the vm-uuid: vm list or vm list name="vm-name"
Enable NGT: ngt enable vm-id=<vm-uuid>
Mount NGT: ngt mount vm-id=<vm-uuid>
Validate this with the following command: ngt list

Connect to VM console
Open a terminal
Mount NGT iso: mount /dev/sr0 /mnt
Change directory: cd /mnt/installer/linux
Run the following to install NGT: ./install_ngt.py

Run the command to unmount the NGT image on ncli: ngt unmount vm-id=<vm-uuid>

Enable SSR on the VM with the following command: ngt enable-applications vm-id=<vm-uuid> application-names="file level restore"
Verify NGT: ngt list
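The in-guest Linux steps can be wrapped in a small script with basic error handling. A sketch, assuming the ISO appears as /dev/sr0 (the device name can differ per distro) and that it is run as root on the guest after the ISO has been mounted from ncli:

```shell
#!/bin/sh
# In-guest Linux NGT install (sketch; run as root on the guest).
set -e                          # stop on the first failed step
mount /dev/sr0 /mnt             # mount the NGT ISO
/mnt/installer/linux/install_ngt.py
umount /mnt                     # clean up before unmounting from ncli
```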

Nutanix Node Shutdown

To shut down a node:

  1. Log on to the CVM with SSH and note the value of the Hypervisor IP for the node you want to shut down.
  2. Place the node in maintenance mode:
    $ acli host.enter_maintenance_mode host_ID [wait="{ true | false }" ]
  3. Shut down the Controller VM: $ cvm_shutdown -P now
  4. Log on to the AHV host with SSH.
  5. Shut down the host: $ shutdown -h now

To start a Node:

  1. If the node is turned off, turn it on (otherwise, go to the next step).
  2. Log on to the AHV host with SSH.
  3. Find the name of the CVM by executing the following on the host:
    virsh list --all | grep CVM
  4. Examining the output from the previous command, if the CVM is OFF, start it from the prompt on the host: virsh start cvm_name

    Note: The cvm_name is obtained from the command run in step 3.
  5. If the node is in maintenance mode, log on to the CVM over SSH and take it out of maintenance mode:
    acli host.exit_maintenance_mode AHV-hypervisor-IP-address
  6. Log on to another CVM in the cluster with SSH.
  7. Confirm that cluster services are running on the CVM (make sure to replace cvm_ip_addr accordingly): ncli cluster status | grep -A 15 cvm_ip_addr
    1. Alternatively, you can use the following command to check if any services are down in the cluster: cluster status | grep -v UP
  8. Verify that all services are up on all CVMs.
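The final verification can be wrapped into a small script. A sketch (the `cluster` command only exists on a CVM; stderr is suppressed so the script degrades quietly elsewhere):

```shell
#!/bin/sh
# Report any `cluster status` line that is not UP (run on any CVM).
DOWN=$(cluster status 2>/dev/null | grep -v UP)
if [ -z "$DOWN" ]; then
  echo "no non-UP lines reported"
else
  echo "$DOWN"
fi
```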

Nutanix Cluster Shutdown

Follow the tasks listed below if a cluster shutdown is needed:

  1. Verify the cluster status with the cluster status command on a CVM.
  2. If there are issues with the cluster, you can run the NCC checker with the command nutanix@cvm$ ncc health_checks run_all.
  3. Verify cluster data resiliency status on the Prism Home dashboard.
  4. Shut down all guest VMs (aside from CVMs).
  5. Shut down the cluster with the command cluster stop. Use the cluster status command to see the current status of all cluster processes.
    1. Cluster stop command nutanix@cvm$ cluster stop
    2. Cluster status command nutanix@cvm$ cluster status
  6. Power off CVMs with the command nutanix@cvm$ sudo shutdown -P now.
  7. Log on to the hosts and power off using the shutdown -h now command.
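The order matters: guest VMs first, then the cluster, then CVMs, then hosts. A dry-run sketch that only prints the sequence and where each command runs (remove the echoes and run each step on the right machine to execute for real):

```shell
#!/bin/sh
# Dry run: print the cluster shutdown sequence from the steps above.
STOP_CMD="cluster stop"
CVM_OFF_CMD="sudo shutdown -P now"
HOST_OFF_CMD="shutdown -h now"
echo "1. shut down all guest VMs (not the CVMs)"
echo "2. on one CVM:        $STOP_CMD"
echo "3. on every CVM:      $CVM_OFF_CMD"
echo "4. on every AHV host: $HOST_OFF_CMD"
```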

Huawei OceanStor basics

Initial Setup Array

Default username and password: admin / Admin@storage
Connect using a serial console at 115200 baud

Set Management IP Address

Execute the following command to identify the current IP addresses and the names of the interfaces:
# show system management_ip

To change the IP address, use the following command:
# change system management_ip eth_port_id=CTE0.MGMT0.0 ip_type=ipv4_address ipv4_address=192.168.190.2 mask=255.255.0.0 gateway_ipv4=192.168.0.1

Useful Nutanix CLI Commands

There are two command-line interfaces: acli and ncli.

Acropolis Command-Line Interface (aCLI)
Acropolis provides a command-line interface for managing hosts, networks, snapshots, and VMs.

Accessing the Acropolis CLI
To access the Acropolis CLI, log on to a Controller VM in the cluster with SSH and type acli at the shell prompt.
To exit the Acropolis CLI and return to the shell, type exit at the prompt.

Nutanix Command-Line Interface (nCLI)
The Nutanix command-line interface (nCLI) allows you to run system administration commands against the Nutanix cluster from any of the following machines:

Your local machine (preferred; ncli must be downloaded and installed first)
Any Controller VM in the cluster

Viewing the network configuration (log on to the Controller VM (CVM) first)

To view link speed and status
nutanix@cvm$ manage_ovs show_interfaces
name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000

To show the ports and interfaces that are configured as uplinks
nutanix@cvm$ manage_ovs --bridge_name bridge show_uplinks
Bridge: br0
Bond: br0-up
bond_mode: active-backup
interfaces: eth3 eth2 eth1 eth0
lacp: off
lacp-fallback: false
lacp_speed: slow
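The `show_interfaces` output is easy to post-process. A sketch that flags any interface linked below 10000 Mbps; the printf replays the sample output above, and on a CVM you would pipe `manage_ovs show_interfaces` in instead:

```shell
#!/bin/sh
# Flag interfaces linked below 10 Gbps. The printf replays the sample
# output above; on a CVM, pipe `manage_ovs show_interfaces` in instead.
SLOW=$(printf 'name mode link speed
eth0 1000 True 1000
eth1 1000 True 1000
eth2 10000 True 10000
eth3 10000 True 10000
' | awk 'NR > 1 && $4 + 0 < 10000 { print $1, $4 }')
echo "$SLOW"
```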

Brocade FC Switch config

Connect with the GUI using Java Web Start:
javaws http://<ip-address>/switchExplorer_installed.html
javaws https://<ip-address>/switchExplorer_installed.html

Create Alias
alicreate "<alias-name>","<WWN>"

Zone Create
zonecreate "<zone-name>","<alias-1>;<alias-2>"

Add zone to existing config
cfgadd "<cfg-name>","<zone-name>"
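cfgadd only edits the defined configuration; nothing changes on the fabric until the configuration is saved and activated. A sketch using the standard Brocade FOS commands, run on the switch CLI (<cfg-name> as above):

```shell
cfgsave                  # persist the defined zoning configuration
cfgenable "<cfg-name>"   # activate it on the fabric (prompts for confirmation)
```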

Useful commands

admin> sfpshow
Port 1: id (sw) Vendor: BROCADE Serial No: HAF618240000FSD Speed: 4,8,16_Gbps
Port 2: id (sw) Vendor: BROCADE Serial No: HAF318430000DVJ Speed: 4,8,16_Gbps
Port 3: id (sw) Vendor: BROCADE Serial No: HAF618240000257 Speed: 4,8,16_Gbps

admin> configshow


Add FC Initiator via CLI for PowerStore

Make sure you download and install the latest pstcli from the Dell Support page!

This will save the credentials locally on your PC
pstcli -d <ip-address> -u admin -p <myPassword> -save_cred

This will show current hosts on the system
pstcli -d <ip-address> -header host show

This will add an FC initiator to a specific host using a new WWN
pstcli -d <ip-address> -header host -name <hostname> set -add_initiators -port_name <00:00:00:00:00:00> -port_type FC

Troubleshooting SunFire SFxx00 Systems

CLI / SP commands

showboards -v
  -p <part> shows only a specific part, where <part> can be:
    board    shows the board status.
    clock    shows the system clock status.
    cpu      shows CPU type, speed, and Ecache size.
    io       shows I/O information.
    memory   shows memory information for each board.
    power    shows grid information.
    version  shows version information.
showcomponent -v
showdomain -v
showenvironment -v
  -p <part> displays a subset, where <part> can be:
    currents displays currents (power supplies only).
    fans     displays fan states.
    faults   displays values that are suspected to be invalid.
    temps    displays temperatures only.
    voltage  displays voltages only.
showlogs -v
showplatform -v

Cisco 6500 power status

https://www.cisco.com/c/en/us/support/docs/switches/catalyst-6500-series-switches/24053-193.html

#show power status all
Power-Capacity PS-Fan Output Oper
PS   Type               Watts   A @42V Status Status State
---- ------------------ ------- ------ ------ ------ -----
1    WS-CAC-2500W       2331.00 55.50  OK     OK     on
2    none
Pwr-Requested  Pwr-Allocated  Admin Oper
Slot Card-Type          Watts   A @42V Watts   A @42V State State
---- ------------------ ------- ------ ------- ------ ----- -----
1    WS-X6K-S2U-MSFC2    142.38  3.39   142.38  3.39  on    on
2    WSSUP1A-2GE         142.38  3.39   142.38  3.39  on    on
3    WS-X6516-GBIC       231.00  5.50   231.00  5.50  on    on
4    WS-X6516-GBIC       231.00  5.50   231.00  5.50  on    on
5    WS-X6500-SFM2       129.78  3.09   129.78  3.09  on    on
6    WS-X6502-10GE       226.80  5.40   226.80  5.40  on    on

 

show power
system power redundancy mode = redundant
system power total = 27.460A
system power used = 25.430A
system power available = 2.030A
FRU-type       #    current   admin state oper
power-supply   1    27.460A   on          on
power-supply   2    27.460A   on          on
module         1    3.390A    on          on
module         2    3.390A    on          on
module         3    5.500A    on          on
module         5    3.090A    on          on
module         7    5.030A    on          on
module         8    5.030A    on          on
module         9    5.030A    on          off (FRU-power denied)
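The three `system power` figures are related by available = total - used, which can be checked quickly against the sample above:

```shell
#!/bin/sh
# Check: system power available = total - used (values from the sample).
AVAIL=$(awk 'BEGIN { printf "%.3f", 27.460 - 25.430 }')
echo "${AVAIL}A"   # matches the reported 2.030A
```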

show environment status
backplane:
operating clock count: 2
operating VTT count: 3
fan-tray 1:
fan-tray 1 fan-fail: failed

SP Collects using CLI

Using navicli or naviseccli

/opt/Navisphere/bin/naviseccli -h <ipaddress> spcollect

/opt/Navisphere/bin/naviseccli -h <ipaddress> managefiles -list

/opt/Navisphere/bin/naviseccli -h <ipaddress> managefiles -retrieve -file <filename just created>
