Introduction
This page lists the important Proxmox VE and Debian command line tools. All CLI tools also have manual pages.
KVM specific
qm
qm - qemu/kvm manager - see Manual: qm and the Qm manual
OpenVZ specific
vzps
This utility program can be run on the Node just like the standard Linux ps. For information on the ps utility, consult the corresponding man page. In addition, vzps provides functionality related to monitoring the separate Containers running on the Node.
The vzps utility has the following functionality added:
The -E CT_ID command line switch can be used to show only the processes running inside the Container with the specified ID.
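For example (a sketch only; CTID 101 is a placeholder, substitute a real CTID from vzlist):

```shell
# Show only the processes running inside container 101.
# 101 is a hypothetical CTID; run vzlist first to see the real ones.
vzps -E 101
```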
pvectl
pvectl - vzctl wrapper to manage OpenVZ containers - see the Pvectl manual
vzctl
vzctl - utility to control an OpenVZ container - see the Vzctl manual
vztop
This utility program can be run on the Node just like the standard Linux top. For information on the top utility, consult the corresponding man page. In addition, vztop provides functionality related to monitoring the separate Containers running on the Node.
The vztop utility has the following functionality added:
The -E CT_ID command line switch can be used to show only the processes running inside the Container with the specified ID. If -1 is specified as CT_ID, the processes of all running Containers are displayed.
The e interactive command (the key pressed while top is running) can be used to show/hide the CTID column, which displays the Container where a particular process is running (0 stands for the Hardware Node itself).
The E interactive command can be used to select another Container whose processes are to be shown. If -1 is specified, the processes of all running Containers are displayed.
vztop - display top CPU processes
10:28:52  up 31 days, 11:18,  1 user,  load average: 0.07, 0.06, 0.02
197 processes: 196 sleeping, 1 running, 0 zombie, 0 stopped
CPU0 states:  0.2% user  0.1% system  0.0% nice  0.0% iowait  99.2% idle
CPU1 states:  1.3% user  2.1% system  0.0% nice  0.0% iowait  96.1% idle
CPU2 states:  6.3% user  0.1% system  0.0% nice  0.0% iowait  93.1% idle
CPU3 states:  2.0% user  1.0% system  0.0% nice  0.0% iowait  96.4% idle
Mem:  16251688k av, 16032764k used, 218924k free, 0k shrd, 364120k buff
      4448576k active, 10983652k inactive
Swap: 15728632k av, 36k used, 15728596k free, 14170784k cached

   PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
390694 root      20   0  759M 528M  2432 S     6.7  3.3 244:53   1 kvm
566767 root      20   0 40464 8908  5320 S     6.7  0.0   0:54   0 apache2
  7898 root      20   0  181M  34M  4076 S     0.3  0.2  73:12   2 pvestatd
     1 root      20   0 10604  848   744 S     0.0  0.0   0:16   0 init
     2 root      20   0     0    0     0 SW    0.0  0.0   0:00   2 kthreadd
     3 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
     4 root      20   0     0    0     0 SW    0.0  0.0   0:19   0 ksoftirqd/0
     5 root      RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
     6 root      RT   0     0    0     0 SW    0.0  0.0   0:02   0 watchdog/0
     7 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
     8 root      RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
     9 root      20   0     0    0     0 SW    0.0  0.0   0:24   1 ksoftirqd/1
    10 root      RT   0     0    0     0 SW    0.0  0.0   0:01   1 watchdog/1
    11 root      RT   0     0    0     0 SW    0.0  0.0   0:01   2 migration/2
    12 root      RT   0     0    0     0 SW    0.0  0.0   0:00   2 migration/2
    13 root      20   0     0    0     0 SW    0.0  0.0   0:12   2 ksoftirqd/2
    14 root      RT   0     0    0     0 SW    0.0  0.0   0:01   2 watchdog/2
    15 root      RT   0     0    0     0 SW    0.0  0.0   0:07   3 migration/3
.. ..
user_beancounters
cat /proc/user_beancounters
Version: 2.5
 uid  resource      held      maxheld   barrier              limit                failcnt
101:  kmemsize      11217945  16650240  243269632            268435456            0
      lockedpages   0         418       65536                65536                0
      privvmpages   134161    221093    9223372036854775807  9223372036854775807  0
      shmpages      16        3232      9223372036854775807  9223372036854775807  0
      dummy         0         0         0                    0                    0
      numproc       56        99        9223372036854775807  9223372036854775807  0
      physpages     96245     122946    0                    131072               0
      vmguarpages   0         0         0                    9223372036854775807  0
      oomguarpages  53689     78279     0                    9223372036854775807  0
      numtcpsock    49        82        9223372036854775807  9223372036854775807  0
      numflock      8         20        9223372036854775807  9223372036854775807  0
      numpty        0         6         9223372036854775807  9223372036854775807  0
      numsiginfo    0         33        9223372036854775807  9223372036854775807  0
      tcpsndbuf     927856    1619344   9223372036854775807  9223372036854775807  0
      tcprcvbuf     802816    1343488   9223372036854775807  9223372036854775807  0
      othersockbuf  152592    481248    9223372036854775807  9223372036854775807  0
      dgramrcvbuf   0         4624      9223372036854775807  9223372036854775807  0
      numothersock  124       152       9223372036854775807  9223372036854775807  0
      dcachesize    6032652   12378728  121634816            134217728            0
      numfile       629       915       9223372036854775807  9223372036854775807  0
      dummy         0         0         0                    0                    0
      dummy         0         0         0                    0                    0
      dummy         0         0         0                    0                    0
      numiptent     20        20        9223372036854775807  9223372036854775807  0
  0:  kmemsize      34634728  65306624  9223372036854775807  9223372036854775807  0
      lockedpages   1360      6721      9223372036854775807  9223372036854775807  0
      privvmpages   317475    507560    9223372036854775807  9223372036854775807  0
      shmpages      4738      9645      9223372036854775807  9223372036854775807  0
      dummy         0         0         9223372036854775807  9223372036854775807  0
      numproc       190       220       9223372036854775807  9223372036854775807  0
      physpages     3769163   3867750   9223372036854775807  9223372036854775807  0
      vmguarpages   0         0         0                    0                    0
      oomguarpages  182160    205746    9223372036854775807  9223372036854775807  0
      numtcpsock    12        29        9223372036854775807  9223372036854775807  0
      numflock      9         13        9223372036854775807  9223372036854775807  0
      numpty        4         12        9223372036854775807  9223372036854775807  0
      numsiginfo    3         84        9223372036854775807  9223372036854775807  0
      tcpsndbuf     249512    1760544   9223372036854775807  9223372036854775807  0
      tcprcvbuf     198920    1142000   9223372036854775807  9223372036854775807  0
      othersockbuf  233512    276832    9223372036854775807  9223372036854775807  0
      dgramrcvbuf   0         2576      9223372036854775807  9223372036854775807  0
      numothersock  179       193       9223372036854775807  9223372036854775807  0
      dcachesize    18688898  47058779  9223372036854775807  9223372036854775807  0
      numfile       1141      1410      9223372036854775807  9223372036854775807  0
      dummy         0         0         9223372036854775807  9223372036854775807  0
      dummy         0         0         9223372036854775807  9223372036854775807  0
      dummy         0         0         9223372036854775807  9223372036854775807  0
      numiptent     20        20       9223372036854775807  9223372036854775807  0
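In practice the failcnt column (the last one) is what matters: a non-zero value means the Container hit that resource limit. A minimal sketch to spot such rows, assuming the standard /proc/user_beancounters layout shown above (on a node you would feed the pipeline that file; here a small sample with a hypothetical failcnt stands in so it is self-contained):

```shell
# Print any beancounter row whose last field (failcnt) is non-zero.
# The first two lines of the file are the version header and column header.
# The numtcpsock failcnt of 3 below is made up for illustration.
awk 'NR > 2 && $NF + 0 > 0' <<'EOF'
Version: 2.5
uid resource held maxheld barrier limit failcnt
101: kmemsize 11217945 16650240 243269632 268435456 0
numtcpsock 49 82 16384 16384 3
EOF
# prints: numtcpsock 49 82 16384 16384 3
```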
vzlist
example:
vzlist
CTID  NPROC STATUS  IP_ADDR        HOSTNAME
101   26    running -              localhost.fantinibakery.com
102   121   running 10.100.100.18  mediawiki.fantinibakery.com
114   49    running -              fbc14.fantinibakery.com
From PVE 3.0 onwards, the display will be:
vzlist
CTID  NPROC STATUS  IP_ADDR        HOSTNAME
101   26    running -              localhost
102   121   running 10.100.100.18  mediawiki
114   49    running -              fbc14
The fields available for selective display (with the -o option) are: ctid, nproc, status, ip, hostname. All are case sensitive and are used with the options -H (no header) and -o field1,field2,... The binary is at /usr/sbin/vzlist. By default, vzlist lists only running CTs; stopped ones will not appear in its output (qm list for VMs, by contrast, also lists stopped ones).
USAGE
Usage: vzlist [-a | -S] [-n] [-H] [-o field[,field...] | -1] [-s [-]field]
              [-h pattern] [-N pattern] [-d pattern] [CTID [CTID ...]]
       vzlist -L | --list

Options:
  -a, --all          list all containers
  -S, --stopped      list stopped containers
  -n, --name         display containers' names
  -H, --no-header    suppress columns header
  -t, --no-trim      do not trim long values
  -j, --json         output in JSON format
  -o, --output       output only specified fields
  -1                 synonym for -H -octid
  -s, --sort         sort by the specified field
                     ('-field' to reverse sort order)
  -h, --hostname     filter CTs by hostname pattern
  -N, --name_filter  filter CTs by name pattern
  -d, --description  filter CTs by description pattern
  -L, --list         get possible field names
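The -H and -o options make the output script-friendly. A sketch of a typical pipeline; on a node you would pipe the real output of vzlist -a -H -o ctid,status, while here printf stands in with the sample values so the pipeline runs anywhere:

```shell
# Print the CTIDs of running containers only.
# printf simulates "vzlist -a -H -o ctid,status" output.
printf '101 running\n102 stopped\n114 running\n' |
  awk '$2 == "running" { print $1 }'
# prints: 101 and 114, one per line
```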
Backup
vzdump
vzdump - backup utility for virtual machines - see the Vzdump manual
vzrestore
vzrestore - restore OpenVZ vzdump backups - see the Vzrestore manual
qmrestore
qmrestore - restore KVM vzdump backups - see the Qmrestore manual
Cluster management
pveca
PVE Cluster Administration Toolkit
USAGE
pveca -l         # show cluster status
pveca -c         # create new cluster with localhost as master
pveca -s [-h IP] # sync cluster configuration from master (or IP)
pveca -d ID      # delete a node
pveca -a [-h IP] # add new node to cluster
pveca -m         # force local node to become master
pveca -i         # print node info (CID NAME IP ROLE)
Software version check
pveversion
Proxmox VE version info - print version information for Proxmox VE packages.
USAGE
pveversion [--verbose]
without any argument shows the version of pve-manager, something like:
pve-manager/1.5/4660
or
pve-manager/3.0/957f0862
with -v argument it shows a list of programs versions related to pve, like:
pve-manager: 1.5-7 (pve-manager/1.5/4660)
running kernel: 2.6.18-2-pve
proxmox-ve-2.6.18: 1.5-5
pve-kernel-2.6.18-2-pve: 2.6.18-5
pve-kernel-2.6.18-1-pve: 2.6.18-4
qemu-server: 1.1-11
pve-firmware: 1.0-3
libpve-storage-perl: 1.0-10
vncterm: 0.9-2
vzctl: 3.0.23-1pve8
vzdump: 1.2-5
vzprocps: 2.0.11-1dso2
vzquota: 3.0.11-1
pve-qemu-kvm-2.6.18: 0.9.1-5
or
pve-manager: 3.0-23 (pve-manager/3.0/957f0862)
running kernel: 2.6.32-20-pve
proxmox-ve-2.6.32: 3.0-100
pve-kernel-2.6.32-20-pve: 2.6.32-100
lvm2: 2.02.95-pve3
clvm: 2.02.95-pve3
corosync-pve: 1.4.5-1
openais-pve: 1.1.4-3
libqb0: 0.11.1-2
redhat-cluster-pve: 3.2.0-2
resource-agents-pve: 3.9.2-4
fence-agents-pve: 4.0.0-1
pve-cluster: 3.0-4
qemu-server: 3.0-20
pve-firmware: 1.0-22
libpve-common-perl: 3.0-4
libpve-access-control: 3.0-4
libpve-storage-perl: 3.0-8
vncterm: 1.1-4
vzctl: 4.0-1pve3
vzprocps: 2.0.11-2
vzquota: 3.1-2
pve-qemu-kvm: 1.4-13
ksm-control-daemon: 1.1-1
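The pve-manager/&lt;version&gt;/&lt;build&gt; identifier is easy to split in scripts. A sketch using the sample string from the output above (on a real node you would substitute the output of pveversion for the literal):

```shell
# Extract the version component from the pve-manager identifier.
# The sample string is taken from the example output above.
sample='pve-manager/3.0/957f0862'
printf '%s\n' "$sample" | cut -d/ -f2
# prints: 3.0
```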
aptitude
Standard Debian package update tool.
LVM
Most of the commands in LVM are very similar to each other. Each valid command is preceded by one of the following:
Physical Volume = pv
Volume Group = vg
Logical Volume = lv
USAGE
Command suffix  LVM (lvm)  PV (pv)  VG (vg)  LV (lv)
s               No         Yes      Yes      Yes
display         No         Yes      Yes      Yes
create          No         Yes      Yes      Yes
rename          No         No       Yes      Yes
change          Yes        Yes      Yes      Yes
move            No         Yes      Yes      No
extend          No         No       Yes      Yes
reduce          No         No       Yes      Yes
resize          No         Yes      No       Yes
split           No         No       Yes      No
merge           No         No       Yes      No
convert         No         No       Yes      Yes
import          No         No       Yes      No
export          No         No       Yes      No
importclone     No         No       Yes      No
cfgbackup       No         No       Yes      No
cfgrestore      No         No       Yes      No
ck              No         Yes      Yes      No
scan            diskscan   Yes      Yes      Yes
mknodes         No         No       Yes      No
remove          No         Yes      Yes      Yes
dump            Yes        No       No       No
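The prefixes combine with the command suffixes from the table; for example, a typical create sequence looks like this (a sketch only: /dev/sdb1, vgdata and lvbackup are placeholder names, and these commands are destructive, so do not run them against a disk you care about):

```shell
pvcreate /dev/sdb1                  # "create" on a PV: initialise the partition
vgcreate vgdata /dev/sdb1           # "create" on a VG: build a volume group on it
lvcreate -n lvbackup -L 10G vgdata  # "create" on an LV: carve out a 10 GiB volume
lvdisplay /dev/vgdata/lvbackup      # "display" on an LV: inspect the result
</imports_placeholder>
```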
iSCSI
DRBD
See DRBD
Debian Appliance Builder
dab
See Debian Appliance Builder
Other useful tools
pveperf
Simple host performance test.
(from man page)
USAGE
pveperf [PATH]
DESCRIPTION
Tries to gather some CPU/hard disk performance data on the hard disk mounted at PATH (/ is used as default).
It dumps on the terminal:
CPU BOGOMIPS: bogomips sum of all CPUs
REGEX/SECOND: regular expressions per second (perl performance test), should be above 300000
HD SIZE: harddisk size
BUFFERED READS: simple HD read test. Modern HDs should reach at least 40 MB/sec
AVERAGE SEEK TIME: tests average seek time. Fast SCSI HDs reach values < 8 milliseconds. Common IDE/SATA disks get values from 15 to 20 ms.
FSYNCS/SECOND: value should be greater than 200 (you should enable "write back" cache mode on your RAID controller - needs a battery backed cache (BBWC))
DNS EXT: average time to resolve an external DNS name
DNS INT: average time to resolve a local DNS name
Note: this command may require root privileges (or sudo) to run; otherwise you get an error after the "HD SIZE" value, like: "sh: /proc/sys/vm/drop_caches: Permission denied unable to open HD at /usr/bin/pveperf line 149."
Example output
CPU BOGOMIPS:      26341.80
REGEX/SECOND:      1554770
HD SIZE:           94.49 GB (/dev/mapper/pve-root)
BUFFERED READS:    49.83 MB/sec
AVERAGE SEEK TIME: 14.16 ms
FSYNCS/SECOND:     1060.47
DNS EXT:           314.58 ms
DNS INT:           236.94 ms (mypve.com)
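Captured pveperf output can be checked against the thresholds above from a script. A sketch using the sample FSYNCS/SECOND line (the literal string stands in for real captured output, so the snippet is self-contained):

```shell
# Parse the fsync rate out of a pveperf output line and compare it
# against the suggested minimum of 200 fsyncs per second.
line='FSYNCS/SECOND:     1060.47'
fsyncs=$(printf '%s\n' "$line" | awk '{ print int($2) }')
if [ "$fsyncs" -ge 200 ]; then echo "fsync rate OK"; fi
# prints: fsync rate OK
```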
pvesubscription
For managing a node's subscription key.
Usage
To set the key use:
pvesubscription set <key>
The following updates the subscription status:
pvesubscription update -force
To print the subscription status use:
pvesubscription get
USAGE: pvesubscription <COMMAND> [ARGS] [OPTIONS]
pvesubscription get
pvesubscription set <key>
pvesubscription update [OPTIONS]
pvesubscription help [<cmd>] [OPTIONS]