
🧰 Helpful Proxmox Commands

Battle‑tested CLI commands for administering Proxmox VE: VMs (QEMU), LXC containers, storage, networking, cluster, HA, services, backups, GPU passthrough/reset, and troubleshooting.

Tip

Replace placeholders like <VMID>, <CTID>, <storage>, <node>, <pool>, and <iface> with your own values.

Warning

Commands that modify hardware, storage, networking, or cluster state can disrupt running workloads. Prefer maintenance windows, backups, and snapshots where appropriate. Many operations require root; use sudo or run as root on the PVE node.
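
For example, a minimal pre-change safety net for a single VM could look like this (a sketch assuming VMID 100 on snapshot-capable storage):

qm snapshot 100 pre-change --description "Before maintenance"
# ... perform and verify the change ...
qm delsnapshot 100 pre-change     # or roll back instead: qm rollback 100 pre-change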


🧭 Basics — system and services

Check versions and node health

pveversion -v                        # Proxmox + package versions
pvesh get /version                   # API version
hostnamectl; uptime
free -h
df -h -x tmpfs -x devtmpfs
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

Key services and quick restarts

systemctl status pveproxy pvedaemon pvestatd pve-cluster corosync pve-firewall
journalctl -u pveproxy -b -n 200 -f
journalctl -u pve-cluster -b -n 200
systemctl restart pveproxy pvedaemon   # Restarts Web UI/daemon

Stale locks

qm unlock <VMID>
pct unlock <CTID>

Task stream and per-guest logs

tail -f /var/log/pve/tasks/active
tail -f /var/log/pve/tasks/index
tail -f /var/log/pve/qemu-server/<VMID>.log
tail -f /var/log/pve/lxc/<CTID>.log

🖥️ QEMU VMs (qm)

List, inspect, start/stop

qm list
qm status <VMID>
qm config <VMID>
qm start <VMID>
qm shutdown <VMID>
qm stop <VMID>            # Hard stop
qm reset <VMID>

Console, monitor, send keys

qm terminal <VMID>        # Serial console (if configured)
qm monitor <VMID>         # QEMU monitor
qm sendkey <VMID> ctrl-alt-delete

Resources and devices

qm set <VMID> -memory 16384 -cores 8
qm set <VMID> -agent enabled=1
qm set <VMID> -net0 virtio,bridge=vmbr0
qm set <VMID> -scsi0 <storage>:vm-<VMID>-disk-0
qm resize <VMID> scsi0 +20G
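
Note that qm resize only grows the virtual disk; the partition and filesystem inside the guest still have to be extended separately. A minimal sketch for a Linux guest, assuming an ext4 root on /dev/sda1 (adjust device names to your layout):

# Inside the guest, after the resize:
growpart /dev/sda 1       # from the cloud-guest-utils package
resize2fs /dev/sda1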

Snapshots and rollback

qm snapshot <VMID> pre-upgrade --description "Before upgrade"
qm listsnapshot <VMID>
qm rollback <VMID> pre-upgrade
qm delsnapshot <VMID> pre-upgrade

Import disks and images

# Import disk image into storage, then attach it
qm importdisk <VMID> /path/to/image.qcow2 <storage>
qm set <VMID> -scsi1 <storage>:vm-<VMID>-disk-1

# Optional: convert formats
qemu-img convert -p -O qcow2 source.vmdk dest.qcow2

Live/online migration

qm migrate <VMID> <targetnode> --online
# If local disks exist:
qm migrate <VMID> <targetnode> --online --with-local-disks

📦 LXC Containers (pct)

Basics

pct list
pct config <CTID>
pct start <CTID>
pct stop <CTID>
pct reboot <CTID>
pct console <CTID>          # Attach console
pct enter <CTID>            # Enter shell
pct exec <CTID> -- bash -lc "apt update && apt -y upgrade"
pct set <CTID> -memory 4096 -cores 2

Snapshots and restore

pct snapshot <CTID> safe-point
pct listsnapshot <CTID>
pct rollback <CTID> safe-point
pct restore <CTID> /mnt/pve/<storage>/dump/vzdump-lxc-<CTID>-*.tar.zst \
  --storage <storage>

Migrate

pct migrate <CTID> <targetnode> --restart     # Running containers need restart migration; LXC has no live migration

Mount/unmount rootfs (offline maintenance)

pct mount <CTID>
# ... operate on /var/lib/lxc/<CTID>/rootfs ...
pct unmount <CTID>
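
A stopped container's volume can also be checked with the built-in fsck wrapper:

pct fsck <CTID>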

💾 Backups and Restore (vzdump, qmrestore, pct restore)

Create backups

# VM backup
vzdump <VMID> --storage <storage> --mode snapshot --compress zstd \
  --notes-template "{{node}}/{{vmid}} {{guestname}}"   # supported variables: cluster, guestname, node, vmid

# Container backup
vzdump <CTID> --storage <storage> --mode snapshot --compress zstd

List backup files

pvesm list <storage>
ls -lh /mnt/pve/<storage>/dump

Restore VM and CT

# Restore VM to new VMID
qmrestore /mnt/pve/<storage>/dump/vzdump-qemu-<OLD>-*.vma.zst <NEW_VMID> \
  --storage <storage>

# Restore Container
pct restore <NEW_CTID> \
  /mnt/pve/<storage>/dump/vzdump-lxc-<OLD>-*.tar.zst --storage <storage>

🗄️ Storage (pvesm, ZFS, LVM)

Proxmox storage CLI

pvesm status
pvesm list <storage>
pvesm nfsscan <server>
pvesm iscsiscan <server>

ZFS basics

zpool status
zpool list
zfs list -o name,used,avail,mountpoint
zpool scrub <pool>
zpool clear <pool>
# Import/export (maintenance)
zpool export <pool>
zpool import -f <pool>

LVM/LVM-thin

pvs
vgs
lvs -a -o +devices,lv_size,data_percent,metadata_percent
# Example: check thin pool usage
lvs -a -o name,vg_name,lv_size,metadata_percent,data_percent

Replication (built-in)

pvesr status
pvesr list
# Run a job immediately
pvesr run --id <jobid>
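
Jobs can also be created from the CLI; a minimal sketch, assuming VM 100 should replicate to node pve2 every 15 minutes (job IDs use the <vmid>-<jobnum> form):

pvesr create-local-job 100-0 pve2 --schedule "*/15"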

🌐 Networking and Firewall

Interfaces and bridges

ip -c a
ip -c r
bridge link show
grep -R "vmbr" /etc/network/interfaces /etc/network/interfaces.d || true

Apply interface changes (ifupdown2)

ifreload -a
# Fallback:
systemctl restart networking

Connectivity and ports

ss -tulpn | grep -E "8006|22"      # Web UI and SSH
ping -c 3 <host-or-ip>
traceroute <host-or-ip>            # apt install traceroute if missing

Firewall (PVE 8 offers an opt-in nftables backend)

pve-firewall status
pve-firewall compile
pve-firewall reload
nft list ruleset | less

Packet capture (example on vmbr0)

tcpdump -ni vmbr0 port 8006
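
To capture a single guest's traffic instead of the whole bridge, attach to its host-side interface; these are typically named tap<VMID>i<N> for VM NICs and veth<CTID>i<N> for containers (e.g. tap100i0 for net0 of VM 100):

tcpdump -ni tap<VMID>i0
tcpdump -ni veth<CTID>i0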

🧩 Cluster and Quorum (pvecm, corosync)

Status and nodes

pvecm status
pvecm nodes
corosync-quorumtool -s
systemctl status corosync pve-cluster
journalctl -u corosync -b -n 200

Quorum override for maintenance (use with care)

# Temporarily lower the expected vote count (e.g., when only one node is still online)
pvecm expected 1

Add/remove nodes

# From the new node:
pvecm add <IP-of-cluster-node>

# From a healthy cluster node:
pvecm delnode <nodename>

PMXCFS check

ls -l /etc/pve            # FUSE filesystem
getfacl /etc/pve 2>/dev/null || true

🛟 High Availability (ha-manager)

Status and configuration

ha-manager status
ha-manager config

Add/remove a VM to HA

ha-manager add vm:<VMID> --group default --state started
ha-manager remove vm:<VMID>
ha-manager set vm:<VMID> --state stopped

🔐 Certificates and Web UI

Renew/recreate local certs and restart UI

pvecm updatecerts --force
systemctl restart pveproxy

Inspect current cert

openssl x509 -in /etc/pve/local/pve-ssl.pem -noout -subject -dates
ss -tnlp | grep 8006

🧠 GPU Passthrough and Reset

Identify GPU and driver bindings

lspci -nnk | grep -iEA3 "vga|3d|display|nvidia|amd|intel"
dmesg | grep -iE "IOMMU|DMAR|VFIO|AMD-Vi"

List IOMMU groups (useful for isolation)

for g in /sys/kernel/iommu_groups/*; do
  echo "Group ${g##*/}"
  for d in "$g"/devices/*; do
    lspci -nns "${d##*/}"
  done
  echo
done

Note about escaping colons

In normal Linux shells, you do not need to escape colons in sysfs PCI paths (e.g., 0000:c1:00.0). If you prefer, escaping them with backslashes also works, but it is not required.
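
For example, both of the following address the same (hypothetical) device:

ls /sys/bus/pci/devices/0000:c1:00.0
ls /sys/bus/pci/devices/0000\:c1\:00.0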

Quick GPU device reset and PCI rescan

# Example device path: /sys/bus/pci/devices/0000:c1:00.0
echo 1 > /sys/bus/pci/devices/0000:c1:00.0/remove
echo 1 > /sys/bus/pci/rescan

Function-level reset (if supported)

echo 1 > /sys/bus/pci/devices/0000:01:00.0/reset

Safer unbind/bind to vfio-pci (host must not use the GPU)

GPU=0000:01:00.0
VENDOR_DEVICE=$(lspci -ns "$GPU" | awk '{print $3}' | tr ':' ' ')   # vendor/device IDs, e.g. "10de 2204"
echo "$VENDOR_DEVICE" > /sys/bus/pci/drivers/vfio-pci/new_id
echo -n "$GPU" > /sys/bus/pci/devices/$GPU/driver/unbind
echo -n "$GPU" > /sys/bus/pci/drivers/vfio-pci/bind

⚙️ GPU Passthrough Workarounds

Ubuntu/Nvidia gdm3 Display Fix

When passing an Nvidia GPU through to an Ubuntu VM (Desktop version), the default display manager, gdm3, often conflicts with Proxmox's virtual display (SPICE/VNC). This can result in a black screen on the virtual console after the guest drivers are installed, making it difficult to manage the VM without a physical monitor attached to the GPU.

The most reliable solution is to switch the guest VM's display manager from gdm3 to lightdm, which is more compatible with this type of setup. Execute these commands inside the Ubuntu guest VM's terminal.

# Update package list and install lightdm
sudo apt update
sudo apt install lightdm -y

# If you were not prompted to choose a default display manager during
# installation, run this command and select lightdm from the menu.
sudo dpkg-reconfigure lightdm

# Reboot the VM for the change to take full effect.
sudo reboot
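
After the reboot, you can verify that the guest actually uses the passed-through GPU (assumes the Nvidia driver is already installed in the VM):

nvidia-smi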

🧾 Logs and Troubleshooting

System and Proxmox services

journalctl -xe
journalctl -b -u pveproxy -u pvedaemon -u pvestatd -u pve-cluster \
  -u corosync -u pve-firewall --no-pager | less
dmesg -T | less

Guest-specific logs

tail -f /var/log/pve/qemu-server/<VMID>.log
tail -f /var/log/pve/lxc/<CTID>.log

Network diagnostics

ip -c a
ip -c r
nft list ruleset | less
tcpdump -ni <iface> host <ip-or-host> and port <port>

Stuck tasks and locks

ps aux | grep -E "qm .*<VMID>|vzdump|lxc"
qm unlock <VMID>
pct unlock <CTID>

⬆️ Updates and Repositories

Check repos

cat /etc/apt/sources.list
ls -1 /etc/apt/sources.list.d/
cat /etc/apt/sources.list.d/pve-enterprise.list 2>/dev/null || true
cat /etc/apt/sources.list.d/pve-no-subscription.list 2>/dev/null || true

Update safely

apt update
apt full-upgrade
pveversion -v
reboot

🔌 Proxmox API CLI (pvesh)

Quick queries

pvesh get /cluster/resources
pvesh get /nodes
pvesh get /nodes/$(hostname)/qemu
pvesh get /nodes/$(hostname)/lxc

Example: get a VM’s status via API

pvesh get /nodes/$(hostname)/qemu/<VMID>/status/current
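
pvesh can also act on the same paths (create maps to POST, set to PUT); for example, starting a VM through the API:

pvesh create /nodes/$(hostname)/qemu/<VMID>/status/start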