Self hosting
</ul>


=== Linux VM ===
* Make sure the storage uses LVM so we can extend it later.
* Install and start the SPICE and QEMU guest agents.
<syntaxhighlight lang='sh'>
sudo apt install spice-vdagent # may be installed already
sudo systemctl start spice-vdagent # needed
sudo apt install qemu-guest-agent
sudo systemctl start qemu-guest-agent
</syntaxhighlight>
=== Windows VM  ===
* [https://engineerworkshop.com/blog/how-to-unlock-a-proxmox-vm/ How to Unlock a Proxmox VM]. Then reboot pve.
** Shift + F10. Type OOBE\PASSNRO . See [https://answers.microsoft.com/en-us/insider/forum/all/set-up-windows-11-without-internet-oobebypassnro/4fc44554-b416-4ecb-8961-6f79fd55ae0f Set up Windows 11 without internet].
** For the internet connection, attach the Virtio ISO (I'm using version 0.1.240) and run "virtio-win-gt-x64". Go to Device Manager and check that the Ethernet problem is gone.
** The disk shows 48.6GB free of 63.1GB.


=== Mac VM ===
** The current IP address can be found with the '''ifconfig''' command, with ''' ipconfig getifaddr en0''', or in System Preferences -> Network.
* [https://github.com/luchina-gabriel/OSX-PROXMOX OSX-PROXMOX - Run macOS on ANY Computer - AMD & Intel]
* [https://www.youtube.com/watch?v=0mvRF4bAhHs Hackintosh Install Script For Proxmox]


=== Upgrade ===
* ZFS (Zettabyte File System): A file system developed by Sun Microsystems for use in their Solaris operating system. It is now available on many other operating systems.
* Yes, ZFS can be used without LVM. Even on a workstation, you could use ZFS to pool your disks into a single large pool of storage rather than keep them separate or rely on LVM. [https://www.howtogeek.com/272220/how-to-install-and-use-zfs-on-ubuntu-and-why-youd-want-to/ How to Install and Use ZFS on Ubuntu (and Why You’d Want To)]  
* [https://youtu.be/GoZaMgEgrHw?t=329 How to configure Proxmox storage (ZFS + RAID10)] from the video 'Before I do anything on Proxmox, I do this first...'.
* ZFS vs RAID-0
** ZFS is not like RAID-0. RAID-0 is a type of RAID that stripes data across multiple disks without any redundancy. If one disk fails, all data is lost. ZFS, on the other hand, provides '''data redundancy''' and '''checksumming''' to avoid silent data corruption.
* ZFS cons
** [https://blog.servermania.com/xfs-vs-zfs-linux-raid/ XFS vs ZFS vs Linux Raid]. ZFS is a very CPU-intensive filesystem. This can lead to slower performance on systems with limited CPU resources.
* [https://pve.proxmox.com/wiki/Installation Proxmox installation].  
** The default file system is ext4. ZFS is an alternative to ext4.  
** [https://pve.proxmox.com/wiki/ZFS_on_Linux ZFS on linux]
** The Proxmox VE installer, which partitions the local disk(s) with ext4, XFS, BTRFS (technology preview), or ZFS and installs the operating system.
** The main advantage of ZFS over ext4 is guaranteed data integrity. ZFS protects your data by enabling '''volume management''' at the filesystem level, which ext4 does not offer.
** (Video) [https://www.youtube.com/watch?v=AP61_ETd2GE Setting Up NAS Server On Proxmox]
* [https://history-computer.com/zfs-vs-ext4/ ZFS vs EXT4: Best File System for Linux and Other Operating Systems]
* RAM
[https://techviewleo.com/create-users-groups-permissions-proxmox/?expand_article=1 Create Users, Groups and Assign Permissions in Proxmox VE]


=== Live session ===
* To create a VM that is not meant to be installed to a disk, just make sure no disks have been added.
* Tested on Ubuntu 24.04 desktop.
* For the RAM,
** If I use 6144 as the minimum and 8192 as the maximum, "df -h" shows 3.7G for "/" and 3.7G for "/tmp".
** If I use 8192 as the minimum and 8192 as the maximum, "df -h" shows 3.9G for "/" and 3.9G for "/tmp".
** If I use 5120 as the minimum and 6144 as the maximum, "df -h" shows 2.4G for "/" and 2.5G for "/tmp".
* It took about 1 minute for the desktop to show up, no matter what RAM allocation is used. I can press "x" to close a window that asks a few questions. I allocated 4 vCPUs. The host CPU is an i5-8500T @ 2.1GHz.
 
== After installation ==
[https://www.youtube.com/watch?v=VAJWUZ3sTSI Don’t run Proxmox without these settings!]
 
=== Change subscription repository ===
* [https://pve.proxmox.com/wiki/Package_Repositories#sysadmin_no_subscription_repo Package Repositories]. Comment out the line in '''/etc/apt/sources.list.d/pve-enterprise.list''' and add the line '''deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription''' to the "/etc/apt/sources.list" file.
* See [https://www.virtualizationhowto.com/2022/08/proxmox-update-no-subscription-repository-configuration/ Proxmox Update No Subscription Repository Configuration] for Proxmox 8.
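The two edits above boil down to a couple of commands; a minimal sketch for PVE 8 on Debian bookworm (run as root and review before applying; the paths and repo line come from the wiki page linked above):

```sh
# Comment out the enterprise repository:
sed -i 's/^deb/# deb/' /etc/apt/sources.list.d/pve-enterprise.list
# Add the no-subscription repository:
echo 'deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription' \
  >> /etc/apt/sources.list
apt update
```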


=== Removing ProxMox Subscription Notice ===
* [https://www.reddit.com/r/Proxmox/comments/tgojp1/removing_proxmox_subscription_notice/ Removing ProxMox Subscription Notice].
* [https://dannyda.com/2020/05/17/how-to-remove-you-do-not-have-a-valid-subscription-for-this-server-from-proxmox-virtual-environment-6-1-2-proxmox-ve-6-1-2-pve-6-1-2/ How to: Remove “You do not have a valid subscription for this server….” from Proxmox Virtual Environment/Proxmox VE (PVE 6.1 to 7.1 and up)] works. I find I need to use '''Ctrl + F5''' to force a page to reload, ignoring the cache files for that page. We may need to restart the browser too. On macOS, we can use '''Shift + Cmd + R'''. No reboot is needed!
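One widely circulated approach from the linked posts is to patch the notice check in the widget toolkit and restart the web proxy. This is a hedged sketch: the file path and the exact regex vary between PVE releases, and an update to the package will undo the change, so keep a backup:

```sh
# Back up the file, patch the subscription-notice check, restart the proxy.
cp /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js{,.bak}
sed -Ezi "s/(Ext.Msg.show\(\{\s+title: gettext\('No valid sub)/void\(\{ \/\/\1/g" \
  /usr/share/javascript/proxmox-widget-toolkit/proxmoxlib.js
systemctl restart pveproxy.service
# Then force-reload the browser (Ctrl + F5, or Shift + Cmd + R on macOS).
```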


== .bashrc and ls colors ==
* Edit .bashrc with nano and uncomment the 5 color-related lines so the "ls" output has colors.
* Edit .bashrc with nano and add the following. It changes the "ls" directory color to cyan.
<pre>
export LS_COLORS="di=1;36"
</pre>


== Login timeout ==
** [https://en.wikipedia.org/wiki/Linux_PAM Linux PAM Standard Authentication]. [https://support.posit.co/hc/en-us/articles/221303588-What-is-my-username-on-my-RStudio-Workbench-RStudio-Server-installation- What is my username on my RStudio Workbench / RStudio Server installation?].
* How to save the PVE web login password in Firefox/Chrome?
* [https://pve.proxmox.com/wiki/Proxmox_VE_Mobile Mobile browser]


== SPICE ==
=== Display, copy-and-paste ===
<ul>
<li>[https://novnc.com/noVNC/ noVNC] is still a type of VNC.
* noVNC is an open-source VNC client that '''runs well in any modern browser''', including mobile browsers (iOS and Android). It is both a VNC client JavaScript library and an application built on top of that library. noVNC follows the standard VNC protocol, but unlike other VNC clients it does require WebSockets support.
* [https://www.kali.org/docs/general-use/novnc-kali-in-browser/ Kali In The Browser (noVNC)]
<li>[https://pve.proxmox.com/wiki/VNC_Client_Access VNC Client Access]
<li>By default, Proxmox assigns a standard '''VGA''' device for BIOS-based virtual machines and a '''QXL''' device for UEFI-based virtual machines.
<li>For Windows OS, we can use the default (noVNC).
* QXL: [https://linuxhint.com/install_virtio_drivers_kvm_qemu_windows_vm/ How to Install virtio Drivers on KVM-QEMU Windows Virtual Machines]
<li>For Linux OS, SPICE is better (the clipboard in noVNC does not work), and the screen can be scaled as we want. Copy and paste still did not work after I installed '''spice-vdagent'''. [https://serverfault.com/a/874316 virt-manager Spice copy paste doesn't work]. But copy-and-paste works in a Debian VM launched by [https://virt-manager.org/ Virtual Machine Manager] (the menu bars are actually different: one is called 'Remote Viewer'/remote-viewer, while the embedded viewer from ''/usr/bin/qemu-system-x86_64'' is called 'QEMU/KVM').
* (Debian 11 and antiX VMs) When I use "ps -ef | grep spice", I get '''/usr/sbin/spice-vdagentd''' & '''/usr/bin/spice-vdagent''', as shown in the screenshot [https://www.linux-kvm.org/page/SPICE here], for the VM launched by QEMU/KVM. But I did not see '''/usr/bin/spice-vdagent''' in the VM launched by Proxmox.
* Because of the hint above, I found a solution [https://github.com/biglinux/spice-vdagent-autostart-kde/blob/main/spice-vdagent-autostart-kde.desktop here]. After I run '''/usr/bin/spice-vdagent''', copy-and-paste works!
* (Fedora 35) Copy-and-paste works out of the box (vdagentd & vdagent are automatically running in the background). Maybe it's because Fedora is a Red Hat-based Linux OS.
* How to add spice-vdagent to a VirtIO VM?

<li>Summary,
<syntaxhighlight lang='sh'>
sudo apt install spice-vdagent
sudo systemctl start spice-vdagent
</syntaxhighlight>
<li>Comparison
{| class="wikitable"
|-
| It requires more services than noVNC.
|}
</ul>


=== Sound/audio ===
* On Ubuntu VM,
<pre>
sudo apt-get install qemu-guest-agent
sudo systemctl start qemu-guest-agent
</pre>
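Once the agent is running in the guest, it can be checked from the PVE host. VM ID 100 is a placeholder here, and the QEMU Guest Agent option must also be enabled under the VM's Options tab:

```sh
# Exits with status 0 when the agent inside the guest answers:
qm agent 100 ping
# The agent also lets the host query the guest, e.g. its IP addresses:
qm agent 100 network-get-interfaces
```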
<li>For Windows VM,
* [https://askubuntu.com/a/860889 Read-only file system on proxmox server]. The filesystem will usually go into read-only while the system is running if there is a filesystem consistency issue. This is specified in fstab as errors=remount-ro and will occur when a FS access fails.
* '''journalctl -b''' showed EXT4-fs error.
== Memory usage ==
* [https://forum.proxmox.com/threads/pve-showing-high-memory-usage-but-vm-is-not.113463/ PVE showing high memory usage but VM is not]. It's just the cache. Please look at the yellow bar in htop.
* Reduce the minimum memory in Hardware settings.
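The same distinction the yellow htop bar makes is visible with free: page cache counts toward the usage some dashboards report, but it is reclaimable:

```sh
# "buff/cache" is reclaimable page cache; "available" estimates what
# applications can still allocate without swapping.
free -h
# To drop the page cache for a test (root only; normally unnecessary):
#   sync && echo 3 > /proc/sys/vm/drop_caches
```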


== Network ==
<li>This LXC looks very much like a server VM not Docker (we can also install desktop environment in an LXC) </li>
<li>[https://www.reddit.com/r/Proxmox/comments/u9ru5c/convert_docker_image_to_proxmox_lxc/ Convert Docker image to Proxmox lxc] </li>
<li>[https://virtualizeeverything.com/2021/12/08/using-a-desktop-with-a-lxc-proxmox-7/ Using a Desktop with a LXC Proxmox 7], [https://www.reddit.com/r/Proxmox/comments/w49z46/is_there_a_gui_for_ubuntu_desktop_in_a_container/ Is there a GUI for Ubuntu Desktop in a container?] </li>
<li>New users (e.g. brian)
</pre>
</ul>
=== Proxmox help scripts ===
* [https://tteck.github.io/Proxmox/ Proxmox Help Scripts]
* [https://www.youtube.com/watch?v=kcpu4z5eSEU Proxmox Automation with Proxmox Helper Scripts!]
* [https://www.youtube.com/watch?v=TJ8-oKRrwjE 40+ Scripts To Streamline Your Proxmox Homelab]


=== LXC images ===
* https://images.linuxcontainers.org/images/. See [https://youtu.be/oe1_JVl63a0?t=463 Installing Proxmox 8.1 on Raspberry Pi 5]
* [https://forum.proxmox.com/threads/download-templates-lxc-containers.57894/ Download templates LXC containers]. Rename '''rootfs.tar.xz''' to the name you want for the template, e.g. ''debian_bookworm_20230714.tar.xz''. You can then use it as a container template as normal.
* [https://discuss.linuxcontainers.org/t/variants-cloud-vs-default/5607 Variants, cloud vs default] images.
* [https://discuss.linuxcontainers.org/t/desktop-images-for-lxd-vm/11723 Desktop images for LXD VM]
* Desktop with LXC:
** Installing a desktop environment in an Ubuntu LXC container can work, but there are some considerations to keep in mind. LXC containers are designed to be lightweight and do not include all the components of a full virtual machine, which can affect how a desktop environment operates within them.
** You may encounter issues with services that expect to interact with hardware directly, as containers are more restricted than virtual machines.
* To use images from https://images.linuxcontainers.org/
** Install LXC/LXD
** Initialize LXD: '''lxd init'''
** Use the '''lxc''' command to download the desired desktop image; e.g., '''lxc launch images:ubuntu/20.04/desktop --vm'''
** Launch the container
** Access the container
** Install a desktop environment
** Set up remote access
** Connect to the desktop environment
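The steps above might look like this in practice. This is a hedged sketch: the instance name, the snap install path, and the availability of the desktop image alias on the images: remote are assumptions, and desktop images run as VMs (hence --vm):

```sh
sudo snap install lxd        # or: sudo apt install lxd, depending on the distro
lxd init --auto              # quick defaults; run plain "lxd init" to customize
# Launch the desktop image as a VM, then find its IP and open a shell in it:
lxc launch images:ubuntu/20.04/desktop ubuntu-desktop --vm
lxc list                     # note the instance's IP address for remote access
lxc exec ubuntu-desktop -- bash
```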


=== Pi hole ===
pihole -a -p # change to a simpler password
</pre>
== QCOW2 ==
* [https://ostechnix.com/import-qcow2-into-proxmox/ How To Import QCOW2 Image Into Proxmox]
* [https://en.wikipedia.org/wiki/Qcow qcow]
* A QCOW2 file, such as “disks.qcow2”, is a disk image saved in the second version of the QEMU Copy On Write format, used by [https://wiki.debian.org/QEMU QEMU] virtualization software; it is not typically used directly for LXC containers in Proxmox.
* DietPi. See Unraid case for the kernel panic issue.
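A minimal import sketch, following the ostechnix article above. The VM ID (100), the storage name (local-lvm), and the resulting volume name are assumptions; check the VM's Hardware tab for the actual name of the imported disk:

```sh
# Import the image as an unused disk of VM 100 on storage local-lvm:
qm importdisk 100 disks.qcow2 local-lvm
# Attach it (the imported disk shows up as "unused0"; the volume name
# below is typical but should be verified before running):
qm set 100 --scsi0 local-lvm:vm-100-disk-0
```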


== SMART and wearout ==


=== Passthrough a USB or a physical drive to VM ===
* '''Concept''': if we pass through a USB disk to a VM from Proxmox, does it mean Proxmox won't see the USB disk and only the VM can see and use it?
** Answer: Yes, that's correct. When you pass through a USB disk to a virtual machine (VM) from Proxmox, the USB disk becomes directly accessible to the VM and not to the Proxmox host (including '''fdisk''' or '''lsblk'''). The USB device is assigned directly to the VM, making it appear as if the device is connected to the VM rather than the host.
* Proxmox documentation
** [https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines USB Devices in Virtual Machines]
*** [https://pve.proxmox.com/wiki/USB_Devices_in_Virtual_Machines#Reassign_to_Host Reassign to Host].
***# Identify the VM and USB device,
***# Remove the USB device from the VM's configuration. In the web interface, navigate to the VM's Hardware tab, select the USB device, and click "Remove". From the command line, edit the VM's configuration file in /etc/pve/qemu-server/ (named after the VM's ID, e.g. 100.conf for VM 100) and remove the line that corresponds to the USB device.
***# Restart the VM.
** [https://pve.proxmox.com/wiki/Passthrough_Physical_Disk_to_Virtual_Machine_(VM) Passthrough Physical Disk to Virtual Machine (VM)]
 
* GUI [https://dannyda.com/2020/08/26/how-to-passthrough-usb-devices-in-proxmox-ve-pve-6-2-easy-and-quick/ How to: Passthrough USB devices in Proxmox VE (PVE) 6.2 (Easiest and quick)].  
** '''My example''': Datacenter -> node name -> VM -> hardware -> Add -> USB Device -> Select the correct USB device to passthrough ('''lsusb''' command shows my USB storage is ''Bus 001 Device 002: ID '''152d:0576''' JMicron Technology Corp.'' so I choose the '''Use USB Vendor/Device ID''' option).  
** Now when I go to the OpenMediaVault - Storage - Disks, I would be able to see the USB disk (Done). This is very easy compared to LXC case. To remove the USB drive, we just need to remove the USB device from the VM.
** No '''qm set''' command is needed.
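Although no '''qm set''' command is needed when using the GUI, the CLI equivalent of the same passthrough looks like this (VM ID 100 is a placeholder; the vendor:device ID comes from lsusb):

```sh
lsusb                             # e.g. "ID 152d:0576 JMicron Technology Corp."
qm set 100 --usb0 host=152d:0576  # pass through by vendor/device ID
qm set 100 --delete usb0          # later: detach the device again
```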
 
* [https://www.wundertech.net/how-to-pass-through-usb-devices-in-proxmox/ How to Pass Through USB Devices in Proxmox]
** The screenshot of the output of '''lsusb''' is similar to what I saw on my PVE.


* Command Line [https://poweradm.com/passthrough-disk-vm-proxmox/ Passthrough Physical Disk or USB to VM on Proxmox VE]
* Command Line [https://virtualizeeverything.com/2022/05/18/passing-usb-storage-drive-to-proxmox-lxc/ Passing USB Storage Drive to Proxmox LXC]
* [https://forum.proxmox.com/threads/how-to-safely-remove-a-usb-hdd-spindown.103979/ How to 'safely' remove a USB HDD (spindown)]. OMV case.
* '''lsblk''' has a column '''MOUNTPOINTS''' showing if a disk is mounted or not.
* Assume I insert a second USB drive and the drive has not been used. If I just pull out the 2nd USB drive, the 1st USB drive is affected and no longer seen by the '''lsusb''' command on PVE. So, to safely remove the 2nd USB drive, I need to use the '''eject /dev/sdb''' command, where "/dev/sdb" is determined with the '''fdisk -l''' command.


=== Upgrade storage ===
* https://pve.proxmox.com/wiki/Storage
* [https://forum.proxmox.com/threads/upgrade-data-disk.113824 Upgrade data disk]
*# stop all guests
*# Backup all guests to NAS, USB Disk using Vzdump or send it to a PBS
*# remove the Storage at "Datacenter -> Storage"
*# shutdown server and replace disk
*# wipe new disk (can be done using webUI since PVE 7.X, otherwise do it manually using CLI) at "YourNode -> Disks -> select new disk -> wipe"
*# use the webUI to create a new VM/LXC storage (LVM-Thin, ZFS or whatever you like) at "YourNode -> Disks -> LVM-Thin/ZFS -> Create: Thinpool/ZFSpool"
*# restore the backups to the new VM/LXC storage
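For a single VM, the backup and restore steps above can be sketched as follows. The VM ID and storage names are assumptions, and the archive name pattern depends on the compression settings:

```sh
qm shutdown 100
vzdump 100 --storage nas-backup --mode stopped   # back up to a NAS storage
# ...replace the disk and recreate the VM/LXC storage in the webUI...
qmrestore /mnt/pve/nas-backup/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 \
  --storage new-thinpool                         # restore onto the new storage
```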
 
* [https://opentechtips.com/how-to-add-extra-storage-to-proxmox/ How to Add Extra Storage to Proxmox]
* [https://www.reddit.com/r/Proxmox/comments/13fndny/resizing_after_copy_to_bigger_ssd_my_experience/ Resizing after copy to bigger SSD my experience]


=== Shared storage ===
* The last option '''x-systemd.device-timeout=10''' sets the timeout for the device to 10 seconds. If the device is not available within this time when you or a process attempt to mount it manually, systemd will stop trying to mount it.  
* The 0 0 at the end of the line are two different options: The first 0 refers to dump, a backup utility. By setting it to 0, you’re telling dump to ignore this file system. The second 0 is for fsck, the file system check utility. This 0 tells fsck not to check this file system at boot time.
<li>If I just run '''mount -a''', it does not show any errors. But the network drive is still not available. If I just run the "mount -t cifs" command in the shell, I got the following message.
<pre>
mount: (hint) your fstab has been modified, but systemd still uses
      the old version; use 'systemctl daemon-reload' to reload.
</pre>
<li>After the Samba network share is available in PVE, we can add it to the web interface
* Datacenter (not hostname)
* Storage -> Add '''Directory'''. The ID will be shown on the PVE LHS panel. The "Directory" refers to the directory mounted on PVE, e.g., /media/share.
</ul>
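The same "Add Directory" step can be done from the shell. The storage ID "share" is an assumption matching the /media/share mount point used above:

```sh
pvesm add dir share --path /media/share   # appears on the PVE LHS panel as "share"
pvesm status                              # verify the new storage is active
```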


=== Mount a Network Share in a Linux Container ===
* [https://steamforge.net/wiki/index.php/How_to_mount_a_Network_Share_in_a_Linux_Container_under_Proxmox How to mount a Network Share in a Linux Container under Proxmox]
== Ceph storage ==
* [https://forum.proxmox.com/threads/what-is-ceph.17748/ What is CEPH???] Ceph is a storage CLUSTERING solution.
** You can add any number of disks on any number of machines into one big storage cluster. Then you set up the configuration for Ceph, most notably the number of copies of each object.
** If you set this to 3, for instance, the cluster will always keep 3 copies of all the objects this setting applies to. Ceph is also self-managing: it will automatically try to distribute these copies over 3 physical machines (if possible), onto 3 separate disks.
** When any disk or machine dies, Ceph will immediately use the 2 remaining copies of the affected objects and create a 3rd copy in the cluster.
** This eliminates the need to manually restock a spare disk as in a conventional RAID setup, as long as you have enough total storage to fit all the objects 3 times.
* [https://pve.proxmox.com/pve-docs/chapter-pveceph.html Deploy Hyper-Converged Ceph Cluster]
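From the linked chapter, a minimal hyper-converged setup on one node looks roughly like this. The dedicated cluster network and the OSD disk are assumptions; monitor and OSD creation are repeated on each node of the cluster:

```sh
pveceph install                        # install the Ceph packages on this node
pveceph init --network 10.10.10.0/24   # dedicated Ceph network (assumption)
pveceph mon create                     # create a monitor on this node
pveceph osd create /dev/sdb            # turn a spare disk into an OSD
```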


== Plex ==
== Proxmox increase vm disk size ==
<ul>
<li>Important: Make sure '''LVM''' was selected when we installed Linux. Otherwise, we need to boot from a LIVE CD.
* Shut down the VM from Proxmox.
* In Proxmox, add a CD-ROM drive to the VM and attach a Debian or other Linux live ISO.
* Start the VM and boot from the live CD.
* Once booted, open a terminal
<pre>
sudo parted /dev/sda
print
# This will show you the current partition layout
resizepart 2 100%
# This resizes partition 2 to use all available space
print
# Verify the new size
quit
# After exiting parted, run
sudo partprobe /dev/sda
</pre>
<li>https://pve.proxmox.com/wiki/Resize_disks
<li>[https://www.wundertech.net/how-to-increase-vm-disk-size-in-proxmox/ How to Increase VM Disk Size in Proxmox]
<li>[https://dixmata.com/resize-disk-vm-proxmox/ Proxmox Resize Disk VM / Extend Disk VM LVM]
<li>[https://unix.stackexchange.com/a/697341 Expand logical volume - Ubuntu on Proxmox]. It works when I increase my Ubuntu22.04 from 16GB to 32GB.  
* Step 1: Work in PVE. This can be done through Proxmox UI (Hardware -> Disk Action -> Resize disk -> change 0 to 16 for example if we want to increase size by 16GB).  
<syntaxhighlight lang='sh'>
# qm resize <vmid> <disk> <size>  
# qm resize 102 scsi0 +16G
</syntaxhighlight>
* Step 2: Work in the VM. PS: for some reason, running '''lvextend''' did not show the filesystem as extended. Only after I used '''gparted''' to extend the partition did '''lvextend''' show the filesystem with the new block count.
<syntaxhighlight lang='sh'>
$ lsblk  #  or df -h. We can see the size of sda is larger than the root partition.


# Make LVM aware of any changes in the size of the underlying partition
# (in this case, /dev/sda3) that contains the physical volume.
# LVM scans the specified device (/dev/sda3) to determine its current size.
# If the size of the partition has changed (usually increased), LVM updates its metadata to reflect the new size of the physical volume.
$ sudo pvresize /dev/sda3
   Physical volume "/dev/sda3" changed
   1 physical volume(s) resized or updated / 0 physical volume(s) not resized
$ sudo fdisk -l  # /dev/sda3 now shows 30GB


$ df -h      # /dev/mapper/ubuntu--vg-ubuntu--lv is still around 16GB, not changed yet
/dev/mapper/ubuntu--vg-ubuntu--lv  15G  12G  2.4G  84% /


# Extend LV to use up all space from VG
$ sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
   Size of logical volume ubuntu-vg/ubuntu-lv changed from <15.00 GiB (3839 extents) to <30.00 GiB (7679 extents).
Filesystem                        Size  Used Avail Use% Mounted on
/dev/mapper/ubuntu--vg-ubuntu--lv  30G  12G  17G  42% /
</syntaxhighlight>
</ul>
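The guest-side steps above can be collected into one script. This is a minimal sketch, not the exact procedure from the links: the device (/dev/sda3) and LV path are taken from the example output, and resize2fs (the filesystem-grow step, which the note above ended up doing via gparted) is an assumption for ext4. It only prints the commands unless DRY_RUN=0.

```shell
#!/bin/sh
# Guest-side resize steps as one sketch. Device and LV names are assumptions
# taken from the example output; verify yours with lsblk / lvs first.
DRY_RUN=${DRY_RUN:-1}   # default: only print what would run
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "would run: $*"; else "$@"; fi
}
run pvresize /dev/sda3                                       # grow the PV
run lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv  # grow the LV
run resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv              # grow the ext4 filesystem
```

Set DRY_RUN=0 only inside the VM after checking the names with lsblk and lvs.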


* The backup file can be seen on the GUI under $STORAGE (pve) -> Backups where $STORAGE is the storage name (e.g. local). From there, it has a 'Restore' button where we can restore it with an option to enter a new VM ID.
* If the backup file is saved in local (pve), the backup file can also be seen under the VM|Backup menu.
* After restoring, the new VM has a new ID but keeps the original VM name (so only the ID distinguishes the VMs). Therefore, if a VM uses a static IP, it is better to shut down the old one before starting the new VM.
* If we remove/delete the restored VM, the backup file is not affected (not deleted).
* It seems '''backup + restore''' = '''clone'''.
* The backup is quick; the "top" command shows '''zstd''' running during the backup.
</li>
<li>[https://forum.proxmox.com/threads/lxc-container-backup-suspend-mode-exit-code-23.93497/ lxc container backup suspend mode exit code 23]. [https://www.reddit.com/r/Proxmox/comments/vaveq1/lxc_backup_failed_code_23/ LXC backup failed - code 23]. Change the Backup Mode from snapshot to '''stop'''. The LXC will auto-restart after the backup is finished.
</ul>
=== The current guest configuration does not support taking new snapshots ===
* [https://dannyda.com/2021/10/19/how-to-fix-workaround-proxmox-ve-pve-with-tpm-2-0-cant-have-snapshots-the-current-guest-configuration-does-not-support-taking-new-snapshots/ How to Fix (Workaround) Proxmox VE (PVE) with TPM 2.0 can’t have snapshots “The current guest configuration does not support taking new snapshots”]
* [https://4sysops.com/archives/snapshots-in-proxmox-ve/ Snapshots in Proxmox VE]
=== Change backup file names ===
* [https://www.reddit.com/r/Proxmox/comments/13r1aae/how_do_you_set_the_vzdump_output_file_name_to/ How do you set the vzdump output file name to include guestname in the name?]
* The backup VM has a filename '''vzdump-qemu-$ID-$Date-$Time.vma.zst''' (compression by default). If we select the file, we can click the 'Restore' button to restore the VM.
* If we back up a container, the backup file name has a format '''vzdump-lxc-$ID-$Date-$Time.tar.zst'''.
=== Restore error: data corruption ===
<ul>
<li>An example log from a failed restore:
{{Pre}}
restore vma archive: zstd -q -d -c /media/wd2t/dump/vzdump-qemu-201-2024_06_29-08_59_07.vma.zst | vma extract -v -r /var/tmp/vzdumptmp552388.fifo - /var/tmp/vzdumptmp552388
...
progress 1% (read 343605248 bytes, duration 2 sec)
progress 2% (read 687210496 bytes, duration 3 sec)
progress 3% (read 1030815744 bytes, duration 4 sec)
_29-08_59_07.vma.zst : Decoding error (36) : Data corruption detected
vma: restore failed - short vma extent (3282432 < 3797504)
</pre>
<li>Running the zstd command manually reproduces the error:
{{Pre}}
# zstd -q -d -c /media/wd2t/dump/vzdump-qemu-201-2024_06_29-08_59_07.vma.zst > /var/tmp/vzdump-qemu-201-2024_06_29-08_59_07.vma
_29-08_59_07.vma.zst : Decoding error (36) : Data corruption detected
</pre>
<li>How does zstd detect data corruption?
* File Format Verification: Zstandard begins by checking the magic number of the file, which identifies it as a Zstandard compressed file. If this initial identifier is missing or incorrect, Zstandard will immediately flag the file as corrupted.
* Frame Header Check: The decompression process involves reading the frame headers, which contain metadata about the compressed data, such as block sizes and checksums. If these headers are malformed or inconsistent, Zstandard will detect corruption.
* Checksum Verification: Zstandard uses checksums to verify the integrity of each block of data. When a file is compressed, Zstandard computes and stores a checksum for each block. During decompression, it recomputes the checksum for each decompressed block and compares it to the stored value. If they don't match, it indicates data corruption.
* Block Integrity: Each block of compressed data is decompressed independently. If any block is incomplete, truncated, or contains unexpected data patterns that do not conform to the expected compression format, Zstandard will detect this as corruption.
* End of Stream Marker: Zstandard expects a specific marker at the end of the stream to signify the end of the compressed data. If this marker is missing or incorrect, it indicates that the file may be incomplete or corrupted.
<li>Example workflow to verify integrity
<pre>
sha256sum original_file.vma
zstd original_file.vma
zstd -t original_file.vma.zst  # 't'est integrity
# If there are no errors, the file should be intact.
</pre>
</ul>
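The per-block checksum idea above can be reproduced with plain coreutils, no zstd required: store a checksum, flip one byte, and the recomputed checksum no longer matches, which is exactly how a corrupted block gets flagged. The file name and contents below are made up for the demo.

```shell
# Toy demo of checksum-based corruption detection (sha256 stands in for
# zstd's per-block checksums).
f=$(mktemp)
printf 'backup payload' > "$f"
sum_stored=$(sha256sum "$f" | cut -d' ' -f1)   # checksum at "compress" time

printf 'Xackup payload' > "$f"                 # simulate one corrupted byte
sum_now=$(sha256sum "$f" | cut -d' ' -f1)      # checksum at "restore" time

if [ "$sum_stored" != "$sum_now" ]; then
  echo "corruption detected"
fi
rm -f "$f"
```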
=== VM locked after I stopped the backup ===
[https://forum.proxmox.com/threads/vm-locked-after-failed-backup-cant-unlock.9099/ vm locked after failed backup, can't unlock]
<pre>
qm unlock <vmid>
qm start <vmid>
</pre>
I did not try the above commands; after a reboot, the lock disappeared.


=== rclone ===
=== High Availability ===
[https://youtu.be/1nEs1ZvGbTM Proxmox VE Full Course: Class 16 - High Availability]
== NVIDIA GPU drivers ==
[https://linuxhint.com/install-official-nvidia-gpu-drivers-proxmox-ve-8/ How to Install the Official NVIDIA GPU Drivers on Proxmox VE 8]


== USB passthrough ==
* [https://manjaro.site/how-to-passthrough-usb-disk-to-a-virtual-machine-in-proxmox-6-2/ How to Passthrough USB Disk to a Virtual Machine in Proxmox 6.2], [https://youtu.be/XQOV70JW2zE Adding USB Devices to Proxmox VM] (video). This assumes the USB device is on the proxmox host.
* [https://www.reddit.com/r/Proxmox/comments/13adiiv/share_ext_ssd_for_samba_and_vms/ Share ext SSD for samba and VMs]
* (video) [https://youtu.be/VfGnAGT8eRI Adding USB Redirection to the Raspberry Pi + Proxmox Thin Client]. The USB device is on the client.
** [https://youtu.be/I5zA1lU5Tw0 Converting Any USB Device to A Wireless USB], [https://www.virtualhere.com/home VirtualHere] ($49, locked to one device). The server (where USB devices are plugged in) can be anywhere.
* My example
** If the USB is a '''game controller''', enter "game controller" in the Taskbar search box and open "Set up USB game controllers" from the Control Panel; it should show the controller's name. Open Edge and go to https://gamepadtester.net to test each button of the controller.
** When we are done with the USB device, shut down the VM first, then go to Hardware in Proxmox and remove the USB item.
* Unraid.
** [https://pve.proxmox.com/wiki/Qemu/KVM_Virtual_Machines#qm_virtual_machines_settings Virtual Machines Settings]
** [https://simplyexplained.com/blog/howto-virtualize-unraid-on-proxmox-host/ Howto Virtualize Unraid on a Proxmox host]. It basically passes through the USB drive to the Proxmox VM. However, it includes a solution for resolving the kernel panic error.
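After the GUI steps, a passed-through device appears as a usbN line in /etc/pve/qemu-server/<vmid>.conf. A hypothetical fragment (the vendor:product ID is a placeholder; find the real one with lsusb on the host):

```
# one line per passed-through device; usb3=1 exposes it on the USB3 bus
usb0: host=046d:c21d,usb3=1
```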


== Thin client ==
== Android app ==
[https://play.google.com/store/apps/details?id=com.proxmox.app.pve_flutter_frontend Proxmox Virtual Environment]
== Nested virtualization==
[https://www.xda-developers.com/why-you-should-set-up-nested-virtualization/ 3 reasons why you should set up nested virtualization on your home lab]


== Android emulator and nested virtualization ==


== OpenWRT router ==
[https://m.youtube.com/watch?v=3mPbrunpjpk Must-Have OpenWrt Router Setup For Your Proxmox]
* See [[Dual_boot#Proxmox|Netboot]].
* [https://openwrt.org/toh/views/toh_available_16128_ax-wifi Table of Hardware: Ideal for OpenWrt + Wifi6 (802.11ax) supported]
* [https://slickdeals.net/f/17687091-gl-inet-gl-mt3000-beryl-ax-pocket-sized-wi-fi-6-wireless-travel-gigabit-router-73-84-w-free-shipping?src=SiteSearchV2Algo1 GL.iNet GL-MT3000]
* [https://liliputing.com/openwrt-one-wifi-6-router-is-now-available-for-89/ OpenWrt One WiFi 6 router samples are now available for $89]
** [https://docs.banana-pi.org/en/OpenWRT-One/BananaPi_OpenWRT-One Banana Pi OpenWrt One Router] page


== Error 401: no ticket ==
= Proxmox Backup Server/PBS =
* [https://ostechnix.com/install-proxmox-backup-server/ How To Install Proxmox Backup Server Step by Step]
* [https://www.youtube.com/watch?v=wyTuuMVA0Gs Install PBS in VM] (video)
* [https://ostechnix.com/getting-started-with-proxmox-backup-server/ Getting Started With Proxmox Backup Server]
* [https://www.reddit.com/r/Proxmox/comments/12zumo6/dumb_question_can_pbs_be_used_as_standalone/ Can PBS be used as standalone backup server with no integration to proxmox ve?]

Latest revision as of 09:37, 30 October 2024

Resource

Proxmox Virtual Environment

Set up

Upgrade

Install on Debian

Installing Proxmox VE 7.x on Debian Bullseye for custom partition layout (video)

Cheat sheet

https://github.com/vzamora/Proxmox-Cheatsheet

SSD/HDD choices

Home Server

My Proxmox Home Server Walk-Through

ZFS and RAID

  • ZFS (Zettabyte File System): A file system developed by Sun Microsystems for use in their Solaris operating system. It is now available on many other operating systems.
  • Yes, ZFS can be used without LVM. Even on a workstation, you could use ZFS to pool your disks into a single large pool of storage rather than keep them separate or rely on LVM. How to Install and Use ZFS on Ubuntu (and Why You’d Want To)
  • ZFS vs RAID-0
    • ZFS is not like RAID-0. RAID-0 is a type of RAID that stripes data across multiple disks without any redundancy. If one disk fails, all data is lost. ZFS, on the other hand, provides data redundancy and checksumming to avoid silent data corruption.
  • ZFS cons
    • XFS vs ZFS vs Linux Raid. ZFS is a very CPU-intensive filesystem. This can lead to slower performance on systems with limited CPU resources.
  • Proxmox installation.
    • The default file system is ext4. ZFS is an alternative to ext4.
    • ZFS on linux
    • The Proxmox VE installer, which partitions the local disk(s) with ext4, XFS, BTRFS (technology preview), or ZFS and installs the operating system.
    • The main advantage of ZFS over EXT4 is guaranteed data integrity. ZFS protects your data by enabling volume management at the filesystem level; EXT4 does not offer volume management at the filesystem level.
    • (Video) Setting Up NAS Server On Proxmox

Users, groups

Create Users, Groups and Assign Permissions in Proxmox VE

Live session

  • To create a VM that is not meant to be installed to a disk, just make sure no disks have been added.
  • Tested on Ubuntu 24.04 desktop.
  • For the RAM,
    • If I use 6144 as minimum and 8192 as max. "df -h" shows 3.7G as "/" and 3.7G as "/tmp".
    • If I use 8192 as minimum and 8192 as max. "df -h" shows 3.9G as "/" and 3.9G as "/tmp".
    • If I use 5120 as minimum and 6144 as max. "df -h" shows 2.4G as "/" and 2.5G as "/tmp".
  • It took about 1 minute for the desktop to appear regardless of the RAM allocation. I can press "x" to close a window that asks a few questions. I allocated 4 vCPUs. The host CPU is an i5-8500T @ 2.1GHz.

After installation

Don’t run Proxmox without these settings!

Change subscription repository

Removing ProxMox Subscription Notice
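Removing the notice is usually paired with pointing APT at the no-subscription repository. A sketch, assuming PVE 8 on Debian bookworm (adjust the codename to your release); the enterprise list stays in place but commented out:

```
# /etc/apt/sources.list.d/pve-enterprise.list — comment out the enterprise repo
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

# /etc/apt/sources.list.d/pve-no-subscription.list — add the free repo
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
```

Run apt update afterwards.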

.bashrc and ls colors

  • nano .bashrc and uncomment 5 lines so "ls" output has colors.
  • nano .bashrc and add the following. It will change the "ls" directory color to CYAN.
export LS_COLORS="di=1;36"

Login timeout

Proxmox GUI Session Timeout. The login ticket is valid for 2 hours and gets refreshed every 15 minutes.

Login in browser

SPICE

Display, copy-and-paste

  • noVNC is still a type of VNC.
    • noVNC is an open source VNC client that runs well in any modern browser including mobile browsers (iOS and Android). It is both a VNC client JavaScript library as well as an application built on top of that library. noVNC follows the standard VNC protocol, but unlike other VNC clients it does require WebSockets support.
    • Kali In The Browser (noVNC)
  • VNC Client Access
  • By default, Proxmox assigns a standard VGA device for BIOS-based virtual machines and a QXL device for UEFI-based virtual machines.
  • For Windows OS, we can use the default (noVNC)
  • For Linux OS, SPICE is better (the clipboard in noVNC does not work), and the screen can be scaled as we want. Copy-and-paste still did not work after I installed spice-vdagent (see virt-manager Spice copy paste doesn't work). But copy-and-paste does work in a Debian VM launched by Virtual Machine Manager (the menu bars differ: one is called 'Remote Viewer'/remote-viewer, while the embedded viewer from /usr/bin/qemu-system-x86_64 is called 'QEMU/KVM').
    • (Debian11 and antiX VMs) When I use "ps -ef | grep spice", I got /usr/sbin/spice-vdagentd & /usr/bin/spice-vdagent as shown in the screenshot here for the VM launched by QEMU/KVM. But I did not see /usr/bin/spice-vdagent in the VM launched by Proxmox.
    • Because of the hint above, I found a solution here. After I ran /usr/bin/spice-vdagent, copy-and-paste worked!
    • (Fedora 35). Copy-and-paste works out of box (vdagentd & vdagent are automatically running in the background). Maybe it's because Fedora is a Red Hat based Linux OS.
    • How to add spice-vdagent to VirtIO VM?
  • Summary,
    sudo apt install spice-vdagent
    sudo systemctl start spice-vdagent
  • Comparison
    • noVNC — Pros: a lighter approach with fewer required services (less overhead), good for a quick "one-off connection"; an open-source VNC client JavaScript library plus an application built on top of it; runs well in any modern browser, including mobile browsers (iOS and Android). Cons: the clipboard does not work; audio device?
    • Spice — Pros: presents the guest windowing system with an X driver that captures X protocol operations directly, so it can provide better performance than VNC. Cons: it requires more services than noVNC.

Sound/audio

  • https://en.wikipedia.org/wiki/Simple_Protocol_for_Independent_Computing_Environments
  • SPICE (Simple Protocol for Independent Computing Environments) is a communication protocol used for virtual environments. It provides a remote display system, allowing users to view a virtualized desktop on their local machine and interact with it using keyboard and mouse input.
    • SPICE is often used in conjunction with virtualization platforms such as QEMU/KVM, and is widely used in enterprise and cloud computing environments.
  • SPICE (Simple Protocol for Independent Computing Environments) does not have any direct alternatives as it is a specific communication protocol used for remote display in virtualized environments. However, there are other remote display protocols such as RDP (Remote Desktop Protocol), VNC (Virtual Network Computing), and NX (NoMachine's Remote X protocol) that can be used as alternatives to SPICE in certain situations. Nonetheless, the most suitable protocol for a specific use case depends on various factors such as the nature of the application, the network bandwidth available, the desired level of graphics performance, and more.
  • https://pve.proxmox.com/wiki/SPICE
    • Add sound hardware to VM
    • Change Display from default to Spice
    • (For Lubuntu) sudo apt install spice-vdagent spice-webdavd
  • Choose SPICE when launching the VM, it will download a vv file.
  • In Ubuntu, "remote-viewer" opens the vv file when we double-click the downloaded file (see Proxmox SPICE console): apt install virt-viewer

brew tap jeffreywildman/homebrew-virt-manager
brew install virt-viewer
remote-viewer pve-spice.vv

Share a folder

Remote Desktop through browser

Guest agent

This affects whether we can see IP in the Summary option.

  • For Ubuntu VM,
    • Proxmox -> VM -> Options -> QEMU Guest Agent. Check both options: Use QEMU Guest Agent & Run guest-trim after a disk move or VM migration.
    • On Ubuntu VM,
    sudo apt-get install qemu-guest-agent
    sudo systemctl start qemu-guest-agent
    
  • For Windows VM,
    • Proxmox -> VM -> Options -> QEMU Guest Agent. Check the 1st option is enough.

Improve performance

  • Allocate Sufficient Resources
  • Use VirtIO Drivers
  • Install QEMU Guest Agent:
  • Enable GPU Passthrough (Optional)
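Several of these settings map to entries in the VM's config file. A hypothetical fragment of /etc/pve/qemu-server/<vmid>.conf (the values are illustrative assumptions, not recommendations):

```
agent: 1                      # QEMU guest agent enabled
cores: 4                      # allocate sufficient CPU
memory: 8192                  # RAM in MiB
scsihw: virtio-scsi-single    # VirtIO SCSI controller
net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0   # VirtIO NIC
```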

can't shutdown a VM

Use the command qm unlock XXX

qm stop XXX
# can't lock file '/var/lock/qemu-server/lock-996.conf' - got timeout

qm unlock XXX
qm stop XXX

Now we can go back to proxmox GUI to remove the vm.

But if the "qm unlock" does not work, we can use the kill command. See Proxmox can’t stop VM – How we fix it!

ps aux | grep "/usr/bin/kvm -id VMID"
kill -9 PID  # VM will stop

Errors and solutions

Read-only system

  • fsck -f -c -y /dev/mapper/pve-root and on the vm drive fsck -f -c -y /dev/nvme0n1p1
  • Read-only file system on proxmox server. The filesystem will usually go into read-only while the system is running if there is a filesystem consistency issue. This is specified in fstab as errors=remount-ro and will occur when a FS access fails.
  • journalctl -b showed EXT4-fs error.

Memory usage

Network

Ethernet port

How many Ethernet ports do I need on my Proxmox?

Linux bridge commands

An introduction to Linux bridging commands and features

LXC

Proxmox help scripts

LXC images

  • https://images.linuxcontainers.org/images/. See Installing Proxmox 8.1 on Raspberry Pi 5
  • Download templates LXC containers. Rename rootfs.tar.xz to be the name you want for the template ie debian_bookworm_20230714.tar.xz. You can then use it as a container template as normal.
  • Variants, cloud vs default images.
  • Desktop images for LXD VM
  • Desktop with LXC:
    • Installing a desktop environment in an Ubuntu LXC container can work, but there are some considerations to keep in mind. LXC containers are designed to be lightweight and do not include all the components of a full virtual machine, which can affect how a desktop environment operates within them.
    • You may encounter issues with services that expect to interact with hardware directly, as containers are more restricted compared to virtual machines.
  • To use images from https://images.linuxcontainers.org/
    • Install LXC/LXD
    • Initialize LXD: lxd init
    • Use the lxc command to download the desired desktop image; e.g., lxc launch images:ubuntu/20.04/desktop --vm
    • Launch the container
    • Access the container
    • Install a desktop environment
    • Set up remote access
    • Connect to the desktop environment

Pi hole

Installing Pi-Hole inside a Proxmox LXC Container: 2GB disk, 1 CPU core, and 256MB of memory (the memory usage is pretty flat, around 53MB, according to the Proxmox GUI). I am using the Debian 11 template.

apt update
apt upgrade
nano /etc/sysctl.conf # disable IPv6 
reboot
apt install curl
curl -sSL https://install.pi-hole.net | bash
pihole -a -p # change to a simpler password
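The "disable IPv6" step above edits /etc/sysctl.conf. A minimal fragment, assuming IPv6 should be off on all interfaces (apply with sysctl -p or the reboot above):

```
# /etc/sysctl.conf — disable IPv6 for the Pi-hole container
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
```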

QCOW2

  • How To Import QCOW2 Image Into Proxmox
  • qcow
  • The “disks.qcow2” file is a disk image format used by QEMU virtualization software, and it’s not typically used directly for LXC containers in Proxmox. A QCOW2 file is a disk image saved in the second version of the QEMU Copy On Write (QCOW2) format, which is used by QEMU virtualization software.
  • DietPi. See Unraid case for the kernel panic issue.

SMART and wearout

I saw 99% wearout on my host disk (a 240GB Kingston SSD) and could not delete a VM. The command "qm destroy XXX" showed "Unable to create output file '/var/log/pve/tasks/1/UPID:pvv....:qmdestroy:108:root@pam:' - Read-only file system", even though only 37% of the root partition was used. The solution: reboot Proxmox.

Storage Drive

  • Format a disk / prepare the drive
    fdisk /dev/nvme0n1
     : p        # print the current partition table
     : d        # delete the partition (ENTER accepts the default)
     : n        # new partition (ENTER, ENTER, ENTER to accept the defaults)
     : p        # print again to verify
     : w        # write the changes and exit (q quits without saving)
    
    Now go to the GUI: pve -> Disks -> Directory -> Create Dir.
  • lsblk, df -h and more
    # lsblk
    NAME         MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sda            8:0    0 223.6G  0 disk 
    ├─sda1         8:1    0  1007K  0 part 
    ├─sda2         8:2    0   512M  0 part /boot/efi
    └─sda3         8:3    0 223.1G  0 part 
      ├─pve-swap 253:0    0     8G  0 lvm  [SWAP]
      └─pve-root 253:1    0 215.1G  0 lvm  /
    nvme0n1      259:0    0 465.8G  0 disk 
    └─nvme0n1p1  259:1    0 465.8G  0 part /mnt/pve/vm1
    
    # df -h
    Filesystem            Size  Used Avail Use% Mounted on
    udev                  7.7G     0  7.7G   0% /dev
    tmpfs                 1.6G  1.3M  1.6G   1% /run
    /dev/mapper/pve-root  214G   30G  176G  15% /
    tmpfs                 7.8G   40M  7.7G   1% /dev/shm
    tmpfs                 5.0M     0  5.0M   0% /run/lock
    /dev/nvme0n1p1        458G  2.0G  433G   1% /mnt/pve/vm1
    /dev/sda2             511M  328K  511M   1% /boot/efi
    /dev/fuse             128M   16K  128M   1% /etc/pve
    tmpfs                 1.6G     0  1.6G   0% /run/user/0
    
    # fdisk -l
    Disk /dev/nvme0n1: 465.76 GiB, 500107862016 bytes, 976773168 sectors
    Disk model: WDC WDS500G2B0C                         
    ...
    Device         Start       End   Sectors   Size Type
    /dev/nvme0n1p1  2048 976773134 976771087 465.8G Linux filesystem
    
    Disk /dev/sda: 223.57 GiB, 240057409536 bytes, 468862128 sectors
    Disk model: KINGSTON SA400S3
    ...
    Device       Start       End   Sectors   Size Type
    /dev/sda1       34      2047      2014  1007K BIOS boot
    /dev/sda2     2048   1050623   1048576   512M EFI System
    /dev/sda3  1050624 468862094 467811471 223.1G Linux LVM
    
    Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    Disk /dev/mapper/pve-root: 215.07 GiB, 230925795328 bytes, 451026944 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    
    # pvdisplay
      --- Physical volume ---
      PV Name               /dev/sda3
      VG Name               pve
      PV Size               <223.07 GiB / not usable <3.57 MiB
      Allocatable           yes (but full)
      PE Size               4.00 MiB
      Total PE              57105
      Free PE               0
      Allocated PE          57105
      PV UUID               4taiYE-DDJa-4UdU-v3QY-kd2s-7r5i-shhJ7Q
    
  • Benchmark HD speed
    hdparm -t --direct /dev/nvme0n1p1
    hdparm -t --direct /dev/sda3
    
  • lvresize vs lvextend. lvextend can only increase the size of a volume, whereas lvresize can increase or reduce it. Increase the size of an LVM logical volume.
  • How to Manage and Use LVM (Logical Volume Management) in Ubuntu

Storage type

https://pve.proxmox.com/wiki/Storage

local vs local-lvm

  • What is the difference between “local” and “local-lvm” on Proxmox VE (PVE)? Which to use? Why use local/local-lvm?
  • local-lvm is actually an lvm-thin volume.
  • local: The path is /var/lib/vz and vz is a folder.
    root@pve:~# tree -d /var/lib/vz/
    /var/lib/vz/
    ├── dump
    ├── images
    └── template
        ├── cache
        └── iso
    
    6 directories
    
  • local-lvm: This is not a directory. Put simply, the lvm-thin storage is like a physical hard drive (an image), and /dev/pve/vm-100-disk-1 is like a partition on that drive. local-lvm path?
    root@udoo:~# ls -l /dev/pve
    total 0
    lrwxrwxrwx 1 root root 7 Jul 18 17:40 root -> ../dm-1
    lrwxrwxrwx 1 root root 7 Jul 18 17:40 swap -> ../dm-0
    lrwxrwxrwx 1 root root 7 Jul 18 18:36 vm-100-disk-0 -> ../dm-6
    lrwxrwxrwx 1 root root 7 Jul 18 18:41 vm-101-disk-0 -> ../dm-7
    lrwxrwxrwx 1 root root 7 Jul 18 18:20 vm-102-disk-0 -> ../dm-8
    lrwxrwxrwx 1 root root 7 Jul 18 18:54 vm-103-disk-0 -> ../dm-9
    lrwxrwxrwx 1 root root 8 Jul 19 11:16 vm-104-disk-0 -> ../dm-10
    

lvm-thin

  • Storage: LVM Thin. LVM-thin, or thin provisioning, is a feature of LVM that allows you to create logical volumes with a virtual size larger than the available storage. Blocks in a standard LVM logical volume are allocated when the volume is created, but blocks in a thin-provisioned volume are allocated as they are written. This means you can create a thin-provisioned volume with a large virtual size, but it only consumes physical storage as data is written to it. This can be useful for managing storage more efficiently and cost-effectively.
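A minimal sketch of how a thin pool and an over-provisioned volume are created (assuming a volume group named pve with free space; the sizes and volume name are placeholders):

```shell
# Create a 100 GiB thin pool inside VG 'pve':
lvcreate -L 100G -T pve/data

# Create a thin volume with a 200 GiB *virtual* size; it consumes
# physical space only as blocks are written:
lvcreate -V 200G -T pve/data -n vm-999-disk-0

# The Data% column shows how full the pool actually is:
lvs pve
```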

Single drive

If we have only one drive, we may like to delete local-lvm and then increase the space in local. See How to install Proxmox VE 7.0.

  • DataCenter -> Storage -> local-lvm -> Remove.
  • Go to Shell
    lvremove /dev/pve/data -y
    lvresize -l +100%FREE /dev/pve/root
    resize2fs /dev/mapper/pve-root
    
  • Go to DataCenter -> Node -> Summary -> / HD space to verify the size.
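After the three commands above, a quick sanity check from the PVE shell:

```shell
lvs            # the 'data' thin pool should no longer be listed
df -h /        # pve-root should now show the enlarged size
pvesm status   # 'local-lvm' is gone; 'local' remains
```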

Add a new storage

  • /etc/fstab & mount. The key is to mount the drive first through a terminal. New folders will be created based on the "Content" we choose. The existing files on the drive won't be erased if we don't ask to do that.
  • Add a new physical hard drive to Proxmox VE 4x 5x LVM
  • Storage: LVM Thin
  • (2023/7/16). Added a USB disk. Go to pve -> Disks -> Directory -> "Create: Directory". Choose Disk, Filesystem (ext4) and Name (usb). The new disk will become "/mnt/pve/usb" in Proxmox. Now when I go back to pve -> Disks, I can see it'll be one of the devices (/dev/sdb). I can use it for backup (Datacenter -> Backup). For some reason, the Proxmox web interface did not work after I plugged in my USB disk, but ssh still worked. Rebooting the server solved the problem.
  • If I remove the usb disk (Datacenter -> Storage -> Remove) and put the usb disk in a Linux OS, I see it has several directories: dump, images, lost+found, private, snippets, and template. To add the disk back to the node, use (Datacenter -> Storage -> Add -> Directory). ID=usb, Directory=/mnt/pve/usb, Content: all.
  • (2023/7/29). Suppose I have an existing formatted USB disk. I plugged it into the machine. I first manually go to the console to create a new directory /mnt/usb and run chown root:root -R /mnt/usb; chmod 755 -R /mnt/usb. Now in the PVE GUI, I can go to the DataCenter -> Storage -> Add -> Directory. Choose ID=usb, Directory=/mnt/usb, Content: anything I want. Now if I run "ls /mnt/usb", I'll see directories "dump images private template". My original files on the disk are intact. I can use the disk as I like.
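The GUI steps above can also be done from the CLI; a hedged sketch (assuming the disk shows up as /dev/sdb1 in fdisk -l and is already formatted ext4):

```shell
mkdir -p /mnt/usb
mount /dev/sdb1 /mnt/usb
# Make it persistent; 'nofail' keeps boot from hanging if the disk is absent:
echo '/dev/sdb1 /mnt/usb ext4 defaults,nofail 0 2' >> /etc/fstab

# Same as Datacenter -> Storage -> Add -> Directory:
pvesm add dir usb --path /mnt/usb --content backup,iso,vztmpl
pvesm status   # verify the new storage is listed
```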

Passthrough a HDD

5 Things I Would Do On Fresh Install Of ProxMox. Change to "No subscription", IOMMU, VM Template, and HDD passthrough.

Passthrough a USB to LXC

Passthrough a USB or a physical drive to VM

  • Concept: if we passthrough a usb disk to a vm from Proxmox, does it mean Proxmox won't see the usb disk and only the VM can see and use the usb disk?
    • Answer: Yes, that’s correct. When you pass through a USB disk to a virtual machine (VM) from Proxmox, the USB disk becomes directly accessible to the VM and not to the Proxmox host (including fdisk or lsblk). This is because the USB device is being assigned directly to the VM, making it appear as if the device is connected to the VM rather than the host.
  • Proxmox documentation
    • USB Devices in Virtual Machines
      • Reassign to Host.
        1. Identify the VM and USB device,
        2. Remove the USB device from the VM's configuration. In the web interface, go to the VM's Hardware tab, select the USB device, and click "Remove". From the CLI, edit the VM's configuration file directly; the files live in /etc/pve/qemu-server/ and are named after the VM ID (e.g., 100.conf for a VM with ID 100). Delete the line that corresponds to the USB device.
        3. Restart the VM.
    • Passthrough Physical Disk to Virtual Machine (VM)
  • GUI How to: Passthrough USB devices in Proxmox VE (PVE) 6.2 (Easiest and quick).
    • My example: Datacenter -> node name -> VM -> hardware -> Add -> USB Device -> Select the correct USB device to passthrough (lsusb command shows my USB storage is Bus 001 Device 002: ID 152d:0576 JMicron Technology Corp. so I choose the Use USB Vendor/Device ID option).
    • Now when I go to the OpenMediaVault - Storage - Disks, I would be able to see the USB disk (Done). This is very easy compared to LXC case. To remove the USB drive, we just need to remove the USB device from the VM.
    • No qm set command is needed.
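For reference, the equivalent CLI commands (VM ID 100 is a placeholder; the vendor/device ID comes from lsusb as above):

```shell
# Attach by vendor:device ID (follows the device to any port):
qm set 100 -usb0 host=152d:0576

# Or attach by bus-port (stays pinned to one physical port):
qm set 100 -usb0 host=1-2

# Detach again:
qm set 100 -delete usb0
```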

How to safely remove a USB HDD

  • How to 'safely' remove a USB HDD (spindown). OMV case.
  • lsblk has a column MOUNTPOINTS showing if a disk is mounted or not.
  • Assume I insert a second USB drive that has not been used. If I simply unplug the 2nd USB drive, the 1st USB drive is affected and no longer seen by PVE's lsusb command. So to safely remove the 2nd USB drive, I need to use the eject /dev/sdb command, where "/dev/sdb" is determined with the fdisk -l command.
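The removal sequence sketched as commands (assuming the second drive is /dev/sdb; check with fdisk -l or lsblk first):

```shell
lsblk                # check the MOUNTPOINTS column
umount /dev/sdb1     # unmount any mounted partitions first, if present
eject /dev/sdb       # detach the whole device; now safe to unplug
```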

Upgrade storage

  • https://pve.proxmox.com/wiki/Storage
  • Upgrade data disk
    1. stop all guests
    2. Backup all guests to NAS, USB Disk using Vzdump or send it to a PBS
    3. remove the Storage at "Datacenter -> Storage"
    4. shutdown server and replace disk
    5. wipe new disk (can be done using webUI since PVE 7.X, otherwise do it manually using CLI) at "YourNode -> Disks -> select new disk -> wipe"
    6. use the webUI to create a new VM/LXC storage (LVM-Thin, ZFS or whatever you like) at "YourNode -> Disks -> LVM-Thin/ZFS -> Create: Thinpool/ZFSpool"
    7. restore backups to the new VM/LXC storage
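Steps 2 and 7 can also be done from the CLI; a sketch with placeholder IDs and storage names:

```shell
# Step 2: back up a guest (stopped mode) to an attached storage:
vzdump 100 --mode stop --storage usb --compress zstd

# Step 7: restore the archive onto the new storage:
qmrestore /mnt/pve/usb/dump/vzdump-qemu-100-<timestamp>.vma.zst 100 --storage local-lvm
# (for LXC containers, use 'pct restore' instead of 'qmrestore')
```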

Shared storage

SAMBA/CIFS

  • Adding a Samba share to Proxmox as Storage
    • Working directly in the Proxmox interface shows the message: create storage failed: storage 'xxx' is not online (500).
    • This method works.
    • When editing /etc/fstab with a line like //[ip of server]/[name of share] /media/share cifs credentials=/root/.smb,users,rw,iocharset=utf8, note that the name of the share is not a directory name. If the entry is wrong, mount -a fails with the error mount error(2): No such file or directory. The "vers" option was not needed in my situation.
  • I learned that the samba shared directory won't be mounted automatically on boot. The solution samba network share fails to mount at boot time or Fstab - Use SystemD automount works. One long line below.
    //<ip_of_server>/<name_of_share> /media/share cifs credentials=/root/.smb,users,rw,iocharset=utf8,noauto,x-systemd.automount,x-systemd.device-timeout=10 0 0
    
    • noauto: This option means that the device will not be mounted automatically during boot or with the mount -a command. It needs to be mounted explicitly.
    • x-systemd.automount: When this option is used, systemd will enable an “automount unit”, also known as an automount trap, or a mount point (path) where a file system may later be mounted. The file system itself is a separate unit (a “mount unit”) and will only be mounted if there is a subsequent demand to use that path. Attempts to alter the above behavior by setting either “auto” or “noauto” will have no effect.
    • The last option x-systemd.device-timeout=10 sets the timeout for the device to 10 seconds. If the device is not available within this time when you or a process attempt to mount it manually, systemd will stop trying to mount it.
    • The 0 0 at the end of the line are two different options: The first 0 refers to dump, a backup utility. By setting it to 0, you’re telling dump to ignore this file system. The second 0 is for fsck, the file system check utility. This 0 tells fsck not to check this file system at boot time.
  • Running mount -a alone shows no errors, but the network drive is still not available. Running the "mount -t cifs" command directly in the shell gives the following message.
    mount: (hint) your fstab has been modified, but systemd still uses
           the old version; use 'systemctl daemon-reload' to reload.
    
  • After the Samba network share is available in PVE, we can add it to the web interface
    • Datacenter (not hostname)
    • Storage -> Add Directory. The ID will be shown on the PVE LHS panel. The "Directory" refers to the directory mounted on PVE, e.g., /media/share.
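Putting the fstab notes above together, the sequence that makes the share usable after editing /etc/fstab is roughly:

```shell
systemctl daemon-reload        # make systemd re-read the edited fstab
mount -a                       # evaluate fstab entries (noauto ones are skipped)
ls /media/share                # first access triggers the automount unit
systemctl status media-share.automount   # optional: inspect the automount unit
```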

Mount a Network Share in a Linux Container

Ceph storage

  • What is CEPH??? Ceph is a storage CLUSTERING solution.
    • You can add any number of disks on any number of machines into one big storage cluster. Then you set up the configuration for ceph, most notably the number of copies of a file.
    • If you set this to 3 for instance, it means that the cluster will always have 3 copies of all the objects this setting applies to. Ceph is also self-managing, meaning that it will automatically try to distribute these copies over 3 physical machines (if possible), onto 3 separate disks.
    • When any disk or machine dies, ceph will immediately use the 2 remaining copies of the affected objects and create a 3rd copy in the cluster.
    • What this does is eliminate the requirement to manually restock your spare disk in a conventional RAID setup as long as you have enough total storage to fit all the objects 3 times.
  • Deploy Hyper-Converged Ceph Cluster

Plex

Proxmox LXC Intel Quick Sync Transcode for Plex

Proxmox increase vm disk size

  • Important: Make sure LVM was selected when we installed Linux. Otherwise, we need to boot from a LIVE CD.
    • Shut down the VM from Proxmox.
    • In Proxmox, add a CD-ROM drive to the VM and attach a Debian or other Linux live ISO.
    • Start the VM and boot from the live CD.
    • Once booted, open a terminal
    sudo parted /dev/sda
    print
    # This will show you the current partition layout
    
    resizepart 2 100%
    # This resizes partition 2 to use all available space
    
    print
    # Verify the new size
    quit
    
    # After exiting parted, run
    sudo partprobe /dev/sda
    
  • https://pve.proxmox.com/wiki/Resize_disks
  • How to Increase VM Disk Size in Proxmox
  • Proxmox Resize Disk VM / Extend Disk VM LVM
  • Expand logical volume - Ubuntu on Proxmox. It works when I increase my Ubuntu22.04 from 16GB to 32GB.
    • Step 1: Work in PVE. This can be done through Proxmox UI (Hardware -> Disk Action -> Resize disk -> change 0 to 16 for example if we want to increase size by 16GB).
    # qm resize <vmid> <disk> <size> 
    # qm resize 102 scsi0 +16G
    • Step 2: Work in the VM. PS: for some reason, running lvextend alone did not show the filesystem as extended. Only after I used gparted to extend the partition did lvextend report the new block length.
    $ lsblk  #  or df -h. We can see the size of sda is larger than the root partition.
    
    # Make LVM aware of any changes in the size of the underlying partition 
    # (in this case, /dev/sda3) that contains the physical volume.
    # LVM scans the specified device (/dev/sda3) to determine its current size.
    # If the size of the partition has changed (usually increased), LVM updates its metadata to reflect the new size of the physical volume.
    $ sudo pvresize /dev/sda3
      Physical volume "/dev/sda3" changed
      1 physical volume(s) resized or updated / 0 physical volume(s) not resized
    
    $ df -h       # /dev/mapper/ubuntu--vg-ubuntu--lv is around 16GB, not changed yet
    Filesystem                         Size  Used Avail Use% Mounted on
    /dev/mapper/ubuntu--vg-ubuntu--lv   15G   12G  2.4G  84% /
    
    # Extend LV to use up all space from VG
    $ sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
      Size of logical volume ubuntu-vg/ubuntu-lv changed from <15.00 GiB (3839 extents) to <30.00 GiB (7679 extents).
      Logical volume ubuntu-vg/ubuntu-lv successfully resized.
    
    # resize file system
    $ sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
    resize2fs 1.46.5 (30-Dec-2021)
    Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
    old_desc_blocks = 2, new_desc_blocks = 4
    The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 7863296 (4k) blocks long.
    
    $ df -h
    Filesystem                         Size  Used Avail Use% Mounted on
    /dev/mapper/ubuntu--vg-ubuntu--lv   30G   12G   17G  42% /

Clone (full clone vs convert to template)

Backup and restore VM

  • Backup and Restore from proxmox wiki. How to Backup Proxmox? Proxmox Backup and Recovery Methods.
  • How to backup and transfer a Proxmox VM to another Proxmox Node
    • Backup file has a timestamp on the filename and it is saved to /var/lib/vz/dump if it is saved in local (pve) or /mnt/pve/vm1/dump if it is saved on my vm1 storage.
    • The backup file can be seen on the GUI under $STORAGE (pve) -> Backups where $STORAGE is the storage name (e.g. local). From there, it has a 'Restore' button where we can restore it with an option to enter a new VM ID.
    • If the backup file is saved in local (pve), the backup file can also be seen under the VM|Backup menu.
    • After restoring, the new VM has a new ID but the VM name is still the same as the original one (so we can only use the ID to distinguish the VMs). Therefore, if we use the static IP in a VM, it is better to shut down the old one before we Start the new VM.
    • If we remove/delete the restored VM, the backup file is not affected (not deleted).
    • It seems backup + restore = clone.
    • The backup speed is quick. The "top" command shows that zstd is the process running the compression during backup.
  • Restore Proxmox VM from backup – Here are the steps to recover your VM
  • VM ID:
    • The biggest problem is that I cannot tell what a backup file is from its filename after copying the files to another location. The filename does, however, contain the VM ID. That is the only clue we can use to find out which VM it was on the original Proxmox.
    • The Backup Notes feature is actually useful. In the backup folders on Proxmox, *.notes files are also created if we add notes in the GUI.
    • It may be useful to keep a text file alongside the backup files describing what each file represents.
  • Question: why sometimes my backup files are not shown on GUI.
    Ans: the default backup storage is "local". We need to toggle that from the GUI. PS: the changed storage selection is not remembered.
  • Question: backup status mailings for "backup success" despite all jobs being set to "On Failure only" 2024/1/19
  • Schedule backup: Backup in Proxmox VE with screenshots.
  • Proxmox VE Full Course: Class 10 - Backups and Snapshots
    • Snapshots (for testing something). PS: No need to stop the VM. Taking snapshots and rolling back is fast. It always saves the snapshots on the same (?) disk.
    • Backup. PS: No need to stop the VM. It will let you choose where to back up and other options. Mode: Snapshot, Suspend, Stop.
    • Automatic backup. Datacenter -> Backup -> Add (Create Backup Job).
  • lxc container backup suspend mode exit code 23. LXC backup failed - code 23. Choose Backup Mode from snapshot to stop. The LXC will auto-restart after backup is finished.

The current guest configuration does not support taking new snapshots

Change backup file names

Restore error: data corruption

  • An example of log from a failed restoration.
    restore vma archive: zstd -q -d -c /media/wd2t/dump/vzdump-qemu-201-2024_06_29-08_59_07.vma.zst | vma extract -v -r /var/tmp/vzdumptmp552388.fifo - /var/tmp/vzdumptmp552388
    ...
    progress 1% (read 343605248 bytes, duration 2 sec)
    progress 2% (read 687210496 bytes, duration 3 sec)
    progress 3% (read 1030815744 bytes, duration 4 sec)
    _29-08_59_07.vma.zst : Decoding error (36) : Data corruption detected 
    vma: restore failed - short vma extent (3282432 < 3797504)
    
  • Run zstd command
    # zstd -q -d -c /media/wd2t/dump/vzdump-qemu-201-2024_06_29-08_59_07.vma.zst > /var/tmp/vzdump-qemu-201-2024_06_29-08_59_07.vma
    _29-08_59_07.vma.zst : Decoding error (36) : Data corruption detected 
    
  • How does zstd detect data corruption
    • File Format Verification: Zstandard begins by checking the magic number of the file, which identifies it as a Zstandard compressed file. If this initial identifier is missing or incorrect, Zstandard will immediately flag the file as corrupted.
    • Frame Header Check: The decompression process involves reading the frame headers, which contain metadata about the compressed data, such as block sizes and checksums. If these headers are malformed or inconsistent, Zstandard will detect corruption.
    • Checksum Verification: Zstandard uses checksums to verify the integrity of each block of data. When a file is compressed, Zstandard computes and stores a checksum for each block. During decompression, it recomputes the checksum for each decompressed block and compares it to the stored value. If they don't match, it indicates data corruption.
    • Block Integrity: Each block of compressed data is decompressed independently. If any block is incomplete, truncated, or contains unexpected data patterns that do not conform to the expected compression format, Zstandard will detect this as corruption.
    • End of Stream Marker: Zstandard expects a specific marker at the end of the stream to signify the end of the compressed data. If this marker is missing or incorrect, it indicates that the file may be incomplete or corrupted.
  • Example workflow to verify integrity
    sha256sum original_file.vma
    zstd original_file.vma
    zstd -t original_file.vma.zst  # 't'est integrity
    # If there are no errors, the file should be intact.
    

VM locked after I stopped the backup

vm locked after failed backup, can't unlock

qm unlock <vmid>
qm start <vmid>

I did not try the above commands; however, after a reboot the lock disappeared.

rclone

SYNC Proxmox backups to BACKBLAZE using RCLONE | OFF-SITE Backups | Proxmox Home Server Series

Setup a MediaWiki Server

How to Setup a MediaWiki 1.31 Server on a Debian 10 Proxmox container

Multiple node cluster

Remove a node

# shell in the node we want to keep
pvecm nodes
pvecm delnode [NODE_NAME]
pvecm nodes
  • The instructions say to power off the node to be removed before calling "pvecm delnode". When I followed that, I got the error cluster not ready - no quorum?. The solution here works (without rebooting the main node). However, the 2nd node still showed the 1st node :(
pvecm nodes
pvecm expected 1 # assume my cluster expected 1 node after I removed extras
pvecm delnode udoo
# Could not kill node (error = CS_ERR_NOT_EXIST)
# Killing node 2
pvecm nodes
# Now only 1 node is left

Migration VM

High Availability

Proxmox VE Full Course: Class 16 - High Availability

NVIDIA GPU drivers

How to Install the Official NVIDIA GPU Drivers on Proxmox VE 8

USB passthrough

Thin client

Raspberry Pi THIN CLIENT for Proxmox VMs

Android app

Proxmox Virtual Environment

Nested virtualization

3 reasons why you should set up nested virtualization on your home lab

Android emulator and nested virtualization

Is there a guide to getting Android x86 installed on Proxmox?

Security

Am I compromised? If you need true remote access, set up a VPN that you connect to on your router.

OpenWRT router

Error 401: no ticket

Empty the browser's cache. It works.

Warning email about NVME temperature

  • SMART error (Health) detected on host: XXXX. I received the temperature warning email from the root sender address configured when we set up Proxmox.
  • There is no /var/log/syslog file. To check the log, use journalctl command.

SMB server

Check SMB Share availability before starting a VM

Add a startup delay to your VMs and CTs if this is the only time you have this issue. Then set up a script in crontab to run @reboot that waits x seconds and then remounts the share.

Cloud image, Cloud-init

Proxmox vs. ESXi

VMWare ESXi

Proxmox Backup Server/PBS

proxmox-backup-client

Remote machine management

Self-Hosted Remote Desktop Connection Alternatives

Remotely

RPort