Proxmox VE

Proxmox VE 7/8 Setup #

Tip Avoid excessive customization so that the cluster can be quickly repaired, or rebuilt and migrated.

Networking #

Networking domains must be split based on bandwidth and security needs.

Host Networking #

During installation and initial cluster setup, each Proxmox VE node should be configured to be part of these networks (see the configuration sketch after the list):

  • management - also used by some of the core VMs;
  • corosync-link - Proxmox cluster communication (quorum, HA);
  • ceph-cluster-link - Ceph cluster network (OSD replication and heartbeat traffic);
  • ceph-public - Ceph public network (client I/O from Proxmox, monitors);
  • nas-backups - connections to the NAS system used for ISOs and VM backups;
  • secondary-management (optional) - connected to a router that has an independent link to the internet.
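
As a rough illustration, the sketch below renders ifupdown2-style bridge stanzas (the format used in /etc/network/interfaces on Proxmox hosts) for these networks on a single node. Every NIC name and subnet in it is a made-up placeholder.

```python
# Minimal sketch: render ifupdown2-style stanzas for the host networks above.
# All NIC names and subnets are placeholders, not real values.
NETWORKS = {
    # role: (physical port, address/prefix for this node)
    "management":        ("eno1",       "10.10.0.11/24"),
    "corosync-link":     ("eno2",       "10.20.0.11/24"),
    "ceph-cluster-link": ("enp65s0f0",  "10.30.0.11/24"),
    "ceph-public":       ("enp65s0f1",  "10.40.0.11/24"),
    "nas-backups":       ("eno3",       "10.50.0.11/24"),
}


def bridge_stanza(index: int, role: str, port: str, cidr: str) -> str:
    """One Linux-bridge block in /etc/network/interfaces syntax."""
    return "\n".join([
        f"# {role}",
        f"auto vmbr{index}",
        f"iface vmbr{index} inet static",
        f"    address {cidr}",
        f"    bridge-ports {port}",
        "    bridge-stp off",
        "    bridge-fd 0",
        "",
    ])


if __name__ == "__main__":
    for i, (role, (port, cidr)) in enumerate(NETWORKS.items()):
        print(bridge_stanza(i, role, port, cidr))
```

On a real node, networks that never carry VM traffic (the Corosync and Ceph cluster links) are usually plain interfaces or VLANs rather than bridges; the sketch renders everything as a bridge only to keep it short.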

VM/CT Networking #

The following VMs must (or can) join the Proxmox management network:

  • Reverse proxy (Traefik) - for accessing the Proxmox VM web console.
  • SSH gateway host, also attached to another network with a zero-trust client (Tailscale).
  • Secondary SSH gateway host.
  • Management interfaces of (physical and virtual) routers that handle the host networks.

Depending on their purpose, VMs and CTs must join their domain-specific networks.
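
As an illustration of attaching an instance to both the management network and its domain-specific network, the sketch below uses the third-party proxmoxer client for the Proxmox API. The host, node name, VM ID, token, and bridge names are all assumptions.

```python
from proxmoxer import ProxmoxAPI  # third-party wrapper around the Proxmox REST API

# All connection details, IDs, and bridge names below are placeholders.
proxmox = ProxmoxAPI(
    "pve1.example.lan",
    user="root@pam",
    token_name="automation",
    token_value="REPLACE-ME",
    verify_ssl=False,
)

NODE = "pve1"
VMID = 120                   # hypothetical reverse-proxy VM
MGMT_BRIDGE = "vmbr0"        # management network
DOMAIN_BRIDGE = "vmbr5"      # this VM's domain-specific network

# net0 joins the management bridge, net1 the domain bridge;
# this is the same change as editing the NICs in the web UI.
proxmox.nodes(NODE).qemu(VMID).config.put(
    net0=f"virtio,bridge={MGMT_BRIDGE},firewall=1",
    net1=f"virtio,bridge={DOMAIN_BRIDGE},firewall=1",
)
```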

The Matter network requires a separate 2.4 GHz Wi-Fi network without internet access.

Commercial Zero-Trust Clients #

Important
For security reasons, the public gateway VMs must always run in the isolated networks described below.

Note While it is generally a good idea to run all VMs in the manner described in this section, doing so for every VM would significantly complicate the homelab setup and make it too hard to manage.

At the same time, it is useful to keep this idea in mind when creating new VM/CT instances.

Client nodes that connect to externally managed networks (Tailscale, ZeroTier, Cloudflare) and accept connections from other clients over the public network should reach the internet through routers that do not allow mapping of internal IP addresses. This can be achieved with stateless firewall rules.
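
A minimal sketch of what such rules could look like, assuming the router runs nftables and the gateway VMs sit in a dedicated subnet; the table name, the subnet, and the choice to drop rather than reject are all assumptions. Any required exceptions (e.g. a DNS resolver) would need explicit accept rules placed before the drops.

```python
# Sketch: emit a stateless nftables forward filter for the router in front of
# the zero-trust gateway VMs.  No "ct state" matches are used, so the filter
# stays stateless.  The gateway subnet and table name are placeholders.
GATEWAY_SUBNET = "10.80.0.0/24"   # hypothetical isolated gateway network
RFC1918 = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]

drops = [f"    ip saddr {GATEWAY_SUBNET} ip daddr {net} drop" for net in RFC1918]

ruleset = "\n".join([
    "table inet ztgw_filter {",
    "  chain forward {",
    "    type filter hook forward priority 0; policy accept;",
    "    # Gateways may reach the internet, but not internal address ranges.",
    *drops,
    "  }",
    "}",
])

print(ruleset)
```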

Storage #

My primary Proxmox cluster uses a combination of the following backends (see the configuration sketch after this list):

  • NAS for ISO images - main storage for ISO images and CT templates;
  • NAS for backups - target for VM/CT backups;
  • local dir - ISO images/CT templates (only when the NAS server is down for maintenance);
  • local-lvm - VM/CT drives (for instances that require low-latency writes);
  • Ceph RBD (hyperconverged) - all other VM/CT drives, including VMs running critical services.
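
A rough sketch of how these backends might look in /etc/pve/storage.cfg; the storage IDs, NAS address, export paths, and Ceph pool name are invented placeholders, and the local and local-lvm entries normally already exist after installation.

```python
# Sketch: /etc/pve/storage.cfg stanzas corresponding to the list above.
# Storage IDs, the NAS address, exports, and the Ceph pool are placeholders;
# the fragment is only printed, so the script is safe to run anywhere.
STORAGE_CFG = """\
nfs: nas-iso
    server 10.50.0.5
    export /volume1/proxmox-iso
    path /mnt/pve/nas-iso
    content iso,vztmpl

nfs: nas-backups
    server 10.50.0.5
    export /volume1/proxmox-backups
    path /mnt/pve/nas-backups
    content backup

dir: local
    path /var/lib/vz
    content iso,vztmpl

lvmthin: local-lvm
    vgname pve
    thinpool data
    content images,rootdir

rbd: ceph-vm
    pool vm-pool
    content images,rootdir
"""

print(STORAGE_CFG)
```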

Note Do not start using ZFS on the primary cluster before investigating its impact on RAM usage.
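
Part of that investigation is the ZFS ARC, which by default can grow to a large share of host RAM (commonly around half) and then competes with VM memory. If ZFS is ever adopted, the ARC can be capped with the zfs_arc_max module parameter; in the sketch below the 8 GiB value is an arbitrary placeholder, not a recommendation.

```python
# Sketch: cap the ZFS ARC so it cannot grow into memory reserved for VMs.
# The 8 GiB limit is an arbitrary placeholder.
ARC_MAX_BYTES = 8 * 1024**3

with open("/etc/modprobe.d/zfs.conf", "w") as f:
    f.write(f"options zfs zfs_arc_max={ARC_MAX_BYTES}\n")

# The limit takes effect after `update-initramfs -u` and a reboot, or it can be
# applied immediately by writing to /sys/module/zfs/parameters/zfs_arc_max.
```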