# Proxmox Virtual Environment and Clustering with ZFS and Ceph - Introductory Course Notes

## Introduction

### Basics

* Debian stable + Ubuntu Kernel
* Virtualization with KVM
* LXC Containers
* Supports ZFS and Ceph

### Features

* Snapshots
* KVM Virtual Machines (Windows, Linux, BSD)
* LXC Containers
* High Availability (HA) Clustering
* Live Migration
* Flexible Storage Options
* GUI Management
* Proxmox Data Center Manager
* Proxmox Backup Server

## Virtualization Stack

![proxmox_virtualization_stack](Attachments/proxmox_virtualization_stack.png)

## Expansion Options

### Single Node

* **Storage Pool:**
    * RAID Controller with LVM – Ceph and ZFS don't support hardware RAID; if a RAID controller is present, switch it to HBA (IT/passthrough) mode and use ZFS.

### Clustering Without HA

* Several Storage Pools
* Pools are not shared – each node has its own individual pool

### ZFS Cluster with Asynchronous Replication

* At least two pools
* A quorum device (QDevice) between them provides the tie-breaking vote; the cluster configuration is replicated between the nodes.
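
Asynchronous replication jobs can be managed with the `pvesr` CLI. A minimal sketch, assuming a VM with ID 100 and a second node named `pve2` that has a ZFS pool with the same name as the source (VM ID and node name are placeholders):

```sh
# Create a replication job that syncs VM 100's ZFS volumes to node pve2
# every 15 minutes (job ID format is <vmid>-<number>)
pvesr create-local-job 100-0 pve2 --schedule '*/15'

# Inspect the state of all replication jobs
pvesr status
```

Note that replication is asynchronous: on failover, anything written since the last sync interval is lost.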

### Ceph Cluster

* At least 3 nodes
* Storage is combined into one pool that is shared across all nodes
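
Ceph can be set up from the GUI or with the `pveceph` tool. A rough sketch, assuming a dedicated Ceph network `10.10.10.0/24` and an empty NVMe disk (both are placeholders; repeat the monitor and OSD steps on each node):

```sh
pveceph install                        # install the Ceph packages
pveceph init --network 10.10.10.0/24   # write the initial config on the dedicated Ceph network
pveceph mon create                     # create a monitor (at least 3 nodes)
pveceph osd create /dev/nvme0n1        # create an OSD on an empty disk
pveceph pool create vmpool             # create a shared RBD pool for VM disks
```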

## Installation

* It's possible to provide custom TOML files to install Proxmox automatically.
* ZFS RAID1 for the boot drive.
* Define the correct hostname; it is not easy to change later.
* Modify the update repositories as a first step, then update/upgrade the system.
* Hardware: it is always better to have as many interface ports as possible.
    * 2x 1 Gbit for management
    * 2x 10 Gbit for VMs (redundant, bonded via LACP – requires stacked switches, or use active-backup mode)
    * 2x 25 Gbit for Ceph storage clustering
    * Single node: at least 4 ports; clustering: at least 6 ports
* Remove the IP address from the virtual bridge and set it directly on the physical interface. This prevents VMs from seeing the Proxmox IP.
* Bond the ports for VMs (ideally two 10 Gbit ports) and set the virtual bridge on `bond0` so it can be used by VMs (enable 'VLAN aware' on the bridge).
* It’s possible to create a bond for the web UI port.
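
The bond-plus-bridge setup described above looks roughly like this in `/etc/network/interfaces`; the interface names `enp1s0f0`/`enp1s0f1` are placeholders for the two 10 Gbit ports, so adjust them to your hardware:

```
# Sketch of an LACP bond carrying a VLAN-aware bridge for VMs
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```

No IP address is assigned to `vmbr0` here, matching the note above about keeping the Proxmox management IP off the VM bridge.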
## VM Creation
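
Creating a VM can be sketched with the `qm` CLI; the VM ID, storage name `local-zfs`, and ISO path below are assumptions, and the GUI wizard covers the same options:

```sh
# Create a Linux VM: VirtIO network on vmbr0, 32 GiB disk on ZFS storage,
# installer ISO attached as CD-ROM
qm create 100 \
  --name test-vm \
  --memory 4096 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-zfs:32 \
  --ide2 local:iso/debian-12.iso,media=cdrom \
  --boot 'order=scsi0;ide2' \
  --ostype l26
qm start 100
```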
## Todo
* [ ] Join the forum
* [ ] Research the initial TOML file ('proxmox autoinstaller') for unattended configuration of PVE
* [ ] Test ZRAID1 on two SSDs at home with a 'turtle' Proxmox host
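
For the autoinstaller item, the answer file is plain TOML, roughly like the sketch below. Every field name here is an assumption (key names have varied between PVE versions), so validate it with `proxmox-auto-install-assistant validate-answer` before use:

```toml
[global]
keyboard = "en-us"
country = "us"
fqdn = "turtle.home.lan"     # placeholder hostname
mailto = "root@home.lan"
timezone = "UTC"
root_password = "change-me"

[network]
source = "from-dhcp"

[disk-setup]
filesystem = "zfs"
zfs.raid = "raid1"           # mirror across the two boot SSDs
disk_list = ["sda", "sdb"]
```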

## Support Subscriptions

### Basic

Priced per year, per CPU socket (available as 1-CPU and 2-CPU variants)

* Access to enterprise repositories
* Stable software updates
* Support via the customer portal
* Three support tickets
* Response time: 1 business day

### Standard

Priced per year, per CPU socket (available as 1-CPU and 2-CPU variants)

* All Basic features

**Additionally:**
* 7 additional support tickets (10 in total)
* Remote support via SSH
* Offline subscription key activation

### Premium

Priced per year, per CPU socket (available as 1-CPU and 2-CPU variants)

* All Standard features

### Q&A

* Full mesh network for Ceph (directly cable 3 hosts to each other without switches)
* Official NVIDIA support for vGPUs in Proxmox
* A virtual TPM cannot be stored on an NFS share
* An ESXi host can be connected in the Proxmox UI to import VMs from it directly