20250514-notes
# Proxmox Virtual Environment and Clustering with ZFS and Ceph - Introductory Course Notes

## Introduction

### Basics

* Debian stable + Ubuntu Kernel
* Virtualization with KVM
* LXC Containers
* Supports ZFS and Ceph

### Features

* Snapshots
* KVM Virtual Machines (Windows, Linux, BSD)
* LXC Containers
* High Availability (HA) Clustering
* Live Migration
* Flexible Storage Options
* GUI Management
* Proxmox Data Center Manager
* Proxmox Backup Server
## Virtualization Stack

![Virtualization Stack](/files/proxmox/2025-05-14.png)

## Expansion Options

### Single Node

* **Storage Pool:**
    * RAID Controller with LVM – Ceph and ZFS don't support hardware RAID controllers; if a RAID controller is present, switch it to HBA/IT (passthrough) mode and use ZFS.
### Clustering Without HA

* Several Storage Pools
* Pools are not shared – individual

### ZFS Cluster with Asynchronous Replication

* At least two pools
* A quorum device between them replicates the configuration between the nodes.

### Ceph Cluster

* At least 3 nodes
* Pool is combined and shared
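
On such a cluster, Ceph can be bootstrapped from the Proxmox CLI; a hedged sketch (the cluster network `10.10.10.0/24`, disk `/dev/nvme0n1`, and pool name `vm-pool` are assumptions, and some steps run once per node):

```shell
# Install the Ceph packages and initialise with a dedicated cluster network
pveceph install
pveceph init --network 10.10.10.0/24

# One monitor per node (at least 3 in total)
pveceph mon create

# Turn each data disk into an OSD
pveceph osd create /dev/nvme0n1

# Create the combined, shared pool and register it as PVE storage
pveceph pool create vm-pool --add_storages
```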
## Installation

* It's possible to provide custom TOML files to install Proxmox automatically.
* ZFS RAID1 for the boot drive.
* Define the correct hostname; it's not easy to change later.
* Modify the update repositories as a first step, then update/upgrade the system.
* Hardware: always better to have as many interface ports as possible.
    * 2x 1 Gbit for management
    * 2x 10 Gbit for VMs (redundant, bonded via LACP – requires stacked switches – or active-backup mode)
    * 2x 25 Gbit for Ceph storage clustering
    * Single node: at least 4 ports; clustering: at least 6 ports.
* Remove the IP address from the virtual bridge and set it directly on the physical interface; this prevents VMs from seeing the Proxmox IP.
* Bond the VM ports (ideally two 10 Gbit ports) and attach the virtual bridge to `bond0` so VMs can use it (set 'VLAN aware' on the bridge).
* It’s possible to create a bond for the web UI port.
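
The bonding and bridging recommendations above could look like this in `/etc/network/interfaces`; a sketch, assuming the NIC names `enp1s0f0`/`enp1s0f1` and LACP-capable switches:

```
# LACP bond over the two 10 Gbit VM ports
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer2+3

# VLAN-aware bridge for the VMs, no IP of its own
auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094
```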
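
The automated installation mentioned above is driven by a TOML "answer file"; a minimal sketch (all values are placeholders, and the exact keys should be checked against the current `proxmox-auto-install-assistant` documentation):

```toml
[global]
keyboard = "de"
country = "de"
fqdn = "pve1.example.com"
mailto = "admin@example.com"
timezone = "Europe/Berlin"
root_password = "change-me"

[network]
source = "from-dhcp"

[disk-setup]
filesystem = "zfs"
zfs.raid = "raid1"
disk_list = ["sda", "sdb"]
```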
## VM Creation

* VM ID ranges can be configured in the Datacenter Manager.
* Set tags at creation time.
* Choose the guest OS type properly, as some features are then set correctly automatically; otherwise performance issues can arise.
* Installing the QEMU guest agent is recommended; it improves snapshots and other operations.
* Always enable 'Discard' and 'SSD emulation' in the disk settings; otherwise Ceph and other clustering solutions are not properly informed about changes on the disks.
* Best practice is to use VirtIO for storage and network, as it is the fastest.
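
Taken together, a VM following these recommendations might be created from the CLI roughly like this (the VM ID, storage name `local-zfs`, and bridge `vmbr0` are assumptions):

```shell
# Linux 2.6+ guest, QEMU guest agent enabled, tags set at creation
qm create 101 --name web01 --ostype l26 --agent enabled=1 \
  --tags web,lab \
  --scsihw virtio-scsi-single \
  --scsi0 local-zfs:32,discard=on,ssd=1 \
  --net0 virtio,bridge=vmbr0
```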
## Todo

* [ ] Join the forum
* [ ] Research the initial TOML answer file for unattended configuration of PVE – 'proxmox autoinstaller'
* [ ] Test ZRAID1 on two SSDs at home with a 'turtle' Proxmox host
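
For the home ZRAID1 test, the layout can first be rehearsed with file-backed vdevs before touching real SSDs; a sketch assuming ZFS is installed and the commands run as root:

```shell
# Two sparse files stand in for the SSDs
truncate -s 512M /tmp/ssd0.img /tmp/ssd1.img

# Create a mirrored (RAID1) pool on them
zpool create turtle mirror /tmp/ssd0.img /tmp/ssd1.img

# Verify the mirror, then clean up
zpool status turtle
zpool destroy turtle
```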
## Support Subscriptions

### Basic

Priced per CPU socket per year (1-CPU and 2-CPU tiers).

* Access to enterprise repositories
* Stable software updates
* Support via the customer portal
* Three support tickets
* Response time: 1 business day

### Standard

Priced per CPU socket per year (1-CPU and 2-CPU tiers).

* All Basic features

**Additionally:**

* 7 support tickets (10 in total)
* Remote support (via SSH)
* Offline subscription key activation

### Premium

Priced per CPU socket per year (1-CPU and 2-CPU tiers).

* All Standard features

**Additionally:**

* Unlimited support tickets
### Q&A

* Full-mesh network for a Ceph cluster: three hosts can be cabled directly to each other, without switches.
* There is official NVIDIA support for vGPUs in Proxmox.
* A virtual TPM cannot be stored on an NFS share.
* An ESXi host can be coupled in the Proxmox UI, and VMs can be imported from it directly.

`projects/proxmox/thomas-krenn-schlulung-part1/style.css` (new file):

```css
body {
    font-family: sans-serif;
    line-height: 1.6;
}

h1, h2, h3 {
    color: #333;
}

ul {
    list-style-type: disc;
    margin-left: 1.5em;
}

code {
    background-color: #f0f0f0;
    padding: 2px 5px;
    border: 1px solid #ddd;
    border-radius: 3px;
}

img {
    max-width: 100%;
    height: auto;
}
```