init of new project root

This commit is contained in:
Petar Cubela
2025-09-20 19:56:03 +02:00
parent 8e6a8991c1
commit 8f7b06f951
8 changed files with 425 additions and 0 deletions

README.md Normal file

@@ -0,0 +1,114 @@
# Linux Learning Platform for Students
## init
Build a reliable platform on a PVE host to teach students Linux along with other interesting topics such as
- firewalls and routing
- subnetting
- vlans
- joining a Linux host to a Microsoft AD
- databases (mariadb/mysql)
- web servers
- certificate management
- mail server
- security and network tools like nmap or tcpdump
- and other things
## User Management
Apart from the pve host, the main core of the environment will be a _FreeIPA_ Server which combines
- identity management server (ldap, sso, acl, ...) via a 389 directory server
- DNS server (bind)
- NTP server
- Kerberos
- Dogtag for certificate management
- NFS server, or 'advertise' one, for home folders of all users and other useful shares when needed in an exercise
We will set up user accounts for each student on the IPA server, plus a home folder for each, shared via NFS
and automatically mounted on user login on any device in the domain/realm.
Students will always have their files available no matter which device they log in to with their own account.
Define designated groups, `linux_admins`, `linux_students`, `linux_users`, each with its own ACLs mediated via the IPA server.
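On the IPA side, the group and account setup described above boils down to a few `ipa` CLI calls. A minimal sketch, run on the IPA server after `kinit admin`; the student name `erika.muster` and the descriptions are placeholders, only the group names and the uid/gid range come from this README:

```shell
kinit admin

# Designated groups, each later given its own ACLs via the IPA server.
ipa group-add linux_admins   --desc="Lab administrators"
ipa group-add linux_students --desc="Students"
ipa group-add linux_users    --desc="All lab users"

# One account per student (uid/gid taken from the 1000-1020 range).
ipa user-add erika.muster --first=Erika --last=Muster \
  --email=erika.muster@softbox.de --uid=1000 --gidnumber=1000
ipa group-add-member linux_students --users=erika.muster
```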
## Facts
**Domain:**
- domain: `lab.softbox.net`
- realm: `LAB.SOFTBOX.NET`
- ipa:
- hostname: `ipa.lab.softbox.net`
- ip address: `10.11.12.65/24`
**Students:**
- username: `firstname.surname`
- mail: `firstname.surname@softbox.de`
- user_ssh_public_key: Created in exercise sheet-00
- uid: `1000-1020`
- gid: `1000-1020`
- groups: `linux_students`
**VMs:**
- hostname: `vm_00`
- IP addresses: `10.11.12.200-220`
## Schedule
Time slot: Friday 3 p.m. to 4 p.m.
1. Hand out the sheets this week at 4 p.m.
2. Hold a 30-60 minute class for the students to ask questions.
3. Discuss the sheets next week, then start over at step 1 by handing out the new sheet.
## Ideas for exercises
Separate exercises by difficulty level. In the beginning only easier concepts should be presented.
Write guides that can be followed in order to learn and see concepts in practice, such as a manual for installing a Nextcloud instance.
In the process the student would learn how a MariaDB database is set up, just by following some simple commands.
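Such a guide could reduce the MariaDB part to a handful of commands. A sketch for a Debian host; the database name, user, and password are placeholders, not part of a finished guide:

```shell
# Install and start MariaDB, then create a database for Nextcloud.
sudo apt install -y mariadb-server
sudo systemctl enable --now mariadb

# Database, user, and grant -- names and password are examples only.
sudo mysql <<'SQL'
CREATE DATABASE nextcloud CHARACTER SET utf8mb4;
CREATE USER 'nextcloud'@'localhost' IDENTIFIED BY 'change-me';
GRANT ALL PRIVILEGES ON nextcloud.* TO 'nextcloud'@'localhost';
FLUSH PRIVILEGES;
SQL
```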
### Level 0
- [ ] base commands - 20 most useful commands: cd, ls, mkdir, mv, rm, cp, touch, find, grep, cat, ssh
- [x] ssh key-exchange authentication
- [x] ssh hardening - no root login, no password authentication
- [x] nginx
- [ ] user and group management
### Level 1
- [x] fail2ban-server
### Level 2
- [ ] git local and remote repo as github and internal gitea
### Level 3
- [ ] git server
- [ ] setup mysql/mariadb database
- [ ] nextcloud setup
### Level 4
- [ ] nmap
- [ ] ip and ipcalc - set ip addresses and routes
- [ ] ufw
### Level 5
- [ ] simple mail server, ports: 25 (smtp), 587 (submission), 143 (imap) (no TLS at first)
- [ ] tcpdump -> capture a cleartext password with tcpdump during imapsync
- [ ] pki for mutual tls trust
- [ ] each student builds their own firewall/router with openbsd
- [ ] build a firewall together which will be the sbx_lab firewall

TODO.md Normal file

@@ -0,0 +1,11 @@
## TODOs
- [ ] build the student environment on my own lenovo thinkpad
- [ ] change hostnames and correspondingly the dns entries of the student hosts to `student_vm00` -> manage dns via opentofu
- [ ] add all users to FreeIPA and integrate FreeIPA as the main identity and DNS server. Deprecate the current bind servers -> automate client installation
- [ ] migrate dns zone to freeipa and manage its bind server with opentofu
- [ ] create nfs shares for the home folder of each user
- [ ] write loop for student_vm deployment -> they are similar
- [ ] include automation which installs freeipa-client on each VM via opentofu, or ansible executed via opentofu
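The client enrollment that the last item automates is essentially one command per VM. A sketch using the facts from the README; distro package names vary and the `--server` value assumes the IPA host from the README:

```shell
# Enroll a VM into the realm; --mkhomedir creates a local home directory
# on first login (NFS-automounted homes would replace this later).
sudo apt install -y freeipa-client   # or: dnf install freeipa-client on Fedora
sudo ipa-client-install \
  --domain=lab.softbox.net \
  --realm=LAB.SOFTBOX.NET \
  --server=ipa.lab.softbox.net \
  --mkhomedir
```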

provider.tf Normal file

@@ -0,0 +1,33 @@
terraform {
  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "3.0.2-rc04"
    }
  }
}

variable "proxmox_api_url" {
  type = string
}

variable "proxmox_api_token_id" {
  type      = string
  sensitive = true
}

variable "proxmox_api_token_secret" {
  type      = string
  sensitive = true
}

provider "proxmox" {
  pm_api_url          = var.proxmox_api_url
  pm_api_token_id     = var.proxmox_api_token_id
  pm_api_token_secret = var.proxmox_api_token_secret
  pm_tls_insecure     = true
}
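# The three variables above can be supplied via a terraform.tfvars file or
# TF_VAR_* environment variables. Example tfvars sketch -- all values are
# placeholders; keep real token secrets out of version control:
#
# proxmox_api_url          = "https://pve.lab.softbox.net:8006/api2/json"
# proxmox_api_token_id     = "terraform@pve!provisioner"
# proxmox_api_token_secret = "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"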

templates/lxc_demo_1.tf Normal file

@@ -0,0 +1,35 @@
# variable "lxc_passwd" {
#   type      = string
#   sensitive = true
# }
#
# resource "proxmox_lxc" "lxc_demo_1" {
#   target_node  = "pve"
#   ostemplate   = "local:vztmpl/debian-12-standard_12.7-1_amd64.tar.zst"
#   password     = var.lxc_passwd
#   unprivileged = true
#   vmid         = "0"
#
#   ssh_public_keys = <<-EOT
#     ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBzh23ZkSVNbmDKk9esAT9qNkOoYFLhpX2nSLKPJaDVZ petar.cubela@sbx-mac-lab.local
#   EOT
#
#   features {
#     nesting = true
#   }
#
#   hostname = "lxc-demo-1"
#
#   network {
#     name   = "eth1"
#     bridge = "vmbr1"
#     ip     = ""
#     ip6    = "auto"
#   }
#
#   rootfs {
#     storage = "local-zfs"
#     size    = "8G"
#   }
# }

templates/vm_demo_1.tf Normal file

@@ -0,0 +1,65 @@
# resource "proxmox_vm_qemu" "vm-demo-1" {
#   name = "terraform-test-vm"
#
#   # Node name has to be the same name as within the cluster
#   # this might not include the FQDN
#   target_node = "pve"
#
#   # The template name to clone this vm from
#   clone = "temp-debian-13"
#
#   # Activate QEMU agent for this VM
#   agent = 1
#
#   os_type = "cloud-init"
#   vmid    = 0
#
#   cpu {
#     cores   = 2
#     sockets = 1
#     type    = "host"
#   }
#
#   memory = 2048
#   scsihw = "virtio-scsi-single"
#
#   # Setup the disk
#   disks {
#     scsi {
#       scsi0 {
#         # We have to specify the disk from our template, else Terraform will think it's not supposed to be there
#         disk {
#           storage = "local-zfs"
#           # The size of the disk should be at least as big as the disk in the template. If it's smaller, the disk will be recreated
#           size = "8G"
#         }
#       }
#     }
#     ide {
#       # Some images require a cloud-init disk on the IDE controller, others on the SCSI or SATA controller
#       ide1 {
#         cloudinit {
#           storage = "local-zfs"
#         }
#       }
#     }
#   }
#
#   # Setup the network interface and assign a vlan tag: 256
#   network {
#     id      = 0
#     model   = "virtio"
#     bridge  = "vmbr1"
#     macaddr = "bc:24:11:de:ca:28"
#   }
#
#   boot = "order=scsi0"
#
#   # Setup the ip address using cloud-init.
#   # Keep in mind to use the CIDR notation for the ip.
#   ipconfig0 = "ip6=auto"
#   ciuser    = "reliyya"
#   cicustom  = "vendor=local:snippets/qemu-guest-agent.yml" # /var/lib/vz/snippets/qemu-guest-agent.yml
#   ciupgrade = true
#
#   sshkeys = var.public_ssh_key
# }

templates/vm_students.tf Normal file

@@ -0,0 +1,70 @@
resource "proxmox_vm_qemu" "vm_student_00" { # LOOP the resource name
  name = "vm_00" # LOOP the name

  # Node name has to be the same name as within the cluster
  # this might not include the FQDN
  target_node = "pve"

  # The template name to clone this vm from
  clone = var.student_vm_template

  # Activate QEMU agent for this VM
  agent = 1

  os_type  = "cloud-init"
  vmid     = 1000 # LOOP the vmid
  vm_state = "running"

  cpu {
    cores   = 2
    sockets = 1
    type    = "host"
  }

  memory = 2048
  scsihw = "virtio-scsi-pci"

  # Setup the disk
  disks {
    scsi {
      scsi0 {
        # We have to specify the disk from our template, else Terraform will think it's not supposed to be there
        disk {
          storage = "local-lvm"
          # The size of the disk should be at least as big as the disk in the template. If it's smaller, the disk will be recreated
          size = "16G"
        }
      }
    }
    ide {
      # Some images require a cloud-init disk on the IDE controller, others on the SCSI or SATA controller
      ide1 {
        cloudinit {
          storage = "local-lvm"
        }
      }
    }
  }

  # Setup the network interface and assign a vlan tag: 256
  network {
    id      = 0
    model   = "virtio"
    bridge  = "vmbr0"
    macaddr = "bc:24:11:de:c0:28" # LOOP the mac address. each needs a unique one
  }

  boot = "order=scsi0"
  tags = "student,deb"

  # Setup the ip address using cloud-init.
  # Keep in mind to use the CIDR notation for the ip.
  ipconfig0 = "ip=10.11.12.199/24,gw=10.11.12.254" # LOOP the ipconfig. each needs a unique one
  ciuser    = "sbxadmin"
  cicustom  = "vendor=local:snippets/qemu-guest-agent.yml" # LOOP the cloud-init file if differences are needed
  ciupgrade = true
  sshkeys   = var.petar_ssh_public_key
}
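# The "# LOOP" markers above could be resolved with a for_each over a
# per-student map. Commented-out sketch: the names, vmids, MACs and IPs
# below are assumptions following the patterns in this file.
#
# variable "students" {
#   type = map(object({ vmid = number, ip = string, macaddr = string }))
#   default = {
#     "vm_00" = { vmid = 1000, ip = "10.11.12.200/24", macaddr = "bc:24:11:de:c0:28" }
#     "vm_01" = { vmid = 1001, ip = "10.11.12.201/24", macaddr = "bc:24:11:de:c0:29" }
#   }
# }
#
# resource "proxmox_vm_qemu" "vm_student" {
#   for_each  = var.students
#   name      = each.key
#   vmid      = each.value.vmid
#   ipconfig0 = "ip=${each.value.ip},gw=10.11.12.254"
#   # ... remaining arguments as in the single-VM resource above,
#   # with macaddr = each.value.macaddr in the network block ...
# }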

variables.tf Normal file

@@ -0,0 +1,28 @@
## FreeIPA
variable "domain" {
  type = string
}

variable "realm" {
  type = string
}

## PVE
variable "student_vm_template" {
  type = string
}

## General
variable "petar_ssh_public_key" {
  type      = string
  sensitive = true
}

variable "petar_ssh_private_key" {
  type      = string
  sensitive = true
}

vm_freeipa.tf Normal file

@@ -0,0 +1,69 @@
resource "proxmox_vm_qemu" "vm-freeipa" {
  name = "ipa"

  # Node name has to be the same name as within the cluster
  # this might not include the FQDN
  target_node = "pve"

  # The template name to clone this vm from
  clone = "temp-fedora-38"

  # Activate QEMU agent for this VM
  agent = 1

  os_type  = "cloud-init"
  vmid     = 111
  vm_state = "running"

  cpu {
    cores   = 2
    sockets = 1
    type    = "host"
  }

  memory = 2048
  scsihw = "virtio-scsi-pci"

  # Setup the disk
  disks {
    scsi {
      scsi0 {
        # We have to specify the disk from our template, else Terraform will think it's not supposed to be there
        disk {
          storage = "local-lvm"
          # The size of the disk should be at least as big as the disk in the template. If it's smaller, the disk will be recreated
          size = "16G"
        }
      }
    }
    ide {
      # Some images require a cloud-init disk on the IDE controller, others on the SCSI or SATA controller
      ide1 {
        cloudinit {
          storage = "local-lvm"
        }
      }
    }
  }

  # Setup the network interface and assign a vlan tag: 256
  network {
    id      = 0
    model   = "virtio"
    bridge  = "vmbr0"
    macaddr = "bc:24:11:de:cb:30"
  }

  nameserver = "9.9.9.9,10.11.12.254"
  onboot     = true
  boot       = "order=scsi0"
  tags       = "ldap,samba,kerberos,dns,pki"

  # Setup the ip address using cloud-init.
  # Keep in mind to use the CIDR notation for the ip.
  ipconfig0 = "ip=10.11.12.65/24,gw=10.11.12.254"
  ciuser    = "sbxadmin"
  cicustom  = "vendor=local:snippets/qemu-guest-agent.yml,user=local:snippets/cloud_init_fedora_vm_ipa.yml" # /var/lib/vz/snippets/qemu-guest-agent.yml
  ciupgrade = true
  sshkeys   = var.petar_ssh_public_key
}
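# Once this VM is up, the FreeIPA services in the tags above would be set up
# on it roughly as follows. Sketch only: domain/realm come from the README
# facts, the forwarder mirrors the nameserver entry above.
#
#   sudo dnf install -y freeipa-server freeipa-server-dns
#   sudo ipa-server-install \
#     --domain=lab.softbox.net \
#     --realm=LAB.SOFTBOX.NET \
#     --setup-dns \
#     --forwarder=9.9.9.9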