notes/projects/neosphere/qumulus/overview-qumulus_and_comp-nodes.md
2025-03-19 17:29:47 +01:00

## Infrastructure
### 5 Node Storage Cluster ([Qumulo](https://qumulo.com/))
- [IT-Glue](https://softbox.eu.itglue.com/2571086747975811/passwords/4086682024952035#partial=&sortBy=name:asc&filters=%5B%5D)
- Each node has 240 TB of HDD storage, for a total of 1.2 PB across the cluster
- Each node has a 256 GB SSD as the boot drive
- Each node has 8x 1 TB NVMe SSDs (purpose unclear; most likely a caching tier to accelerate reads and writes, which are otherwise limited by the spinning HDDs)
- Each node has a dual-port 25 Gbit/s NIC, bonded in "fault-tolerance" (active-backup) mode so one port takes over if the other fails, with the IP addresses:
- Primary: 192.168.60.11 - .15
- Secondary (floating): 192.168.60.21 - .25
- The management dashboard is reachable via any of the node IP addresses, e.g. <http://192.168.60.13/>
- File shares are provided via NFS (Network File System) and SMB
- NFS shares to the computing cluster (folders `/qumulo-vol0000(1/2)` on the Qumulo)
- NFS and SMB shares to client devices
- The share configuration can be modified in the dashboard and is maintained partly by the customer and partly by the Qumulo team
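The "fault-tolerance" bond mentioned above corresponds to Linux active-backup bonding. On the Qumulo appliances this is configured by the vendor, but for reference, an equivalent bond on a generic Ubuntu host could be sketched in netplan roughly as follows (the interface names `ens1f0`/`ens1f1` and the netplan approach are assumptions; the address is node 1's primary IP from above):

```yaml
network:
  version: 2
  ethernets:
    ens1f0: {}   # first 25 Gbit/s port (name is an assumption)
    ens1f1: {}   # second 25 Gbit/s port (name is an assumption)
  bonds:
    bond0:
      interfaces: [ens1f0, ens1f1]
      addresses: [192.168.60.11/24]
      parameters:
        mode: active-backup       # failover only, no aggregation
        primary: ens1f0
        mii-monitor-interval: 100 # link check every 100 ms
```

In active-backup mode only one port carries traffic at a time, which matches the "failover backup if one fails" behavior described in these notes.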
### 3 Node Computing Cluster
- [ssh pass for all ubt's](https://softbox.eu.itglue.com/2571086747975811/passwords/2927816502214824#partial=&sortBy=name:asc&filters=%5B%5D)
- Each node has a dual-port 25 Gbit/s NIC, bonded in "fault-tolerance" (active-backup) mode so one port takes over if the other fails, with the IP addresses:
- Primary: 192.168.60.200 - .202
- Secondary (floating): 192.168.60.210 - .212
- Clustered via software managed by Bjoern Schwalb
- Each server ubt-01/02/03 mounts an NFS export of Qumulo node 1/2/3 (respectively) at `/mnt/qumulo-vol0000(1/2)`
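The mounts on the compute nodes could be made persistent with `/etc/fstab` entries along these lines (a sketch for ubt-01 against Qumulo node 1; the exact export paths and mount options are assumptions, only the server IP and mount-point pattern come from the notes above):

```
# /etc/fstab on ubt-01 (export paths and options are assumptions)
192.168.60.11:/qumulo-vol00001  /mnt/qumulo-vol00001  nfs  defaults,_netdev  0 0
192.168.60.11:/qumulo-vol00002  /mnt/qumulo-vol00002  nfs  defaults,_netdev  0 0
```

The `_netdev` option delays mounting until the network is up, which matters here because the export lives on the bonded 25 Gbit/s interface.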
### 2x Aruba Switches (SG48LP00G8)
- [neo-sw-ug-02](https://softbox.eu.itglue.com/2571086747975811/passwords/4083980573294817)
- [neo-sw-ug-03](https://softbox.eu.itglue.com/2571086747975811/passwords/4083981040287975)
- Each switch has 16x 25 Gbit/s and 2x 100 Gbit/s interfaces
- The switches are coupled via a 100 Gbit/s fiber cable
- Each node of the storage and computing clusters has one port of its dual 25 Gbit/s NIC connected to each switch, so the bond also survives a switch failure