Hardware Requirements
Industry-standard Intel/AMD servers that support virtualization
Ethernet Port for IPMI/iDRAC/iLO OOB (required for PAT)
Ethernet Port for Management Bus
Ethernet Port for Private Bus
Operating System Drive (preferably 2 drives in RAID 1)
VM/Container Storage Drive (preferably multiple drives in RAID10)
Access to an existing Ceph cluster with RBD configured
Software Requirements
Ubuntu 24.04 LTS (GNU/Linux 6.x kernel)
snapd (to install LXD as a snap)
ceph-common (on each LXD host)
Existing Ceph cluster with MONs and OSDs configured
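Before starting, it is worth confirming from the Ceph host that the existing cluster is healthy and that its pools are visible; a minimal check, assuming admin access on the Ceph host:
sudo ceph -s            # overall health, MON quorum and OSD status
sudo ceph osd pool ls   # list the pools available for RBD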
On each LXD host, install the Ceph client:
sudo apt install ceph-common
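A quick way to confirm the client tools installed correctly is to check the version:
ceph --version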
On the Ceph host, create the client.lxd key:
sudo ceph auth get-or-create client.lxd \
    mon 'profile rbd' \
    osd 'profile rbd' \
    mgr 'profile rbd' \
    -o /etc/ceph/ceph.client.lxd.keyring
Restrict the client's capabilities to the lxd pool (this replaces the capabilities set above):
sudo ceph auth caps client.lxd \
    mon 'allow r' \
    osd 'allow rw pool=lxd'
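The capabilities above (and the preseed below) reference an RBD pool named lxd. If that pool does not already exist on the Ceph cluster, it can be created and initialised for RBD first; a minimal sketch, assuming default replication and placement-group settings:
sudo ceph osd pool create lxd   # create the pool with cluster defaults
sudo rbd pool init lxd          # initialise the pool for RBD use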
From the Ceph host, copy the config and keyring to each LXD host:
scp /etc/ceph/ceph.conf administrator@[<ipv6_lxd-host1>]:/home/administrator
scp /etc/ceph/ceph.client.lxd.keyring administrator@[<ipv6_lxd-host1>]:/home/administrator
On the LXD host, move them into place:
sudo cp /home/administrator/ceph* /etc/ceph/
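Since the keyring contains the client secret, it is sensible to tighten its permissions once it is in place; a reasonable default, assuming only root (and the LXD snap, which runs as root) needs to read it:
sudo chown root:root /etc/ceph/ceph.conf /etc/ceph/ceph.client.lxd.keyring
sudo chmod 644 /etc/ceph/ceph.conf
sudo chmod 600 /etc/ceph/ceph.client.lxd.keyring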
On the LXD host, verify that the client can reach the Ceph cluster:
sudo ceph --id lxd -s
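If the status command succeeds, RBD access with the same identity can also be spot-checked; a sketch assuming the lxd pool already exists:
sudo rbd --id lxd ls lxd   # lists RBD images in the lxd pool (empty on a fresh pool)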
Install LXD on the first host (lxd001):
sudo snap install lxd
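To run the lxd and lxc commands below without sudo, the administrative user can be added to the lxd group created by the snap; a sketch assuming the administrator account used earlier:
sudo usermod -aG lxd administrator   # grant access to the LXD socket
newgrp lxd                           # or log out and back in for the group change to apply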
Create the following preseed.yaml for the first cluster member (lxd001):
config:
  core.https_address: '[<ipv6_lxd-host1>]:8443'
networks:
- name: lxdbr0
  type: bridge
  config:
    ipv4.address: auto
    ipv6.address: auto
    ipv4.nat: "true"
    ipv6.nat: "false"
storage_pools:
- name: local
  driver: zfs
- name: ceph
  driver: ceph
  config:
    ceph.user.name: lxd
    ceph.cluster_name: ceph
    ceph.osd.pool_name: lxd
profiles:
- name: default
  description: "Default CloudCIX profile."
  config:
    security.privileged: "false"
    migration.stateful: "true"
  devices:
    root:
      type: disk
      path: /
      pool: local
    eth0:
      type: nic
      network: lxdbr0
      name: eth0
cluster:
  server_name: lxd001
  enabled: true
  member_config: []
  cluster_address: ""
  cluster_certificate: ""
  server_address: <ipv6_lxd-host1>
Then run:
lxd init --preseed < /path/to/preseed.yaml
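Once the preseed has been applied, the resulting configuration can be checked with the LXD client, for example:
lxc storage list           # expect the local (zfs) and ceph pools
lxc network list           # expect lxdbr0
lxc profile show default   # expect the root and eth0 devices defined above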
On each additional node (e.g., lxd002), install LXD:
sudo snap install lxd
On lxd001, generate a join token for the new node:
lxc cluster add lxd002
Copy the full join token that appears in the output.
On the joining node (e.g., lxd002), create the following preseed.yaml:
config:
  core.https_address: '[<ipv6_lxd-host2>]:8443'
networks: []
storage_pools: []
profiles: []
cluster:
  enabled: true
  server_name: <new_node_name>
  server_address: '<ipv6_lxd-host2>'
  cluster_address: '<ipv6_lxd-host1>'
  cluster_token: <PASTE_YOUR_TOKEN_HERE>
Then run:
lxd init --preseed < /path/to/preseed.yaml
Repeat for each additional node (e.g., lxd003).
On any cluster node, list the cluster members:
lxc cluster list
Example output:
+--------+------------------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| NAME | URL | ROLES | ARCHITECTURE | FAILURE DOMAIN | DESCRIPTION | STATE | MESSAGE |
+--------+------------------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
| lxd001 | https://[<ipv6_lxd-host1>]:8443 | database-leader | x86_64 | default | | ONLINE | Fully operational |
| lxd002 | https://[<ipv6_lxd-host2>]:8443 | database | x86_64 | default | | ONLINE | Fully operational |
| lxd003 | https://[<ipv6_lxd-host3>]:8443 | database | x86_64 | default | | ONLINE | Fully operational |
+--------+------------------------------------+-----------------+--------------+----------------+-------------+--------+-------------------+
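As a final end-to-end check, a test instance can be launched onto the Ceph-backed pool from any member; a sketch assuming the ubuntu: image remote is reachable (the instance name and target member are illustrative):
lxc launch ubuntu:24.04 test-c1 --storage ceph --target lxd002
lxc list                  # confirm the instance is RUNNING and which member hosts it
lxc delete -f test-c1     # remove the test instance afterwards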