Running Ceph OSDs on Proxmox from an external cluster

Introduction

I’ve been running a Ceph cluster for more than a year now. It runs on a mix of x86 and ARM nodes and has been useful for HA storage. I recently got a Proxmox node that I use for virtualization, and since Proxmox ships Ceph Pacific packages, it would be nice to make use of them.

This is different from the normal way to use Ceph on Proxmox and may (probably will) screw up your existing setup if you have one.

Installing the repo

Create a new file /etc/apt/sources.list.d/ceph.list with the following contents:

deb http://download.proxmox.com/debian/ceph-pacific bullseye main

Run apt update and apt upgrade afterwards:
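
$ apt update
$ apt upgrade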

Installation and config

Next, run apt install ceph. After that, you need to go to your manager node and copy the config and credentials over to the new node.
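
On the Proxmox node:

$ apt install ceph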

From the mgr node:

root@mgrnode# scp /var/lib/ceph/bootstrap-osd/ceph.keyring <proxmox IP>:/var/lib/ceph/bootstrap-osd/ceph.keyring
root@mgrnode# scp /etc/ceph/ceph.conf <proxmox IP>:/etc/ceph/ceph.conf
root@mgrnode# scp /etc/ceph/ceph.client.admin.keyring <proxmox IP>:/etc/ceph/ceph.client.admin.keyring # optional; convenient for a lab setup in my case

If scp complains about an invalid destination, you may need to create the directory on the Proxmox node first:
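
$ mkdir -p /var/lib/ceph/bootstrap-osd # the path the bootstrap keyring gets copied into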

Next, prepare the disk and activate the OSD. I will be using /dev/sdb in my case.

$ wipefs -a /dev/sdb # wipe any existing filesystem/partition signatures
$ ceph-volume lvm prepare --bluestore --data /dev/sdb # set the disk up as a BlueStore OSD
$ ceph-volume lvm activate --all # activate all prepared OSDs on this node

Confirm with ceph -s that the OSD has been added. In my case, I can do this from the Proxmox node since I copied over the admin keyring:
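
$ ceph -s # cluster status; the OSD count should go up by one
$ ceph osd tree # the new OSD should appear under the Proxmox node's hostname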

I’ve been running this for a month now and it’s pretty stable. It will probably stay that way as long as I don’t touch the Ceph feature in the web GUI.

I’m pretty sure this is an unsupported config, so you won’t get any support if there are issues.
