ceph - cephadm - 2024-08-05

Timestamp (UTC) | Message
2024-08-05T09:31:14.295Z
<Yonatan Zaken> Hi folks,
When using the following command:
`ceph orch daemon add osd storage-0:data_devices=/dev/vdb,/dev/vdc,encrypted=true`

I noticed the encryption lvm layer was created properly.
But when running:
 `ceph orch daemon add osd storage-0:data_devices=/dev/vdb,/dev/vdc,encrypted=false`

The encryption lvm layer was created as well. Should I just omit the `encrypted` flag completely if no encryption lvm layer is required?
Is it a bug?
2024-08-05T09:41:20.180Z
<Yonatan Zaken> Judging by the code in `src/pybind/mgr/orchestrator/module.py`, in the method:
```
def _daemon_add_osd(self,
                    svc_arg: Optional[str] = None,
                    method: Optional[OSDMethod] = None) -> HandleCommandResult:
```
I would expect `encrypted=false` to work, but I didn't debug it thoroughly.
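One way to sidestep the inline key=value parsing entirely is to use an explicit OSD service spec and simply leave `encrypted` out. A minimal sketch, reusing the host and device paths from above; the spec filename and `service_id` are placeholders:
```
# Hypothetical spec file; 'encrypted' is omitted, so no dm-crypt layer should be created.
cat > osd-unencrypted.yaml <<'EOF'
service_type: osd
service_id: unencrypted_osds      # placeholder name
placement:
  hosts:
    - storage-0
spec:
  data_devices:
    paths:
      - /dev/vdb
      - /dev/vdc
EOF
ceph orch apply osd -i osd-unencrypted.yaml --dry-run   # preview what would be deployed
ceph orch apply osd -i osd-unencrypted.yaml
```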
2024-08-05T10:03:44.531Z
<verdurin> I need to find out why an OSD host whose OS was replaced and whose OSD disks were upgraded was only partially provisioned by Cephadm via the OSD spec file.

Only about a third of the disks are in service.

I suspect that something went wrong during the provisioning: even though I have zapped the disks that are not in service, the NVMe LVs that were associated with them are still there, so Cephadm doesn't think there is enough metadata capacity.

Is there a way of confirming this, perhaps via the log of the MGR service on the current active MGR?
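A sketch of checks that could confirm whether leftover DB/WAL LVs are the blocker; these are standard cephadm troubleshooting steps, and the hostname and spec filename are placeholders:
```
ceph orch device ls <hostname> --wide        # does cephadm still see the zapped disks as available?
cephadm ceph-volume lvm list                 # run on the host: any stale LVs still tagged for old OSDs?
lvs -o lv_name,vg_name,lv_size               # raw LVM view of what is left on the NVMe
ceph orch apply -i osd-spec.yaml --dry-run   # what would the current spec provision now?
```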
2024-08-05T10:04:49.358Z
<verdurin> Okay, have found a message:
`skipping apply of <hostname> on DriveGroupSpec`

Now to find the logs that show the process. Harder when each log entry doesn't include the remote hostname...
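A sketch of the usual way to get more per-host detail out of the cephadm mgr module (cluster-log debug level; remember to set it back afterwards):
```
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
ceph -W cephadm --watch-debug    # stream cephadm decisions, including per-host apply attempts
ceph log last cephadm            # or dump the recent cephadm cluster log entries
ceph config set mgr mgr/cephadm/log_to_cluster_level info   # restore the default level
```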
2024-08-05T15:08:25.371Z
<Frank> Hi! With `mgr/cephadm/secure_monitoring_stack true`, monitoring data should be served over TLS. node-exporter is doing TLS fine, but I don't see ceph-exporter and haproxy metrics doing TLS. They are happily serving plain HTTP (unencrypted) data, whilst Prometheus really wants HTTPS.

Am I missing a configuration somewhere perhaps?
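A sketch of how to check what each exporter is actually serving; the ports shown are the usual defaults for node-exporter and ceph-exporter (substitute your hosts, and the haproxy monitor port from your ingress spec):
```
ceph config get mgr mgr/cephadm/secure_monitoring_stack   # confirm the flag is really set
curl -sk https://<host>:9100/metrics | head -n 3          # node-exporter (TLS works per the report)
curl -sk https://<host>:9926/metrics | head -n 3          # ceph-exporter default port
curl -s  http://<host>:9926/metrics | head -n 3           # does it still answer plain HTTP?
```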
