ceph - cephadm - 2024-10-30

2024-10-30T14:04:36.652Z
<Ivveh> hi, I'm currently troubleshooting an issue where **mgr** sends a restart command to nfs-ganesha when changing an ACL or some other setting in an export. I'm expecting it to send a `docker kill -s SIGHUP` and not a full restart. ceph 18.2.4; this just recently started to happen and I'm not quite sure why. Any pointers?
```cephadm [INF] Restart service nfs.test```
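A minimal sketch of the lighter-weight reload being expected here, assuming the ganesha container can be signalled through docker on its host; the wrapper function and the container name are illustrative, only the `docker kill -s SIGHUP` command comes from the message above.
```
import subprocess

def sighup_ganesha(container: str) -> None:
    """Ask a running nfs-ganesha container to re-read its export config.

    docker kill -s SIGHUP delivers the signal without stopping the container,
    so ganesha reloads in place instead of being restarted.
    """
    subprocess.run(["docker", "kill", "-s", "SIGHUP", container], check=True)

# Placeholder name; the real cephadm container name includes the cluster fsid.
# sighup_ganesha("ceph-<fsid>-nfs-test-0-host1")
```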
2024-10-30T14:07:32.759Z
<Ivveh> also trying to find the cephadm code that actually does this; I don't think it's intended
2024-10-30T14:25:32.475Z
<Ivveh> `_oremote nfs -> cephadm.service_action(*(), **{'action': 'restart', 'service_name': 'nfs.xxx'})`
2024-10-30T14:31:24.159Z
<Ivveh> so adding or removing an export correctly just creates/deletes the rados object in the pool and the cephx user
2024-10-30T14:35:20.257Z
<Ivveh> ```debug 2024-10-30T14:32:46.367+0000 7086ff3ee640  0 [nfs DEBUG nfs.export] Successfully created CEPH export-4 from dict for cluster test
debug 2024-10-30T14:32:46.435+0000 7086ff3ee640  0 [nfs DEBUG root] mon_command: 'auth get-or-create' -> 0 in 0.067s
debug 2024-10-30T14:32:46.435+0000 7086ff3ee640  0 [nfs INFO nfs.export] Export user created is client.nfs.test.4
debug 2024-10-30T14:32:46.435+0000 7086ff3ee640  0 [nfs DEBUG nfs.export] Successfully created user nfs.test.4 for cephfs path /volumes/testing/test/f90bd694-290b-4c60-8761-6acc4466f5b1
debug 2024-10-30T14:32:46.439+0000 7086ff3ee640  0 [nfs DEBUG nfs.export] write configuration into rados object .nfs/test/export-4
debug 2024-10-30T14:32:46.439+0000 7086ff3ee640  0 [nfs DEBUG nfs.export] Added export-4 url to conf-nfs.test
debug 2024-10-30T14:32:46.595+0000 7086e9cc3640  0 [nfs DEBUG root] _oremote nfs -> cephadm.describe_service(*(), **{'service_type': 'nfs'})```
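For context, a minimal sketch of what the two rados writes in that log amount to, assuming the standard python-rados bindings, the `.nfs` pool with one namespace per cluster (as the `.nfs/test/export-4` path suggests), and ganesha's RADOS_URLS `%url` convention for the common config object; the literal pool, namespace and object names are copied from the log, and the export text is elided.
```
import rados

# Ganesha export block for export-4 (contents elided in this sketch).
export_text = b'EXPORT { Export_ID = 4; ... }'

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('.nfs')        # pool holding nfs-ganesha config
    ioctx.set_namespace('test')               # one namespace per nfs cluster
    # "write configuration into rados object .nfs/test/export-4"
    ioctx.write_full('export-4', export_text)
    # "Added export-4 url to conf-nfs.test": the common object carries one
    # %url line per export, which ganesha pulls in via RADOS_URLS. The real
    # module rewrites the whole object; append just keeps the sketch short.
    ioctx.append('conf-nfs.test', b'%url "rados://.nfs/test/export-4"\n')
    ioctx.close()
finally:
    cluster.shutdown()
```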
2024-10-30T14:37:51.803Z
<Ivveh> but editing an export, like adding an IP address, restarts the process entirely
```debug 2024-10-30T14:35:44.090+0000 7086f96f1640  0 [nfs DEBUG root] mon_command: 'auth get-or-create' -> 0 in 0.972s
debug 2024-10-30T14:35:44.090+0000 7086f96f1640  0 [nfs INFO nfs.export] Export user created is client.nfs.test.4
debug 2024-10-30T14:35:44.090+0000 7086f96f1640  0 [nfs DEBUG nfs.export] Successfully created user nfs.test.4 for cephfs path /volumes/testing/test/f90bd694-290b-4c60-8761-6acc4466f5b1
debug 2024-10-30T14:35:44.310+0000 7086f96f1640  0 [nfs DEBUG nfs.export] write configuration into rados object .nfs/test/export-4
debug 2024-10-30T14:35:44.310+0000 7086f96f1640  0 [nfs DEBUG nfs.export] Update export export-4 in conf-nfs.test
debug 2024-10-30T14:35:44.314+0000 7086f96f1640  0 [nfs DEBUG root] _oremote nfs -> cephadm.service_action(*(), **{'action': 'restart', 'service_name': 'nfs.test'})
debug 2024-10-30T14:35:44.314+0000 7086f96f1640  0 [cephadm INFO root] Restart service nfs.test
debug 2024-10-30T14:35:44.314+0000 7086f96f1640  0 log_channel(cephadm) log [INF] : Restart service nfs.test
debug 2024-10-30T14:35:44.774+0000 7086e94c2640  0 [nfs DEBUG root] _oremote nfs -> cephadm.describe_service(*(), **{'service_type': 'nfs'})```
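The `_oremote` line is the key one: a minimal sketch of the cross-module call it appears to correspond to, using the mgr module `remote()` dispatch mechanism; the target module, method name and kwargs are copied from the log, while the wrapper function name is illustrative.
```
# Sketch: the nfs module asking the cephadm module to restart every daemon of
# the nfs.<cluster_id> service. `mgr` is the module's MgrModule instance,
# which provides remote(); kwargs are as seen in the _oremote log line.
def restart_nfs_service(mgr, cluster_id: str) -> None:
    mgr.remote('cephadm', 'service_action',
               action='restart',
               service_name=f'nfs.{cluster_id}')

# For cluster "test" this is exactly the call logged above:
# cephadm.service_action(action='restart', service_name='nfs.test')
```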
2024-10-30T15:07:00.539Z
<Ivveh> so if I understand this correctly, it should be using `describe_service` and not `service_action` during updates? A restart of nfs-ganesha is not necessary.
<https://github.com/ceph/ceph/blob/v18.2.4/src/pybind/mgr/nfs/export.py#L392>
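A hypothetical sketch of the behaviour being argued for here, not ceph's actual code: on an export update, rewrite the rados objects exactly as the create path already does, then only refresh the orchestrator's view with the read-only `describe_service` call instead of bouncing the service; the kwargs are copied from the log lines above.
```
# Hypothetical update flow, for illustration only.
def update_export_without_restart(mgr, cluster_id: str, export_id: int) -> None:
    # 1. Rewrite .nfs/<cluster_id>/export-<export_id> and conf-nfs.<cluster_id>
    #    (same rados writes as the create path sketched earlier).
    # 2. Refresh cephadm's inventory only; no service_action(action='restart', ...).
    mgr.remote('cephadm', 'describe_service', service_type='nfs')
    # A running ganesha would still need a reload nudge, e.g. the SIGHUP
    # mentioned at the start of the thread, rather than a full restart.
```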
2024-10-30T22:40:32.836Z
<Michael W> If you already have cephadm deployed and an HAProxy container for the unencrypted RGWs, can you add another HAProxy container for encrypted RGWs?
