ceph - ceph-devel - 2024-12-11

Timestamp (UTC) | Message
2024-12-11T01:01:22.522Z
<stachecki.tyler> Interesting, do you have any idea/measure of how the fragmentation may be affecting performance? Some time ago I was looking at a trace with a lot of `std::vector<T> push_back` resulting in allocs that could be seemingly elided with `boost::small_vector<T, 3>` and it had an impact on the profile. Problem is, I've never seen any "low hanging performance fruit" - everything seems mostly picked and the impact of individual changes is hard to measure.
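A minimal sketch of the optimization described above, using `boost::container::small_vector` (the element type `int` and the inline capacity of 3 are illustrative, not taken from any specific Ceph code path): the first few `push_back` calls use the vector's inline storage, so a heap allocation only happens once the inline capacity is exceeded.

```cpp
// Sketch: small_vector keeps the first N elements in inline storage,
// so push_back does not heap-allocate until that capacity is exceeded.
#include <boost/container/small_vector.hpp>
#include <cstdio>

int main() {
    boost::container::small_vector<int, 3> v;
    for (int i = 0; i < 3; ++i)
        v.push_back(i);   // fits in inline storage: no allocation
    v.push_back(3);       // exceeds inline capacity: first heap allocation
    std::printf("size=%zu capacity=%zu\n", v.size(), v.capacity());
    return 0;
}
```

The trade-off is a larger by-value footprint: the inline buffer lives inside the vector object itself, so this pays off mainly for containers that are usually small.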
2024-12-11T12:11:56.474Z
<david.casier> The following bug <https://tracker.ceph.com/issues/69175> is present on 17.2.8 and absent on 18.2.4 (but may appear in the next release).
This is fixed in the main branch, except for the IPv6 implementation.
I proposed a fix <https://github.com/ceph/ceph/pull/61029> and am waiting for feedback before proposing the associated backports.
2024-12-11T12:21:10.887Z
<Nitzan Mordechai> please see <https://github.com/ceph/ceph/pull/60881>, it has IPv6 support
2024-12-11T15:33:15.570Z
<Yonatan Zaken> Hi Devs :)

Using Ceph Reef 18.2.4

I am trying to figure out what might cause the SELinux context of the /etc/ceph/ceph.conf file to change.
At some point it is changed from `system_u:object_r:etc_t:s0` to `unconfined_u:object_r:user_tmp_t:s0`
I am wondering if cephadm might be regenerating this file by creating one in the `/tmp` directory and moving it to the `/etc/ceph` directory?

Could this be some periodic event?

Thanks
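A plausible mechanism for the relabeling described above, assuming standard SELinux behavior: a file created under `/tmp` inherits that directory's default type (`user_tmp_t`), and a subsequent rename(2) into `/etc/ceph` preserves the file's `security.selinux` extended attribute rather than relabeling it to the `etc_t` default. A minimal sketch for reading the label directly (a hypothetical check, not from the thread):

```cpp
// Read the SELinux label of ceph.conf; the label is stored in the
// "security.selinux" extended attribute of the file.
#include <sys/types.h>
#include <sys/xattr.h>
#include <cstdio>

int main() {
    char label[256] = {0};
    ssize_t n = getxattr("/etc/ceph/ceph.conf", "security.selinux",
                         label, sizeof(label) - 1);
    if (n < 0) {
        std::perror("getxattr");  // e.g. SELinux disabled or no access
        return 1;
    }
    std::printf("context: %s\n", label);  // etc_t is the policy default here
    return 0;
}
```

If the label has drifted, `restorecon -v /etc/ceph/ceph.conf` resets it to the policy default; identifying what recreates the file (e.g. cephadm writing via a temp file) is the actual question above.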
2024-12-11T16:26:02.643Z
<Casey Bodley> weekly rgw meeting starting in ~5min at <https://meet.google.com/mmj-uzzv-qce> (etherpad: <https://pad.ceph.com/p/rgw-weekly>)
2024-12-11T17:12:05.494Z
<Joseph Mundackal> is this just me?

etherpad rendering fun?: https://files.slack.com/files-pri/T1HG3J90S-F084GQ31JNS/download/image.png
2024-12-11T17:45:14.356Z
<Casey Bodley> yeah, i noticed that yesterday. we've seen it happen to other etherpads also
2024-12-11T18:17:29.541Z
<Joseph Mundackal> Glad it's not some weird browser thing on my side
2024-12-11T18:47:53.181Z
<Kevin Fox> anyone know of any reason ceph-csi-rbd can't be upgraded from 3.7.2 directly to 3.12.3?