ceph - ceph-devel - 2024-09-20

2024-09-20T07:20:57.236Z
<cz tan> hi all, for an OSD deployed with cephadm, where does the OSD read its configuration from? The value set with `ceph config set global xxx`, or the default value in `global.yaml.in`?
2024-09-20T07:23:23.458Z
<cz tan> I've found that options that default to false in `global.yaml.in` still read as false even after setting them to true with `ceph config set`, unless they are also set to true in ceph.conf
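For context: Ceph applies configuration sources in increasing order of precedence: compiled-in defaults (generated from `global.yaml.in`), the monitor config database (written by `ceph config set`), the local `ceph.conf`, and finally runtime overrides. A value in the local `ceph.conf` therefore masks one set via `ceph config set`, which matches the behavior described above. A minimal sketch for checking where a running OSD's value comes from (`osd.0` and `osd_max_backfills` are placeholders):
```
# value recorded in the monitor config database (what `ceph config set` wrote)
ceph config get osd osd_max_backfills

# effective config of the running daemon; the SOURCE column shows whether
# each value came from the default, mon, file, or an override
ceph config show osd.0

# query the daemon directly over its admin socket
# (for cephadm deployments, run inside `cephadm shell` on the OSD host)
ceph daemon osd.0 config get osd_max_backfills
```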
2024-09-20T08:13:47.794Z
<Ilya Dryomov> There is a `min_last_complete_ondisk` field that is maintained between the primary and replica OSDs
2024-09-20T08:14:09.595Z
<Ilya Dryomov> @Samuel Just can speak more to how it works
2024-09-20T08:15:46.827Z
<Ilya Dryomov> With the introduction of `min_last_complete_ondisk` in the Octopus release, replica reads became safe for general use
2024-09-20T08:19:06.152Z
<Ilya Dryomov> There is an outstanding PR that aims to elevate `CEPH_OSD_FLAG_BALANCE_READS` or `CEPH_OSD_FLAG_LOCALIZE_READS` flags to a config option that would be applied to all ops at the librados level: <https://github.com/ceph/ceph/pull/56180>
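For RBD specifically, a per-client read policy already exists via the `rbd_read_from_replica_policy` option; a minimal example (scoping it to all clients here is just for illustration):
```
# have RBD clients spread reads across the acting set instead of
# always reading from the primary OSD
ceph config set client rbd_read_from_replica_policy balance

# accepted values: default (read from primary), balance, localize
```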
2024-09-20T08:22:58.220Z
<Ilya Dryomov> I believe the reason reading from primary is still the default is partly history and partly the fact that in a well-balanced cluster each OSD would serve as primary for a roughly equal number of PGs, so introducing additional randomization (`balance`) doesn't bring much to the table
2024-09-20T08:25:24.736Z
<Ilya Dryomov> ... while localization requires the CRUSH location to be configured on the client side
This doesn't happen automatically, so _just_ defaulting to `localize` wouldn't do much either
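For completeness, a client's CRUSH location can be supplied in its local `ceph.conf`; a sketch with placeholder bucket names, paired with the `localize` policy mentioned above:
```
# append to the client's ceph.conf (host/rack names are placeholders)
cat >> /etc/ceph/ceph.conf <<'EOF'
[client]
crush_location = host=client-host rack=rack1
rbd_read_from_replica_policy = localize
EOF
```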
2024-09-20T11:51:18.155Z
<Ilya Dryomov> Hi Adam
Are you aware of anything else happening around our Boost repos?
2024-09-20T11:52:37.798Z
<Ilya Dryomov> It looks like the repo for the latest libboost build (<https://shaman.ceph.com/repos/libboost/master/55f34507d322314fb0294629b7c0bb406de07aec/default/319218/>) became inaccessible
2024-09-20T11:54:16.399Z
<Ilya Dryomov> Clicking on `Repo URL` there leads to <https://chacra.ceph.com/r/libboost/master/55f34507d322314fb0294629b7c0bb406de07aec/ubuntu/jammy/flavors/default/> which throws a 403 error
2024-09-20T11:55:53.959Z
<Ilya Dryomov> This broke Windows PR check:
```
Failed to fetch <https://chacra.ceph.com/r/libboost/master/55f34507d322314fb0294629b7c0bb406de07aec/ubuntu/jammy/flavors/default/dists/jammy/main/binary-amd64/Packages>  403  Forbidden [IP: 8.43.84.139 443]
```
2024-09-20T11:56:03.390Z
<Ilya Dryomov> E.g. <https://jenkins.ceph.com/job/ceph-windows-pull-requests/47125/>
2024-09-20T14:38:25.291Z
<Rost Khudov> can someone be assigned to review this PR?
