2024-07-30T13:52:08.814Z | <Utkarsh Bhatt> Hey everyone, I have a test Ceph lab with uneven OSDs (not all the same size). I am getting a strange error when trying to create a new pool with `ceph --id admin osd pool create --pg-num-min=32 gnocchi 64`:
```Error ERANGE: pg_num 64 size 3 for this pool would result in 272 cumulative PGs per OSD (1635 total PG replicas on 6 'in' root OSDs by crush rule) which exceeds the mon_max_pg_per_osd value of 250``` |
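The monitor's check here is just arithmetic: the new pool would add 64 PGs × size 3 = 192 PG replicas, pushing the projected total to 1635 replicas across the 6 'in' OSDs, i.e. 1635 / 6 ≈ 272 per OSD, which is over the mon_max_pg_per_osd value of 250. A minimal sketch of the usual ways around it, assuming the admin keyring shown above (pool name and numbers are just examples):
```
# Option 1: ask for fewer PGs up front (the autoscaler can grow the pool later)
ceph --id admin osd pool create --pg-num-min=16 gnocchi 32

# Option 2: raise the per-OSD PG ceiling for a small lab (use with care)
ceph --id admin config set global mon_max_pg_per_osd 300

# Sanity-check what the pg autoscaler wants for each pool
ceph --id admin osd pool autoscale-status
```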
2024-07-30T13:52:37.962Z | <Utkarsh Bhatt> ```$ sudo ceph osd df
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP  META     AVAIL    %USE  VAR   PGS  STATUS
 1         0.27280   1.00000  279 GiB  215 MiB  146 MiB   0 B   70 MiB  279 GiB  0.08  1.12  108      up
 6         0.36389   1.00000  373 GiB  264 MiB  226 MiB   0 B   38 MiB  372 GiB  0.07  1.03  158      up
 5         0.90970   1.00000  932 GiB  592 MiB  544 MiB   0 B   48 MiB  931 GiB  0.06  0.92  322      up
 4         0.43669   1.00000  447 GiB  302 MiB  264 MiB   0 B   38 MiB  447 GiB  0.07  0.98  171      up
 3         0.21829   1.00000  224 GiB  230 MiB  178 MiB   0 B   52 MiB  223 GiB  0.10  1.49   93      up
 2         0.87129   1.00000  892 GiB  564 MiB  520 MiB   0 B   44 MiB  892 GiB  0.06  0.92  303      up
                     TOTAL    3.1 TiB  2.1 GiB  1.8 GiB   0 B  290 MiB  3.1 TiB  0.07 ``` |
2024-07-30T13:53:20.724Z | <Utkarsh Bhatt> I am not quite sure what's happening, but I suspect the large variance in OSD disk sizes may be the cause? |
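For what it's worth, the size variance itself is probably not what trips the check: per the error text, the mon divides the projected total PG replicas by the number of 'in' OSDs, so it is the overall PG budget rather than any single small OSD. A rough sketch for checking how many PGs are currently mapped per OSD, reading the PGS column of the output above (assumes PGS is the second-to-last column, as in this release's `ceph osd df` layout):
```
# Average PGs currently mapped per OSD, taken from the PGS column
sudo ceph osd df | awk '$1 ~ /^[0-9]+$/ { sum += $(NF-1); n++ }
                        END { printf "%d PG replicas over %d OSDs = %.0f per OSD\n", sum, n, sum/n }'
```
The table above works out to roughly 1155 mapped PG replicas, about 192 per OSD; the 272 in the error is a projection that counts each pool's pg_num target plus the new pool's 64 × 3 replicas, so it can sit well above what is currently mapped.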
2024-07-30T15:47:16.049Z | <yuriw> `reef` builds failing <https://shaman.ceph.com/builds/ceph/wip-yuri8-testing-2024-07-30-0629-reef/>
is anybody else seeing this? |
2024-07-30T15:48:29.401Z | <Casey Bodley> similar complaint in <#C1HFJ4VTN|> |
2024-07-30T19:11:42.896Z | <Æmerson> I'm not going to do it now, but long term, since the new versions of opentelemetry-cpp no longer support the Jaeger exporter, does anyone have any idea whether we'd be better off with OTLP HTTP, OTLP gRPC, or both and let the user configure it? |
2024-07-30T20:42:57.006Z | <Casey Bodley> cc @Deepika Upadhyay @Yuval Lifshitz |
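A purely hypothetical sketch of the "support both and let the user configure it" route; none of these option names exist in Ceph today, they are placeholders for illustration only:
```
# Hypothetical option names -- nothing here exists yet, illustration only.
ceph config set global otel_exporter_protocol grpc               # or: http
ceph config set global otel_exporter_endpoint otel-collector:4317
# Tracer initialization would then pick opentelemetry-cpp's OTLP gRPC
# exporter (default port 4317) or its OTLP HTTP exporter (default port 4318)
# based on the chosen protocol.
```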