ceph - cephadm - 2024-10-10

2024-10-10T01:50:20.134Z
<Joshua Blanch> Is there a reason why the limiter option for osd service should be avoided/last resort, from the docs:
```limit is a last resort and shouldn't be used if it can be avoided.```
<https://docs.ceph.com/en/latest/cephadm/services/osd/#limiter>
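For reference, the limiter in question is the `limit` filter inside a drive-group OSD spec. A minimal sketch of what that looks like (the service_id and filter values here are illustrative, not from the discussion):
```
service_type: osd
service_id: limit_example        # hypothetical name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1                # match spinning drives
    limit: 2                     # use at most 2 of the matching drives
```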
2024-10-10T09:24:44.715Z
<Brian P> Subjectively, because it doesn't make a lot of sense for it to exist in the first place; it's too speculative.
2024-10-10T11:36:31.962Z
<Ken Carlile> It has its uses.
2024-10-10T11:36:52.665Z
<Ken Carlile> It is hardly perfect, and it can screw you over because apparently Ceph can't count sometimes
2024-10-10T11:37:29.925Z
<Ken Carlile> I have drives of the same type in my systems that I want to use for very different things, so I've used it, especially when converting from one layout to another
2024-10-10T11:38:31.823Z
<Ken Carlile> but sometimes Ceph gets confused and will read `limit: 3` on db_devices, for example, and somehow see 3 in use when there are actually only 2. Then you get OSDs that are HDD-only when you expected HDD for data + NVMe for db
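A hedged sketch of the kind of spec Ken is describing, where drives are split between data and db roles and `limit` caps the db devices (service_id and filter values are illustrative):
```
service_type: osd
service_id: hdd_data_nvme_db     # hypothetical name
placement:
  host_pattern: '*'
spec:
  data_devices:
    rotational: 1                # HDDs hold the OSD data
  db_devices:
    rotational: 0                # NVMe/SSD devices hold WAL/DB
    limit: 3                     # the count Ken saw cephadm get wrong
```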
2024-10-10T11:38:36.881Z
<Ken Carlile> AMHIK (ask me how I know)
2024-10-10T15:39:44.904Z
<Joshua Blanch> I'm also getting weird cases where the limit doesn't get respected: cephadm thinks more have been created than actually exist
2024-10-10T17:03:29.192Z
<Ken Carlile> that's exactly what I ran into.
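When the limit seems to be miscounted, one way to compare what cephadm believes against what is actually deployed is with the stock orchestrator commands:
```
ceph orch ls osd --export        # the OSD specs cephadm is applying
ceph orch device ls --wide       # the devices cephadm sees on each host
ceph orch ps --daemon-type osd   # the OSD daemons actually running
```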
