ceph - cephadm - 2024-10-06

Timestamp (UTC) | Message
2024-10-06T21:32:18.259Z
<Yonatan Zaken> Hi All 🙂

A question about this configuration parameter: `osd_memory_target_cgroup_limit_ratio`
Does anyone know if it is functional in a cephadm cluster? I couldn't find any reference to it in the Ceph documentation, but I did find this in the source code:
```    Option("osd_memory_target_cgroup_limit_ratio", Option::TYPE_FLOAT, Option::LEVEL_ADVANCED)
    .set_description("Set the default value for osd_memory_target to the cgroup memory limit (if set) times this value")
    .set_long_description("A value of 0 disables this feature.")
    .set_default(0.8)
    .set_min_max(0.0, 1.0)
    .add_see_also({"osd_memory_target"}),```
I already use `osd_memory_target` and `osd_memory_target_autotune` on the Reef release, but can anyone share their experience with `osd_memory_target_cgroup_limit_ratio`?
The description covers a behavior I would like to adopt.
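
For context, a minimal sketch of the computation that description implies: the effective `osd_memory_target` would be the container's cgroup memory limit times the ratio. This is an illustration only, not the OSD's actual code, and it assumes a cgroup v2 host where the limit is exposed at `/sys/fs/cgroup/memory.max`:
```
# Illustration of the behavior described by osd_memory_target_cgroup_limit_ratio.
# Assumption: cgroup v2, where the memory limit is read from /sys/fs/cgroup/memory.max
# (the literal string "max" means no limit is configured).
from pathlib import Path

CGROUP_LIMIT_FILE = Path("/sys/fs/cgroup/memory.max")  # cgroup v2 path (assumption)

def implied_osd_memory_target(ratio: float = 0.8):
    """Return cgroup_memory_limit * ratio, or None if the feature would not apply."""
    if ratio == 0:
        return None  # per the long description, a ratio of 0 disables the feature
    raw = CGROUP_LIMIT_FILE.read_text().strip()
    if raw == "max":
        return None  # no cgroup memory limit set, so there is nothing to derive from
    return int(int(raw) * ratio)

if __name__ == "__main__":
    print("implied osd_memory_target:", implied_osd_memory_target())
```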
