ceph - cephfs - 2024-10-16

Timestamp (UTC) | Message
2024-10-16T01:36:17.159Z
<erikth> thanks! I'll give that a try
2024-10-16T01:43:18.485Z
<jcollin> do `mkdir -p /mnt/mountpoint/foo/.snap/snap1` and `mkdir -p /mnt/mountpoint/mirror/.snap/snap1` . Then check `peer status`.
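(For reference, a hedged sketch of the `peer status` check via the cephfs-mirror daemon's admin socket; the socket path, filesystem name/id, and peer UUID below are placeholders:)
```
# query the running cephfs-mirror daemon for per-peer sync status
ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok \
    fs mirror peer status cephfs@360 a2dc7784-e7a1-4723-b103-03ee8d8768f8
```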
2024-10-16T01:54:30.112Z
<erikth> yeah I was just trying that 😄 but I think my permissions are incorrect because I'm unable to create the snapshot, so I'm trying to fix that first
2024-10-16T02:00:58.066Z
<erikth> yeah once I authorized `rwps` for the client I'm able to make snapshots, and the files are synced to the target cluster. thank you! 🙏 I'll take a closer look at snap-schedule tomorrow
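(For reference, a minimal sketch of the caps and scheduling steps mentioned above; the filesystem name `cephfs`, client name `client.mirror_user`, and paths are placeholders:)
```
# grant rw plus 'p' (layouts/quotas) and 's' (snapshots) caps on the fs root
ceph fs authorize cephfs client.mirror_user / rwps
# with the 's' cap in place, snapshot creation succeeds:
mkdir /mnt/mountpoint/foo/.snap/snap1
# snapshots can then be automated with the snap_schedule mgr module, e.g. hourly:
ceph mgr module enable snap_schedule
ceph fs snap-schedule add /foo 1h
```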
2024-10-16T02:02:25.096Z
<Patrick Donnelly> PSA: WIP kernel development documentation PR here: <https://github.com/ceph/ceph/pull/60341>
2024-10-16T02:02:43.558Z
<Patrick Donnelly> if anyone has questions / comments I'd like to hear them while all this experience is fresh in my head 🙂
2024-10-16T02:02:56.502Z
<Patrick Donnelly> cc @Markuze
2024-10-16T02:19:38.748Z
<gregsfortytwo> @Xiubo Li any thoughts about this?
2024-10-16T02:43:47.958Z
<Xiubo Li> @Patrick Donnelly I just added some comments, please take a look. I have never set up the development env as complex as shown in this PR.

Please let me know if I can help further.
2024-10-16T04:52:45.356Z
<Venky Shankar> @gregsfortytwo @Xiubo Li seems like the kernel driver already has `read_from_replica` functionality built in. See - <https://tracker.ceph.com/issues/64847> (cc @Markuze).
2024-10-16T06:00:31.426Z
<Xiubo Li> Yeah, Ilya has supported it in <https://www.spinics.net/lists/ceph-devel/msg48570.html>.
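(For reference, a sketch of using that option with the kernel client, assuming a kernel recent enough to carry Ilya's patch; the monitor address, mount point, and client name are placeholders:)
```
# mount CephFS with reads balanced across replicas instead of always hitting the primary OSD
mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
    -o name=admin,read_from_replica=balance
```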
2024-10-16T09:30:12.237Z
<jcollin> hi @Xiubo Li @Markuze What is the minimum required kernel version for multiple filesystem `mul_fs` metrics support? We have 2 filesystems (fs1 and fs2) mounted here, but only the second one (fs2) is shown in the global_metrics:

```
{
  "version": 2,
  "global_counters": ["cap_hit", "read_latency", "write_latency", "metadata_latency",
                      "dentry_lease", "opened_files", "pinned_icaps", "opened_inodes",
                      "read_io_sizes", "write_io_sizes", "avg_read_latency", "stdev_read_latency",
                      "avg_write_latency", "stdev_write_latency", "avg_metadata_latency",
                      "stdev_metadata_latency"],
  "counters": [],
  "client_metadata": {
    "fs2": {
      "client.5297": {
        "hostname": "smithi193", "root": "/", "mount_point": "N/A",
        "valid_metrics": ["cap_hit", "read_latency", "write_latency", "metadata_latency",
                          "dentry_lease", "opened_files", "pinned_icaps", "opened_inodes",
                          "read_io_sizes", "write_io_sizes", "avg_read_latency", "stdev_read_latency",
                          "avg_write_latency", "stdev_write_latency", "avg_metadata_latency",
                          "stdev_metadata_latency"],
        "kernel_version": "6.12.0-rc1-ge070b5b1baf5",
        "IP": "192.168.0.1"
      }
    }
  },
  "global_metrics": {
    "fs2": {
      "client.5297": [[29, 25], [0, 0], [0, 11176406], [0, 29271764], [1, 0], [0, 1], [1, 1],
                      [0, 1], [0, 0], [1, 1048576], [0, 0], [0, 0], [0, 11176406], [0, 1],
                      [0, 860938], [41878405677610, 34]]
    }
  },
  "metrics": {"delayed_ranks": [], "mds.0": {"client.5297": []}}
}
```
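(For reference, output like the above comes from the mgr `stats` module; a sketch of reproducing and filtering it:)
```
# the stats module must be enabled for `fs perf stats` to work
ceph mgr module enable stats
# dump the aggregated client metrics shown above
ceph fs perf stats
# list which filesystems are actually reporting clients
ceph fs perf stats | jq '.client_metadata | keys'
```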
2024-10-16T10:27:27.693Z
<Venky Shankar> So, there isn't anything to do on the kclient side then. The same option is to be used by the rbd kernel driver. 🙂
2024-10-16T10:29:14.751Z
<Markuze> Ok, thanks. 
2024-10-16T10:30:45.447Z
<Xiubo Li> Jos, do you mean the upstream kernel, or distro ones? It doesn't matter whether it's a single fs or multiple fs.
2024-10-16T10:33:05.850Z
<jcollin> It should be the distro one. We have centos9 here: <https://tracker.ceph.com/issues/68446#note-3>.
2024-10-16T10:45:39.669Z
<Xiubo Li> It should be the upstream kernel: `overrides/{distro/testing/k-testing`
2024-10-16T10:45:52.448Z
<Xiubo Li> and we should already support it
2024-10-16T10:48:22.220Z
<jcollin> ok
2024-10-16T12:37:19.208Z
<Laimis Juzeliūnas> Hey community, could anyone guide us on which kernel to use in order to mount cephfs compatible with the latest squid (19.2.0)?
We are struggling a bit to get things going. Our debian 12 server with the 6.10 kernel seems to have this client version already:
```ceph --version
ceph version 19.2.0 (16063ff2022298c9300e49a547a16ffda59baf13) squid (stable)```
but Ceph itself recognises it as an older (luminous) release:
```ceph tell mon.2 sessions | jq -r '.[] | "\(.entity_name)  \(.socket_addr.addr) \(.con_features_release)" ' | grep my.server.ip.addr
client.my-fs-name  my.server.ip.addr:0 luminous```
2024-10-16T12:38:03.496Z
<Laimis Juzeliūnas> reef would also work
2024-10-16T13:52:53.723Z
<Patrick Donnelly> Thank you Xiubo. I responded.
2024-10-16T14:24:31.801Z
<gregsfortytwo> Convenient. This is one of a string of trackers I made when we were discussing it and were unsure whether we wanted by-component or global options (and also unsure which ones already existed and were integrated where)
2024-10-16T14:54:21.003Z
<gregsfortytwo> kernel clients don't have all the same feature bits as the ceph userspace does, so they get recognized as older versions (sometimes inaccurately). And the ceph tooling package version has nothing to do with the kernel client.
But this shouldn't matter as long as the client can mount, which it apparently can…
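(A sketch of checking what the MDS actually records for a kernel client; the MDS name is a placeholder:)
```
# kernel clients report their kernel_version in the MDS session metadata
ceph tell mds.cephfs.a session ls | \
    jq '.[].client_metadata | {hostname, kernel_version}'
```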
2024-10-16T16:58:08.404Z
<Erich Weiler> Hi @Venky Shankar - just a ping on this item! Have you had a moment to revisit it? Thanks again for your help.
