ceph - cephfs - 2025-01-02

Timestamp (UTC) | Message
2025-01-02T11:11:14.016Z
<Venky Shankar> @Dhairya Parmar can you set something up in my cal to discuss <https://tracker.ceph.com/issues/69315> tomorrow?
2025-01-02T11:40:16.912Z
<Dhairya Parmar> sure
2025-01-02T11:41:51.326Z
<Dhairya Parmar> sent
2025-01-02T11:46:55.135Z
<Dhairya Parmar> @Venky Shankar there are no client logs for vshankar-2024-12-19_10:00:45-fs-wip-vshankar-testing-20241219.063429-debug-testing-default-smithi/8044375 with `Description: fs/nfs/{cluster/{1-node} conf/{client mds mgr mon osd} overrides/{ignore_mgr_down ignorelist_health pg_health} supported-random-distros$/{ubuntu_latest} tasks/nfs}` . This doesn't look like kclient, but then why are there no client logs?
2025-01-02T11:49:22.473Z
<Dhairya Parmar> part of <https://tracker.ceph.com/issues/69347>
2025-01-02T11:51:06.375Z
<Venky Shankar> yeh, no kclient
2025-01-02T11:51:17.118Z
<Venky Shankar> but nfs-ganesha server is deployed
2025-01-02T11:51:19.433Z
<Venky Shankar> and then mounted
2025-01-02T11:51:40.772Z
<Venky Shankar> if the mount has gone through then it should have client logs somewhere.
2025-01-02T11:51:54.745Z
<Venky Shankar> cc @Kotresh H R
2025-01-02T11:52:04.625Z
<Venky Shankar> @Dhairya Parmar check how the daemons are deployed
2025-01-02T11:52:14.450Z
<Venky Shankar> and where the libcephfs client is mounted.
2025-01-02T11:53:07.032Z
<Dhairya Parmar> I see `ceph-mds.nfs-cephfs.smithi136.yoxbdr`  and `ceph-mds.nfs-cephfs.smithi136.akvkhg`
2025-01-02T11:53:14.548Z
<Dhairya Parmar> should reveal something
2025-01-02T20:49:25.387Z
<Md Mahamudur Rahaman Sajib> Hi folks,
I have a question regarding `ScrubStack`: I found there is no lock/mutex maintained for `scrub_stack`, which contains the list of inodes to scrub.
I did find `ceph_assert(ceph_mutex_is_locked(mdcache->mds->mds_lock))` everywhere, but I guess this is not the lock that maintains consistency of the `scrub_stack` list. And scrubs can happen concurrently, I guess. How is consistency maintained without a lock; am I missing something?
<https://github.com/ceph/ceph/blob/5c8c1d844f0e47937a8bdf6f1206f171baeed13b/src/mds/ScrubStack.cc#L233>
2025-01-02T21:37:13.105Z
<gregsfortytwo> It (along with everything else) is covered by the big MDS lock you’ve identified