ceph - cephfs - 2024-08-27

Timestamp (UTC) | Message
2024-08-27T04:05:51.678Z
<Milind Changire> if filer->zero() is invoked for a 35GiB file range, should all the relevant backing objects get created on the data pool?
e.g. `fallocate --length 35GiB /mnt/mycephfs/test_fallocate`
this is w.r.t. implementing fallocate
I see only a single object listed from the data pool
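(For reference, a rough way to count the backing objects from a client node; the pool name `cephfs.mycephfs.data` and the default 4 MiB object size are assumptions here.)
```
# a fully-written 35 GiB file striped over 4 MiB objects would need 35 * 1024 / 4 = 8960 backing objects
ino_hex=$(printf '%x' "$(stat -c %i /mnt/mycephfs/test_fallocate)")
# CephFS data objects are named <inode-hex>.<block-index>, so count them by prefix
rados -p cephfs.mycephfs.data ls | grep -c "^${ino_hex}\."
```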
2024-08-27T04:22:48.988Z
<gregsfortytwo> If you invoked zero() I think it would fill in all those objects, but I believe you’re looking at fallocate and a default invocation (no flags) will not actually invoke zero(), it just sets the size?
2024-08-27T04:24:07.827Z
<Milind Changire> okay ... I'll dig deeper in that direction
2024-08-27T04:25:49.765Z
<Milind Changire> oh dear ... the flags are passed through to the OSD
2024-08-27T04:55:35.143Z
<Venky Shankar> you'd need to invoke `fallocate -z ...` to zero out byte ranges.
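(For the record, the util-linux fallocate options map to the fallocate(2) mode bits roughly as below; the paths are placeholders, and whether the CephFS client accepts each mode is a separate question, as the check further down shows.)
```
fallocate -l 35GiB /mnt/mycephfs/f          # mode 0: allocate and extend i_size
fallocate -n -l 35GiB /mnt/mycephfs/f       # FALLOC_FL_KEEP_SIZE
fallocate -p -o 0 -l 1GiB /mnt/mycephfs/f   # FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE
fallocate -z -o 0 -l 1GiB /mnt/mycephfs/f   # FALLOC_FL_ZERO_RANGE
```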
2024-08-27T04:56:32.748Z
<Anthony D'Atri> I haven't personally managed CephFS yet. I've heard a claim that it should be avoided because there isn't an effective fsck / repair tool. My sense is that such a tool is very unlikely to be needed — thoughts?
2024-08-27T05:00:28.606Z
<Venky Shankar> CephFS was only marked stable once the repair tools were ready to handle any damage to the file system.
2024-08-27T05:12:59.112Z
<Venky Shankar> You only need the repair tools in case of metadata loss due to, say, a manual error where someone deletes some objects from the metadata pool, or some PGs being lost, resulting in the loss of critical metadata objects.
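(A rough sketch of the offline repair tooling being referred to; <fs_name> and <data_pool> are placeholders, and the actual procedure in the CephFS disaster-recovery docs should be followed rather than this outline.)
```
cephfs-journal-tool --rank=<fs_name>:0 journal inspect                # check journal integrity
cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries summary # salvage dentries from the journal
cephfs-data-scan init                                                 # re-create base metadata objects if missing
cephfs-data-scan scan_extents <data_pool>                             # recover file sizes/mtimes from data objects
cephfs-data-scan scan_inodes <data_pool>                              # relink inodes back into the tree
cephfs-data-scan scan_links                                           # fix up link counts and dangling dentries
```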
2024-08-27T05:54:16.308Z
<Milind Changire> okay, so we have this ...
```
  if (mode & ~(FALLOC_FL_KEEP_SIZE | FALLOC_FL_PUNCH_HOLE))
    return -CEPHFS_EOPNOTSUPP;
```
looks like no other flags are supported
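(So, given that check, only 0, KEEP_SIZE, PUNCH_HOLE and PUNCH_HOLE|KEEP_SIZE get through on the userspace client; the expected behaviour on that path would be roughly the following, the kernel client aside.)
```
fallocate -p -o 0 -l 1GiB /mnt/mycephfs/test_fallocate   # punch hole: should be accepted
fallocate -z -o 0 -l 1GiB /mnt/mycephfs/test_fallocate   # zero range: expect EOPNOTSUPP from this client
```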
2024-08-27T06:16:11.676Z
<Laimis Juzeliūnas> @Patrick Donnelly thanks a lot! Balancer has been turned off.
2024-08-27T06:16:43.152Z
<Laimis Juzeliūnas> We noticed that even with static pins for subdirectories on other MDS ranks, certain traffic for those subdirs still ends up at rank 0. Is that expected?
2024-08-27T06:19:31.395Z
<Venky Shankar> @Laimis Juzeliūnas How heavy is the rank0 traffic for the subdirs pinned to other ranks? Basically, I guess rank0 is still holding a replica of some of the metadata for the pinned subdirs, and that might be causing some (read) IOs to be serviced from rank0.
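(For anyone following along, a sketch of how the static pins and subtree ownership can be inspected; the fs name, rank, and path are placeholders, and `get subtrees` is the tell/asok command I believe dumps subtree auth info.)
```
setfattr -n ceph.dir.pin -v 1 /mnt/mycephfs/projects/teamA   # pin the subtree to rank 1
getfattr -n ceph.dir.pin /mnt/mycephfs/projects/teamA        # confirm the pin is set
ceph tell mds.mycephfs:1 get subtrees                        # check which rank is auth for the subtree
```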
2024-08-27T07:34:18.294Z
<Laimis Juzeliūnas> The traffic itself is not that heavy compared to the pinned ranks; it definitely looks more like leftovers. However, those sessions still hold a lot of caps. We did a full rolling restart across the MDS daemons but can still see some of the "pinned traffic" going through MDS rank 0
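(A quick way to see what those rank-0 sessions are holding; names are placeholders and jq is only for readability.)
```
ceph tell mds.mycephfs:0 session ls | jq '.[] | {id, num_caps}'   # caps held per client session on rank 0
ceph fs status mycephfs                                           # per-rank request rates and cap counts
```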
2024-08-27T08:49:21.594Z
<Igor Golikov> I am going to skip today's standup; I have a meeting with the local health insurance and pension management agency...
2024-08-27T13:26:30.552Z
<Patrick Donnelly> @Venky Shankar et al. PSA: looks like the upgrade issues were fixed by the v19.3.0 tag: <https://pulpito.ceph.com/pdonnell-2024-08-27_13:01:46-fs:upgrade:nofs-wip-pdonnell-testing-20240826.211121-debug-distro-default-smithi/>
2024-08-27T13:26:59.468Z
<Patrick Donnelly> re: <https://tracker.ceph.com/issues/67335>
2024-08-27T13:27:23.073Z
<Patrick Donnelly> I don't think the tentacle pre-kickoff PR is actually necessary to merge
2024-08-27T17:07:18.990Z
<Patrick Donnelly> Need approval for merge: <https://github.com/ceph/ceph/pull/59176>
2024-08-27T17:08:11.191Z
<Patrick Donnelly> and <https://github.com/ceph/ceph/pull/59171>
2024-08-27T17:20:41.267Z
<Patrick Donnelly> tyty
2024-08-27T22:58:00.945Z
<Anthony D'Atri> Thanks. I remember the stable classification coming with Jewel, I think, but I didn't have context on the fsck factor. I know to use R4 for the metadata pool.
