2024-11-14T08:54:00.724Z | <Ivveh> hi, i'm creating a script (in the design phase atm) that changes the xattrs `ceph.dir.layout` and `ceph.file.layout` according to <https://docs.ceph.com/en/reef/cephfs/file-layouts/>, basically a poor man's migration of cephfs data to a different pool, but a little fancier than just creating a new subvolume and doing rclone or whatever. i wonder a few things: what would be the best way of doing this to avoid data corruption, since the file has to be re-created in order to actually move to the new layout? i was thinking of using fcntl to lock it, rename/move the file, change the xattr and then copy it back to its original location/name. is this a bad idea? i don't really care if the operation costs a lot of data movement, i just want it to perform the layout change over time while the filesystem is in use. any pointers would be appreciated 🙏
also, i would be doing this on a separate client while the normal clients continue working against the filesystem that is being "migrated", so i wonder if any special permissions would be needed since we are changing the pool (cephx)? assuming the clients have something like this: `mds 'allow r fsname=filesystem, allow rw fsname=filesystem path=/path/to/subvolume' mon 'profile fs-client' osd 'allow rw tag cephfs data=filesystem'`
the reason for the cephx question is that a client with those permissions obviously can't change the layout, but i as an admin can; will the layout change make the files inaccessible to the client after the "migration"? |
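(A minimal sketch, in Python, of the per-file step described above, using a copy-then-swap variant of the idea rather than copy-out-and-back: the data is copied into a fresh file whose layout already points at the target pool, then atomically renamed over the original. `migrate_file`, the temp-file suffix and `NEW_POOL` are placeholders, not Ceph APIs; `ceph.file.layout.pool` is the virtual xattr from the linked docs and can only be set while the file is still empty. Coordination with live writers, e.g. the fcntl locking mentioned above, is not shown, and the swap replaces the inode, so hard links and handles held open by other clients would not follow the new file.)
```python
import os
import shutil

NEW_POOL = b"cephfs_new_data"  # assumed name of the target data pool

def migrate_file(path: str) -> None:
    """Re-create `path` so its data lands in NEW_POOL, then swap it into place."""
    tmp = path + ".layout-migrate"  # hypothetical temp name in the same directory
    try:
        with open(path, "rb") as src, open(tmp, "xb") as dst:
            # The file layout can only be changed while the file is still empty,
            # so point the new file at the target pool before writing any data.
            os.setxattr(dst.fileno(), "ceph.file.layout.pool", NEW_POOL)
            shutil.copyfileobj(src, dst)
        shutil.copystat(path, tmp)  # carry over mode and timestamps
        os.rename(tmp, path)        # atomic replace within the same directory
    except BaseException:
        if os.path.exists(tmp):
            os.unlink(tmp)
        raise
```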
2024-11-14T08:58:23.763Z | <Ivveh> i'm aware that with this approach the file wouldn't "exist" for the client while it is being "migrated", but i guess that's the safest way of doing it; if there is a better way, any input would be greatly appreciated
it would be even better if cephfs had a function to re-write a file after its layout has been changed |
2024-11-14T14:22:41.194Z | <Venky Shankar> @Rishabh Dave mind looking at <https://tracker.ceph.com/issues/65766#note-14> please? |
2024-11-14T14:25:09.974Z | <Rishabh Dave> sure, the last time i spent time on it i didn't find anything conclusive. i'll finish the BZs and current features and then get back to this one. |
2024-11-14T14:26:09.954Z | <Venky Shankar> OK, @Milind Changire is seeing this in a quincy run. |
2024-11-14T15:27:33.291Z | <Md Mahamudur Rahaman Sajib> Hi folks,
I was going through the cephfs-mirror code and I have some questions:
1. In `int PeerReplayer::do_synchronize(const std::string &dir_root, const Snapshot &current)`, why is `sync_stack` (a stack) used? (I guess here we are doing a depth-first search on the directory tree for mirroring.)
2. In `int PeerReplayer::do_synchronize(const std::string &dir_root, const Snapshot &current, boost::optional<Snapshot> prev)`, why is `sync_queue` (a queue) used? (I guess here we are doing a breadth-first search.)
3. If my assumptions are true, why the different approach when we have a previous snapshot? (My random guess is that depth-first search is less memory intensive, since at any point in time the stack holds at most the longest path of the tree, whereas for breadth-first search the worst case is that all the nodes of a single level of the tree sit in the queue at once.)
4. If my guess in point 3 is true, couldn't there be a case where the user takes the second snapshot after a very long time, making the breadth-first search very memory intensive?
|
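(Not the cephfs-mirror code itself, just a toy Python contrast of the two traversal shapes the questions above describe, to make the memory argument in point 3 concrete: with a stack the container holds at most one root-to-leaf path plus the unvisited siblings along it, while with a queue the worst case is an entire level of the tree at once.)
```python
import os
from collections import deque

def walk_dfs(root: str):
    """Depth-first: the stack grows roughly with tree depth."""
    stack = [root]
    while stack:
        d = stack.pop()
        yield d
        for entry in os.scandir(d):
            if entry.is_dir(follow_symlinks=False):
                stack.append(entry.path)

def walk_bfs(root: str):
    """Breadth-first: the queue can grow to the width of the widest level."""
    queue = deque([root])
    while queue:
        d = queue.popleft()
        yield d
        for entry in os.scandir(d):
            if entry.is_dir(follow_symlinks=False):
                queue.append(entry.path)
```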
2024-11-14T18:54:01.254Z | <Markuze> Ok, another generic question.
How can I control the size of the FS created by vstart? |