2024-06-28T00:20:37.749Z | <Xiubo Li> Sure, thanks |
2024-06-28T01:56:57.160Z | <Xiubo Li> @Erich Weiler BTW, what's your ceph version? |
2024-06-28T02:57:19.665Z | <robbat2> anybody know another way to reach p_the_b, who asked about metadata early this morning? |
2024-06-28T03:46:08.239Z | <Erich Weiler> @Xiubo Li I am on Reef 18.2.1-2 |
2024-06-28T03:48:38.016Z | <Erich Weiler> RHEL 9.3 |
2024-06-28T04:16:31.292Z | <Xiubo Li> @Dhairya Parmar The snapseq should be included in every write. Yeah, it's for this one; it really is quite advanced. |
2024-06-28T04:18:22.408Z | <Xiubo Li> ```commit dad91b5ea7a50e752050a3dc9bd800337932aeb0
Author: Adam C. Emerson <aemerson@linuxbox.com>
Date: Thu May 23 19:30:36 2013 -0700
client: direct read/write methods that bypass cache, cap checks
These methods were created to implement pNFS data server support,
bypassing cap checks since the pNFS MDS holds a cap on behalf of
the client, realized in the recallable layout.
(Includes pieces of API v2 by Matt.)
Signed-off-by: Matt Benjamin <matt@linuxbox.com>``` |
2024-06-28T04:19:06.700Z | <Xiubo Li> It's an API and it will bypass libcephfs' control |
2024-06-28T04:19:30.598Z | <Xiubo Li> so the upper-layer app should handle the snapseq correctly too. |
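(Editor's note: a minimal toy sketch, not Ceph code, of the point above: a cached write path can stamp the current snap sequence on the caller's behalf, while a direct path that bypasses the library's control must receive the snapseq from the upper layer. The `ToyStore` type and both methods are hypothetical.)

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>

// Toy model: snapseq handling in a cached vs. a direct write path.
struct ToyStore {
    uint64_t snap_seq = 0;                 // current snapshot sequence
    std::map<uint64_t, std::string> log;   // snapseq -> last data written

    // Normal path: the library stamps the current snap_seq for the caller.
    void cached_write(const std::string& data) { log[snap_seq] = data; }

    // Direct path: the upper layer must supply the correct snapseq itself;
    // nothing here validates it, mirroring the "bypasses control" hazard.
    void direct_write(uint64_t caller_snap_seq, const std::string& data) {
        log[caller_snap_seq] = data;
    }
};
```

If the app passes a stale snapseq on the direct path, the write lands under the wrong snapshot and the library never notices.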
2024-06-28T04:20:52.716Z | <Xiubo Li> Okay, got it. Checked the logs; it seems to be a known bug between `unlink` and `create`. We fixed it before, but the patches were reverted for some reason. I need to check more to figure it out in detail |
2024-06-28T04:24:45.066Z | <Erich Weiler> That’s great! Do you think a fix can be pushed out? Maybe backported to reef? |
2024-06-28T04:27:26.460Z | <Erich Weiler> And… you don’t think it has anything to do with locking? We can only seem to recreate it when we involve locking. |
2024-06-28T04:27:59.426Z | <Xiubo Li> I am still checking, not confirmed yet |
2024-06-28T04:46:35.317Z | <Rishabh Dave> @Venky Shankar Will you be reviewing the "clone stats" PR today? <https://github.com/ceph/ceph/pull/54620> I am thinking of starting a new QA run today; perhaps I can include it. |
2024-06-28T04:48:40.263Z | <Venky Shankar> I will review it today and include it in my branch that I plan to put to test over the weekend. |
2024-06-28T05:04:22.848Z | <Rishabh Dave> Can I put it through QA? It'll be easier for me to spot failures related to it and I don't think I have enough PRs yet. |
2024-06-28T05:05:07.285Z | <Venky Shankar> I would like to pick it up 🙂 |
2024-06-28T05:05:24.113Z | <Rishabh Dave> okay |
2024-06-28T05:17:27.114Z | <Rishabh Dave> fyi: I've rebased the PR branch just now since it was based on a relatively old version of the main branch. |
2024-06-28T05:51:59.831Z | <Rishabh Dave> the commit for docs and release notes got left behind in the last push; added it to the PR branch now - <https://github.com/ceph/ceph/pull/54620/commits/05438e772e8dd6397009deae37ee887c18ffe4bc> |
2024-06-28T07:33:15.410Z | <Venky Shankar> @Dhairya Parmar @Patrick Donnelly @Xiubo Li do you know why there is `qa/suites/fs/upgrade/upgraded_client/tasks/2-clients/fuse-upgrade.yaml` and then again `qa/suites/fs/upgrade/upgraded_client/tasks/3-workload/stress_tests/0-client-upgrade.yaml`, which do the same thing of upgrading ceph on the host running ceph-fuse? |
2024-06-28T07:38:16.022Z | <Venky Shankar> never mind |
2024-06-28T07:38:28.855Z | <Venky Shankar> I was looking at the reef backport which is a buggy backport 😕 |
2024-06-28T08:27:05.310Z | <Dhairya Parmar> so the reef backport lacks these yamls? Aren't these essential? |
2024-06-28T08:28:39.212Z | <Dhairya Parmar> yeah |
2024-06-28T08:29:15.737Z | <Dhairya Parmar> but the question here is, should this be tested in our qa-suite? we don't have any test case for this anywhere |
2024-06-28T08:29:51.528Z | <Dhairya Parmar> @Venky Shankar @Xiubo Li, any thoughts? |
2024-06-28T08:33:44.769Z | <Xiubo Li> IMO, we should. |
2024-06-28T08:34:28.862Z | <Xiubo Li> But it will be a bit complicated if a snapshot is involved. |
2024-06-28T08:39:46.034Z | <Dhairya Parmar> yeah, I have no context on why there's a snapshot seq in a write call |
2024-06-28T08:44:34.557Z | <Dhairya Parmar> BTW, this call (`ll_write_block`) uses the objecter interface while the other IO code paths use the objectcacher. Since I'm here, I wonder: how are objecter and objectcacher different? |
2024-06-28T08:52:49.086Z | <Xiubo Li> As you can see `objectcacher` has the suffix `cacher` |
2024-06-28T08:53:44.573Z | <Xiubo Li> It will cache the writes, but eventually the cacher will call the objecter APIs, IMO. |
2024-06-28T08:54:16.459Z | <Dhairya Parmar> so a direct call to objecter will mean no cache, just direct persistence to disks |
2024-06-28T08:54:40.407Z | <Xiubo Li> Yeah. |
2024-06-28T08:54:55.107Z | <Xiubo Li> This is what the kclient will do when flushing the buffer |
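(Editor's note: a toy sketch, not Ceph's actual classes, of the relationship just described: the cacher buffers writes and only calls the direct-write API on flush, while calling the objecter directly persists immediately. The `Objecter`/`ObjectCacher` types here are simplified stand-ins.)

```cpp
#include <string>
#include <vector>

// Toy "objecter": performs the actual write that reaches storage.
struct Objecter {
    std::vector<std::string> persisted;          // what reached "disk"
    void write(const std::string& data) { persisted.push_back(data); }
};

// Toy "objectcacher": buffers writes, then calls the objecter on flush.
struct ObjectCacher {
    Objecter& objecter;
    std::vector<std::string> dirty;              // buffered, not yet safe
    explicit ObjectCacher(Objecter& o) : objecter(o) {}

    void write(const std::string& data) { dirty.push_back(data); }

    void flush() {                               // e.g. on fsync/cap recall
        for (const auto& d : dirty) objecter.write(d);
        dirty.clear();
    }
};
```

So a path that calls the objecter directly skips the dirty buffer entirely, which matches the "no cache, direct persistence" reading above.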
2024-06-28T08:55:10.666Z | <Dhairya Parmar> got it |
2024-06-28T08:56:26.558Z | <Dhairya Parmar> btw it seems `sync` has no effect here in the code of `ll_write_block`
``` if (true || sync) {
/* if write is stable, the epilogue is waiting on
* flock */
onsafe.reset(new C_SaferCond("Client::ll_write_block flock"));
}``` |
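(Editor's note: a minimal, self-contained illustration of why `sync` is dead in the quoted condition: `||` short-circuits on the left-hand literal `true`, so the branch that sets up the stable-write wait is taken regardless of the argument. The `branch_taken` helper is hypothetical, standing in for the quoted `if`.)

```cpp
// Stand-in for the quoted condition in ll_write_block.
bool branch_taken(bool sync) {
    if (true || sync)      // `true` short-circuits; `sync` is never evaluated
        return true;       // always reached: onsafe is always set up
    return false;          // unreachable
}
```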
2024-06-28T08:58:49.059Z | <Xiubo Li> ```commit a71d829e46e864905199c21c1737adc410099e3e
Author: Sage Weil <sweil@redhat.com>
Date: Mon Feb 17 10:27:23 2014 -0800
client: disable barrier support
The boost interval_set class is not available on centos6/rhel6. Until that
dependency is sorted out, fix the build.
Signed-off-by: Sage Weil <sage@inktank.com>``` |
2024-06-28T08:58:54.884Z | <Xiubo Li> introduced by this |
2024-06-28T09:04:07.544Z | <Dhairya Parmar> this is too old |
2024-06-28T09:04:23.279Z | <Dhairya Parmar> the dependency should be sorted by now |
2024-06-28T13:00:12.596Z | <Patrick Donnelly> from my home network |
2024-06-28T13:51:04.246Z | <Dhairya Parmar> @Venky Shankar @Patrick Donnelly Has the host key changed for [drop.ceph.com](http://drop.ceph.com)? An upstream user uploading a file through `ceph-post-file` seems to be getting back a fingerprint different from the one in their known_hosts file. Do you folks have any idea about this? |
2024-06-28T14:19:25.072Z | <Dhairya Parmar> tagging @gregsfortytwo as well |
2024-06-28T15:22:30.250Z | <Venky Shankar> Yes, I guess so; something must have changed during reimaging. |
2024-06-28T15:22:50.073Z | <Venky Shankar> @Patrick Donnelly PTAL - <https://github.com/ceph/ceph/pull/58113#issuecomment-2196764573> |
2024-06-28T15:43:43.191Z | <Dhairya Parmar> okay |