2024-07-23T08:58:38.321Z | <Dhairya Parmar> not sure why. pinging @Venky Shankar |
2024-07-23T09:34:45.962Z | <Rishabh Dave> @Venky Shankar fine to skip RADOS QA on neeraj's PR? <https://github.com/ceph/ceph/pull/51332> |
2024-07-23T10:44:25.592Z | <Kotresh H R> @Venky Shankar @Rishabh Dave Is this test `test_multifs_rootsquash_nofeature` passing upstream ? |
2024-07-23T10:47:47.755Z | <Rishabh Dave> AFAIR, no. It's located in test_admin.py and I haven't seen test_admin.py failures recently.
To be sure, please check recent QA runs. |
2024-07-23T10:48:23.374Z | <Kotresh H R> ok. I will check that. |
2024-07-23T10:49:45.037Z | <Kotresh H R> Running `ceph fs authorize` for the same client on a different filesystem is throwing an error asking to remove the caps and re-run |
2024-07-23T10:49:48.804Z | <Kotresh H R> Is that expected ? |
2024-07-23T10:49:59.359Z | <Kotresh H R> But the test is also doing the same? |
2024-07-23T10:51:19.617Z | <Rishabh Dave> Can't tell without knowing the exact command and its output. |
2024-07-23T10:52:28.089Z | <Kotresh H R> `ceph fs authorize fs_a client.client1 / rw root_squash`
`ceph fs authorize fs_b client.client1 / rw` |
2024-07-23T10:52:35.623Z | <Kotresh H R> The second cmd throws an error |
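A minimal sketch of how to inspect what the monitor already has on file for the client before the second `fs authorize` (the entity name follows the commands above; the commented output shape is illustrative, modelled on the `fs authorize` output pasted later in this thread):
```
# Dump the key and caps currently stored for the client; the error Kotresh
# describes complains that these existing fs caps differ from the ones the
# second `fs authorize` call supplies.
ceph auth get client.client1

# Illustrative output shape (values will differ):
# [client.client1]
#     key = <key>
#     caps mds = "allow rw fsname=fs_a root_squash"
#     caps mon = "allow r fsname=fs_a"
#     caps osd = "allow rw tag cephfs data=fs_a"
```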
2024-07-23T10:53:43.841Z | <Rishabh Dave> Seems like a bug. This on main or some release branch? |
2024-07-23T10:54:04.951Z | <Kotresh H R> I am seeing both on main |
2024-07-23T10:54:10.265Z | <Kotresh H R> and on qe setup |
2024-07-23T10:54:10.542Z | <Rishabh Dave> checking... |
2024-07-23T10:54:37.964Z | <Kotresh H R> But the test is essentially doing the same!!! |
2024-07-23T11:02:39.927Z | <Rishabh Dave> Can't reproduce it on `main` -
```$ ./bin/ceph fs authorize a client.x / rw root_squash
[client.x]
key = AQBRjJ9mul2OHRAAfGabtDENRbVxX5/lnUj6aw==
caps mds = "allow rw fsname=a root_squash"
caps mon = "allow r fsname=a"
caps osd = "allow rw tag cephfs data=a"
$ ./bin/ceph fs authorize b client.x / rw
[client.x]
key = AQBRjJ9mul2OHRAAfGabtDENRbVxX5/lnUj6aw==
caps mds = "allow rw fsname=a root_squash, allow rw fsname=b"
caps mon = "allow r fsname=a, allow r fsname=b"
caps osd = "allow rw tag cephfs data=a, allow rw tag cephfs data=b"
updated caps for client.x``` |
2024-07-23T11:06:13.504Z | <Rishabh Dave> @Kotresh H R ping ^ |
2024-07-23T12:00:42.128Z | <Venky Shankar> I'd say no |
2024-07-23T12:01:13.216Z | <Venky Shankar> It's required, but if we aren't getting any responses from the other team, we can ask Yuri to run it through the main branch for the other suites. |
2024-07-23T12:01:17.125Z | <Venky Shankar> by adding needs-qa |
2024-07-23T12:01:24.241Z | <Venky Shankar> and tagging Yuri in the PR. |
2024-07-23T12:04:38.318Z | <Rishabh Dave> You left a comment on it saying "good to merge", so I was confused. <https://github.com/ceph/ceph/pull/51332#issuecomment-2244441109> |
2024-07-23T12:04:53.111Z | <Venky Shankar> it is good |
2024-07-23T12:04:59.831Z | <Venky Shankar> but really, it's a generic change, isn't it? |
2024-07-23T12:05:27.911Z | <Rishabh Dave> Yes, that's why I am confused about what needs to be done next. |
2024-07-23T12:05:56.945Z | <Venky Shankar> I was hoping to get an ack or a suite run from rados team for merging |
2024-07-23T12:06:05.202Z | <Venky Shankar> but we haven't |
2024-07-23T12:06:15.030Z | <Venky Shankar> so the next thing is add needs-qa and tag Yuri |
2024-07-23T12:06:20.770Z | <Venky Shankar> but mention that it passes fs suite. |
2024-07-23T12:06:42.497Z | <Venky Shankar> IOW, let's wait to get it run by other suites |
2024-07-23T12:09:01.794Z | <Venky Shankar> ha! |
2024-07-23T12:09:05.876Z | <Venky Shankar> the fix has changed quite a bit |
2024-07-23T12:09:18.193Z | <Venky Shankar> from the last time I saw |
2024-07-23T12:09:42.840Z | <Rishabh Dave> i tagged yuri just now - <https://github.com/ceph/ceph/pull/51332#issuecomment-2245071986> |
2024-07-23T12:09:46.669Z | <Venky Shankar> Now, it's just an error message change |
2024-07-23T12:09:50.571Z | <Rishabh Dave> yes |
2024-07-23T12:10:08.400Z | <Venky Shankar> earlier it was fiddling with a bunch of other things |
2024-07-23T12:10:25.332Z | <Venky Shankar> it's straightforward |
2024-07-23T12:10:37.713Z | <Venky Shankar> anyway, let's get an ack. |
2024-07-23T12:11:00.072Z | <Rishabh Dave> cool. i've tagged yuri, awaiting his response. |
2024-07-23T12:12:54.218Z | <Rishabh Dave> @Venky Shankar are we proceeding to merge the `clone stats` PR? if so, i'll start writing code for the new option you proposed this morning.
<https://github.com/ceph/ceph/pull/54620#issuecomment-2244909888> |
2024-07-23T12:13:27.857Z | <Venky Shankar> yeh, I'll merge it soon. |
2024-07-23T12:13:36.174Z | <Venky Shankar> today or max tomorrow. |
2024-07-23T12:14:00.473Z | <Rishabh Dave> okay, thanks, i'll get started. |
2024-07-23T12:20:06.919Z | <Kotresh H R> ok, it's reproducing on qe setup.
```[root@ceph-amk-weekly-ofsuz2-node8 ~]# ceph fs authorize cephfs_1 client.hrk / rw root_squash
2024-07-23T08:19:07.066-0400 7f26057fa640 10 client.?.objecter ms_handle_connect 0x7f26081032c0
2024-07-23T08:19:07.066-0400 7f26057fa640 10 client.?.objecter resend_mon_ops
2024-07-23T08:19:07.067-0400 7f260db43640 10 client.26050.objecter _maybe_request_map subscribing (onetime) to next osd map
2024-07-23T08:19:07.070-0400 7f26057fa640 10 client.26050.objecter ms_dispatch 0x7f2608000b90 osd_map(2047..2047 src has 1..2047)
2024-07-23T08:19:07.070-0400 7f26057fa640 3 client.26050.objecter handle_osd_map got epochs [2047,2047] > 0
2024-07-23T08:19:07.070-0400 7f26057fa640 3 client.26050.objecter handle_osd_map decoding full epoch 2047
2024-07-23T08:19:07.070-0400 7f26057fa640 20 client.26050.objecter dump_active .. 0 homeless
2024-07-23T08:19:07.072-0400 7f26057fa640 10 client.26050.objecter ms_handle_connect 0x7f25f406f410
[client.hrk]
key = AQC7n59m2JjwERAAyMY+FZNn/ofT6tKWjOoDvw==
2024-07-23T08:19:07.310-0400 7f260db43640 20 client.26050.objecter shutdown clearing up homeless session...
2024-07-23T08:19:07.310-0400 7f260db43640 10 client.26050.objecter successfully canceled tick
[root@ceph-amk-weekly-ofsuz2-node8 ~]# ceph fs authorize cephfs_2 client.hrk / rw
2024-07-23T08:19:13.600-0400 7f4b8a7fc640 10 client.?.objecter ms_handle_connect 0x7f4b98075e20
2024-07-23T08:19:13.600-0400 7f4b8a7fc640 10 client.?.objecter resend_mon_ops
2024-07-23T08:19:13.601-0400 7f4b9f6a5640 10 client.26056.objecter _maybe_request_map subscribing (onetime) to next osd map
2024-07-23T08:19:13.603-0400 7f4b8a7fc640 10 client.26056.objecter ms_dispatch 0x7f4b98000b90 osd_map(2047..2047 src has 1..2047)
2024-07-23T08:19:13.603-0400 7f4b8a7fc640 3 client.26056.objecter handle_osd_map got epochs [2047,2047] > 0
2024-07-23T08:19:13.603-0400 7f4b8a7fc640 3 client.26056.objecter handle_osd_map decoding full epoch 2047
2024-07-23T08:19:13.603-0400 7f4b8a7fc640 20 client.26056.objecter dump_active .. 0 homeless
2024-07-23T08:19:13.604-0400 7f4b8a7fc640 10 client.26056.objecter ms_handle_connect 0x7f4b8406f410
Error EINVAL: client.hrk already has fs capabilities that differ from those supplied. To generate a new auth key for client.hrk, first remove client.hrk from configuration files, execute 'ceph auth rm client.hrk', then execute this command again.
[root@ceph-amk-weekly-ofsuz2-node8 ~]# ``` |
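If the second `fs authorize` keeps failing on a build like this, one possible workaround (a sketch only, not verified on that Reef build) is to set the combined caps directly with `ceph auth caps`, mirroring the caps that `fs authorize` generates on main in the paste above, with the fs names from this QE setup:
```
# Set the combined caps for both filesystems on the existing client in one step,
# mirroring what `fs authorize` produces on main for two filesystems.
ceph auth caps client.hrk \
  mds 'allow rw fsname=cephfs_1 root_squash, allow rw fsname=cephfs_2' \
  mon 'allow r fsname=cephfs_1, allow r fsname=cephfs_2' \
  osd 'allow rw tag cephfs data=cephfs_1, allow rw tag cephfs data=cephfs_2'

# Verify what got stored:
ceph auth get client.hrk
```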
2024-07-23T12:20:29.247Z | <Kotresh H R> ```[root@ceph-amk-weekly-ofsuz2-node8 ~]# ceph --version
ceph version 18.2.1-224.el9cp (e65d95a3893a13895a9089eedaa7d34a37f1003b) reef (stable)
[root@ceph-amk-weekly-ofsuz2-node8 ~]# ``` |
2024-07-23T12:33:25.442Z | <Rishabh Dave> Ah, okay, it's Reef, not main. |
2024-07-23T14:29:05.945Z | <Venky Shankar> BTW I hit this: <https://github.com/ceph/ceph/pull/54620#issuecomment-2245401742> |
2024-07-23T15:03:34.193Z | <Rishabh Dave> from the traceback it doesn't seem related... |
2024-07-23T15:05:12.596Z | <Rishabh Dave> the ceph-ci branch's builds have been deleted, can't re-run anymore :\ |
2024-07-23T15:11:05.753Z | <Venky Shankar> yeh, but I'm seeing a couple of failures in that branch |