ceph - ceph-devel - 2024-06-19

2024-06-19T07:25:35.002Z
<Vikram Kumar Vaidyam> I am using Ceph as the storage provisioner and there is a concerning issue. I wrote some data, say ~50GB, into a volume, and after some time I tried deleting the volume. What I observed is that even after deleting the volume, the space is still reported as occupied.
2024-06-19T10:17:58.152Z
<Vikram Kumar Vaidyam> I am encountering an issue with space reclamation when using Rook-Ceph for dynamic storage provisioning in a Kubernetes cluster. Here are the details:
**Environment Details:**
• **Storage Classes Used**:
    ◦ `standard-rwo` (provisioner: `rook-ceph.rbd.csi.ceph.com`)
    ◦ `rook-cephfs` (provisioner: `rook-ceph.cephfs.csi.ceph.com`)
**Issue Description:**
We use the `standard-rwo` storage class with the `rook-ceph.rbd.csi.ceph.com` provisioner. The class allows volume expansion, and increasing the Persistent Volume (PV) size works correctly. However, when we shrink or delete a PV, the space is not reclaimed in Ceph.
**Steps Taken:**
1. Increased PV size to ~1TB successfully.
2. Deleted the PV to reclaim the space.
3. Observed that the space was not reclaimed in Ceph.
**Observations:**
**Ceph Status Commands**:
• `ceph osd pool stats`
• `ceph df`
• Both commands indicate that the space used by the deleted PV is not being reclaimed.
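A few diagnostics that may help narrow this down — a sketch only; the pool name `replicapool` is a placeholder for whatever CephBlockPool backs the storage class:

```shell
# Check whether the PV's reclaim policy actually deletes the backing image
# (with "Retain", deleting the PVC/PV leaves the RBD image behind).
kubectl get pv -o custom-columns=NAME:.metadata.name,RECLAIM:.spec.persistentVolumeReclaimPolicy

# List RBD images in the pool backing the storage class
# ("replicapool" is a placeholder; check your CephBlockPool name).
rbd ls --pool replicapool

# Deleted images can sit in the RBD trash until purged,
# still counting against pool usage.
rbd trash ls --pool replicapool

# Pool-level usage; note that space is freed asynchronously after
# deletion, so re-check after some time.
ceph df detail
```

If the image shows up in `rbd trash ls`, the space will not be released until the trash entry is purged.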
2024-06-19T10:41:02.343Z
<IcePic> Vikram: Can you list the "orphaned" volume objects in ceph?
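One way to answer this is to compare the `csi-vol-*` images in the pool against the volume handles Kubernetes still references — a sketch under the assumptions that the pool is named `replicapool` and that the CSI volume handle ends in the image's UUID:

```shell
# RBD images created by the CSI driver are named csi-vol-<uuid>.
rbd ls --pool replicapool | grep '^csi-vol-' | sort > /tmp/ceph-images.txt

# Volume handles of PVs that still exist in Kubernetes; the trailing
# five hyphen-separated fields of the handle form the image UUID.
kubectl get pv -o jsonpath='{range .items[*]}{.spec.csi.volumeHandle}{"\n"}{end}' \
  | awk -F- '{print "csi-vol-"$(NF-4)"-"$(NF-3)"-"$(NF-2)"-"$(NF-1)"-"$NF}' \
  | sort > /tmp/k8s-volumes.txt

# Images present in Ceph but not referenced by any PV are orphan candidates.
comm -23 /tmp/ceph-images.txt /tmp/k8s-volumes.txt
</imports>
```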
2024-06-19T12:22:14.064Z
<Zac Dover> @Ilya Dryomov, thanks, Ilya, for reminding me that "Releases" is served from main.
2024-06-19T15:26:32.015Z
<Casey Bodley> weekly rgw meeting starting soon in https://meet.google.com/oky-gdaz-ror (pad: https://pad.ceph.com/p/rgw-weekly)
2024-06-19T18:43:45.233Z
<mgariepy> hello, I'm trying to debug the reason why a bucket seems to be undeletable, and I'm struggling to find the root cause
2024-06-19T18:44:45.926Z
<mgariepy> when I do `radosgw-admin bucket radoslist`, I get a lot of errors: ERROR: int RGWRados::Bucket::List::list_objects_ordered(...) marker failed to make forward progress;
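For a bucket that refuses to go away, checking the bucket index for inconsistencies is often the next step — a sketch; `mybucket` is a placeholder:

```shell
# Basic stats: object count, index shards, marker/bucket id.
radosgw-admin bucket stats --bucket=mybucket

# Check the bucket index; --check-objects also verifies the objects
# behind the index, and --fix attempts to repair inconsistencies.
radosgw-admin bucket check --bucket=mybucket
radosgw-admin bucket check --bucket=mybucket --check-objects --fix

# As a last resort, delete the bucket together with its objects.
radosgw-admin bucket rm --bucket=mybucket --purge-objects
```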
2024-06-19T21:48:46.192Z
<yuriw> Do we have a place somewhere where I can see exactly what distro flavours we build in ceph-ci for main, quincy, reef and squid?
@Dan Mick @Laura Flores @Josh Durgin @nehaojha 
2024-06-19T22:18:58.914Z
<Dan Mick> only the jenkins jobs
2024-06-19T22:20:51.038Z
<yuriw> Is this complete?

[https://shaman.ceph.com/builds/ceph/wip-yuri11-testing-2024-06-19-1425/](https://shaman.ceph.com/builds/ceph/wip-yuri11-testing-2024-06-19-1425/)
2024-06-19T22:24:25.475Z
<Dan Mick> I don't know.  The authority is the jenkins jobs themselves
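Short of reading the jenkins configs, the shaman API that the yuriw's link above is served from can show what was actually built for a ref — a sketch; the exact query parameters and field names are assumptions based on how teuthology queries shaman:

```shell
# List "ready" builds of a branch; each JSON record describes one build
# (distro, version, arch, flavor -- field names assumed).
curl -s 'https://shaman.ceph.com/api/search/?project=ceph&ref=main&status=ready' \
  | python3 -m json.tool
```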
2024-06-19T22:42:36.631Z
<yuriw> I see that lately only c9 was built, and then several hours later nothing more. Either jobs are not running, or there is a long queue.

But still, what do we _*expect*_ to be built for each release?
2024-06-19T22:43:13.650Z
<Dan Mick> to answer that question, one would read the configuration of the jenkins jobs that build them
2024-06-19T22:44:27.224Z
<Dan Mick> ideally, one could read a document from the development team about what distro/version/archs are **expected** to be built, and the jenkins configuration would match those requirements.  I've been asking for such documents for a while now and I don't think they yet exist.  So what we have left is what actually happens, which is the jenkins jobs
2024-06-19T22:47:35.284Z
<yuriw> Ok that makes sense (or doesn’t, depending on how you think about it 😉 )

Now from the teuthology standpoint - what does teuthology expect to be built in order to successfully schedule a suite?
2024-06-19T23:10:49.177Z
<Dan Mick> whatever the suite requests
