ceph - ceph-ansible - 2024-10-11

2024-10-11T11:51:26.035Z <luvn> hi everyone! i have a question regarding the behavior of my ceph deployed via ceph-ansible (and leveraged by openstack-ansible)
2024-10-11T11:52:44.560Z <luvn> i created a 50gb volume in cinder -- with ceph backend -- and noticed that, as i filled it, three ceph thin volumes grew together, each with 50gb, totaling 150gb (so far so good)
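(For reference, a minimal way to reproduce that observation. The pool name `volumes` and the volume name `testvol` are assumptions, not taken from the log; `volumes` is just a common default for the Cinder RBD backend.)

```sh
# create a 50 GB volume on the Ceph-backed backend
# (add --type <volume-type> if the Ceph backend is not the default type)
openstack volume create --size 50 testvol

# per-image provisioned vs. actually used space (thin provisioning);
# with 3x replication, raw usage spread across the OSDs is roughly 3x "USED"
rbd du -p volumes
```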
2024-10-11T11:53:20.675Z <luvn> then, i took a snapshot of this volume and enabled rbd flattening, and then decided to create a new volume from this snapshot
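(The flattening referred to here is presumably the Cinder RBD driver option `rbd_flatten_volume_from_snapshot`. A sketch of the steps, with placeholder names -- the backend section name, snapshot name, volume names and image name are all assumptions:)

```sh
# cinder.conf, in the RBD backend section (section name is deployment-specific):
#   [rbd]
#   rbd_flatten_volume_from_snapshot = True

# snapshot the volume, then create a new volume from that snapshot
openstack volume snapshot create --volume testvol snap1
openstack volume create --snapshot snap1 --size 50 testvol2

# a flattened image has no "parent:" line in its metadata
# (Cinder usually names the image volume-<uuid>)
rbd info volumes/volume-<uuid>
```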
2024-10-11T11:53:43.974Z <luvn> as expected, the three thin volumes grew together one more time, totaling 300gb
2024-10-11T11:55:20.621Z <luvn> after a while i decided to delete one of these cinder volumes, but to my surprise the thin volumes continued to take up 300gb of disk space
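(One way to check whether Ceph itself released the space after the delete, independent of what the backing files on the host show; volume and pool names are the same placeholders as above:)

```sh
# delete the Cinder volume
openstack volume delete testvol2

# the corresponding rbd image should disappear from the pool,
# and pool-level usage reported by Ceph should drop accordingly
rbd ls -p volumes
ceph df
```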
2024-10-11T11:55:41.703Z <luvn> is this normal?
2024-10-11T11:56:42.104Z <luvn> the weird thing about it is that `ceph osd status` shows me that each osd is using approx. 50gb each, which is fair, because that's the amount of space that one volume would take (in my scenario)
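(To cross-check those numbers: with 3x replication, one fully written 50 GB volume is expected to show up as roughly 50 GB on each of the three OSDs.)

```sh
# per-OSD used/available summary (the figure quoted above)
ceph osd status

# more detail: per-OSD utilisation, variance and PG count
ceph osd df
```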
2024-10-11T11:57:22.483Z <luvn> however, each /openstack/ceph{1,2,3}.img file is taking 100gb (i.e., as if they had two volumes)
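(For sparse image files like these, the apparent size and the space actually allocated on the host filesystem are different numbers, and the allocated blocks normally do not shrink on their own once they have been written. Both can be compared directly; the paths are the ones quoted above:)

```sh
# first column: blocks actually allocated; the size column: apparent size
ls -lsh /openstack/ceph1.img /openstack/ceph2.img /openstack/ceph3.img

# same comparison with du
du -h /openstack/ceph*.img
du -h --apparent-size /openstack/ceph*.img
```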
2024-10-11T11:57:53.676Z <luvn> shouldn't space be freed up after deleting the cinder volume?
2024-10-11T12:09:50.011Z <luvn> nvm, just figured that this makes sense, lol
2024-10-11T12:10:02.163Z <luvn> the problem here is that i'm running an aio
2024-10-11T12:10:44.946Z <luvn> if i had storage-dedicated nodes this shouldn't be a problem
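(That conclusion lines up with how an openstack-ansible AIO is commonly set up: the OSDs sit on sparse loopback image files under /openstack rather than on dedicated disks, so space freed inside Ceph is not, by default, handed back to the host filesystem and the .img files stay at their high-water mark. Assuming the images are attached as loop devices, the mapping can be checked with:)

```sh
# list loop devices and the files backing them
losetup -l

# or look up the loop device for a specific backing file
losetup -j /openstack/ceph1.img
```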
