ceph - sepia - 2024-10-21

2024-10-21T11:00:12.761Z
<Guillaume Abrioux> I'm seeing the following error this morning:
```
Package yaml-cpp-devel-0.6.3-5.el9.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
+ [[ true == \t\r\u\e ]]
+ [[ centos == \c\e\n\t\o\s ]]
+ [[ 9 =~ 8|9 ]]
+ podman login -u **** -p **** quay-quay-quay.apps.os.sepia.ceph.com/ceph-ci
time="2024-10-21T08:28:42Z" level=warning msg="Failed to decode the keys [\"storage.options.remap-uids\" \"storage.options.remap-gids\" \"storage.options.remap-user\" \"storage.options.remap-group\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\" \"storage.options.thinpool.xfs_nospace_max_retries\"] from \"/home/jenkins-build/.config/containers/storage.conf\""
Error: cannot re-exec process to join the existing user namespace
+ rm -fr /tmp/install-deps.2004042
Build step 'Execute shell' marked build as failure
New run name is '#83958 origin/wip-guits-squid-2024-10-21-0729-squid, 3670315f754aa546469582d32c3d49b928806ef4, jammy centos9 windows, default'
[PostBuildScript] - [INFO] Executing post build scripts.
```
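The `cannot re-exec process to join the existing user namespace` failure comes from rootless podman's user-namespace setup. As a hedged sketch only (the exact workaround from the issue isn't quoted in this log), a commonly suggested recovery on an affected node looks like:
```
# Sketch: reset rootless podman's user-namespace state as the build user.
# Assumes it is safe to re-create the jenkins-build user's rootless setup;
# this is illustrative, not necessarily the fix that was applied here.
podman system migrate                    # stop the pause process so the user namespace is re-created
podman unshare cat /proc/self/uid_map    # sanity-check the subuid mappings
```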
2024-10-21T11:00:16.913Z
<Guillaume Abrioux> https://jenkins.ceph.com/job/ceph-dev-new-build/ARCH=x86_64,AVAILABLE_ARCH=x86_64,AVAILABLE_DIST=centos9,DIST=centos9,MACHINE_SIZE=gigantic/83958//consoleFull
2024-10-21T14:23:00.071Z
<Patrick Donnelly> lab job queue is 0 😱
2024-10-21T14:23:04.967Z
<Patrick Donnelly> get testing!
2024-10-21T14:52:24.106Z
<Kamoltat (Junior) Sirivadhna> Idk if this is a good thing or a bad thing haha
2024-10-21T15:22:49.999Z
<Kyrylo Shatskyy> hurry up
2024-10-21T16:22:15.415Z
<Zack Cerza> they are failing with an error you apparently reported two years ago! <https://github.com/containers/podman/issues/14635>
2024-10-21T16:27:03.229Z
<Zack Cerza> I followed the workaround procedure you mentioned in the issue; that seemed to fix the error
2024-10-21T17:09:35.219Z
<Dan Mick> I've seen that before. Maybe we need to add that workaround to the new container build script <sigh>
2024-10-21T17:42:51.625Z
<Guillaume Abrioux> fascinating
2024-10-21T17:56:01.909Z
<Zack Cerza> i would love to find out why this apparently happens on our jenkins nodes and nowhere else on earth lol
2024-10-21T17:58:03.166Z
<Zack Cerza> possibly unrelated but I did find an old, strange `~/.config/containers/storage.conf` and moved it out of the way
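The file path comes straight from the log above; a minimal sketch of that cleanup (the backup name is arbitrary) would be:
```
# Move the stray per-user storage.conf out of podman's search path so the
# distro defaults (/usr/share/containers/storage.conf, /etc/containers/storage.conf) apply again.
mv ~/.config/containers/storage.conf ~/.config/containers/storage.conf.bak
podman info > /dev/null && echo "podman OK"   # quick smoke test
```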
2024-10-21T18:12:20.511Z
<Dan Mick> The history of podman on the build hosts has been checkered. There were non-distro installs, and I think podman has changed things about its dirs and SELinux labels over time without necessarily handling them well in package scripts.
2024-10-21T20:15:13.930Z
<Zack Cerza> yeah the more I look at this the more it looks like the stray config file was the problem
2024-10-21T20:15:24.748Z
<Zack Cerza> we shouldn't need that file on any of the jenkins hosts
2024-10-21T20:41:45.217Z
<Zack Cerza> just verified no other hosts have that file
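For reference, a check like the one described could look something like this (the host list is hypothetical; the `jenkins-build` user and file path are from the log above):
```
# Hypothetical host list; the real builder inventory lives elsewhere.
for host in builder01 builder02 builder03; do
  ssh "jenkins-build@${host}" \
    'test -e ~/.config/containers/storage.conf && echo "$(hostname): stray storage.conf"' || true
done
```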
