2024-07-03T13:39:38.214Z | <Adam King> It looks like the `rgw_token` field is still present in what you applied. What I meant before was to rename the `rgw_token` field to `rgw_realm_token` and if there was already an existing `rgw_realm_token` field, remove it. |
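(A minimal before/after sketch of the rename Adam describes; the token values are just the placeholders used elsewhere in this thread:)
```# before: the spec as the rgw module wrote it
rgw_token: XXXXXX==        # stray field added by the module
rgw_realm_token: XXXXXX==  # pre-existing field: remove this line

# after: the spec to re-apply
rgw_realm_token: XXXXXX==  # the old rgw_token value under the correct name```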
2024-07-03T15:33:33.452Z | <Raghu> the spec file which applied to the secondary site does not have rgw_token in it.
```placement:
  label: rgwsync
  count_per_host: 2
rgw_zone: secondaryzone
rgw_realm_token: XXXXXX==
spec:
  rgw_frontend_port: 8000```
Once the spec is applied, there is a field called rgw_token; I am not sure how the cluster is getting this config, though.
Am I missing anything here? |
2024-07-03T15:35:58.904Z | <Adam King> As mentioned, the rgw module seems to have a bug where it sets that field. What I thought we could try was taking the spec that gets applied when it is passed through the rgw module, renaming the `rgw_token` field it adds to `rgw_realm_token`, removing the already-present `rgw_realm_token` field if there was one, and then re-applying that spec with `ceph orch apply` rather than a `ceph rgw...` command |
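(One way to script that edit, as a minimal sketch only: `/tmp/rgw-fixed.spec` is a hypothetical path holding a copy of the offending spec, and the `sed` lines assume each field sits on its own line:)
```# 1) drop the pre-existing rgw_realm_token line, if any (this must happen
#    before the rename, or the spec would end up with two rgw_realm_token fields)
sed -i '/rgw_realm_token:/d' /tmp/rgw-fixed.spec
# 2) rename the stray rgw_token field the rgw module added
sed -i 's/rgw_token:/rgw_realm_token:/' /tmp/rgw-fixed.spec
# 3) re-apply through the orchestrator rather than a `ceph rgw ...` command
ceph orch apply -i /tmp/rgw-fixed.spec```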
2024-07-03T16:24:31.040Z | <Raghu> thank you, Once i apply with the spec file :
```placement:
  count_per_host: 2
  label: rgwsync
service_id: realmcephadm.secondaryzone
service_name: rgw.realmcephadm.secondaryzone
service_type: rgw
spec:
  rgw_frontend_port: 8000
  rgw_realm: realmcephadm
  rgw_realm_token: XXXX==
  rgw_zone: secondaryzone
  rgw_zonegroup: zonegroupcephadm```
with `sudo ceph orch apply -i /tmp/rgw3.spec`, all the commands work now, including:
```ceph orch ls rgw --export
service_type: rgw
service_id: realmcephadm.secondaryzone
service_name: rgw.realmcephadm.secondaryzone
placement:
  count_per_host: 2
  label: rgwsync
spec:
  rgw_frontend_port: 8000
  rgw_realm: realmcephadm
  rgw_realm_token: XXXX==
  rgw_zone: secondaryzone
  rgw_zonegroup: zonegroupcephadm```
which was failing before with this message:
```Error EINVAL: ServiceSpec: __init__() got an unexpected keyword argument 'rgw_token'```
FYI, I created a tracker for this: <https://tracker.ceph.com/issues/66824>
Thank you again for all the help! |