Upgrade from 4.2.0.x to 4.2.1.3

OpenIAM 4.2.1.3 includes changes that require manual upgrade steps. This document guides you through upgrading to version 4.2.1.3 from older versions.

Update of Kubernetes

The deployment was tested on Kubernetes 1.23. If you are using AWS, see the section on updating to 4.2.1.3 in the corresponding [README](module/core/aws/README.md) or here.

Change 1 - Stash

Stash was updated and is incompatible with the previous version, so you will need to undeploy it first.

Skip this section if you are not using Stash.

Step 1

terraform state rm $(terraform state list | grep stash | awk '{print $1}')

Step 2

helm delete $(helm list | grep stash | awk '{print $1}')

Step 3

Create a license file, as per the instructions here: https://stash.run/docs/v2022.09.29/setup/install/community/

Step 4

Put the license file in .stash/licence.txt
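Assuming the license was downloaded as license.txt into your working directory (a hypothetical name; use whatever file stash.run gives you), placing it could look like this:

```shell
# Hypothetical placement of the Stash license. "license.txt" stands in for the
# file downloaded from stash.run; the placeholder line below only keeps this
# snippet self-contained.
[ -f license.txt ] || echo "PLACEHOLDER-LICENSE" > license.txt
mkdir -p .stash
cp license.txt .stash/licence.txt   # note the "licence" spelling expected by the deployment
```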

Change 2 - Elasticsearch

Certain Elasticsearch indices can grow to many gigabytes in size, so a way to curate them had to be added.

Before updating, you will need to run:

  • the following SQL query
select LOGIN as login, NAME as 'managed_system_name' from LOGIN l
LEFT join MANAGED_SYS ms on ms.MANAGED_SYS_ID=l.MANAGED_SYS_ID
where PROV_STATUS in ('PENDING_CREATE', 'PENDING_UPDATE','PENDING_DISABLE','PENDING_ENABLE','PENDING_DELETE')

Note these logins and the corresponding users; you will need to re-run provisioning for them after the upgrade.

  • the following commands in the esb pod (i.e. kubectl exec -it $(kubectl get pods | grep esb | awk '{print $1}') -- bash)
# taken from terraform.tfvars
export ELASTICSEARCH_USERNAME=elastic
export ELASTICSEARCH_PASSWORD=ChangeMeToSomethingMoreSecure123#51
AUTHORIZATION_HEADER=""
if [ ! -z "$ELASTICSEARCH_USERNAME" ] && [ "$ELASTICSEARCH_USERNAME" != "null" ] && [ ! -z "$ELASTICSEARCH_PASSWORD" ] && [ "$ELASTICSEARCH_PASSWORD" != "null" ]; then
AUTHORIZATION_HEADER="Authorization: Basic $(echo -n ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} | base64)"
fi
for indexToCurate in provisionrequest connectorreply provisionconnectorrequest; do
curl -H "${AUTHORIZATION_HEADER}" -XDELETE "http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/$indexToCurate*"
done

Example:

lbornov2@mypc kubernetes-docker-configuration % kubectl get pods | grep esb
test2021-esb-0 1/1 Running 3 13h
lbornov2@mypc kubernetes-docker-configuration % kubectl exec -it test2021-esb-0 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-5.1$ export ELASTICSEARCH_USERNAME=elastic
export ELASTICSEARCH_PASSWORD=ChangeMeToSomethingMoreSecure123#51
AUTHORIZATION_HEADER=""
if [ ! -z "$ELASTICSEARCH_USERNAME" ] && [ "$ELASTICSEARCH_USERNAME" != "null" ] && [ ! -z "$ELASTICSEARCH_PASSWORD" ] && [ "$ELASTICSEARCH_PASSWORD" != "null" ]; then
AUTHORIZATION_HEADER="Authorization: Basic $(echo -n ${ELASTICSEARCH_USERNAME}:${ELASTICSEARCH_PASSWORD} | base64)"
fi
for indexToCurate in provisionrequest connectorreply provisionconnectorrequest; do
curl -H "${AUTHORIZATION_HEADER}" -XDELETE "http://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/$indexToCurate*"
done
{"acknowledged":true}
{"acknowledged":true}
{"acknowledged":true}

Change 3 - NFS

In addition, the size of the NFS volume was increased from 2G to 4G. To expand it, do the following:

  1. kubectl get storageclass
  2. Pick the (non-nfs) storageclass. For example, gp2 (if using AWS)
  3. Run kubectl edit storageclass gp2 and add allowVolumeExpansion: true if it is not already present
  4. kubectl get pvc | grep nfs-server
  5. Run kubectl edit pvc <nfs-pvc-from-above> and increase the storage size to 4G
  6. kubectl get sts | grep nfs
  7. kubectl delete statefulset/<statefulset_from_above>
  8. kubectl get sts | grep rabbitmq
  9. kubectl delete statefulset/<statefulset_from_above>, for example kubectl delete statefulset/test2021-rabbitmq
  10. kubectl delete jobs --all
  11. kubectl delete cronjobs --all

Change 4 - Redeploy

Now that you've finished the manual steps, you can redeploy OpenIAM.

./setup.sh
terraform init && terraform apply

Alternatively, re-run the helm script.

Change 5 - Fix Flyway

Old Flyway scripts were fixed to run on newer database versions. Because already-applied migration scripts have changed, Flyway will fail to complete successfully until its schema history is repaired.

  1. First, repair Flyway:
  • If using terraform, set database.flywayCommand to repair, then re-run terraform apply.
  • If using helm, set the FLYWAY_COMMAND ENV variable in the deployment script to repair, and re-apply helm.
  2. Second, migrate Flyway:
  • If using terraform, set database.flywayCommand to migrate, then re-run terraform apply.
  • If using helm, set the FLYWAY_COMMAND ENV variable in the deployment script to migrate, and re-apply helm.
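For the helm path, the environment change might look like the following deployment fragment (a hypothetical sketch; the container name and surrounding structure depend on your chart):

```yaml
# Hypothetical fragment of the deployment spec: the first pass uses "repair",
# the second pass switches the value to "migrate" before re-applying helm.
env:
  - name: FLYWAY_COMMAND
    value: "repair"   # change to "migrate" for the second pass
```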

Finally, manually run provisioning (save the user) for the logins identified by the SQL query in Change 2.