Backup and restoration procedure in a Kubernetes environment

Backup and restoration are the recommended approach for protecting your OpenIAM deployment against data loss and service disruption. This document outlines how to back up application components, databases, and supporting services, and how to restore them in case of failure. Following this procedure ensures business continuity and minimizes downtime during unexpected incidents or planned recovery operations.

The backup and restore process for a single-node Kubernetes cluster has been tested with an external database (MSSQL and PostgreSQL), and it works successfully in the following two scenarios:

  1. Destroy the OpenIAM cluster using Terraform and restore it on the same Kubernetes cluster.
  2. Build a new Kubernetes cluster and restore the OpenIAM cluster from the existing backup.
Note: If any files were manually copied to the application pods’ PVCs, those files must be preserved separately so that they can be restored to the PVCs after the OpenIAM infrastructure cluster is restored.

Pre-backup steps

  1. Take a deployer VM snapshot (recommended). Take a snapshot of the deployer VM if possible, or create a compressed tar backup of the configuration directory with the following commands.
cd /usr/local/openiam
tar -czvf kubernetes-docker-configuration_backup.tar.gz kubernetes-docker-configuration
  2. Create an external database snapshot (if applicable). Ensure the database is running and accessible.
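If the external database is PostgreSQL, the snapshot can be a plain `pg_dump`; a minimal sketch, where `DB_HOST`, `DB_NAME`, and `DB_USER` are placeholders (not values from this deployment) that must be substituted with your own:

```shell
# Sketch: dump an external PostgreSQL database before destroying the cluster.
# DB_HOST, DB_NAME and DB_USER are placeholders -- substitute your own values.
DB_HOST="${DB_HOST:-db.example.com}"
DB_NAME="${DB_NAME:-openiam}"
DB_USER="${DB_USER:-openiam}"
BACKUP_FILE="${DB_NAME}_$(date +%Y%m%d).dump"

# Custom format (-Fc) lets pg_restore restore objects selectively later.
if command -v pg_dump >/dev/null 2>&1; then
  pg_dump -h "$DB_HOST" -U "$DB_USER" -Fc -f "$BACKUP_FILE" "$DB_NAME"
else
  echo "pg_dump not found; run this on a host with the PostgreSQL client installed"
fi
```

For MSSQL, use the equivalent `BACKUP DATABASE` statement or your usual database snapshot tooling.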

  3. Back up Vault secrets using Medusa.

    • Download the Medusa utility.

      export MEDUSA_VERSION=0.7.2
      wget https://github.com/jonasvinther/medusa/releases/download/v${MEDUSA_VERSION}/medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
      tar -xvf medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
    • Copy the Medusa utility to the Vault pod.

      kubectl get pods | grep vault
      kubectl cp medusa <vault-pod-name>:/tmp/
    • Export Vault secrets.

      kubectl exec -it <vault-pod-name> -- sh
      export VAULT_ADDR="https://127.0.0.1:8200"
      export VAULT_SKIP_VERIFY=true
      export VAULT_URL=127.0.0.1
      export VAULT_PORT=8200
      token=$(curl -k --request POST --cert /data/openiam/conf/vault/server/vault.crt --key /data/openiam/conf/vault/server/vault.key --data '{"name": "web"}' https://${VAULT_URL}:${VAULT_PORT}/v1/auth/cert/login | jq .auth.client_token | tr -d '"')
      /tmp/medusa export secret --address="https://${VAULT_URL}:${VAULT_PORT}" --token="${token}" -p /data/openiam/conf/vault/server/vault.key --format="yaml" --insecure -o /tmp/export.yaml -m kv1
      exit    # leave the pod shell; run the next command on the deployer VM
      kubectl cp <vault-pod-name>:/tmp/export.yaml ./export.yaml
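Before moving on, it is worth confirming on the deployer VM that the export is actually non-empty, since an empty export.yaml makes the later restore useless; a small sketch:

```shell
# Sketch: sanity-check the exported Vault secrets file on the deployer VM.
EXPORT_FILE=export.yaml
if [ -s "$EXPORT_FILE" ]; then
  echo "$EXPORT_FILE is present and non-empty ($(wc -c < "$EXPORT_FILE") bytes)"
else
  echo "$EXPORT_FILE is missing or empty -- re-run the medusa export" >&2
fi
```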
  4. Back up the ESB cacerts.

kubectl get pods | grep esb
kubectl exec -it openiam-esb-0 -- ls -l /usr/lib/jvm/default-jvm/lib/security/cacerts
kubectl cp openiam-esb-0:/usr/lib/jvm/default-jvm/lib/security/cacerts ./cacerts

Store the following artifacts on an external server:

  • cacerts file
  • export.yaml
  • kubernetes-docker-configuration_backup.tar.gz
  • Deployer VM snapshot
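Before copying the files off-host, it can help to record checksums so the restore side can verify nothing was corrupted in transit; a sketch (file names match the list above, `backup-host` is a placeholder):

```shell
# Sketch: checksum the backup artifacts so integrity can be verified after
# copying them to the external server. Missing files are reported, not fatal.
ARTIFACTS="cacerts export.yaml kubernetes-docker-configuration_backup.tar.gz"
: > backup.sha256
for f in $ARTIFACTS; do
  if [ -f "$f" ]; then
    sha256sum "$f" >> backup.sha256
  else
    echo "missing: $f" >&2
  fi
done
# Copy everything, including the checksum file, e.g.:
#   scp $ARTIFACTS backup.sha256 backup-host:/backups/openiam/
# Verify on the remote side with: sha256sum -c backup.sha256
```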

Restoration

  1. Remove the Vault alias from cacerts.
keytool -list -keystore cacerts -storepass changeit | grep vault
keytool -delete -alias test2025-vault -keystore cacerts -storepass changeit
keytool -list -keystore cacerts -storepass changeit | grep vault
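The alias name (test2025-vault above) differs per installation, so a loop that removes every alias containing "vault" can be safer than hard-coding one; a sketch, assuming the JDK keytool is on the PATH and cacerts is in the current directory:

```shell
# Sketch: delete every alias containing "vault" from the cacerts keystore.
# The alias name varies per installation, so match it rather than hard-code it.
STOREPASS=changeit
if command -v keytool >/dev/null 2>&1 && [ -f cacerts ]; then
  # keytool -list prints entries as "<alias>, <date>, <entry type>,"
  keytool -list -keystore cacerts -storepass "$STOREPASS" 2>/dev/null \
    | awk -F',' 'tolower($0) ~ /vault/ {print $1}' \
    | while read -r ALIAS; do
        keytool -delete -alias "$ALIAS" -keystore cacerts -storepass "$STOREPASS"
      done
else
  echo "keytool or cacerts not present; run this where the cacerts backup lives"
fi
```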
  2. Create a custom ESB image.
mkdir -p /root/ESB-Custom_image
cd /root/ESB-Custom_image

Copy modified cacerts to this directory.

Dockerfile content:

vi Dockerfile
FROM registry.openiam.com/openiam_service/esb:debian-4.2.2-dev
USER root
# Backup original cacerts
RUN cp /usr/lib/jvm/default-jvm/lib/security/cacerts /usr/lib/jvm/default-jvm/lib/security/cacerts.orig
# Copy your updated cacerts
COPY cacerts /usr/lib/jvm/default-jvm/lib/security/cacerts
RUN chmod 664 /usr/lib/jvm/default-jvm/lib/security/cacerts && chown openiam:openiam /usr/lib/jvm/default-jvm/lib/security/cacerts
USER openiam

Build and save the image:

docker build -t openiam-esb:4.2.2-cacerts .
docker save openiam-esb:4.2.2-cacerts -o openiam-esb-4.2.2-cacerts.tar
  3. Load the ESB image onto the Kubernetes nodes.
kubectl get nodes
kubectl debug node/<node-name> -it --image=busybox
kubectl cp openiam-esb-4.2.2-cacerts.tar <debug-pod>:/tmp/
kubectl exec -it <debug-pod> -- sh
cd /tmp/
mkdir -p /host/var/tmp/images
cp openiam-esb-4.2.2-cacerts.tar /host/var/tmp/images
chroot /host bash
ctr -n k8s.io images import /var/tmp/images/openiam-esb-4.2.2-cacerts.tar
ctr -n k8s.io images list | grep esb
Note: Identify the image name from the last command, copy it, and then exit the debug pod.
  4. Update the ESB StatefulSet.

Edit esb-statefulset.yaml and update the image reference.

vi /usr/local/openiam/kubernetes-docker-configuration/openiam/templates/services/esb-statefulset.yaml
image: "docker.io/library/openiam-esb:4.2.2-cacerts"
imagePullPolicy: IfNotPresent
  5. Deploy OpenIAM.
cd /usr/local/openiam/kubernetes-docker-configuration
./setup.sh
terraform apply --auto-approve
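After terraform apply, it can help to wait for the pods explicitly before importing secrets; a sketch (the StatefulSet name assumes the openiam-esb naming used above):

```shell
# Sketch: wait for the restored pods to become Ready before importing secrets.
TIMEOUT=600s
if command -v kubectl >/dev/null 2>&1; then
  # "|| true" keeps the sketch non-fatal when a wait times out.
  kubectl rollout status statefulset/openiam-esb --timeout="$TIMEOUT" || true
  kubectl wait --for=condition=Ready pod --all --timeout="$TIMEOUT" || true
else
  echo "kubectl not found; run this from the deployer VM"
fi
```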
  6. Import the Vault secrets. Once the Vault pod is up and running, copy the Medusa utility and the export.yaml file into the Vault pod.
kubectl cp /root/medusa <vault-pod-name>:/tmp/
kubectl cp /root/export.yaml <vault-pod-name>:/tmp/
kubectl exec -it <vault-pod-name> -- sh
cd /tmp
export VAULT_ADDR="https://127.0.0.1:8200"
export VAULT_SKIP_VERIFY=true
export VAULT_URL=127.0.0.1
export VAULT_PORT=8200
token=$(curl -k --request POST --cert /data/openiam/conf/vault/server/vault.crt --key /data/openiam/conf/vault/server/vault.key --data '{"name": "web"}' https://${VAULT_URL}:${VAULT_PORT}/v1/auth/cert/login | jq .auth.client_token | tr -d '"')
/tmp/medusa import secret -p /data/openiam/conf/vault/server/vault.key /tmp/export.yaml --address="https://${VAULT_URL}:${VAULT_PORT}" --token="${token}" --insecure -m kv1
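Still inside the Vault pod, a quick listing confirms the secrets actually landed; a sketch, assuming the kv1 engine is mounted at the usual `secret/` path (adjust if yours differs) and the VAULT_ADDR/token environment set up above:

```shell
# Sketch: list the imported keys to confirm the Medusa import succeeded.
# "secret/" is the usual kv1 mount name -- adjust if yours differs.
MOUNT="secret/"
if command -v vault >/dev/null 2>&1; then
  # "|| true" keeps the sketch non-fatal when run outside the Vault pod.
  vault kv list "$MOUNT" || true
else
  echo "vault CLI not found; run this inside the Vault pod"
fi
```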

Post-restoration

Monitor all pods until the UI is accessible, then log in using the pre-backup credentials.
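Monitoring can be a simple poll loop rather than repeated manual checks; a sketch (the namespace is a placeholder, substitute the one your OpenIAM release runs in):

```shell
# Sketch: poll until no pod is in a non-Running state, then test the UI login.
NAMESPACE="${NAMESPACE:-default}"
if command -v kubectl >/dev/null 2>&1; then
  # STATUS is the third column of "kubectl get pods"; Completed jobs are fine.
  while kubectl get pods -n "$NAMESPACE" --no-headers 2>/dev/null \
        | awk '$3 != "Running" && $3 != "Completed"' | grep -q .; do
    sleep 10
  done
  echo "all pods are Running in namespace $NAMESPACE"
else
  echo "kubectl not found; run this from the deployer VM"
fi
```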

Note: If new users were created after the cacerts backup was taken, ensure the latest cacerts file is backed up and used during restoration.