Migration of Vault from RPM-based cluster to Kubernetes-based OpenIAM cluster

This document describes the process of migrating Vault from an existing RPM-based OpenIAM cluster to a Kubernetes-based OpenIAM cluster. The migration approach outlined in this guide is intended to minimize downtime and prevent data loss while maintaining compatibility with OpenIAM components running in Kubernetes. By following the steps in this guide, you can safely transition Vault to a Kubernetes environment while preserving existing integrations and ensuring continuity of authentication and secret management services.

Pre-Migration preparation on RPM nodes

Start by backing up certificates and exporting data:

  • Back up the Java truststore (cacerts).
  • Export all Vault data from the RPM nodes.

Make sure the OpenIAM version is the same in both the RPM and Kubernetes clusters.

Stop OpenIAM services with the following commands:

openiam-cli status
openiam-cli stop

Migrating process

Installing Medusa utility on RPM node

Medusa is a Cassandra backup and restore utility used to safely manage backups of Apache Cassandra and compatible databases. In OpenIAM deployments, Medusa is typically used to back up and restore Cassandra data during upgrades, migrations (for example, moving from RPM-based clusters to Kubernetes, as described in the current document), or disaster recovery scenarios.

export MEDUSA_VERSION="0.7.2"
wget https://github.com/jonasvinther/medusa/releases/download/v${MEDUSA_VERSION}/medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
tar -xvf medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
cp medusa /usr/bin/
medusa --help

Exporting Vault data from RPM cluster

  1. Create a Vault export shell script with the following content.
#!/bin/bash
. /usr/local/openiam/env.conf
export VAULT_CERTS="$HOME_DIR/vault/certs/"
function migrate_vault() {
token=$(curl -k --request POST --cert ${VAULT_CERTS}/vault.crt --key ${VAULT_CERTS}/vault.key --data '{"name": "web"}' https://${VAULT_URL}:${VAULT_PORT}/v1/auth/cert/login | jq .auth.client_token | tr -d '"')
medusa export secret --address="https://${VAULT_URL}:${VAULT_PORT}" --token="${token}" -p /usr/local/openiam/vault/certs/vault.key --format="yaml" --insecure -o /tmp/export.yaml -m kv1
}
migrate_vault
  2. Execute the script.
vi medusa.sh
chmod +x medusa.sh
./medusa.sh
  3. Copy the exported data /tmp/export.yaml from the RPM node to the Kubernetes deployer node.
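One way to perform the copy, assuming SSH access and a hypothetical deployer hostname (k8s-deployer), is to verify the transfer with a checksum:

```shell
# Record a checksum before the copy so the transfer can be verified.
# "k8s-deployer" is a hypothetical hostname; adjust to your environment.
sha256sum /tmp/export.yaml                               # note the hash
scp /tmp/export.yaml root@k8s-deployer:/root/export.yaml
ssh root@k8s-deployer 'sha256sum /root/export.yaml'      # must print the same hash
```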

Exporting Java Truststore (cacerts) from RPM node

Use the commands below.

cd /usr/local/openiam/jdk/lib/security/
tar -czvf cacerts-from-rpm.tar.gz cacerts

Copy cacerts-from-rpm.tar.gz from RPM node to Kubernetes deployer node.

Installing Medusa utility on Kubernetes deployer node

Use the following commands.

export MEDUSA_VERSION="0.7.2"
wget https://github.com/jonasvinther/medusa/releases/download/v${MEDUSA_VERSION}/medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
tar -xvf medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
cp medusa /usr/bin/
medusa --help

Copy export.yaml and cacerts-from-rpm.tar.gz to /root.

Performing Terraform Cleanup on Kubernetes deployer node

Note: The steps below assume a newly deployed Kubernetes cluster running with local MariaDB.

Use the following command.

terraform destroy --auto-approve

Remove Terraform state and backup files:

cd /usr/local/openiam/kubernetes-docker-configuration
rm -rf .deploy
rm -rf terraform.tfstate*

Updating Terraform configuration

Update all parameters in terraform.tfvars according to the data exported from the RPM node.

  • For application and database details, set the following parameters.
app_name =
database root user =
database root password =
  • For the OpenIAM database configuration, use the following block.
openiam = {
user = "idmuser"
password = "idmuser"
database_name = "openiam"
schema_name = "openiam"
}
  • Do the same for the Activiti user, password, and database name.
  • For the external database configuration, use the following block. Ensure database connectivity from the deployer VM.
helm = {
host = "10.X.X.X"
port = "3306"
}
  • Update the Elasticsearch credentials, taking the password from the exported Vault data (export.yaml).
authentication = {
username = "elastic"
password = "AvdsDGxSOXnzV3EhTmlcyY6kwFtJnJQd"
}
  • Update RabbitMQ credentials, as follows.
rabbitmq = {
user = "openiam"
password = "passwd00"
}
  • Update the Vault and keystore passwords with values from the exported Vault data.
vaultKeyPassword = "changeit"
secrets = {
javaKeystorePassword = "mt2JG9H0RXXAq5cKZT01daNf4RcrHUP0"
jks = {
password = "mRUcmlwbGVERVMAAAADAA5vcGVuaWFtX2NvbW1vbgAAAZsmjkzzrOSa7N+c7MS4IQ=="
keyPassword = "MekyyyPkZiMR2iWqukDj3WTeH1cnoxpy"
cookieKeyPassword = "o7NPLwCpjpOyPwSFqaAfqT8uEF7HarO5"
commonKeyPassword = "cW3FayhwZFZAXzCWF61STCbBjpGVr0wn"
}
}
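Since the external database block above requires connectivity from the deployer VM, a minimal reachability sketch (assuming bash with /dev/tcp support; substitute the host and port from your helm block):

```shell
# Quick TCP reachability check; the host below is the placeholder from the
# helm block and must be replaced with your real MariaDB address.
check_port() {
  timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null
}

if check_port 10.X.X.X 3306; then
  echo "MariaDB port reachable from deployer VM"
else
  echo "MariaDB port NOT reachable - fix firewall/routing before terraform apply"
fi
```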

Building a custom ESB image with updated cacerts

  1. Prepare the Dockerfile. Switch to /root, where cacerts-from-rpm.tar.gz was copied, and run the following commands.
tar -xvf cacerts-from-rpm.tar.gz
vi Dockerfile
Dockerfile content:
FROM confidant-bhaskara.container-registry.com/openiam_service/esb:debian-4.2.1.15-dev
USER root
# Backup original cacerts
RUN cp /usr/lib/jvm/default-jvm/lib/security/cacerts \
/usr/lib/jvm/default-jvm/lib/security/cacerts.orig
# Copy your updated cacerts
COPY cacerts /usr/lib/jvm/default-jvm/lib/security/cacerts
# Ensure writable (for safety)
RUN chmod 664 /usr/lib/jvm/default-jvm/lib/security/cacerts && \
chown openiam:openiam /usr/lib/jvm/default-jvm/lib/security/cacerts
USER openiam
Note: The ESB image should be updated in alignment with the respective OpenIAM version.
  2. Build and save the image as follows.
docker build -t openiam-esb:4.2.1.15-cacerts .
docker save openiam-esb:4.2.1.15-cacerts -o openiam-esb-4.2.1.15-cacerts.tar
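As a sanity check, the entry count of the migrated truststore can be compared against the RPM node. This sketch assumes the image's JVM ships keytool and that cacerts uses the default password changeit:

```shell
# Extract "N" from keytool's "... contains N entries" summary line.
count_entries() { grep -oE '[0-9]+ entries' | grep -oE '[0-9]+'; }

# Assumption: keytool is on PATH inside the image and the default
# cacerts password "changeit" applies; the count should match the RPM node.
docker run --rm --entrypoint keytool openiam-esb:4.2.1.15-cacerts \
  -list -keystore /usr/lib/jvm/default-jvm/lib/security/cacerts \
  -storepass changeit | count_entries
```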

Loading ESB image into Kubernetes nodes

  1. Start by identifying the nodes, as follows.
kubectl get nodes
  2. Create a debug pod on the node.
kubectl debug node/aks-agentpool-24637399-vmss000007 -it --image=busybox
  3. Open a new session and copy the image to the debug pod.
kubectl cp openiam-esb-4.2.1.15-cacerts.tar node-debugger-aks-agentpool-24637399-vmss000007-ww25k:/tmp/
kubectl exec -it node-debugger-aks-agentpool-24637399-vmss000007-ww25k -- sh
  • Inside the pod, run:
cd /tmp/
mkdir -p /host/var/tmp/images
cp openiam-esb-4.2.1.15-cacerts.tar /host/var/tmp/images
chroot /host bash
  4. Load the image with the following commands.
ctr -n k8s.io images import /var/tmp/images/openiam-esb-4.2.1.15-cacerts.tar
ctr -n k8s.io images list | grep esb
  5. Exit the debug pod.
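The load steps above must be repeated on every node that can schedule the ESB pod, because imagePullPolicy: IfNotPresent never pulls a missing image. A small sketch that lists the nodes still to cover:

```shell
# Drop the NAME header row and keep only node names.
node_names() { awk 'NR > 1 {print $1}'; }

# Each listed node needs the image imported into its containerd store.
for node in $(kubectl get nodes | node_names); do
  echo "image must be imported on: $node"
done
```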

Updating image in ESB StatefulSet

Use the following command.

vi /usr/local/openiam/kubernetes-docker-configuration/openiam/templates/services/esb-statefulset.yaml

Update image as follows.

image: "docker.io/library/openiam-esb:4.2.1.15-cacerts"
imagePullPolicy: IfNotPresent

Deploying OpenIAM on Kubernetes

Use the commands below.

cd /usr/local/openiam/kubernetes-docker-configuration
./setup.sh
terraform apply --auto-approve

Importing Vault Data into Kubernetes Vault

  1. Once the Vault pod is running, copy Medusa and the export file with the commands below.
kubectl cp /root/medusa <vault-pod-name>:/tmp/
kubectl cp /root/export.yaml <vault-pod-name>:/tmp/
  2. Import Vault data as follows.
kubectl exec -it <vault-pod-name> -- sh
cd /tmp
export VAULT_ADDR="https://127.0.0.1:8200"
export VAULT_SKIP_VERIFY=true
export VAULT_URL=127.0.0.1
export VAULT_PORT=8200
token=$(curl -k --request POST --cert /data/openiam/conf/vault/server/vault.crt --key /data/openiam/conf/vault/server/vault.key --data '{"name": "web"}' https://${VAULT_URL}:${VAULT_PORT}/v1/auth/cert/login | jq .auth.client_token | tr -d '"')
/tmp/medusa import secret -p /data/openiam/conf/vault/server/vault.key /tmp/export.yaml --address="https://${VAULT_URL}:${VAULT_PORT}" --token="${token}" --insecure -m kv1
  3. Exit the pod.

  4. Restart the ESB pod and validate, as follows.

kubectl delete pod <esb-pod-name>
kubectl logs -f <esb-pod-name>
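The import can also be spot-checked from inside the Vault pod by listing keys over the kv1 API. This assumes the secrets live under the secret/ mount (the path used by the medusa export above) and reuses the token obtained during the import:

```shell
# Pull the key names out of Vault's kv1 LIST response.
list_keys() { jq -r '.data.keys[]'; }

# Assumption: secrets were imported under the "secret" kv1 mount and
# $token, $VAULT_URL, $VAULT_PORT are still set from the import step.
curl -sk --header "X-Vault-Token: ${token}" --request LIST \
  "https://${VAULT_URL}:${VAULT_PORT}/v1/secret/" | list_keys
```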

Post-Migration Validation

  • Ensure all pods are in Running state.
  • Verify login to OpenIAM web console using old credentials.
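The pod check above can be sketched as a small script that flags anything not yet Running:

```shell
# Keep only pods whose STATUS column is neither Running nor Completed.
not_ready() { awk 'NR > 1 && $3 != "Running" && $3 != "Completed"'; }

pending=$(kubectl get pods 2>/dev/null | not_ready | wc -l)
if [ "$pending" -eq 0 ]; then
  echo "all pods Running"
else
  kubectl get pods | not_ready   # show the pods that still need attention
fi
```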