Migration of Vault from RPM-based cluster to Kubernetes-based OpenIAM cluster
This document describes the process of migrating Vault from an existing RPM-based OpenIAM cluster to a Kubernetes-based OpenIAM cluster. The migration approach outlined in this guide is intended to minimize downtime and prevent data loss while maintaining compatibility with OpenIAM components running in Kubernetes. By following the steps in this guide, you can safely transition Vault to a Kubernetes environment while preserving existing integrations and ensuring continuity of authentication and secret management services.
Pre-Migration preparation on RPM nodes
Start by backing up certificates and exporting data:
- Back up the Java truststore `cacerts`.
- Export all Vault data from the RPM nodes.
Make sure the OpenIAM version is the same in both the RPM and Kubernetes clusters.
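One way to compare the two sides is sketched below. The version strings are fabricated placeholders; on the RPM node the version can be read from the installed packages (for example, `rpm -qa | grep -i openiam`), and on Kubernetes from the pod image tags.

```shell
# Hypothetical version strings gathered from each cluster (placeholders, not real output)
rpm_version="4.2.1.15"
k8s_version="4.2.1.15"

# Abort the migration early if the clusters are not on the same OpenIAM version
if [ "$rpm_version" = "$k8s_version" ]; then
    echo "versions match - safe to proceed"
else
    echo "versions differ - align clusters before migrating"
fi
```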
Stop OpenIAM services with the following commands.

```shell
openiam-cli status
openiam-cli stop
```
Migration process
Installing Medusa utility on RPM node
Medusa is a command-line utility for exporting and importing HashiCorp Vault secrets. In OpenIAM deployments, Medusa is typically used to move Vault data during upgrades, migrations (for example, moving from RPM-based clusters to Kubernetes, as described in this document), or disaster recovery scenarios.
```shell
export MEDUSA_VERSION="0.7.2"
wget https://github.com/jonasvinther/medusa/releases/download/v${MEDUSA_VERSION}/medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
tar -xvf medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
cp medusa /usr/bin/
medusa --help
```
Exporting Vault data from RPM cluster
- Create the Vault export shell script with the following content.

```shell
#!/bin/bash
. /usr/local/openiam/env.conf
export VAULT_CERTS="$HOME_DIR/vault/certs/"

function migrate_vault() {
    token=$(curl -k --request POST \
        --cert ${VAULT_CERTS}/vault.crt \
        --key ${VAULT_CERTS}/vault.key \
        --data '{"name": "web"}' \
        https://${VAULT_URL}:${VAULT_PORT}/v1/auth/cert/login | jq .auth.client_token | tr -d '"')
    medusa export secret \
        --address="https://${VAULT_URL}:${VAULT_PORT}" \
        --token="${token}" \
        -p /usr/local/openiam/vault/certs/vault.key \
        --format="yaml" --insecure \
        -o /tmp/export.yaml -m kv1
}

migrate_vault
```
- Execute the script.

```shell
vi medusa.sh
chmod +x medusa.sh
./medusa.sh
```
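The script logs in to Vault with a client certificate and extracts the token from the `.auth.client_token` field of the JSON response. A minimal, self-contained sketch of that extraction step (the response below is a fabricated sample, and `python3` stands in for `jq` so the snippet runs anywhere):

```shell
# Fabricated Vault cert-login response; a real one comes from
# POST https://<vault>:<port>/v1/auth/cert/login
response='{"auth":{"client_token":"s.EXAMPLETOKEN"}}'

# Same effect as: echo "$response" | jq .auth.client_token | tr -d '"'
token=$(printf '%s' "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["auth"]["client_token"])')
echo "$token"    # prints s.EXAMPLETOKEN
```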
- Copy the exported data `/tmp/export.yaml` from the RPM node to the Kubernetes deployer node.
Exporting Java Truststore (cacerts) from RPM node
Use the commands below.

```shell
cd /usr/local/openiam/jdk/lib/security/
tar -czvf cacerts-from-rpm.tar.gz cacerts
```
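Before copying the archive, it is worth confirming it actually contains the `cacerts` file. A sketch in a scratch directory (on the RPM node, run the same `tar -tf` against the real archive):

```shell
# Demonstration with a dummy file; the real archive is created in
# /usr/local/openiam/jdk/lib/security/ as shown above
workdir=$(mktemp -d)
cd "$workdir"
touch cacerts
tar -czf cacerts-from-rpm.tar.gz cacerts

# Listing should show exactly one entry: cacerts
tar -tf cacerts-from-rpm.tar.gz
```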
Copy `cacerts-from-rpm.tar.gz` from the RPM node to the Kubernetes deployer node.
Installing Medusa utility on Kubernetes deployer node
Use the following commands.
```shell
export MEDUSA_VERSION="0.7.2"
wget https://github.com/jonasvinther/medusa/releases/download/v${MEDUSA_VERSION}/medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
tar -xvf medusa_${MEDUSA_VERSION}_linux_amd64.tar.gz
cp medusa /usr/bin/
medusa --help
```
Copy export.yaml and cacerts-from-rpm.tar.gz to /root.
Performing Terraform Cleanup on Kubernetes deployer node
Use the following command.

```shell
terraform destroy --auto-approve
```

Remove Terraform state and backup files:

```shell
cd /usr/local/openiam/kubernetes-docker-configuration
rm -rf .deploy
rm -rf terraform.tfstate*
```
Updating Terraform configuration
Update all parameters in `terraform.tfvars` to match the data exported from the RPM node.
- For application and database details, update the parameters below.

```
app_name =
database root user =
database root password =
```
- For the OpenIAM database configuration, update the block below.

```hcl
openiam = {
  user          = "idmuser"
  password      = "idmuser"
  database_name = "openiam"
  schema_name   = "openiam"
}
```
- Do the same for Activiti user, password, and database names.
- For the external database configuration, update the block below. Ensure database connectivity from the deployer VM.

```hcl
helm = {
  host = "10.X.X.X"
  port = "3306"
}
```
- Update the Elasticsearch credentials. Take the password from the exported Vault data (`export.yaml`).

```hcl
authentication = {
  username = "elastic"
  password = "AvdsDGxSOXnzV3EhTmlcyY6kwFtJnJQd"
}
```
- Update the RabbitMQ credentials, as follows.

```hcl
rabbitmq = {
  user     = "openiam"
  password = "passwd00"
}
```
- Update the Vault and keystore passwords. Take the values from the exported Vault data.

```hcl
vaultKeyPassword = "changeit"

secrets = {
  javaKeystorePassword = "mt2JG9H0RXXAq5cKZT01daNf4RcrHUP0"
  jks = {
    password          = "mRUcmlwbGVERVMAAAADAA5vcGVuaWFtX2NvbW1vbgAAAZsmjkzzrOSa7N+c7MS4IQ=="
    keyPassword       = "MekyyyPkZiMR2iWqukDj3WTeH1cnoxpy"
    cookieKeyPassword = "o7NPLwCpjpOyPwSFqaAfqT8uEF7HarO5"
    commonKeyPassword = "cW3FayhwZFZAXzCWF61STCbBjpGVr0wn"
  }
}
```
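The values above come from the `export.yaml` produced by Medusa on the RPM node. Assuming the export stores secrets as path/key pairs (the fragment below is fabricated for illustration; the real layout depends on your Vault tree), a quick `grep` can locate a specific value:

```shell
# Fabricated export fragment; the real file is the export.yaml produced by medusa
cat > /tmp/export-sample.yaml <<'EOF'
secret/elasticsearch:
  username: elastic
  password: example-password
EOF

# Locate the password entry for a given component
grep -A 2 'elasticsearch' /tmp/export-sample.yaml | grep 'password:'
```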
Building a custom ESB image with updated cacerts
- Prepare the `Dockerfile`. Switch to `/root`, where `cacerts-from-rpm.tar.gz` exists, and run the following commands.

```shell
cd /root
tar -xvf cacerts-from-rpm.tar.gz
vi Dockerfile
```

`Dockerfile` content:

```dockerfile
FROM confidant-bhaskara.container-registry.com/openiam_service/esb:debian-4.2.1.15-dev
USER root

# Backup original cacerts
RUN cp /usr/lib/jvm/default-jvm/lib/security/cacerts \
       /usr/lib/jvm/default-jvm/lib/security/cacerts.orig

# Copy your updated cacerts
COPY cacerts /usr/lib/jvm/default-jvm/lib/security/cacerts

# Ensure writable (for safety)
RUN chmod 664 /usr/lib/jvm/default-jvm/lib/security/cacerts && \
    chown openiam:openiam /usr/lib/jvm/default-jvm/lib/security/cacerts

USER openiam
```
- Build and save the image as follows.

```shell
docker build -t openiam-esb:4.2.1.15-cacerts .
docker save openiam-esb:4.2.1.15-cacerts -o openiam-esb-4.2.1.15-cacerts.tar
```
Loading ESB image into Kubernetes nodes
- Start by identifying the nodes.

```shell
kubectl get nodes
```
- Create a debug pod on the node.

```shell
kubectl debug node/aks-agentpool-24637399-vmss000007 -it --image=busybox
```
- Open a new session and copy the image to the debug pod.

```shell
kubectl cp openiam-esb-4.2.1.15-cacerts.tar node-debugger-aks-agentpool-24637399-vmss000006-ww25k:/tmp/
kubectl exec -it node-debugger-aks-agentpool-24637399-vmss000006-ww25k -- sh
```
- Inside the pod, run:

```shell
cd /tmp/
mkdir -p /host/var/tmp/images
cp openiam-esb-4.2.1.15-cacerts.tar /host/var/tmp/images
chroot /host bash
```
- Load the image with the following.

```shell
ctr -n k8s.io images import /var/tmp/images/openiam-esb-4.2.1.15-cacerts.tar
ctr -n k8s.io images list | grep esb
```
- Exit the debug pod.
Updating image in ESB StatefulSet
Use the following command.
```shell
vi /usr/local/openiam/kubernetes-docker-configuration/openiam/templates/services/esb-statefulset.yaml
```
Update image as follows.
```yaml
image: "docker.io/library/openiam-esb:4.2.1.15-cacerts"
imagePullPolicy: IfNotPresent
```
Deploying OpenIAM on Kubernetes
Use the commands below.
```shell
cd /usr/local/openiam/kubernetes-docker-configuration
./setup.sh
terraform apply --auto-approve
```
Importing Vault Data into Kubernetes Vault
- Once the Vault pod is running, copy Medusa and the export file into it with the commands below.

```shell
kubectl cp /root/medusa <vault-pod-name>:/tmp/
kubectl cp /root/export.yaml <vault-pod-name>:/tmp/
```
- Import the Vault data as follows.

```shell
kubectl exec -it <vault-pod-name> -- sh
cd /tmp
export VAULT_ADDR="https://127.0.0.1:8200"
export VAULT_SKIP_VERIFY=true
export VAULT_URL=127.0.0.1
export VAULT_PORT=8200
token=$(curl -k --request POST \
    --cert /data/openiam/conf/vault/server/vault.crt \
    --key /data/openiam/conf/vault/server/vault.key \
    --data '{"name": "web"}' \
    https://${VAULT_URL}:${VAULT_PORT}/v1/auth/cert/login | jq .auth.client_token | tr -d '"')
/tmp/medusa import secret -p /data/openiam/conf/vault/server/vault.key /tmp/export.yaml \
    --address="https://${VAULT_URL}:${VAULT_PORT}" --token="${token}" --insecure -m kv1
```
Exit the pod.
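To confirm the import is complete, one option (a sketch, not part of the official procedure) is to re-export from the Kubernetes Vault with the same `medusa export` command used on the RPM node and diff the two files:

```shell
# Scratch files stand in for the two exports; in practice these are the RPM-side
# export.yaml and a fresh export taken from the Kubernetes Vault pod
printf 'secret/app:\n  password: x\n' > /tmp/export-rpm.yaml
printf 'secret/app:\n  password: x\n' > /tmp/export-k8s.yaml

# No output and exit code 0 mean the secret trees are identical
diff /tmp/export-rpm.yaml /tmp/export-k8s.yaml && echo "exports match"
```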
Restart ESB and validate
```shell
kubectl delete pod <esb-pod-name>
kubectl logs -f <esb-pod-name>
```
Post-Migration Validation
- Ensure all pods are in Running state.
- Verify login to OpenIAM web console using old credentials.
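The pod check can be scripted. A sketch, shown against a fabricated `kubectl get pods` listing so the filter itself is clear; in the cluster, pipe the real command output instead:

```shell
# Fabricated listing standing in for real `kubectl get pods` output
pods='NAME      READY   STATUS    RESTARTS   AGE
esb-0     1/1     Running   0          5m
vault-0   1/1     Running   0          7m'

# Print any pod whose STATUS column is not Running; no output means all pods are healthy
printf '%s\n' "$pods" | awk 'NR > 1 && $3 != "Running" { print $1 }'
```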