Upgrading from version 4.2.1.x to version 4.2.2 in a Kubernetes environment
This document provides guidance for upgrading OpenIAM from version 4.2.1.x to 4.2.2 in a Kubernetes environment. It is intended for system administrators and DevOps engineers responsible for maintaining and operating OpenIAM deployments.
The upgrade process focuses on ensuring service continuity, configuration compatibility, and data integrity while introducing fixes and improvements available in version 4.2.2. The document outlines the required prerequisites, recommended preparation steps, and the sequence of actions needed to perform a safe and controlled upgrade.
Migrating from ElasticSearch to OpenSearch
In 4.2.2, due to licensing restrictions, we have migrated from ElasticSearch to OpenSearch. This requires several manual steps, which are outlined below.
Prerequisites
You will need to upgrade to 4.2.1.6, or a later release in that branch, as only those releases contain the utility containers used for dumping the ElasticSearch documents into backup JSON files.
Bring up your stack normally, either via Terraform or Helm, and ensure that all pods are up. All subsequent instructions assume that you are running v4.2.1.6+ of OpenIAM.
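Before proceeding, it can help to sanity-check that the version you are running is at least 4.2.1.6. A minimal sketch, assuming you have the version string available (for example, from your Helm values); `version_ge` is a hypothetical helper, not part of OpenIAM:

```shell
# Succeeds when the dot-separated version in $1 is >= 4.2.1.6.
# Relies on GNU sort's version comparison (-V).
version_ge() {
  [ "$(printf '%s\n4.2.1.6\n' "$1" | sort -V | head -n1)" = "4.2.1.6" ]
}

version_ge "4.2.1.6" && echo "ok to proceed"
```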
- Enable ElasticDump. If using Terraform, make the following change to terraform.tfvars:

```
elasticsearch = {
  elasticdump = {
    enabled = true
  }
}
```
If using Helm, you will need to set the openiam.elasticdump.enabled flag to true in both openiam/values.yaml and openiam-pvc/values.yaml.
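For illustration, that Helm flag corresponds to a values fragment like the one below (a sketch; the exact surrounding keys in your values.yaml may differ):

```yaml
openiam:
  elasticdump:
    enabled: true
```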
- Disable ElasticSearch authentication, and scale down to a single replica. If you are using Terraform, set the following variable:

```
opensearch = {
  ...
  helm = {
    ...
    replicas = "1"
  }
}
```

If you are using Helm, pass --set-string replicas="1" when deploying the ElasticSearch Helm chart.
- Make sure that the esConfig section of elasticsearch.values.yaml looks as follows:

```
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: false
    xpack.security.transport.ssl.enabled: false
    cluster.name: "docker-cluster"
    network.host: 0.0.0.0
```
- Run ./setup.sh.
- Next, run kubectl delete jobs --all. This ensures that there are no Kubernetes errors when deploying the new job, which will back up the ElasticSearch data.
- Finally, re-deploy the application (either via Terraform or Helm) and wait for ElasticSearch to come up.
- Run the following command:

```
kubectl get pods | grep elastic
```

You should see something like the snippet below in the output:

```
kubernetes-docker-configuration % kubectl get pods | grep elastic
elasticsearch-master-0                 1/1   Running     0   2m
test2021-elasticdump-hcj2z             0/1   Completed   0   91s
test2021-elasticsearch-curator-vdqld   0/1   Completed   0   91s
```

The Completed status means that the backup job has run successfully.
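To script this check rather than inspect the output by eye, a minimal sketch (assuming, as in the output above, that the backup pod's name contains `elasticdump`); `dump_completed` is a hypothetical helper:

```shell
# Succeeds only when the pod listing passed in $1 shows the
# elasticdump pod in the Completed state.
dump_completed() {
  printf '%s\n' "$1" | grep elasticdump | grep -q Completed
}

# Example usage against a live cluster (not run here):
#   dump_completed "$(kubectl get pods)" && echo "backup job completed"
```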
Upgrading
- Switch back to the RELEASE-4.2.2 branch:

```
git checkout RELEASE-4.2.2
```

Undeploy the OpenIAM Java application stack. This will result in downtime, which is expected:

```
./env.sh
helm delete ${APP_NAME}
```
- Remove MetricBeat, FileBeat, Kibana, and ElasticSearch.
Run the commands below to find the state entries for the components that are no longer used:

```
terraform state list | grep metricbeat
terraform state list | grep filebeat
terraform state list | grep kibana
terraform state list | grep elasticsearch
```

For each entry in the output above, run the following:

```
terraform state rm <output_from_above>
```
Then undeploy the corresponding releases from Helm with the commands below:

```
helm list | grep metricbeat
helm list | grep filebeat
helm list | grep elasticsearch
```

Then run:

```
helm delete <output_from_above>
```

You also need to delete all jobs:

```
kubectl delete jobs --all
```
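The removal steps above can also be scripted as a loop. A sketch, assuming the component name appears both in the Terraform resource address and in the Helm release name (verify the matches before deleting anything); `deprecated_entries` is a hypothetical helper:

```shell
# Print the first column (release/resource name) of every input line
# that mentions one of the deprecated components.
deprecated_entries() {
  grep -E 'metricbeat|filebeat|kibana|elasticsearch' | awk '{print $1}'
}

# Example usage against a live environment (not run here):
#   terraform state list | deprecated_entries | xargs -r -n1 terraform state rm
#   helm list | deprecated_entries | xargs -r -n1 helm delete
```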
- Migrate your certificates
You will need to run ./generate.opensearch.certs.sh from the root directory. This will generate new certificates. The old certificates are no longer used; deleting them is optional.
- Move your old Elasticsearch data to OpenSearch
- Start by uncommenting the following lines in opensearch.values.yaml (step A):

```
...
singleNode: true
...
```
Also, in opensearch.values.yaml, ensure that the opensearch.yml section looks like the snippet below. This will disable authentication.
```
opensearch.yml: |
  cluster.name: docker-cluster
  # Bind to all interfaces because we don't know what IP address Docker will assign to us.
  network.host: 0.0.0.0
  plugins.security.disabled: true
  discovery.type: "single-node"
  plugins.security.ssl.http.enabled: false
```
If using Terraform, make the following change to terraform.tfvars:

```
opensearch = {
  elasticdump = {
    enabled = true
  }
}
```
If using Helm, you will need to set the openiam.elasticdump.enabled flag to true in both openiam/values.yaml and openiam-pvc/values.yaml.
- Rerun ./setup.sh.
- If you are using Terraform, set the following variable (step C):

```
opensearch = {
  ...
  helm = {
    ...
    replicas = "1"
  }
}
```

If you are using Helm, pass --set-string replicas="1" when deploying the OpenSearch Helm chart.
- Now, redeploy the entire stack (via Terraform or Helm). Ensure OpenSearch is up with the following command:

```
kubectl get pods | grep opensearch
```

The output will look like the snippet below:

```
kubernetes-docker-configuration % kubectl get pods | grep opensearch
opensearch-cluster-master-0         1/1   Running     0   21m
test2021-opensearch-curator-9mglk   0/1   Completed   0   2m31s
```
- Check that the ElasticDump job succeeded:

```
kubernetes-docker-configuration % kubectl get pods | grep dump
test2021-elasticdump-gv79z   0/1   Completed   0   14m
```
- Ensure that documents were imported. Get a shell on the OpenSearch container and run the following curl command:

```
kubectl get pods | grep opensearch
kubectl exec -it <id_from_above> -- bash
curl -L "http://localhost:9200/_cat/indices?v"
```
The output will look like the snippet below:

```
health status index                     uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   .plugins-ml-config        lZ2V8HmDSTKouxiXhDziNg   1   1          1            0      3.9kb          3.9kb
green  open   .opensearch-observability 1LYyN6yDSSCc3EjgAhZUSg   1   0          0            0       208b           208b
yellow open   auditlog                  DdoHdO17S2qj-jr7PRC5hw   1   1          8            0       55kb           55kb
```
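To check the import programmatically rather than by eye, a sketch that totals the docs.count column (column 7) of the `_cat/indices?v` output; if the total is zero, nothing was imported. `total_docs` is a hypothetical helper:

```shell
# Sum the docs.count column of `_cat/indices?v` output read on stdin,
# skipping the header row.
total_docs() {
  awk 'NR > 1 { sum += $7 } END { print sum + 0 }'
}

# Example usage inside the OpenSearch container (not run here):
#   curl -sL "http://localhost:9200/_cat/indices?v" | total_docs
```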
- Revert both step C (set the replicas back to its previous value) and step A (comment those lines out again), then re-run ./setup.sh.
- Now, redeploy the entire stack.
You're good to go!