Upgrading from version 4.2.1.x to version 4.2.2 in Kubernetes environment

This document provides guidance for upgrading OpenIAM from version 4.2.1.x to 4.2.2 in a Kubernetes environment. It is intended for system administrators and DevOps engineers responsible for maintaining and operating OpenIAM deployments.

The upgrade process focuses on ensuring service continuity, configuration compatibility, and data integrity while introducing fixes and improvements available in version 4.2.2. The document outlines the required prerequisites, recommended preparation steps, and the sequence of actions needed to perform a safe and controlled upgrade.

Migrating from ElasticSearch to OpenSearch

In 4.2.2, due to licensing restrictions, we have migrated from ElasticSearch to OpenSearch. This requires several manual steps, which are outlined below.

Please note that we no longer use FileBeat, MetricBeat, and Kibana.

Prerequisites

You will need to upgrade to 4.2.1.6, or a later branch, first, as only those releases contain the utility containers which will be used for dumping the ElasticSearch documents into backup JSON files.

Bring up your stack normally, either via Terraform or Helm. In other words, deploy. Ensure that all pods are up. All subsequent instructions assume that you are running v4.2.1.6+ of OpenIAM.

  • Enable ElasticDump. If using Terraform, below are the changes for terraform.tfvars.
elasticsearch = {
  elasticdump = {
    enabled = true
  }
}

If using Helm, you will need to set the openiam.elasticdump.enabled flag to true in both openiam/values.yaml and openiam-pvc/values.yaml.
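For Helm, the values fragment would look roughly like the sketch below (the surrounding keys in your values files may differ; this only illustrates the nesting implied by the openiam.elasticdump.enabled flag).

```yaml
# Sketch for openiam/values.yaml and openiam-pvc/values.yaml.
openiam:
  elasticdump:
    enabled: true
```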

  • Disable ElasticSearch authentication, and scale down to a single node. If you are using Terraform, set the following variable.
elasticsearch = {
  ...
  helm = {
    ...
    replicas = "1"
  }
}
If you are using Helm, set the --set-string replicas="1" variable when deploying the ElasticSearch Helm chart.
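As a concrete example, the scale-down could look like the command below. The release name and chart reference are assumptions; substitute the ones from your own deployment.

```shell
helm upgrade elasticsearch elastic/elasticsearch \
  -f elasticsearch.values.yaml \
  --set-string replicas="1"
```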

  • Make sure that the esConfig section of elasticsearch.values.yaml looks as follows.
esConfig:
  elasticsearch.yml: |
    xpack.security.enabled: false
    xpack.security.transport.ssl.enabled: false
    cluster.name: "docker-cluster"
    network.host: 0.0.0.0
  • Run ./setup.sh.
  • Next, run kubectl delete jobs --all. This ensures that there are no Kubernetes errors when deploying the new job, which will back up the ElasticSearch data.
  • Finally, re-deploy the application (either via Terraform or Helm). Wait for ElasticSearch to come up.
Note: The deployment of the OpenIAM Java applications will fail - that's expected behavior. You can ignore it.
  • Run the following command.
kubectl get pods | grep elastic

You should see something like the snippet below in the output.

elasticsearch-master-0 1/1 Running 0 2m
test2021-elasticdump-hcj2z 0/1 Completed 0 91s
test2021-elasticsearch-curator-vdqld 0/1 Completed 0 91s

This means that the job has run successfully.
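If you prefer to script this check rather than eyeball the pod list, kubectl can wait on the job directly. The job name below is inferred from the sample output above (the test2021- prefix comes from the release name), so adjust it to match your deployment.

```shell
kubectl wait --for=condition=complete --timeout=10m job/test2021-elasticdump
```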

Now you can migrate to 4.2.2!

Upgrading

  • Switch to the RELEASE-4.2.2 branch.
git checkout RELEASE-4.2.2

Undeploy the OpenIAM Java application stack. This will result in downtime - that's also expected behavior.

./env.sh
helm delete ${APP_NAME}
  1. Remove MetricBeat, FileBeat, Kibana, and ElasticSearch

Run the commands below to find the state entries for the components that are no longer used.

terraform state list | grep metricbeat
terraform state list | grep filebeat
terraform state list | grep kibana
terraform state list | grep elasticsearch

For each entry in the output above, run the following.

terraform state rm <output_from_above>

Then, undeploy the corresponding Helm releases with the commands below.

helm list | grep metricbeat
helm list | grep filebeat
helm list | grep kibana
helm list | grep elasticsearch

Then run

helm delete <output_from_above>

You also need to delete all jobs.

kubectl delete jobs --all
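The removal steps above can be collapsed into one small loop; a sketch, assuming terraform and helm are being run from your usual working directory and kubeconfig context.

```shell
for component in metricbeat filebeat kibana elasticsearch; do
  # Drop the obsolete resources from Terraform state.
  terraform state list | grep "$component" | while read -r addr; do
    terraform state rm "$addr"
  done
  # Delete the matching Helm releases, if any (--short prints names only).
  helm list --short | grep "$component" | while read -r release; do
    helm delete "$release"
  done
done
kubectl delete jobs --all
```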
  2. Migrate your certificates

You will need to run ./generate.opensearch.certs.sh from the root directory. This will generate NEW certificates. The old certificates are no longer used and can be deleted, but that's up to you.

  3. Move your old ElasticSearch data to OpenSearch
  • Start by uncommenting the following lines in opensearch.values.yaml (step A).
....
singleNode: true
...

Also, in opensearch.values.yaml, ensure that the opensearch.yml section looks like the snippet below. This will disable authentication.

opensearch.yml: |
  cluster.name: docker-cluster
  # Bind to all interfaces because we don't know what IP address Docker will assign to us.
  network.host: 0.0.0.0
  plugins.security.disabled: true
  discovery.type: "single-node"
  plugins.security.ssl.http.enabled: false

If using Terraform, below are the changes for terraform.tfvars.

opensearch = {
  elasticdump = {
    enabled = true
  }
}

If using Helm, you will need to set the openiam.elasticdump.enabled flag to true in both openiam/values.yaml and openiam-pvc/values.yaml.

  • Rerun ./setup.sh.
  • If you are using Terraform, set the following variable (step C).
opensearch = {
  ...
  helm = {
    ...
    replicas = "1"
  }
}

If you are using Helm, set the --set-string replicas="1" variable when deploying the OpenSearch Helm chart.

  • Now, redeploy the entire stack (via Terraform or Helm). Ensure OpenSearch is up with the following command.
kubectl get pods | grep opensearch

The output will look like the snippet below.

opensearch-cluster-master-0 1/1 Running 0 21m
test2021-opensearch-curator-9mglk 0/1 Completed 0 2m31s
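To block until OpenSearch is actually ready, rather than polling kubectl get pods by hand, you can wait on the StatefulSet rollout. The StatefulSet name below is inferred from the pod name in the sample output and may differ in your deployment.

```shell
kubectl rollout status statefulset/opensearch-cluster-master --timeout=10m
```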
  • Check that the ElasticDump job succeeded.
kubectl get pods | grep dump
test2021-elasticdump-gv79z 0/1 Completed 0 14m
  • Ensure that documents were imported. Get a shell on the OpenSearch container, and run the following curl command.
kubectl get pods | grep opensearch
kubectl exec -it <id_from_above> -- bash
curl -L "http://localhost:9200/_cat/indices?v"

The output will look like the snippet below.

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
yellow open .plugins-ml-config lZ2V8HmDSTKouxiXhDziNg 1 1 1 0 3.9kb 3.9kb
green open .opensearch-observability 1LYyN6yDSSCc3EjgAhZUSg 1 0 0 0 208b 208b
yellow open auditlog DdoHdO17S2qj-jr7PRC5hw 1 1 8 0 55kb 55kb
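To check the document count programmatically instead of reading the table, you can filter the _cat/indices output with awk. The sketch below runs against the sample auditlog row captured above (docs.count is the seventh column of the default _cat/indices layout); on a live pod you would pipe the curl output instead of the captured string.

```shell
# Sample row from the _cat/indices output above.
row='yellow open auditlog DdoHdO17S2qj-jr7PRC5hw 1 1 8 0 55kb 55kb'

# Column 7 is docs.count; match on the index name in column 3.
docs=$(echo "$row" | awk '$3 == "auditlog" {print $7}')
echo "auditlog documents: $docs"
```

The _cat API can also do this server-side, e.g. curl -s "http://localhost:9200/_cat/indices/auditlog?h=docs.count", which returns just the count.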
  • Revert both step C (set the replicas back to whatever it was) and step A (comment those lines back out), then re-run ./setup.sh.
  • Now, redeploy the entire stack.

You're good to go!