Private Kubernetes Cluster using Helm

This document helps you deploy OpenIAM in a private Kubernetes cluster. Use this type of deployment when you are deploying to a platform other than AWS, Azure, or Google Cloud.

Setting up the environment

  1. Make sure to point your kubectl config to the correct cluster.
    See the following guide for more information on how to do that.

  2. Set the private-specific variables in terraform.tfvars.

| Variable Name                  | Required | Default Value       | Description |
|--------------------------------|----------|---------------------|-------------|
| database.root.password         | Y        |                     | The root password for the database. |
| database.helm.port             | Y/N      |                     | The database port. Required (Y) only if you are managing your own database; not required (N) if you are using OpenIAM's MariaDB/Postgres helm templates. |
| database.helm.host             | Y/N      |                     | The database host. Required (Y) only if you are managing your own database; not required (N) if you are using OpenIAM's MariaDB/Postgres helm templates. |
| database.helm.replicas         | Y        |                     | Number of replicas of the MariaDB or Postgres database to use. |
| elasticsearch.helm.esJavaOpts  | Y        | -Xmx1536m -Xms1536m | Elasticsearch Java arguments. |
| elasticsearch.helm.replicas    | Y        | 1                   | Number of Elasticsearch replicas. |
| elasticsearch.helm.storageSize | N        | 30Gi                | The amount of storage allocated to the Elasticsearch PVC. |
| kibana.helm.replicas           | Y        | 1                   | Number of Kibana replicas. |
| kibana.helm.enabled            | Y        | true                | If set to false, Kibana will not deploy. |
| metricbeat.helm.replicas       | Y        | 1                   | Number of Metricbeat replicas. |
| metricbeat.helm.enabled        | Y        | true                | If set to false, Metricbeat will not deploy. |
| filebeat.helm.replicas         | Y        | 1                   | Number of Filebeat replicas. |
| filebeat.helm.enabled          | Y        | true                | If set to false, Filebeat will not deploy. |
| gremlin.helm.replicas          | Y        | 1                   | Number of JanusGraph replicas. |
| gremlin.helm.zookeeperReplicas | Y        | 1                   | Number of ZooKeeper replicas. |
| gremlin.helm.hbase.replicas    | Y        | 1                   | Number of HBase replicas. |
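For illustration, a terraform.tfvars fragment covering some of the variables above might look like the following. This is a hypothetical sketch: the exact variable names and nesting depend on the OpenIAM Terraform modules in your checkout, and the values shown (host, port, password) are placeholders you must replace.

```hcl
# Hypothetical terraform.tfvars sketch; adapt names and nesting to your modules.
database = {
  root = { password = "changeme" }                           # placeholder root password
  helm = { host = "db.internal", port = 3306, replicas = 1 } # self-managed database
}

elasticsearch = {
  helm = {
    esJavaOpts  = "-Xmx1536m -Xms1536m"
    replicas    = 1
    storageSize = "30Gi"
  }
}
```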

Using a non-NFS and non-Longhorn block storage system

Ensure that your private Kubernetes cluster has a block storage system installed. If you are deploying to AWS EKS, Google GCE, Azure, or your own Rancher + Longhorn Kubernetes cluster, block storage is already built into the cluster and you will not have to do anything. However, for private Kubernetes clusters using another block storage system (neither NFS nor Longhorn), you will need to perform this step.

By default, OpenIAM deploys NFS as part of the Kubernetes stack or uses Longhorn (depending on which option you choose in the setup.sh script). This will not work if you have your own block storage system installed. Below is an example of the steps to take when using yourstorageclass as the storage class name; the same approach can be adapted to any other block storage system.

  1. Edit openiam-pvc/values.yaml.
nfs-server-provisioner:
  replicaCount: 0 # <--- add this line
  storageClass:
    create: false # <-- change from "true" to "false" since we'll use our own storage class
[..]
# Replace all occurrences of
storageClass: 'nfs'
# with
storageClass: 'yourstorageclass'

After making these changes, run ./setup.sh and enter 3 at the 'Please enter your choice:' prompt.
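The storageClass swap can also be scripted. The following is a minimal, self-contained sketch that performs the replacement on a scratch copy of values.yaml; the path /tmp/openiam-pvc-demo, the file contents, and the class name yourstorageclass are placeholders for your real file and storage class.

```shell
# Create a scratch copy of values.yaml (illustrative contents only).
mkdir -p /tmp/openiam-pvc-demo
cat > /tmp/openiam-pvc-demo/values.yaml <<'EOF'
nfs-server-provisioner:
  replicaCount: 0
  storageClass:
    create: false
persistence:
  storageClass: 'nfs'
EOF
# Replace every storageClass value of 'nfs' with your own storage class.
sed -i "s/storageClass: 'nfs'/storageClass: 'yourstorageclass'/g" /tmp/openiam-pvc-demo/values.yaml
grep "storageClass:" /tmp/openiam-pvc-demo/values.yaml
```

On a real deployment you would run the sed command against openiam-pvc/values.yaml instead of the scratch file, then re-run ./setup.sh.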

Destroying

Run the destroy command.

terraform destroy # enter 'yes' when asked to do so

Finally, you will have to delete Terraform's state files, as indicated below.

rm -rf terraform.tfstate*

Update Notes

Updating to 4.2.1.2

In OpenIAM version 4.2.1.2, we've upgraded Redis. To upgrade, you will need to perform the steps below.

  1. Delete the Helm release returned by: helm list | grep redis.
  2. Remove the Terraform state entry: terraform state rm module.deployment.module.redis.helm_release.redis.
  3. Delete any PVCs returned by: kubectl get pvc | grep redis.
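Assuming helm, terraform, and kubectl are on your PATH and pointed at the upgraded cluster, the three steps above can be sketched as follows. This requires a live cluster and is not a definitive script; review what each grep matches before deleting anything.

```shell
# Sketch only: verify the matched names before running the delete commands.
helm delete $(helm list --short | grep redis)
terraform state rm module.deployment.module.redis.helm_release.redis
kubectl delete pvc $(kubectl get pvc --output name | grep redis)
```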

Another change is the removal of HBase in favour of Cassandra. To apply this change, do the following.

  1. Delete the Helm release returned by: helm list | grep hbase.
  2. Remove the Terraform state entry: terraform state rm module.deployment.module.gremlin.helm_release.hbase.
  3. Delete any PVCs returned by: kubectl get pvc | grep hbase.