Private Kubernetes Cluster using Helm

This document describes how to deploy OpenIAM in a private Kubernetes cluster. Use this type of deployment when you are deploying to a platform other than AWS, Azure, or Google Cloud.

Set up the environment

  1. Make sure to point your kubectl config at the correct cluster.
    See the Kubernetes documentation on configuring access to multiple clusters for more information on how to do that.

  2. Set the Private-specific variables in terraform.tfvars, as described in the table and example below.

| Variable Name | Required | Default Value | Description |
|---|---|---|---|
| database.root.password | Y | (none) | The root password for the database |
| database.helm.port | Y if you are managing your own database; N if you are using OpenIAM's mariadb/postgres Helm templates | (none) | The database port. Set only if you are managing your own database |
| database.helm.host | Y if you are managing your own database; N if you are using OpenIAM's mariadb/postgres Helm templates | (none) | The database host. Set only if you are managing your own database |
| database.helm.replicas | Y | (none) | Number of replicas of the mariadb or postgres database to use |
| elasticsearch.helm.esJavaOpts | Y | -Xmx1536m -Xms1536m | Elasticsearch Java arguments |
| elasticsearch.helm.replicas | Y | 1 | Number of Elasticsearch replicas |
| elasticsearch.helm.storageSize | N | 30Gi | The amount of storage allocated to the Elasticsearch PVC |
| kibana.helm.replicas | Y | 1 | Number of Kibana replicas |
| kibana.helm.enabled | Y | "true" | If set to 'false', Kibana will not deploy |
| metricbeat.helm.replicas | Y | 1 | Number of Metricbeat replicas |
| metricbeat.helm.enabled | Y | "true" | If set to 'false', Metricbeat will not deploy |
| filebeat.helm.replicas | Y | 1 | Number of Filebeat replicas |
| filebeat.helm.enabled | Y | "true" | If set to 'false', Filebeat will not deploy |
| gremlin.helm.replicas | Y | 1 | Number of JanusGraph replicas |
| gremlin.helm.zookeeperReplicas | Y | 1 | Number of ZooKeeper replicas |
| gremlin.helm.hbase.replicas | Y | 1 | Number of HBase replicas |
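
For illustration only, a terraform.tfvars for a private cluster with an externally managed MariaDB might look like the sketch below. The nesting of the dotted variable names (for example, database.helm.host) is an assumption; check the variables.tf shipped with the OpenIAM Terraform modules for the exact structure, and replace every placeholder value with your own.

# Hypothetical terraform.tfvars sketch; the exact layout must match the
# module's variables.tf. All values below are placeholders.
database = {
  root = {
    password = "change-me"            # root password for the database
  }
  helm = {
    host     = "db.internal.example"  # only if you manage your own database
    port     = 3306                   # only if you manage your own database
    replicas = 1
  }
}

elasticsearch = {
  helm = {
    esJavaOpts  = "-Xmx1536m -Xms1536m"
    replicas    = 1
    storageSize = "30Gi"
  }
}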

Using a non-NFS and non-Longhorn block storage system

Ensure that your private Kubernetes cluster has a block storage system installed, such as Longhorn. If you are deploying to AWS EKS, Google GKE, Azure, or your own Rancher + Longhorn Kubernetes cluster, block storage is already built into the cluster and you do not have to do anything. However, for private Kubernetes clusters that use another block storage system (neither NFS nor Longhorn), you will need to perform this step.

By default, OpenIAM deploys NFS as part of the Kubernetes stack or uses Longhorn (depending on which option you choose in the setup.sh script). This will not work if you have your own block storage system installed. Below is an example of the steps to take when using a storage class named yourstorageclass; the same approach can be extrapolated to any other block storage system.
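
Before editing anything, you can confirm which storage classes the cluster actually exposes and which one is marked as default:

kubectl get storageclass                      # lists available storage classes and flags the default
kubectl describe storageclass yourstorageclass   # confirm the provisioner behind your class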

Step 1

Edit openiam-pvc/values.yaml

nfs-server-provisioner:
  replicaCount: 0 # <--- add this line
  storageClass:
    create: false # <--- change from "true" to "false" since we'll use our own storage class
[..]
# Replace every occurrence of
storageClass: 'nfs'
# with
storageClass: 'yourstorageclass'
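
After editing, a quick search helps confirm that no storageClass reference still points at nfs (the path assumes the openiam-pvc chart shown above):

grep -n "storageClass" openiam-pvc/values.yaml   # every remaining reference should use 'yourstorageclass'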

You will have to run ./setup.sh after making these changes and enter 3 at the 'Please enter your choice:' prompt.

Destroying

Run the destroy command:

terraform destroy # enter 'yes' when asked to do so

Finally, you will have to delete Terraform's state files:

rm -rf terraform.tfstate*

Update Notes

Updating to 4.2.1.2

We've upgraded Redis

  1. Delete the Helm release returned by: helm list | grep redis
  2. Remove the Terraform state entry: terraform state rm module.deployment.module.redis.helm_release.redis
  3. Delete any PVC in the output of: kubectl get pvc | grep redis (see the command sketch after this list)
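
As an illustration, the cleanup could look like the sketch below; openiam-redis and the PVC name are placeholders for whatever your own helm list and kubectl get pvc output shows:

helm list | grep redis                    # find the Redis release name
helm uninstall openiam-redis              # placeholder: use the release name from the previous command
terraform state rm module.deployment.module.redis.helm_release.redis
kubectl get pvc | grep redis              # find the Redis PVCs
kubectl delete pvc data-openiam-redis-0   # placeholder: repeat for every Redis PVC listed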

We've also removed HBase in favor of Cassandra

  1. Delete the Helm release returned by: helm list | grep hbase
  2. Remove the Terraform state entry: terraform state rm module.deployment.module.gremlin.helm_release.hbase
  3. Delete any PVC in the output of: kubectl get pvc | grep hbase (see the sketch after this list)
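
The HBase cleanup follows the same pattern; again, the release and PVC names are placeholders:

helm list | grep hbase                    # find the HBase release name
helm uninstall openiam-hbase              # placeholder: use the actual release name
terraform state rm module.deployment.module.gremlin.helm_release.hbase
kubectl get pvc | grep hbase              # find the HBase PVCs
kubectl delete pvc data-openiam-hbase-0   # placeholder: repeat for every HBase PVC listed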