
initial commit

pull/2/head
sido 4 years ago
commit
d365e29472
  .gitignore (+2)
  Jenkinsfile (+40)
  README.md (+130)
  charts/opal/Chart.yaml (+12)
  charts/opal/README.md (+138)
  charts/opal/questions.yml (+137)
  charts/opal/requirements.yaml (+4)
  charts/opal/templates/NOTES.txt (+19)
  charts/opal/templates/_helpers.tpl (+32)
  charts/opal/templates/deployment.yaml (+103)
  charts/opal/templates/ingress.yaml (+44)
  charts/opal/templates/persistence/mysqlPVC.yaml (+29)
  charts/opal/templates/persistence/opalPVC.yaml (+29)
  charts/opal/templates/service.yaml (+17)
  charts/opal/values.yaml (+143)

2
.gitignore

@@ -0,0 +1,2 @@
*.iml
.idea

40
Jenkinsfile

@@ -0,0 +1,40 @@
pipeline {
  agent {
    kubernetes {
      label 'helm'
    }
  }
  stages {
    stage('Test') {
      steps {
        container('chart-testing') {
          sh "chart_test.sh --no-install --all"
        }
      }
    }
    stage('Package') {
      steps {
        container('chart-testing') {
          sh 'mkdir target'
          sh 'for dir in charts/*; do helm package --destination target "$dir"; done'
        }
      }
    }
    stage('Deploy') {
      when {
        branch 'master'
      }
      steps {
        container('vault') {
          script {
            env.NEXUS_USER = sh(script: 'vault read -field=username secret/ops/account/nexus', returnStdout: true)
            env.NEXUS_PWD = sh(script: 'vault read -field=password secret/ops/account/nexus', returnStdout: true)
          }
        }
        container('alpine') {
          sh 'set +x; for chart in target/*; do curl -L -u $NEXUS_USER:$NEXUS_PWD http://registry.molgenis.org/repository/helm/ --upload-file "$chart"; done'
        }
      }
    }
  }
}

130
README.md

@@ -0,0 +1,130 @@
# Third party - Helm templates
These are the Helm templates that we will use for MOLGENIS operations.
## Kubernetes
When you want to use Kubernetes there are some commands you need to know. Running against a remote cluster is also a must-have to control your whole DTAP.
### Basic concepts
Basic concepts you need to know with respect to Docker.
**Deployments**
A deployment is a set of pods that are deployed according to configuration that is usually managed by Helm. These pods interact with each other within the namespace that Kubernetes creates according to the deployment configuration.
**Pods**
A pod is a wrapper around a container. It recreates the container when it is shut down for some reason and interacts with other pods when needed.
**Containers**
A container is a Docker container created from a Docker image. It can be thought of as a lightweight VM.
**Images**
An image is a template for a container: a sort of boot script that also contains the OS. A built Dockerfile, if you will.
**Prerequisites**
There are some prerequisites you need:
- docker
- minikube
### Useful commands
Commands that can be used to get information from a Kubernetes cluster.
**Pods**
- ```kubectl get pods (optional: [--all-namespaces])```
  Gets all running instances of containers from a certain deployment
- ```kubectl describe pod #pod name# --namespace=#namespace#```
  Describes the pod initialization and displays error messages more accurately if they occur
- ```kubectl delete pod #pod name# --namespace=#namespace# (optional: [--force] [--grace-period=0])```
  Removes a pod from the system (but it will restart if the option is set in the deployment.yaml *[see note]*).
  **note:** You cannot do this while the deployment of the service is still there
**Services**
- ```kubectl get services```
  Gets all services from a deployment
**Volumes**
- ```kubectl get pv```
  Gets all persistent volumes
- ```kubectl get pvc```
  Gets all persistent volume claims
**Deployments**
- ```kubectl get deployments```
  Gets all deployments (comparable with docker-compose)
## Helm
This repository also serves as a catalogue for Rancher. We serve several apps through this repository, e.g.
- [Opal](charts/opal/README.md)
### Useful commands
Commands you need to know to easily develop and deploy Helm charts:
- ```helm lint .```
  Tests your Helm chart for syntax errors.
- ```helm install . --dry-run --debug```
  Checks whether your configuration deploys on a Kubernetes cluster and lets you inspect the rendered configuration
- ```helm install . --name #release name# --namespace #remote namespace#```
  Run it in the root of the project where the Chart.yaml is located.
  It installs a release of a Kubernetes stack. You can also store this as an artifact in a Helm repository
- ```helm package .```
  Creates a package which can be uploaded to the MOLGENIS Helm repository
- ```helm repo index #directory name of helm chart#```
  Before publishing you still have to create an ```index.yaml``` for the chart, which this command generates.
  Then you can upload the chart by executing:
  - ```curl -v --user #username#:#password# --upload-file index.yaml https://registry.molgenis.org/repository/helm/#chart name#/index.yml```
  - ```curl -v --user #username#:#password# --upload-file #chart name#-#version#.tgz https://registry.molgenis.org/repository/helm/#chart name#/#chart name#-#version#.tgz```
  Now you have to add the repository locally to use it in your ```requirements.yaml```:
  - ```helm repo add #repository name# https://registry.molgenis.org/repository/helm/molgenis```
- ```helm dep build```
  Builds the dependencies of the Helm chart (creates a ```charts``` directory and installs the dependency charts in it)
- ```helm list```
  Lists all installed releases
- ```helm delete #release#```
  Deletes a release; a sort of ```mvn clean``` for your cluster. Very handy for zombie persistent volumes or claims.
- Install Tiller on a remote cluster
  To install Tiller on a remote cluster you need an rbac-config.yaml:
  ```kubectl create -f rbac-config.yaml```
  When you have defined the yaml you can add Tiller to the cluster by running:
  ```helm init --service-account tiller```
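
The rbac-config.yaml itself is not included in this repository; a minimal sketch, assuming the conventional Tiller service account in ```kube-system``` bound to ```cluster-admin```:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

With this file applied via ```kubectl create -f rbac-config.yaml```, ```helm init --service-account tiller``` installs Tiller with the permissions it needs.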

12
charts/opal/Chart.yaml

@@ -0,0 +1,12 @@
apiVersion: v1
appVersion: "1.0"
description: Opal - helm stack (in BETA)
name: opal
version: 0.5.4
sources:
- https://git.webhosting.rug.nl/opal/opal-ops-docker-helm.git
icon: https://git.webhosting.rug.nl/opal/opal-ops-docker-helm/
home: https://obiba.org
maintainers:
- name: sidohaakma
- name: fdlk

138
charts/opal/README.md

@@ -0,0 +1,138 @@
# Opal
This chart is used for acceptance and production use cases.
## Containers
The created containers are:
- MOLGENIS
- ElasticSearch
- PostgreSQL **(optional)**
## Provisioning
You can choose which registry you want to pull from. There are 2 registries:
- https://registry.molgenis.org
- https://hub.docker.com
registry.molgenis.org contains the bleeding-edge versions (PRs and master merges); hub.docker.com contains the released artifacts (MOLGENIS releases and release candidates).
The three properties you need to specify are:
- ```molgenis.image.repository```
- ```molgenis.image.name```
- ```molgenis.image.tag```
Besides determining which image you want to pull, you also have to set an administrator password. You can do this by specifying the following property:
- ```molgenis.adminPassword```
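
Put together, the provisioning properties above could be collected in a values override file like this (a sketch using the property names from this README; the image name and password are illustrative placeholders):

```yaml
molgenis:
  image:
    repository: registry.hub.docker.com
    name: molgenis/molgenis-app   # illustrative image name
    tag: stable
  adminPassword: "change-me"      # illustrative; choose your own
```

You would then pass this file to ```helm install``` with ```-f```.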
### Firewall
The firewall is defined at the service level; you can specify this attribute in the values:
- ```molgenis.firewall.enabled``` default 'false'
If set to 'true' the following options are available. One of the options below has to be set.
- ```molgenis.firewall.umcg.enabled``` default 'false'
- ```molgenis.firewall.cluster.enabled``` default 'false'
UMCG = only available within the UMCG.
Cluster = only available within the GCC cluster environment.
## Services
When you start MOLGENIS you need:
- an Elasticsearch instance (5.5.6)
- a Postgres instance (9.6)
You can attach additional services like:
- an OpenCPU instance
### Elasticsearch
You can configure Elasticsearch by providing the cluster location.
The transport address configures the node communication channel as well as the native Java API, which MOLGENIS uses to communicate with Elasticsearch.
From Elasticsearch version 6 onwards the Java API is no longer supported, so at this moment you can only use Elasticsearch instances up to major version 5.
- ```molgenis.services.elasticsearch.transportAddresses: localhost:9300```
To configure the index on an Elasticsearch cluster you can specify the clusterName property.
- ```molgenis.services.elasticsearch.clusterName: molgenis```
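
For example, both Elasticsearch settings above can be combined in one values fragment (the values shown are the defaults mentioned in this section):

```yaml
molgenis:
  services:
    elasticsearch:
      # node communication channel / native Java API endpoint
      transportAddresses: localhost:9300
      # name of the Elasticsearch cluster holding the index
      clusterName: molgenis
```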
### Postgres
You can specify the location of the Postgres instance by specifying the following property:
- ```molgenis.services.postgres.host: localhost```
You can specify the schema by filling out this property:
- ```molgenis.services.postgres.scheme: molgenis```
You can specify credentials for the database schema by specifying the following properties:
- ```molgenis.services.postgres.user: molgenis```
- ```molgenis.services.postgres.password: molgenis```
To test you can use the **PostgreSQL**-helm chart of Kubernetes and specify these answers:
```bash
# answers for postgresql chart
postgresUser=molgenis
postgresPassword=molgenis
postgresDatabase=molgenis
persistence.enabled=false
```
### OpenCPU
You can specify the location of the OpenCPU cluster by specifying this property:
- ```molgenis.services.opencpu.host: localhost```
You can test OpenCPU settings using the **OpenCPU**-helm chart of MOLGENIS.
## Resources
You can specify resources by resource type. There are 2 resource types:
- container memory
- maximum JVM heap space
Specify the memory usage of the container:
- ```molgenis.resources.limits.memory```
Specify the memory usage of the JVM:
- ```molgenis.javaOpts.maxHeapSpace```
Select the resources you need dependent on the customer you need to serve.
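
As a sketch, both resource types can be set together in a values fragment (the sizes are illustrative; keep the JVM heap below the container memory limit):

```yaml
molgenis:
  resources:
    limits:
      memory: 2Gi        # container memory limit (illustrative)
  javaOpts:
    maxHeapSpace: "1g"   # maximum JVM heap space (illustrative)
```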
## Persistence
You can enable persistence on your MOLGENIS stack by specifying the following property:
- ```persistence.enabled``` default 'true'
You can also choose to retain the NFS volume:
- ```persistence.retain``` default 'false'
The size and claim name can be specified per service. The following services can be persisted:
- MOLGENIS
- ElasticSearch
- PostgreSQL **(optional)**
MOLGENIS persistence properties:
- ```molgenis.persistence.claim```
- ```molgenis.persistence.size```
ElasticSearch persistence properties:
- ```elasticsearch.persistence.claim```
- ```elasticsearch.persistence.size```
PostgreSQL persistence properties:
- ```postgres.persistence.claim```
- ```postgres.persistence.size```
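
The persistence properties above could, for example, be combined as follows (claim name and size are illustrative):

```yaml
persistence:
  enabled: true
  retain: false
molgenis:
  persistence:
    claim: molgenis-nfs-claim  # illustrative claim name
    size: 10Gi                 # illustrative size
```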
### Resolve your persistent volume
If you do not know which volume is attached to your MOLGENIS instance, you can resolve it by executing:
```
kubectl get pv
```
You can now view the persistent volume claims and the attached volumes.

| NAME | CAPACITY | ACCESS MODES | RECLAIM POLICY | STATUS | CLAIM | STORAGECLASS | REASON | AGE |
| ---- | -------- | ------------ | -------------- | ------ | ----- | ------------ | ------ | --- |
| pvc-45988f55-900f-11e8-a0b4-005056a51744 | 30G | RWX | Retain | Bound | molgenis-solverd/molgenis-nfs-claim | nfs-provisioner-retain | | 33d |
| pvc-3984723d-220f-14e8-a98a-skjhf88823kk | 30G | RWO | Delete | Bound | molgenis-test/molgenis-nfs-claim | nfs-provisioner | | 33d |

You can see that ```molgenis-test/molgenis-nfs-claim``` is bound to the volume ```pvc-3984723d-220f-14e8-a98a-skjhf88823kk```.
When you want to view the data in this volume, open a shell in the nfs-provisioner pod, go to the ```export``` directory and look up the directory ```pvc-3984723d-220f-14e8-a98a-skjhf88823kk```.

137
charts/opal/questions.yml

@@ -0,0 +1,137 @@
categories:
- OPAL
questions:
- variable: opal.environment
  label: Environment
  default: "test"
  description: "Environment of Opal instance"
  type: enum
  options:
  - development
  - test
  - acceptance
  - production
  required: true
  group: "Provisioning"
- variable: molgenis.type.kind
  label: Type
  default: "medium"
  description: "Type of MOLGENIS resources"
  type: enum
  options:
  - small
  - medium
  - large
  required: true
  group: "Provisioning"
- variable: molgenis.image.tag
  label: Version
  default: "stable"
  description: "Select a MOLGENIS version (check registry.molgenis.org or hub.docker.com for released tags)"
  type: string
  required: true
  group: "Provisioning"
- variable: molgenis.adminPassword
  label: Administrator password
  default: ""
  description: "Enter an administrator password"
  type: password
  required: true
  group: "Provisioning"
- variable: service.firewall.enabled
  label: Firewall enabled
  default: false
  description: "Firewall enabled (can be cluster or UMCG scoped)"
  type: boolean
  required: true
  group: "Services"
  show_subquestion_if: true
  subquestions:
  - variable: service.firewall.kind
    default: "umcg"
    description: "Firewall kind. This can be the 'umcg' or 'cluster' environment"
    type: enum
    required: true
    options:
    - umcg
    - cluster
    label: Firewall kind
- variable: molgenis.advanced
  label: Advanced mode
  default: false
  description: "Do you want to override the default values in advanced mode?"
  type: boolean
  required: true
  group: "Advanced"
  show_subquestion_if: true
  subquestions:
  - variable: molgenis.image.repository
    label: Registry
    default: "registry.hub.docker.com"
    description: "Select a registry to pull from"
    type: enum
    options:
    - "registry.hub.docker.com"
    - "registry.molgenis.org"
    required: true
    group: "Provisioning"
  - variable: molgenis.services.opencpu.host
    label: OpenCPU cluster
    default: "molgenis-opencpu.opencpu"
    description: "Specify the OpenCPU cluster"
    type: string
    required: true
    group: "Services"
  - variable: molgenis.services.postgres.embedded
    label: Postgres embedded
    default: true
    description: "Do you want an embedded postgres?"
    type: boolean
    required: true
    group: "Services"
  - variable: molgenis.services.postgres.host
    label: Postgres cluster location
    default: "localhost"
    description: "Set the location of the postgres cluster. This can be localhost when embedded postgres is enabled; otherwise specify a cluster location"
    type: string
    required: true
    group: "Services"
  - variable: molgenis.services.postgres.scheme
    label: Database scheme
    default: "molgenis"
    description: "Set the database scheme"
    type: string
    required: true
    group: "Services"
  - variable: molgenis.services.postgres.user
    label: Database username
    default: "molgenis"
    description: "Set the user of the database scheme"
    type: string
    required: true
    group: "Services"
  - variable: molgenis.services.postgres.password
    label: Database password
    default: "molgenis"
    description: "Set the password of the database scheme"
    type: string
    required: true
    group: "Services"
- variable: persistence.retain
  default: false
  description: "Do you want to retain the persistent volume?"
  type: boolean
  label: Retain volume
  group: "Persistence"
- variable: persistence.molgenis.size
  default: "default"
  description: "Size of MOLGENIS filestore (PostgreSQL and ElasticSearch excluded)"
  type: enum
  options:
  - "default"
  - "5Gi"
  - "10Gi"
  - "30Gi"
  label: Size MOLGENIS filestore
  group: "Persistence"

4
charts/opal/requirements.yaml

@@ -0,0 +1,4 @@
dependencies:
- name: mysql
  version: ^0.16
  repository: https://kubernetes-charts.storage.googleapis.com/

19
charts/opal/templates/NOTES.txt

@@ -0,0 +1,19 @@
1. Get the application URL by running these commands:
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.hosts }}
  http{{ if $.Values.ingress.tls }}s{{ end }}://{{ . }}{{ $.Values.ingress.path }}
{{- end }}
{{- else if contains "NodePort" .Values.service.type }}
  export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "opal.fullname" . }})
  export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
  echo http://$NODE_IP:$NODE_PORT
{{- else if contains "LoadBalancer" .Values.service.type }}
  NOTE: It may take a few minutes for the LoadBalancer IP to be available.
        You can watch the status by running 'kubectl get svc -w {{ template "opal.fullname" . }}'
  export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "opal.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  echo http://$SERVICE_IP:{{ .Values.service.port }}
{{- else if contains "ClusterIP" .Values.service.type }}
  export POD_NAME=$(kubectl get pods --namespace {{ .Release.Namespace }} -l "app={{ template "opal.name" . }},release={{ .Release.Name }}" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl port-forward $POD_NAME 8080:80
{{- end }}

32
charts/opal/templates/_helpers.tpl

@@ -0,0 +1,32 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "opal.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
If release name contains chart name it will be used as a full name.
*/}}
{{- define "opal.fullname" -}}
{{- if .Values.fullnameOverride -}}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- if contains $name .Release.Name -}}
{{- .Release.Name | trunc 63 | trimSuffix "-" -}}
{{- else -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
{{- end -}}
{{- end -}}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "opal.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" -}}
{{- end -}}

103
charts/opal/templates/deployment.yaml

@@ -0,0 +1,103 @@
apiVersion: apps/v1beta2
kind: Deployment
metadata:
{{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
  name: {{ template "opal.fullname" . }}
  labels:
    app: {{ template "opal.name" . }}
    chart: {{ template "opal.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "opal.name" . }}
      release: {{ .Release.Name }}
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: {{ template "opal.name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: opal
          {{- with .Values.opal }}
          image: {{ .image.repository }}/{{ .image.name }}:{{ .image.tag }}
          imagePullPolicy: {{ .image.pullPolicy }}
          env:
            - name: opal.home
              value: /home/opal
            - name: db_uri
              value: jdbc:postgresql://localhost/opal
            - name: db_user
              value: opal
            - name: db_password
              value: opal
            - name: admin.password
              value: "{{ .adminPassword }}"
            - name: CATALINA_OPTS
              {{- if eq .type.kind "small" }}
              value: "-Xmx{{ .type.small.javaOpts.maxHeapSpace }} -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
              {{- else if eq .type.kind "medium" }}
              value: "-Xmx{{ .type.medium.javaOpts.maxHeapSpace }} -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
              {{- else }}
              value: "-Xmx{{ .type.large.javaOpts.maxHeapSpace }} -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled"
              {{- end }}
          ports:
            - containerPort: 8080
          {{- if $.Values.persistence.enabled }}
          volumeMounts:
            - name: opal-nfs
              mountPath: /home/opal
          {{- end }}
          livenessProbe:
            httpGet:
              path: /
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 5
            failureThreshold: 25
            successThreshold: 1
          readinessProbe:
            httpGet:
              path: /api/v2/version
              port: 8080
            initialDelaySeconds: 120
            periodSeconds: 30
            failureThreshold: 3
            successThreshold: 1
          resources:
            {{- if eq .type.kind "small" }}
{{ toYaml .type.small.resources | indent 12 }}
            {{- else if eq .type.kind "medium" }}
{{ toYaml .type.medium.resources | indent 12 }}
            {{- else }}
{{ toYaml .type.large.resources | indent 12 }}
            {{- end }}
          {{- end }}
      {{- if .Values.persistence.enabled }}
      volumes:
        - name: opal-nfs
          persistentVolumeClaim:
            claimName: {{ .Values.opal.persistence.claim }}
      {{- end }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
      {{- end }}

44
charts/opal/templates/ingress.yaml

@@ -0,0 +1,44 @@
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "opal.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "{{ $.Release.Name }}-ingress"
  labels:
    app: {{ template "opal.name" . }}
    chart: {{ template "opal.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- if eq $.Values.opal.environment "development" }}
    - host: {{ .Release.Name }}.dev.opal.org
  {{- else if eq $.Values.opal.environment "test" }}
    - host: {{ .Release.Name }}.test.opal.org
  {{- else if eq $.Values.opal.environment "acceptance" }}
    - host: {{ .Release.Name }}.accept.opal.org
  {{- else }}
    - host: {{ .Release.Name }}.opal.org
  {{- end }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $.Values.service.port }}
{{- end }}

29
charts/opal/templates/persistence/mysqlPVC.yaml

@@ -0,0 +1,29 @@
{{- if .Values.persistence.enabled }}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Values.mysql.persistence.claim }}
  annotations:
    {{- if .Values.persistence.retain }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner-retain"
    {{- else }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner"
    {{- end }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      {{- if eq .Values.persistence.mysql.size "default" }}
      {{- if eq .Values.opal.type.kind "small" }}
      storage: {{ .Values.mysql.type.small.persistence.size }}
      {{- else if eq .Values.opal.type.kind "medium" }}
      storage: {{ .Values.mysql.type.medium.persistence.size }}
      {{- else }}
      storage: {{ .Values.mysql.type.large.persistence.size }}
      {{- end }}
      {{- else }}
      storage: {{ .Values.persistence.mysql.size }}
      {{- end }}
{{- end }}

29
charts/opal/templates/persistence/opalPVC.yaml

@@ -0,0 +1,29 @@
{{- if .Values.persistence.enabled -}}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: {{ .Values.opal.persistence.claim }}
  annotations:
    {{- if .Values.persistence.retain }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner-retain"
    {{- else }}
    volume.beta.kubernetes.io/storage-class: "nfs-provisioner"
    {{- end }}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      {{- if eq .Values.persistence.opal.size "default" }}
      {{- if eq .Values.opal.type.kind "small" }}
      storage: {{ .Values.opal.type.small.persistence.size }}
      {{- else if eq .Values.opal.type.kind "medium" }}
      storage: {{ .Values.opal.type.medium.persistence.size }}
      {{- else }}
      storage: {{ .Values.opal.type.large.persistence.size }}
      {{- end }}
      {{- else }}
      storage: {{ .Values.persistence.opal.size }}
      {{- end }}
{{- end }}

17
charts/opal/templates/service.yaml

@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: {{ template "opal.fullname" . }}
  labels:
    app: {{ template "opal.name" . }}
    chart: {{ template "opal.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: opal
      port: {{ .Values.service.port }}
  selector:
    app: {{ template "opal.name" . }}
    release: {{ .Release.Name }}

143
charts/opal/values.yaml

@@ -0,0 +1,143 @@
# Default values for opal.
replicaCount: 1
service:
  type: LoadBalancer
  firewall:
    enabled: false
    kind: "umcg"
    umcg:
      rules:
      - 127.0.0.1/32
    cluster:
      rules:
      - 127.0.0.1/32
  port: 8080
ingress:
  enabled: true
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "0"
  # This will be used again when external domains need to be attached to the instance
  # hosts:
  #   - name: test
  path: /
  tls: []
opal:
  advanced: false
  type:
    kind: medium
    small:
      javaOpts:
        maxHeapSpace: "2g"
      resources:
        limits:
          cpu: 2
          memory: 2Gi
        requests:
          cpu: 200m
          memory: 2Gi
      persistence:
        size: 5Gi
    medium:
      javaOpts:
        maxHeapSpace: "3g"
      resources:
        limits:
          cpu: 3
          memory: 3Gi
        requests:
          cpu: 200m
          memory: 3Gi
      persistence:
        size: 10Gi
    large:
      javaOpts:
        maxHeapSpace: "4g"
      resources:
        limits:
          cpu: 4
          memory: 4Gi
        requests:
          cpu: 200m
          memory: 4Gi
      persistence:
        size: 30Gi
  environment: test
  image:
    repository: registry.hub.docker.com
    name: obiba/opal
    tag: stable
    pullPolicy: Always
  adminPassword:
  persistence:
    claim: opal-nfs-claim
  services:
    rserver:
      host: localhost
    mysql:
      host: localhost
rserver:
  image:
    repository: obiba/opal-rserver
    tag: stable
    pullPolicy: IfNotPresent
mysql:
  type:
    small:
      resources:
        limits:
          cpu: 1
          memory: 512Mi
        requests:
          cpu: 100m
          memory: 512Mi
      persistence:
        size: 5Gi
    medium:
      resources:
        limits:
          cpu: 2
          memory: 2Gi
        requests:
          cpu: 100m
          memory: 2Gi
      persistence:
        size: 10Gi
    large:
      resources:
        limits:
          cpu: 4
          memory: 4Gi
        requests:
          cpu: 100m
          memory: 4Gi
      persistence:
        size: 15Gi
  image:
    repository: postgres
    tag: 9.6-alpine
    pullPolicy: IfNotPresent
  persistence:
    claim: mysql-nfs-claim
persistence:
  enabled: true
  retain: false
  opal:
    size: "default"
  mysql:
    size: "default"
nodeSelector:
  deployPod: "true"
tolerations: []
affinity: {}