Release version RELEASE_35 (#51)
Co-authored-by: Build Automation <[email protected]>
schesa and Build Automation authored Nov 8, 2024
1 parent c32276e commit 894898b
Showing 19 changed files with 1,563 additions and 64 deletions.
6 changes: 1 addition & 5 deletions helm_charts/icap/Chart.yaml
@@ -12,26 +12,22 @@ description: This is a Helm chart for deploying MetaDefender ICAP Server (https:
long_description: |
This chart can deploy the following depending on the provided values:
- One or more MD ICAP Server instances
## Installation
### From source
MD ICAP Server can be installed directly from the source code; here's an example using the generic values:
```console
git clone https://github.com/OPSWAT/metadefender-k8s.git metadefender
cd metadefender/helm_charts
helm install my_mdicapsrv ./my_mdicapsrv -f my_mdicapsrv-generic-values.yml
```
### From the latest release
The installation can also be done using the latest release from GitHub:
```console
helm install my_mdicapsrv <MDICAPSRV_RELEASE_URL>.tgz -f my_mdicapsrv-generic-values.yml
```
## Operational Notes
The entire deployment can be customized by overwriting the chart's default configuration values. Here are a few points to look out for when changing these values:
- Sensitive values (like credentials and keys) are saved in the Kubernetes cluster as secrets; they are not deleted when the chart is removed and can be reused for future deployments
- Credentials that are not explicitly set (passwords and the API key) and do not already exist as k8s secrets will be randomly generated; if they are set, the respective k8s secret will be updated, or created if it doesn't exist
- **The license key value is mandatory**; if it's left unset or invalid, the MD ICAP Server instance will report as "unhealthy" and will be restarted
- The configured license should have a sufficient number of activations for all pods running MD ICAP Server; each pod counts as 1 activation. Terminating pods will also deactivate the respective MD ICAP Server instance.
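The notes above can be illustrated with a minimal values override. The keys shown (`mdicapsrv_license_key`, `mdicapsrv_user`, `mdicapsrv_password`, `mdicapsrv_api_key`) come from this chart's values.yaml, but the concrete values are placeholders:

```yaml
# Minimal override sketch -- placeholder values, adjust before use
mdicapsrv_license_key: "XXXX-XXXX-XXXX-XXXX"  # mandatory; unset or invalid keys make pods report unhealthy
mdicapsrv_user: admin
mdicapsrv_password: null   # null => randomly generated and stored as a k8s secret
mdicapsrv_api_key: null    # null => randomly generated 36-character key
```

Because the generated secrets are kept on chart deletion, reinstalling with the same release name reuses the same credentials.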
44 changes: 8 additions & 36 deletions helm_charts/icap/templates/config-template.yml
@@ -35,6 +35,14 @@ data:
OLMS_SOCKET_PORT: {{ .Values.olms.olms_socket_port | quote }}
OLMS_RULE: {{ .Values.olms.olms_rule | quote }}
OLMS_COMMENT: {{ .Values.olms.olms_comment | quote }}
{{- if .Values.olms.olms_use_proxy }}
OLMS_USE_PROXY: {{ .Values.olms.olms_use_proxy }}
OLMS_PROXY_SERVER: {{ .Values.olms.olms_proxy_server }}
OLMS_PROXY_PORT: {{ .Values.olms.olms_proxy_port }}
OLMS_PROXY_USERNAME: {{ .Values.olms.olms_proxy_username }}
OLMS_PROXY_PASSWORD: {{ .Values.olms.olms_proxy_password }}
OLMS_PROXY_PROXY_TYPE: {{ .Values.olms.olms_proxy_type }}
{{- end }}
{{- end }}
{{- if .Values.icap_components.md_icapsrv.nginx_support.enabled }}
{{- $isEnableNginx := .Values.icap_components.md_icapsrv.nginx_support.enabled }}
@@ -98,8 +106,6 @@ metadata:
data:
user: {{ $icapUserValue }}
password: {{ $icapPasswordValue }}


# Generate, set or keep the MD ICAP Server API key
{{- $icapApiKeyValue := (randNumeric 36) | b64enc | quote }}
{{- $icapApiSecret := (lookup "v1" "Secret" .Release.Namespace "mdicapsrv-api-key") }}
@@ -109,7 +115,6 @@ data:
{{- if .Values.mdicapsrv_api_key }}
{{- $icapApiKeyValue = .Values.mdicapsrv_api_key | b64enc }}
{{- end }}

---
kind: Secret
apiVersion: v1
@@ -120,7 +125,6 @@ metadata:
"helm.sh/resource-policy": keep
data:
value: {{ $icapApiKeyValue }}

# Set or keep the MD ICAP Server license key
{{- $icapLicenseKeyValue := "SET_LICENSE_KEY_HERE" | b64enc | quote }}
{{- $icapLicenseSecret := (lookup "v1" "Secret" .Release.Namespace "mdicapsrv-license-key") }}
@@ -130,7 +134,6 @@ data:
{{- if .Values.mdicapsrv_license_key }}
{{- $icapLicenseKeyValue = .Values.mdicapsrv_license_key | b64enc }}
{{- end }}

---
kind: Secret
apiVersion: v1
@@ -173,7 +176,6 @@ kind: ConfigMap
metadata:
name: ignition-file
data:

# file-like keys
main.py: |
#!/usr/bin/python3
@@ -183,10 +185,8 @@ data:
import os
import shutil
from psycopg2.extensions import ISOLATION_LEVEL_AUTOCOMMIT
db_instance_ranges = {"database":{"instance":0}}
TIME_LOOP = 1 # minutes
def GetDatabaseNames():
print("1.1 GetDatabaseNames")
conn = psycopg2.connect(database="postgres", user = os.getenv("DB_USER"), password = os.getenv("DB_PWD"), host = os.getenv("DB_HOST"), port = os.getenv("DB_PORT"))
@@ -196,7 +196,6 @@ data:
cur.close()
conn.close()
return res
def GetInstanceNames(db_name):
print("1.2 GetInstanceNames")
conn = psycopg2.connect(database=db_name, user = os.getenv("DB_USER"), password = os.getenv("DB_PWD"), host = os.getenv("DB_HOST"), port = os.getenv("DB_PORT"))
@@ -206,7 +205,6 @@ data:
cur.close()
conn.close()
return res
def CountConnections(db_name, private_user):
print("1.3 CountConnections")
conn = psycopg2.connect(database=db_name, user = os.getenv("DB_USER"), password = os.getenv("DB_PWD"), host = os.getenv("DB_HOST"), port = os.getenv("DB_PORT"))
@@ -216,25 +214,21 @@ data:
cur.close()
conn.close()
return res
def CleanupDatabase(db_name, instance_name):
print("2.5 CleanupDatabase")
# Remove redundant partitions from those tables
try:
conn = psycopg2.connect(database=db_name, user = os.getenv("DB_USER"), password = os.getenv("DB_PWD"), host = os.getenv("DB_HOST"), port = os.getenv("DB_PORT"))
cur = conn.cursor()
# Get instance id
cur.execute("SELECT instance_id FROM register.instance WHERE instance_name = %s", (instance_name,))
instance = cur.fetchone()[0]
print("2.5.1 Get instance id FROM register.instance {}".format(instance))
# Get all partitioned table names
cur.execute("""select oid::regclass::text table_name from pg_class
where relkind = 'p' and oid in (select distinct inhparent from pg_inherits)
order by table_name""")
partitioned_tables = cur.fetchall()
# Drop partitions
for tableVal in partitioned_tables:
table = tableVal[0] + "_" + str(instance)
@@ -243,13 +237,11 @@ data:
cur.execute("DROP TABLE IF EXISTS {} CASCADE".format(table))
except Exception as e:
print("Could not drop table {}. {}".format(table, e))
conn.commit()
cur.close()
conn.close()
except Exception as e:
print("Could not cleanup database {}. {}".format(db_name, e))
def CleanupStorage(instance_name):
print("2.6 CleanupStorage")
path = os.getenv("STORAGE_PATH") + "//" + instance_name
@@ -258,8 +250,6 @@ data:
shutil.rmtree(path)
except Exception as e:
print("Could not remove path {}. {}".format(path, e))
def DropDatabase(db_name):
try:
print("2.6.1 DropDatabase {}".format(db_name))
@@ -271,21 +261,16 @@ data:
conn.commit()
cur.close()
conn.close()
# Remove from retention dict
db_instance_ranges.pop(db_name)
except Exception as e:
print("Could not drop database {}. {}".format(db_name, e))
def HandleRetention(db_name, instance_name):
print("2.4.4 HandleRetention")
if os.getenv("database_check") == "true":
CleanupDatabase(db_name, instance_name)
if os.getenv("storage_check") == "true":
CleanupStorage(instance_name)
def Cleanup(db_name, instance_name):
print("2.4 Cleanup {} {} {}".format(db_name, instance_name, db_instance_ranges))
retention_range = float(os.getenv("range"))
@@ -304,60 +289,47 @@ data:
print("2.4.3 db_instance_ranges[db_name][instance_name] > retention_range")
HandleRetention(db_name, instance_name)
db_instance_ranges[db_name][instance_name] = 0
print("2.7 Cleanup done")
def DataRetention():
print("1 DataRetention")
# Get all database on postgresql server
db_names = GetDatabaseNames()
print("2 GetDatabaseNames done")
for dbname_val in db_names:
db_name = dbname_val[0]
# Connect database to get instance names
rows = GetInstanceNames(db_name)
all_instances_idle = True
print("--------------------------------------------------------------")
print("Start cleaning up database {}".format(db_name))
for instance_name_val in rows:
instance_name = instance_name_val[0]
# Calculate sha1 hash sum to get correlating private user from instance name
print("2.1 Calculate sha1 hash sum {} {}".format(db_name, instance_name))
private_user = "usr_" + hashlib.sha1(instance_name.encode()).hexdigest()
print("2.2 {}".format(private_user))
if CountConnections(db_name, private_user) > 0:
print("This icap {}:{} is still in use. Skip cleaning up".format(db_name, instance_name))
all_instances_idle = False
if db_name in db_instance_ranges:
db_instance_ranges.pop(db_name)
continue
# If this database doesn't have any connections
# Eliminate the partition
Cleanup(db_name, instance_name)
# Append TIME_LOOP minutes
db_instance_ranges[db_name][instance_name] += TIME_LOOP
print("2.8 DataRetention done {}".format(instance_name))
if all_instances_idle == True:
print("There are no mdicapsrv instances in-use. Drop database {}".format(db_name))
DropDatabase(db_name)
print("3 DataRetention done")
try:
start_time = time.time()
print("Data retention is running...")
# Start retention
if os.getenv("enable_check") == "true":
DataRetention()
# Every TIME_LOOP minutes, DR checks the ignition file for changes.
sleep_time = 60.0 * TIME_LOOP - ((time.time() - start_time) % 60.0)
if sleep_time > 0:
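The data-retention script above keeps a per-database, per-instance counter (`db_instance_ranges`), adds `TIME_LOOP` minutes on every loop, and fires cleanup once the counter exceeds the configured `range`. A standalone sketch of just that counter pattern, with hypothetical names and no database or filesystem side effects:

```python
# Sketch of the retention-counter pattern used by the script above.
# Assumption: cleanup fires once the accumulated idle minutes exceed the range,
# then the counter resets -- mirroring Cleanup()/HandleRetention() in the script.

TIME_LOOP = 1  # minutes between checks, mirroring the script's constant

def tick(ranges, db_name, instance_name, retention_range, on_expire):
    """Advance the idle counter for one instance; call on_expire and reset
    the counter once it exceeds retention_range minutes."""
    per_db = ranges.setdefault(db_name, {})
    per_db.setdefault(instance_name, 0)
    if per_db[instance_name] > retention_range:
        on_expire(db_name, instance_name)
        per_db[instance_name] = 0
    per_db[instance_name] += TIME_LOOP
    return per_db[instance_name]

# Simulate 5 one-minute loops with a 3-minute retention range:
expired = []
ranges = {}
for _ in range(5):
    tick(ranges, "db1", "icap-0", 3, lambda d, i: expired.append((d, i)))
print(expired)  # cleanup fired exactly once, on the fifth loop
```

Note that in the real script the counter is only advanced while the instance has no open connections; an in-use instance has its entry dropped from the dict, so the idle clock restarts from zero.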
23 changes: 8 additions & 15 deletions helm_charts/icap/values.yaml
@@ -7,7 +7,6 @@ ACCEPT_EULA: false
## - if the "mdicapsrv-cred" secret exists, the values from the secret are used as credentials
mdicapsrv_user: admin # Initial admin user for the MD ICAP Server web interface
mdicapsrv_password: null # Initial admin password for the MD ICAP Server web interface, if not set it will be randomly generated

## Uncomment the following line to set a fixed API key for MD ICAP Server that will overwrite
## any secret that already exists for API key (the secret will be kept when the chart is deleted)
## If left unset the following will apply:
@@ -18,7 +17,6 @@ activation_server: activation.dl.opswat.com
mdicapsrv_api_key: null # 36 character API key used for the MD ICAP Server REST API, if not set it will be randomly generated
## Set your MD ICAP Server license key here and it will be stored in the "mdicapsrv-license-key" secret that will be created
## if it does not exist. If left unset, a secret is generated with an empty license key.

mdicapsrv_license_key: <SET_LICENSE_KEY_HERE> # A valid license key, **this value is mandatory**
## Uncomment the following lines to set a fixed user and password for the MD ICAP Server PostgreSQL database that will overwrite
## any secret that already exists for these credentials (the secret will be kept when the chart is deleted)
@@ -42,6 +40,12 @@ olms:
olms_socket_port: ""
olms_rule: "Default_Rule"
olms_comment: ""
olms_use_proxy: false
olms_proxy_server: ""
olms_proxy_port: ""
olms_proxy_username: ""
olms_proxy_password: ""
olms_proxy_type: ""
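When `olms_use_proxy` above is enabled, the config template injects the corresponding `OLMS_PROXY_*` environment variables into the ConfigMap. A hypothetical override might look like this (the server name and proxy type shown are placeholders, not values documented by the chart):

```yaml
olms:
  olms_use_proxy: true
  olms_proxy_server: "proxy.example.internal"  # placeholder hostname
  olms_proxy_port: "3128"
  olms_proxy_username: ""     # leave empty for an unauthenticated proxy
  olms_proxy_password: ""
  olms_proxy_type: "http"     # assumed value; check the product docs for accepted types
```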
proxy:
enabled: false
http_proxy: ""
@@ -55,15 +59,14 @@ icap_ingress:
rest_port: 8048 # Port where the ingress should route to
enabled: false # Enable or disable the ingress creation
class: nginx # Sets the ingress class

## Uncomment if you want to use a private repo (it must already be configured in the cluster as a secret)
# imagePullSecrets:
# - name: regcred
# Deployment Postgresql Server for MD ICAP Server
postgres_mdicapsrv:
enabled: true
name: postgres-mdicapsrv
image: postgres:12.14
image: postgres:12.20
env:
- name: POSTGRES_PASSWORD
valueFrom:
@@ -81,9 +84,7 @@ postgres_mdicapsrv:
# Docker repo to use, this should be changed when using private images (this string will be prepended to the image name)
# If a component has "custom_repo: true" then the image name will be formatted as "{docker_repo/}image_name{:BRANCH}", otherwise it will remain unaltered
icap_docker_repo: opswat

icap_container_persistent: false # Enable to mount the ICAP path /opt/mdicapsrv/icap_data/var/lib/mdicapsrv using the storage_configs PVC below

# Example using a PVC with dynamic provisioning from an existing storage class for postgres_mdicapsrv container
storage_configs:
enabled: false
@@ -93,8 +94,6 @@ storage_configs:
requests:
storage: 5Gi
storageClassName: "default" # change to your existing storage class allocated by your CSP


extra_storage_configs: # Example for creating PVC for ICAP container. Use extraVolumeMounts and extraVolumes in icap container definition together with this PVC
# extra_pvc:
# apiVersion: v1
@@ -108,7 +107,6 @@ extra_storage_configs: # Example for creating PVC for ICAP container
# requests:
# storage: 5Gi
# storageClassName: azurefile

# Enable feature health check
healthcheck:
enabled: true
@@ -124,7 +122,7 @@ icap_components:
# Init container to check database system is ready to accept connections
initContainers:
- name: check-db-ready
image: postgres:12.18
image: postgres:12.20
envFrom:
- configMapRef:
name: mdicapsrv-env
@@ -181,10 +179,6 @@ icap_components:
key: password
- name: LICENSING_CLEANUP
value: "true"
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
@@ -291,7 +285,6 @@ icap_components:
type: RollingUpdate
rollingUpdate:
maxSurge: 0

# nodeSelector is the simplest recommended form of node selection constraint.
# You can add the nodeSelector field to your Pod specification and specify the node labels you want the target node to have.
# Kubernetes only schedules the Pod onto nodes that have each of the labels you specify.
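As the comment above describes, `nodeSelector` constrains scheduling to nodes that carry every listed label. A minimal hypothetical example (the custom label name is illustrative, not part of the chart):

```yaml
nodeSelector:
  kubernetes.io/os: linux      # standard well-known node label
  workload-type: mdicapsrv     # hypothetical label applied to the target nodes
```

Pods with this selector stay Pending until a node with both labels exists, so label the nodes before rolling out.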
4 changes: 2 additions & 2 deletions helm_charts/mdcore/values.yaml
@@ -233,15 +233,15 @@ core_components:
# scheme: HTTPS
path: /readyz # Health check endpoint
port: 8008
initialDelaySeconds: 60 # Number of seconds after the container has started before startup, liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
periodSeconds: 10 # How often (in seconds) to perform the probe. Default to 10 seconds. Minimum value is 1.
timeoutSeconds: 10 # Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
livenessProbe:
httpGet:
# scheme: HTTPS
path: /readyz # Health check endpoint
port: 8008 # Health check port
initialDelaySeconds: 30
initialDelaySeconds: 90
periodSeconds: 10
timeoutSeconds: 10
failureThreshold: 3
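With the liveness settings above (`initialDelaySeconds: 90`, `periodSeconds: 10`, `failureThreshold: 3`), a container that never becomes healthy is restarted roughly at the initial delay plus `failureThreshold` probe periods. A quick sketch of that arithmetic, as a simplification that ignores probe timeouts and scheduling jitter:

```python
def earliest_restart_seconds(initial_delay, period, failure_threshold):
    # First probe fires after initial_delay; the kubelet restarts the
    # container after failure_threshold consecutive failures, one per period.
    # Simplified model: ignores timeoutSeconds and probe jitter.
    return initial_delay + failure_threshold * period

print(earliest_restart_seconds(90, 10, 3))  # 120
```

Bumping `initialDelaySeconds` from 30 to 90, as this commit does, therefore moves the earliest possible restart of a slow-starting pod from about 60 seconds to about 120 seconds.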
4 changes: 2 additions & 2 deletions helm_charts/mdss/Chart.yaml
@@ -32,5 +32,5 @@ long_description: |
type: application

version: 3.4.4
appVersion: 3.4.4
version: 3.5.0
appVersion: 3.5.0