Commit

Merge remote-tracking branch 'odh/v2.9.x' into stable

HumairAK committed Jan 17, 2025
2 parents 5387a14 + ffad766 commit e192470
Showing 22 changed files with 335 additions and 164 deletions.
8 changes: 6 additions & 2 deletions .github/actions/kind/action.yml
@@ -35,10 +35,14 @@ runs:
EOF'
- name: Setup KinD cluster
uses: helm/kind-action@v1.8.0
uses: helm/kind-action@v1
with:
cluster_name: cluster
version: v0.17.0
# The kind version to use
version: v0.25.0
# The Docker image for the cluster nodes - https://hub.docker.com/r/kindest/node/
node_image: kindest/node:v1.30.6@sha256:b6d08db72079ba5ae1f4a88a09025c0a904af3b52387643c285442afb05ab994
# The path to the kind config file
config: ${{ env.KIND_CONFIG_FILE }}

- name: Print cluster info
6 changes: 3 additions & 3 deletions .github/scripts/python_package_upload/Dockerfile
@@ -4,12 +4,12 @@ FROM docker.io/python:3.9
WORKDIR /app

# Copy the script into the container
COPY package_upload.sh /app/package_upload.sh
COPY package_download.sh /app/package_download.sh

# Make sure the script is executable
RUN chmod +x /app/package_upload.sh
RUN chmod +x /app/package_download.sh

# Store the files in a folder
VOLUME /app/packages

ENTRYPOINT ["/app/package_upload.sh"]
ENTRYPOINT ["/app/package_download.sh"]
155 changes: 155 additions & 0 deletions .github/scripts/tests/README.md
@@ -0,0 +1,155 @@
# Setup the local environment

All of the following commands must be executed in the same terminal session.

## Increase inotify Limits
To prevent file monitoring issues in development environments (e.g., IDEs or file sync tools), increase inotify limits:
```bash
sudo sysctl fs.inotify.max_user_instances=2280
sudo sysctl fs.inotify.max_user_watches=1255360
```
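
These settings do not survive a reboot. Optionally, you can persist them with a sysctl drop-in file (the path below assumes a systemd-based Linux distribution):
```bash
# Persist the inotify limits across reboots
cat <<EOF | sudo tee /etc/sysctl.d/99-inotify.conf
fs.inotify.max_user_instances = 2280
fs.inotify.max_user_watches = 1255360
EOF
sudo sysctl --system
```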
## Prerequisites
* [kind](https://kind.sigs.k8s.io/)

## Create kind cluster
```bash
cat <<EOF | kind create cluster --name=kubeflow --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
image: kindest/node:v1.30.6@sha256:b6d08db72079ba5ae1f4a88a09025c0a904af3b52387643c285442afb05ab994
kubeadmConfigPatches:
- |
kind: ClusterConfiguration
apiServer:
extraArgs:
"service-account-issuer": "kubernetes.default.svc"
"service-account-signing-key-file": "/etc/kubernetes/pki/sa.key"
EOF
```
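
Once the command returns, you can confirm the cluster exists:
```bash
kind get clusters   # should list: kubeflow
```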

## kubeconfig
Instead of overwriting your default kubeconfig, write the cluster's kubeconfig to a separate file and point `KUBECONFIG` at it:
```bash
kind get kubeconfig --name kubeflow > /tmp/kubeflow-config
export KUBECONFIG=/tmp/kubeflow-config
```
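
With `KUBECONFIG` pointing at the new file, verify the cluster is reachable:
```bash
kubectl cluster-info
kubectl get nodes   # the control-plane node should eventually report Ready
```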
## docker
To avoid registry rate limits when pulling images, log in with your credentials:
```bash
docker login -u='...' -p='...' quay.io
```

Upload the credentials to the cluster as a pull secret:
```bash
kubectl create secret generic regcred \
--from-file=.dockerconfigjson=$HOME/.docker/config.json \
--type=kubernetes.io/dockerconfigjson
```
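
If you also want workloads in a particular namespace to pull with these credentials automatically, one common pattern is to attach the secret to that namespace's `default` service account. The `test-dspa` namespace below is only an example and must already exist:
```bash
# Attach the pull secret to the namespace's default service account
kubectl patch serviceaccount default -n test-dspa \
  -p '{"imagePullSecrets": [{"name": "regcred"}]}'
```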

## Test environment variables
Set `GIT_WORKSPACE` to your local `data-science-pipelines-operator` checkout (replace `/path/to` to match):
```bash
export GIT_WORKSPACE=/path/to/data-science-pipelines-operator
```
An image registry is required because you are running the tests locally: the test script builds the operator image and pushes it to your repository.

Replace `username` with your quay.io username:
```bash
export REGISTRY_ADDRESS=quay.io/username
```
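
Before proceeding, a quick sanity check that both variables are set and the workspace path is valid:
```bash
echo "GIT_WORKSPACE=${GIT_WORKSPACE}"
echo "REGISTRY_ADDRESS=${REGISTRY_ADDRESS}"
test -d "${GIT_WORKSPACE}" || echo "GIT_WORKSPACE is not a directory"
```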

## Run the test
```bash
sh .github/scripts/tests/tests.sh --kind
```

# Debug a test using GoLand
Let's say you wish to debug the `Should create a Pipeline Run` test.
First, right-click inside the test method and select `Run 'TestIntegrationTestSuite'`
from the context menu. The run will fail because several required parameters are missing.
Edit the run configuration for `TestIntegrationTestSuite/TestPipelineSuccessfulRun/Should_create_a_Pipeline_Run in github.com/opendatahub-io/data-science-pipelines-operator/tests` and add the following program arguments:
```
-k8sApiServerHost=https://127.0.0.1:39873
-kubeconfig=/tmp/kubeflow-config
-DSPANamespace=test-dspa
-DSPAPath=/path/to/data-science-pipelines-operator/tests/resources/dspa-lite.yaml
```
## How to retrieve the parameters above
* `k8sApiServerHost`: inspect the kubeconfig and retrieve the server URL from there
* `kubeconfig`: the path where you stored the output of `kind get kubeconfig`
* `DSPANamespace`: the namespace where the DSPA is deployed
* `DSPAPath`: the full path to the DSPA YAML file

The test name (`Should create a Pipeline Run`), `DSPANamespace`, and `DSPAPath` depend on the test scenario.

If you wish to keep the resources, add `-skipCleanup=True` in the config above.
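
If you prefer the command line to GoLand, roughly the same run can be reproduced with `go test` directly. The flags mirror the configuration above; the `-run` filter and the `./...` package pattern are assumptions about how the suite is laid out:
```bash
cd ${GIT_WORKSPACE}/tests
go test ./... -run 'TestIntegrationTestSuite' \
  -k8sApiServerHost=https://127.0.0.1:39873 \
  -kubeconfig=/tmp/kubeflow-config \
  -DSPANamespace=test-dspa \
  -DSPAPath=${GIT_WORKSPACE}/tests/resources/dspa-lite.yaml
```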

## Rerun the test
You need to delete the DSPA before rerunning the test:
```bash
$ kubectl delete datasciencepipelinesapplications test-dspa -n test-dspa
datasciencepipelinesapplication.datasciencepipelinesapplications.opendatahub.io "test-dspa" deleted
```

# `tests.sh` details
This Bash script sets up and tests environments for the Data Science Pipelines Operator (DSPO)
on Kubernetes (K8s) or on *OpenShift with RHOAI deployed*. It deploys dependencies,
configures namespaces, builds and deploys images, and executes integration tests.

## **Features**
1. **Environment Variables Declaration**:
The script requires and verifies environment variables such as `GIT_WORKSPACE`, `REGISTRY_ADDRESS`, and `K8SAPISERVERHOST`, which define the workspace, the container image registry, and the K8s API server address (see the sketch after this list).

2. **Deployment Functions**:
Functions like `deploy_dspo`, `deploy_minio`, and `deploy_mariadb` handle deploying necessary components (e.g., MinIO, MariaDB, PyPI server) to the cluster.

3. **Namespace Configuration**:
Functions like `create_opendatahub_namespace` and `create_dspa_namespace` create and configure Kubernetes namespaces required for DSPO and other dependencies.

4. **Integration Testing**:
The script provides commands to run integration tests for DSPO and its external connections using `run_tests` and `run_tests_dspa_external_connections`.

5. **Cleanup and Resource Removal**:
Includes options like `--clean-infra` to remove namespaces and resources before running tests.

6. **Conditional Execution**:
Supports setting up and testing environments for different targets:
- `kind` (local Kubernetes clusters)
- `rhoai` (Red Hat OpenShift AI)

7. **Customizable Parameters**:
Allows passing values for paths, namespaces, and K8s API server via command-line arguments.
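
As an illustration of the variable verification in item 1, the usual Bash idiom looks like this (a sketch, not the script's exact code):
```bash
# Fail fast with a clear message if a required variable is unset or empty
: "${GIT_WORKSPACE:?GIT_WORKSPACE must be set}"
: "${REGISTRY_ADDRESS:?REGISTRY_ADDRESS must be set}"
```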

## **Usage**
```bash
./tests.sh [OPTIONS]
```

### **Options**
- `--kind`
Targets local `kind` cluster.
- `--rhoai`
Targets an RHOAI (Red Hat OpenShift AI) cluster.
- `--clean-infra`
Cleans existing resources before running tests.
- `--k8s-api-server-host <HOST>`
Specifies the Kubernetes API server host.
- `--dspa-namespace <NAMESPACE>`
Custom namespace for DSPA deployment.
- `--dspa-path <PATH>`
Path to DSPA resource YAML.
- `--endpoint-type <TYPE>`
Specifies endpoint type (`service` or `route`).

### **Example**
To deploy and test DSPA on a local `kind` cluster:
```bash
./tests.sh --kind --clean-infra --k8s-api-server-host "https://localhost:6443"
```

To deploy DSPA on RHOAI:
```bash
./tests.sh --rhoai --dspa-namespace "custom-namespace"
```
58 changes: 58 additions & 0 deletions .github/scripts/tests/collect_logs.sh
@@ -0,0 +1,58 @@
#!/usr/bin/env bash

set -e

DSPA_NS=""
DSPO_NS=""

while [[ "$#" -gt 0 ]]; do
case $1 in
--dspa-ns) DSPA_NS="$2"; shift ;;
--dspo-ns) DSPO_NS="$2"; shift ;;
*) echo "Unknown parameter passed: $1"; exit 1 ;;
esac
shift
done

if [[ -z "$DSPA_NS" || -z "$DSPO_NS" ]]; then
echo "Both --dspa-ns and --dspo-ns parameters are required."
exit 1
fi

function check_namespace {
if ! kubectl get namespace "$1" &>/dev/null; then
echo "Namespace '$1' does not exist."
exit 1
fi
}

function display_pod_info {
local NAMESPACE=$1
local POD_NAMES

# Use the function's namespace argument rather than the DSPA_NS global
POD_NAMES=$(kubectl -n "${NAMESPACE}" get pods -o custom-columns=":metadata.name")

if [[ -z "${POD_NAMES}" ]]; then
echo "No pods found in namespace '${NAMESPACE}'."
return
fi

for POD_NAME in ${POD_NAMES}; do
echo "===== Pod: ${POD_NAME} in ${NAMESPACE} ====="

echo "----- EVENTS -----"
kubectl describe pod "${POD_NAME}" -n "${NAMESPACE}" | grep -A 100 Events || echo "No events found for pod ${POD_NAME}."

echo "----- LOGS -----"
kubectl logs "${POD_NAME}" -n "${NAMESPACE}" || echo "No logs found for pod ${POD_NAME}."

echo "==========================="
echo ""
done
}

check_namespace "$DSPA_NS"
check_namespace "$DSPO_NS"

display_pod_info "$DSPA_NS"
display_pod_info "$DSPO_NS"
5 changes: 4 additions & 1 deletion .github/scripts/tests/tests.sh
@@ -31,7 +31,10 @@ ENDPOINT_TYPE="service"

get_dspo_image() {
if [ "$REGISTRY_ADDRESS" = "" ]; then
echo "REGISTRY_ADDRESS variable not defined." && exit 1
# stdout of this function is captured by `IMG=$(get_dspo_image)`, so a plain echo
# would be swallowed; set -x traces the error message to stderr instead
set -x
echo "REGISTRY_ADDRESS variable not defined."
exit 1
fi
local image="${REGISTRY_ADDRESS}/data-science-pipelines-operator"
echo $image
10 changes: 10 additions & 0 deletions .github/workflows/kind-integration.yml
@@ -11,6 +11,7 @@ on:
- config/**
- tests/**
- .github/resources/**
- .github/actions/**
- '.github/workflows/kind-integration.yml'
- '.github/scripts/tests/tests.sh'
- Makefile
@@ -47,6 +48,15 @@ jobs:
uses: ./.github/actions/kind

- name: Run test
id: test
working-directory: ${{ github.workspace }}/.github/scripts/tests
run: |
sh tests.sh --kind
continue-on-error: true

- name: Collect events and logs
if: steps.test.outcome != 'success'
working-directory: ${{ github.workspace }}/.github/scripts/tests
run: |
./collect_logs.sh --dspa-ns test-dspa --dspo-ns opendatahub
exit 1
4 changes: 2 additions & 2 deletions OWNERS
@@ -3,17 +3,17 @@ approvers:
- DharmitD
- dsp-developers
- gmfrasca
- gregsheremeta
- HumairAK
- rimolive
- mprahl
reviewers:
- DharmitD
- gmfrasca
- gregsheremeta
- hbelmiro
- HumairAK
- rimolive
- VaniHaripriya
- mprahl
emeritus_approvers:
- accorvin
- harshad16
18 changes: 1 addition & 17 deletions README.md
@@ -504,23 +504,7 @@ oc delete project ${ODH_NS}

## Run tests

Simply clone the directory and execute `make test`.

To run it without `make` you can run the following:

```bash
tmpFolder=$(mktemp -d)
go install sigs.k8s.io/controller-runtime/tools/setup-envtest@latest
export KUBEBUILDER_ASSETS=$(${GOPATH}/bin/setup-envtest use 1.25.0 --bin-dir ${tmpFolder}/bin -p path)
go test ./... -coverprofile cover.out
# once $KUBEBUILDER_ASSETS is set you can also run the full test suite successfully by running:
pre-commit run --all-files
```

You can find a more permanent location to install `setup-envtest` into on your local filesystem and export
`KUBEBUILDER_ASSETS` into your `.bashrc` or equivalent. By doing this you can always run `pre-commit run --all-files`
without having to repeat these steps.
See [`.github/scripts/tests/README.md`](https://github.com/opendatahub-io/data-science-pipelines-operator/blob/main/.github/scripts/tests/README.md)

## Metrics

22 changes: 11 additions & 11 deletions config/base/params.env
@@ -1,15 +1,15 @@
IMAGES_DSPO=quay.io/opendatahub/data-science-pipelines-operator@sha256:2860c613d08ea46c739b56fd359ddd3f18ff5a496e4f4f51c59734ef7ef9d406
IMAGES_APISERVER=quay.io/opendatahub/ds-pipelines-api-server@sha256:f6117d5c15fab2de2e58dd821842f7650cdca0fc1085f74794cfc11a2e40ed5b
IMAGES_PERSISTENCEAGENT=quay.io/opendatahub/ds-pipelines-persistenceagent@sha256:e958ede32f911c8926b2ec8727cd711491b6aa5743d2e7b1167ea6515356194f
IMAGES_SCHEDULEDWORKFLOW=quay.io/opendatahub/ds-pipelines-scheduledworkflow@sha256:0a6b6de06e1c9dfb2e2131b0c110e6b82d18977d7a13ea8c0427775956656b51
IMAGES_LAUNCHER=quay.io/opendatahub/ds-pipelines-launcher@sha256:c45fb67254756c79829bcacf70763d261cc625c1845600cd69a8dbb4cd9543a8
IMAGES_DRIVER=quay.io/opendatahub/ds-pipelines-driver@sha256:9025a29ba14f75dd9926635d73663d054f4d5df0c54fa621bfffbbbe5a6864be
IMAGES_ARGO_WORKFLOWCONTROLLER=quay.io/opendatahub/ds-pipelines-argo-workflowcontroller@sha256:4a2ccfc397ae6f3470df09eaace4d568d27378085466a38e68a2b56981c3e5f9
IMAGES_ARGO_EXEC=quay.io/opendatahub/ds-pipelines-argo-argoexec@sha256:b2b3bc54744d2780c32f1aa564361a1ae4a42532c6d16662e45ad1025acff1ea
IMAGES_DSPO=quay.io/opendatahub/data-science-pipelines-operator@sha256:2a0216a88f66391f6daaadfa8ea243bfac4e3f6c14f12c9d4f7213d6b0b43403
IMAGES_APISERVER=quay.io/opendatahub/ds-pipelines-api-server@sha256:8b7e651f5c99eadc693524e2e6a32d10f001aeef5fef31463d4f012f14ed5d87
IMAGES_PERSISTENCEAGENT=quay.io/opendatahub/ds-pipelines-persistenceagent@sha256:e7391acc7f4ff5de10fc7eabe10d0700750485a896cb25ee1bc4d01d3503a2da
IMAGES_SCHEDULEDWORKFLOW=quay.io/opendatahub/ds-pipelines-scheduledworkflow@sha256:44f97487a216288aa6aeb65004a7ee0c7dd4f42e697043b86af0382a673c7bd7
IMAGES_LAUNCHER=quay.io/opendatahub/ds-pipelines-launcher@sha256:ae2bbba79fb209610421f98f1e8cf93848e53abd7e8b5e3eb18df29620816b54
IMAGES_DRIVER=quay.io/opendatahub/ds-pipelines-driver@sha256:d2e999b9f6af96a0dd9bb7ae745d264e938c08ecf1495925538a1d03078f2662
IMAGES_ARGO_WORKFLOWCONTROLLER=quay.io/opendatahub/ds-pipelines-argo-workflowcontroller@sha256:995f06328569b558d63cf727c0674df71b1927f74ab60e966596ccb8c06e12f8
IMAGES_ARGO_EXEC=quay.io/opendatahub/ds-pipelines-argo-argoexec@sha256:da1b0d502ae97160185ec5debc2f0c8d54f70b01be4ea4a9339d7137cc3918a9
IMAGES_MLMDGRPC=quay.io/opendatahub/mlmd-grpc-server@sha256:9e905b2de2fb6801716a14ebd6e589cac82fef26741825d06717d695a37ff199
IMAGES_MLMDENVOY=registry.redhat.io/openshift-service-mesh/proxyv2-rhel8@sha256:02b834fd74da71ec37f6a5c0d10aac9a679d1a0f4e510c4f77723ef2367e858a
IMAGES_MARIADB=registry.redhat.io/rhel8/mariadb-103@sha256:3d30992e60774f887c4e7959c81b0c41b0d82d042250b3b56f05ab67fd4cdee1
IMAGES_OAUTHPROXY=registry.redhat.io/openshift4/ose-oauth-proxy@sha256:4f8d66597feeb32bb18699326029f9a71a5aca4a57679d636b876377c2e95695
IMAGES_MLMDENVOY=registry.redhat.io/openshift-service-mesh/proxyv2-rhel8@sha256:b30d60cd458133430d4c92bf84911e03cecd02f60e88a58d1c6c003543cf833a
IMAGES_MARIADB=registry.redhat.io/rhel8/mariadb-103@sha256:f0ee0d27bb784e289f7d88cc8ee0e085ca70e88a5d126562105542f259a1ac01
IMAGES_OAUTHPROXY=registry.redhat.io/openshift4/ose-oauth-proxy@sha256:8ce44de8c683f198bf24ba36cd17e89708153d11f5b42c0a27e77f8fdb233551
ZAP_LOG_LEVEL=info
MAX_CONCURRENT_RECONCILES=10
DSPO_HEALTHCHECK_DATABASE_CONNECTIONTIMEOUT=15s
4 changes: 4 additions & 0 deletions config/component_metadata.yaml
@@ -0,0 +1,4 @@
releases:
- name: Kubeflow Pipelines
version: 2.2.0
repoUrl: https://github.com/kubeflow/pipelines
17 changes: 16 additions & 1 deletion controllers/database.go
@@ -47,6 +47,22 @@ var mariadbTemplates = []string{
"mariadb/default/tls-config.yaml.tmpl",
}

// tLSClientConfig creates and returns a TLS client configuration that includes
// a set of custom CA certificates for secure communication. It reads CA
// certificates from the environment variable `SSL_CERT_FILE` if it is set,
// and appends any additional certificates passed as input.
//
// Parameters:
//
// pems [][]byte: PEM-encoded certificates to be appended to the
// root certificate pool.
//
// Returns:
//
// *cryptoTls.Config: A TLS configuration with the certificates set to the updated
// certificate pool.
// error: An error if there is a failure in parsing any of the provided PEM
// certificates, or nil if successful.
func tLSClientConfig(pems [][]byte) (*cryptoTls.Config, error) {
rootCertPool := x509.NewCertPool()

@@ -120,7 +136,6 @@ var ConnectAndQueryDatabase = func(
// don't set anything
case "true":
var err error
// if pemCerts is empty, that is OK, we still add OS certs to the tls config
tlsConfig, err = tLSClientConfig(pemCerts)
if err != nil {
log.Info(fmt.Sprintf("Encountered error when processing custom ca bundle, Error: %v", err))