Demo application for MySQL, PostgreSQL, and MongoDB databases, with monitoring and deployment in Kubernetes.
This demo application is designed to showcase the usage of MySQL, PostgreSQL, and MongoDB databases, along with database monitoring and deployment in Kubernetes environments. It provides an opportunity to explore how these databases can be tested and monitored, using Go applications and Percona Monitoring and Management (PMM) tools.
The application consists of three main components:
- Control Panel: A web-based application used for managing database load and configurations.
- Dataset Loader: A Go application that fetches data from GitHub via the API and loads it into the databases for testing and load simulation (see the sketch after this list).
- Load Generator: A Go application that generates SQL and NoSQL queries based on the Control Panel settings.
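For illustration, a GitHub API request of the kind the Dataset Loader makes could look like the sketch below. This is a minimal, hypothetical example (the organization name and endpoint are chosen arbitrarily; the real loader code lives in this repository), and it assumes an optional GITHUB_TOKEN environment variable for authenticated requests:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Fetch the public repositories of an (arbitrarily chosen) organization.
	req, err := http.NewRequest("GET", "https://api.github.com/orgs/percona/repos", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept", "application/vnd.github+json")
	// Authenticated requests get a much higher rate limit.
	if token := os.Getenv("GITHUB_TOKEN"); token != "" {
		req.Header.Set("Authorization", "Bearer "+token)
	}

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status: %s, received %d bytes of JSON\n", resp.Status, len(body))
}
```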
The application connects to and generates load on MySQL, PostgreSQL, and MongoDB databases in the cloud or Kubernetes. You can start the databases with:
- Docker Compose: Configuration is available in the repository.
- Percona Everest or Percona Operators in Kubernetes: If the databases are not externally accessible, run the application in the same cluster.
- Custom Methods: Connection parameters can be set in environment variables or through the Settings tab of the Control Panel.
A typical demo scenario:
- Start the Control Panel in a browser (e.g., on an iPad).
- Open PMM in a browser (e.g., on a big screen or laptop).
- Install Percona Everest, open it in a browser, and create MySQL, PostgreSQL, and MongoDB databases.
- Connect the databases in the Control Panel Settings.
- Adjust the load in the Control Panel and watch the changes in PMM.
How the components work:
- Control Panel: A web application that stores settings in the Valkey database when adjustments are made.
- Dataset Loader: A continuously running script that checks the settings in Valkey every 5 seconds, connects to the databases, and loads the data.
- Load Generator: Another continuously running script that works on one or all databases. Every 5 seconds it checks the load settings in Valkey and generates SQL and NoSQL queries accordingly. The queries are defined in internal/load/load.go (see the sketch after this list).
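As a rough illustration of this polling loop, the sketch below reads a load setting from Valkey every 5 seconds and scales a pool of worker goroutines up or down. It is only a sketch: it assumes the Redis-compatible go-redis client (Valkey speaks the Redis protocol), and the key name "connections" and the worker body are invented for the example; the real query logic is in internal/load/load.go.

```go
package main

import (
	"context"
	"log"
	"strconv"
	"time"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	// Valkey is Redis-compatible, so a Redis client works as-is.
	settings := redis.NewClient(&redis.Options{Addr: "valkey:6379"})

	stop := make(chan struct{})
	workers := 0

	for range time.Tick(5 * time.Second) {
		// "connections" is an invented key name for this sketch.
		val, err := settings.Get(ctx, "connections").Result()
		if err != nil {
			log.Printf("read settings: %v", err)
			continue
		}
		want, _ := strconv.Atoi(val)

		// Scale the pool of query workers up or down to match the setting.
		for workers < want {
			workers++
			go func() {
				for {
					select {
					case <-stop:
						return
					default:
						// A real worker would issue one SQL or NoSQL query here.
						time.Sleep(100 * time.Millisecond)
					}
				}
			}()
		}
		for workers > want {
			workers--
			stop <- struct{}{}
		}
	}
}
```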
To run the demo locally with Docker Compose:
- Clone the project repository:
  git clone https://github.com/dbazhenov/github-stat.git
  Then open the repository folder:
  cd github-stat/
- Run the environment. There are two options:
  - Demo application only. Suitable for connecting to your own databases, e.g. created with Percona Everest, Percona Operators, or other databases in the cloud or locally.
    docker compose up -d
  - Demo application with test databases (MySQL, MongoDB, Postgres) and PMM.
    docker compose -f docker-compose-full.yaml up -d
Note: We recommend looking through the docker-compose.yaml files so you know which containers are running and with what settings. You can always change the settings.
Note: The PMM server will be available at localhost:8080, access admin/admin. At the first startup it will offer to change the password; skip it or set the same password (admin).
- Launch the Control Panel at localhost:3000 in your browser.
- Open the Settings tab and create connections to the databases you want to load.
  If you run the databases using docker-compose-full.yaml, you can use the following parameters to connect to them (the sketch after these steps shows how such connection strings are used by Go drivers):
  - MySQL: root:password@tcp(mysql:3306)/dataset
  - Postgres: user=postgres password='password' dbname=dataset host=postgres port=5432 sslmode=disable
  - YugabyteDB: user=yugabyte password='password' dbname=dataset host=yugabytedb port=5433 sslmode=disable (the YugabyteDB UI is on port 15433)
  - MongoDB: mongodb://databaseAdmin:password@mongodb:27017/
  If you connect to your own databases, you probably know the connection settings; if not, write to us.
- In the Settings tab, load the test dataset for each database by clicking the Create Schema and Import Dataset buttons. A small dataset from a CSV file (26 repos and 4600 PRs) is imported by default.
  Note: To import the large complete dataset, add a GitHub API token to the GITHUB_TOKEN environment variable and set DATASET_LOAD_TYPE=github in the docker-compose.yaml file for the demo_app_dataset service. Run docker compose up -d after changing environment variables.
Turn on the
Enable Load
setting option and click Update connection to make the database appear on theLoad Generator Control Panel
tab. -
- Open PMM at localhost:8080 (admin/admin) to see the connected databases and the load. We recommend opening the Databases Overview dashboard in the Experimental section.
- You can play with the load by enabling different types of SQL and NoSQL queries with the switches, as well as changing the number of concurrent connections with a slider.
  Note: You can see the running queries in the QAN section of PMM, and you can see the source code for each database type in the internal/load files.
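For reference, this is roughly how the example connection strings above map onto Go database drivers. A minimal sketch, assuming the go-sql-driver/mysql and lib/pq drivers and the official MongoDB Go driver; the demo's actual connection code is in this repository:

```go
package main

import (
	"context"
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql" // MySQL DSN: user:pass@tcp(host:port)/db
	_ "github.com/lib/pq"              // Postgres DSN: key=value pairs
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	mysqlDB, err := sql.Open("mysql", "root:password@tcp(mysql:3306)/dataset")
	if err != nil {
		log.Fatal(err)
	}
	defer mysqlDB.Close()

	pgDB, err := sql.Open("postgres",
		"user=postgres password='password' dbname=dataset host=postgres port=5432 sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer pgDB.Close()

	mongoClient, err := mongo.Connect(context.Background(),
		options.Client().ApplyURI("mongodb://databaseAdmin:password@mongodb:27017/"))
	if err != nil {
		log.Fatal(err)
	}
	defer mongoClient.Disconnect(context.Background())

	// sql.Open does not dial immediately; Ping verifies each connection.
	if err := mysqlDB.Ping(); err != nil {
		log.Printf("mysql: %v", err)
	}
	if err := pgDB.Ping(); err != nil {
		log.Printf("postgres: %v", err)
	}
	if err := mongoClient.Ping(context.Background(), nil); err != nil {
		log.Printf("mongodb: %v", err)
	}
	log.Println("connection check finished")
}
```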
For local development, run the scripts individually:
- Run the environment:
  docker compose -f docker-compose-dev.yaml up -d
- Run the Control Panel script:
  go run cmd/web/main.go
  Launch the control panel at localhost:3000.
- Run the Dataset Loader script:
  go run cmd/dataset/main.go
- Run the Load Generator script:
  go run cmd/load/main.go
Start PMM in your browser at localhost:8080 (admin/admin).
To run the demo in Kubernetes:
- Create a Kubernetes cluster (e.g., Minikube or GKE). For GKE:
  gcloud container clusters create demo-app --project percona-product --zone us-central1-a --cluster-version 1.30 --machine-type n1-standard-16 --num-nodes=1
- Install Percona Everest or Percona Operators in the Kubernetes cluster to create databases (see the Percona Everest documentation). Create databases if you don't have any.
- Install PMM using Helm:
  helm repo add percona https://percona.github.io/percona-helm-charts/
  helm install pmm -n demo \
    --set service.type="LoadBalancer" \
    --set pmmResources.limits.memory="4Gi" \
    --set pmmResources.limits.cpu="2" \
    percona/pmm
- Get the PMM administrator password:
  kubectl get secret pmm-secret -n demo -o jsonpath='{.data.PMM_ADMIN_PASSWORD}' | base64 --decode
- Get a public IP for PMM:
  kubectl get svc -n demo monitoring-service -o jsonpath="{.status.loadBalancer.ingress[0].ip}"
- Run the Demo application using Helm or manually, following the instructions below.
- Set the Helm parameters in the ./k8s/helm/values.yaml file (the parameters are described below).
- Launch the application:
  helm install demo-app ./k8s/helm -n demo
- Get the public IP of the demo app and launch the control panel in your browser. Run this command to get the public IP:
  kubectl -n demo get svc
- Open the Settings tab on the control panel and set the parameters for connecting to the databases you created with Percona Everest or Percona Operators.
- You may need to restart the dataset pod to speed up loading the dataset into the databases:
  kubectl -n demo delete pod [DATASET_POD]
- You can change the allocated resources or the number of replicas by editing the values.yaml file and running helm upgrade demo-app ./k8s/helm -n demo.
Demo App HELM parameters (./k8s/helm/values.yaml):
- githubToken - required to properly load the dataset from the GitHub API. You can create a personal token at https://github.com/settings/tokens.
- separateLoads - if true, a separate load pod is started for each database.
- useResourceLimits - if true, resource limits are set to cap resource consumption.
- controlPanelService.type - LoadBalancer for a public address of the dashboard, NodePort for local development.
To deploy the demo application manually (without Helm):
- Create the necessary Secrets and ConfigMap:
  kubectl apply -f k8s/manual/config.yaml -n demo
  Check the k8s/config.yaml file. Be sure to set GITHUB_TOKEN, which is required to properly load the dataset from the GitHub API. You can create a personal token at https://github.com/settings/tokens.
- Run the Valkey database:
  kubectl apply -f k8s/manual/valkey.yaml -n demo
- Deploy the Control Panel:
  kubectl apply -f k8s/manual/web-deployment.yaml -n demo
- Run kubectl -n demo get svc to get the public IP, then launch the control panel in your browser.
- In the control panel, open the Settings tab, set the connection strings for the databases created in Percona Everest, and click the Connect button.
  The first time you connect to MySQL and Postgres, you will need to create the schema and tables; you will see the buttons for this on the Settings tab.
- Deploy the Dataset Loader:
  kubectl apply -f k8s/manual/dataset-deployment.yaml -n demo
- Deploy the Load Generator:
  kubectl apply -f k8s/manual/load-deployment.yaml -n demo
- For separate database load generators, apply these commands:
  - MySQL: kubectl apply -f k8s/manual/load-mysql-deployment.yaml -n demo
  - Postgres: kubectl apply -f k8s/manual/load-postgres-deployment.yaml -n demo
  - MongoDB: kubectl apply -f k8s/manual/load-mongodb-deployment.yaml -n demo
  You can set an environment variable to determine which database the script will load (see the sketch after this section).
- Control the load in the control panel. Change the queries using the switches. Track the results on the PMM dashboards. Scale or change database parameters with Percona Everest.
Have fun experimenting.
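As a rough illustration of such a per-database switch, the sketch below reads an environment variable and picks the target database. The variable name LOAD_TARGET and the load functions are invented for this example; check the deployment manifests in k8s/manual/ for the variables the demo actually uses:

```go
package main

import (
	"log"
	"os"
)

// Placeholders for the per-database load loops; the names are invented for this sketch.
func loadMySQL()    { log.Println("generating MySQL load...") }
func loadPostgres() { log.Println("generating Postgres load...") }
func loadMongoDB()  { log.Println("generating MongoDB load...") }

func main() {
	// LOAD_TARGET is a hypothetical variable name used only in this sketch.
	switch os.Getenv("LOAD_TARGET") {
	case "mysql":
		loadMySQL()
	case "postgres":
		loadPostgres()
	case "mongodb":
		loadMongoDB()
	default:
		// No target set: load all databases.
		loadMySQL()
		loadPostgres()
		loadMongoDB()
	}
}
```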
Useful commands for debugging:
- Get pods: kubectl get pods -n demo
- View logs: kubectl logs [pod_name] -n demo
- Describe a pod: kubectl describe pod [pod_name] -n demo
To contribute:
- Clone the repository and run it locally using Docker Compose.
- Make changes to the code and run the scripts to test them.
- The repository contains a workflow to build and publish containers to Docker Hub. You can publish your own versions of the containers and run them in Kubernetes.
- Send your changes to the project as a Pull Request.
We welcome contributions:
- Suggest improvements and create Issues.
- Improve the code or review changes.