## Input for an Experiment Run
- Project name (ID)
- Server hostname + database name
- DB snapshot version/timestamp
- Postgres version
- EC2 instance type
- Change ("no change", or a description of what changed, e.g. a `postgresql.conf` modification or a `create index ...` statement)
- Queries (two options: (a) a pgreplay binary file, or (b) custom queries)
The Change and the Queries are specified when the Experiment is created.
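For illustration, here is a minimal sketch of what such an input specification might look like as a Python dictionary. The field names and values are hypothetical and do not reflect an actual API:

```python
# Hypothetical Experiment Run input specification.
# Field names are illustrative only, not part of the real interface.
experiment_run_input = {
    "project": "my-project",                    # project name (ID)
    "server": "db1.example.com",                # server hostname
    "database": "mydb",                         # database name
    "db_snapshot": "2018-05-01T00:00:00Z",      # DB snapshot version/timestamp
    "postgres_version": "10.4",
    "ec2_instance_type": "i3.large",
    "change": "no change",                      # or e.g. a postgresql.conf diff or "create index ..."
    "queries": {"type": "pgreplay", "file": "replay.bin"},  # or custom queries
}
```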
Optimization idea: based on all the input options, we can compute an "experiment run hash" that allows skipping repeated "no change" runs.
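A minimal sketch of how such a hash could be computed, assuming the input options are collected into a dictionary like the one above. The serialization and hash algorithm are assumptions for illustration, not a fixed part of the design:

```python
import hashlib
import json

def experiment_run_hash(run_input: dict) -> str:
    """Compute a deterministic hash over all input options of an Experiment Run.

    Two runs with identical inputs (including Change = "no change") produce the
    same hash, so a repeated "no change" run can be skipped and earlier results
    reused.
    """
    # Canonical JSON serialization: sorted keys make the hash independent of key order.
    canonical = json.dumps(run_input, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()
```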
## Output of an Experiment Run
- Postgres log (goes to S3)
- JSON produced by pgBadger (goes to S3)
- Main values from the pgBadger JSON (posted to the Postgres.ai metadata DB via the API; see the sketch after this list)
- [todo] `pg_stat_statements` snapshots
- [todo] snapshots of other `pg_stat_*` views
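A rough sketch of the post-processing step for the third item, assuming hypothetical field names in the pgBadger JSON report and a hypothetical metadata-DB API endpoint and payload format (both are illustrative, not the real structure):

```python
import json
import urllib.request

def post_pgbadger_summary(pgbadger_json_path: str, api_url: str, run_id: str) -> None:
    """Extract a few headline values from a pgBadger JSON report and post them
    to the metadata DB API.

    The field names ("overall_stat", "queries_number", ...) and the payload
    format are assumptions for illustration only.
    """
    with open(pgbadger_json_path) as f:
        report = json.load(f)

    overall = report.get("overall_stat", {})
    summary = {
        "run_id": run_id,
        "queries_number": overall.get("queries_number"),
        "queries_duration": overall.get("queries_duration"),
        "errors_number": overall.get("errors_number"),
    }

    req = urllib.request.Request(
        api_url,
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        resp.read()  # urlopen raises on HTTP error status codes
```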
## Experiment
Each Experiment consists of at least two Experiment Runs. One of these Runs must have Change = "no change". All Experiment Runs in an Experiment must use the same EC2 instance type.
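For illustration, a minimal sketch of how these constraints could be checked, assuming each run is represented by a dictionary with hypothetical "change" and "ec2_instance_type" fields:

```python
def validate_experiment(runs: list) -> None:
    """Check the constraints on an Experiment's set of Experiment Runs.

    The "change" and "ec2_instance_type" field names are assumptions
    used only for this illustration.
    """
    if len(runs) < 2:
        raise ValueError("An Experiment needs at least two Experiment Runs.")
    if not any(run["change"] == "no change" for run in runs):
        raise ValueError('One Experiment Run must have Change = "no change".')
    if len({run["ec2_instance_type"] for run in runs}) != 1:
        raise ValueError("All Experiment Runs must use the same EC2 instance type.")
```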