How to run the benchmark?

A note on terminals: if you run the benchmark on a remote server, use tmux or screen so that you do not lose the terminal session if you disconnect from the server.

Preparation

Build the project and package the fat JAR files with:

./gradlew clean shadowJar

Initializing the scripts

Initialize the scripts with:

./gradlew initScripts

This initializes the script files in trainbenchmark-scripts/src based on the templates stored in trainbenchmark-scripts/src-templates. Note that these files are listed in .gitignore, so they will not be committed to your version control system. However, the scripts are saved along with the benchmark results to help reproducibility.

Generating the models

Edit the trainbenchmark-scripts/src/GeneratorScript.groovy file. The most important settings are the following (see the sketch after this list).

  • ec: the execution configuration defining the minimum and the maximum heap memory.
  • minSize and maxSize: the range of model sizes to benchmark. Models are generated between these two extremes, with each model twice as large as the previous one.
  • scenarios: the scenarios for which models will be generated.
  • formats: the formats in which models will be generated.
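
A hypothetical sketch of what these settings might look like as plain Groovy assignments; the actual syntax, types, and allowed values are defined by the templates in trainbenchmark-scripts/src-templates, so treat the names and values below as placeholders.

  // illustrative values only; check the template for the exact syntax
  // ec: the execution configuration (minimum/maximum heap), constructed as shown in the template
  minSize = 1                                 // smallest model size
  maxSize = 1024                              // largest model size; sizes double: 1, 2, 4, ..., 1024
  scenarios = ["Batch", "Inject", "Repair"]   // example scenario names (assumed)
  formats = ["EMF", "Neo4j", "SQL"]           // example format names (assumed)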

Once you set the GeneratorScript.groovy file to match your requirements, run:

./gradlew generate

💡 The generate task checks whether the JAR files produced by shadowJar exist and are up-to-date. If they do not, it re-runs the shadowJar task.

Running the benchmark

Edit the trainbenchmark-scripts/src/BenchmarkScript.groovy file. The most important settings are the following (see the sketch after this list).

  • ec: the execution configuration defining the minimum and the maximum heap memory.
  • minSize and maxSize: the range of model sizes to benchmark (see also the generator settings).
  • runs: the number of runs for the benchmark.
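
A hypothetical sketch in the same spirit as the generator settings above; again, the template in trainbenchmark-scripts/src-templates defines the exact syntax.

  // illustrative values only
  // ec: the execution configuration (minimum/maximum heap), as in the generator script
  minSize = 1      // should match sizes for which models were generated
  maxSize = 1024
  runs = 5         // the benchmark is repeated this many times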

Once you set the BenchmarkScript.groovy file to match your requirements, run:

./gradlew benchmark

⚠️ Note that cancelling the benchmark by killing the Gradle process with Ctrl + C only stops the Gradle process running the benchmark. The JVM under benchmark will continue to run until completion (and will never time out, as the timeout is enforced by the benchmark script that was killed). Hence, in these cases, you should check for running java processes manually.
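For example, the jps tool shipped with the JDK lists running JVM processes (jps -l), and any leftover benchmark JVMs can then be terminated with kill <pid>.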

💡 Benchmark results and configurations are placed in the results directory, in separate subdirectories numbered from 0001.

Plotting the results

Install the required R packages (scripts/install-R-packages.sh), then run:

./gradlew plot

Showing the results over HTTP

You can serve the results as a web page by issuing the following command.

./gradlew page

Cleanup

To remove all previous results, add the cleanResults task before the other tasks.
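
For example, to discard all previous results before rerunning the whole workflow:

./gradlew cleanResults generate benchmark plot page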

Summary

Once you have your script files ready, you can run the whole workflow with a single command:

./gradlew generate benchmark plot page