
Compare multi-benchmarks. #2

Open
BrianHicks opened this issue May 11, 2018 · 1 comment

Comments

@BrianHicks
Collaborator

Issue by BrianHicks
Monday Feb 13, 2017 at 16:11 GMT
Originally opened as BrianHicks/elm-benchmark#6


Right now compare only compares two single benchmarks (that is, Benchmarks created with benchmark through benchmark8). We need to be able to compare any valid Benchmark. I suggest the following:

  • for two Groups (that is, Benchmarks created with describe) we will zip the elements using compare, and modify the inner names. If one list is longer than the other, we will only zip the common elements and drop the rest. This will return a Group named "{group a} vs {group b}" whose elements are Compares named "{element a name} / {element b name}" with children named "{group a}" and "{group b}". This logic recurses if any child elements are groups themselves.
  • for two Compares (that is, Benchmarks created with compare) we will combine as if they're groups with exactly two elements each.
  • for mixed elements, we will not combine or change structure; the runners' reporting will treat the Compares as if they were Groups with exactly two elements.
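A partial sketch of the zipping rule above. The real Benchmark type in elm-benchmark is opaque, so the constructors, the Operation stub, and the combine function here are all illustrative stand-ins (and the renaming of Compare children described above is omitted for brevity):

```elm
type Operation
    = Operation

type Benchmark
    = Single String Operation
    | Group String (List Benchmark)
    | Compare String Benchmark Benchmark

name : Benchmark -> String
name benchmark =
    case benchmark of
        Single n _ ->
            n

        Group n _ ->
            n

        Compare n _ _ ->
            n

-- Zip two groups element-wise (List.map2 drops the unmatched tail of the
-- longer list), recursing into nested groups; anything else becomes a
-- pairwise Compare.
combine : Benchmark -> Benchmark -> Benchmark
combine a b =
    case ( a, b ) of
        ( Group nameA childrenA, Group nameB childrenB ) ->
            Group (nameA ++ " vs " ++ nameB)
                (List.map2 combine childrenA childrenB)

        _ ->
            Compare (name a ++ " / " ++ name b) a b
```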
@BrianHicks
Collaborator Author

Comment by zwilias
Thursday Mar 09, 2017 at 21:41 GMT


I think it might make sense to consider the use case of comparing a single baseline with multiple implementation variants.

compare : String -> Benchmark -> List Benchmark -> Benchmark, for example, might make sense.
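With that proposed signature, a suite comparing one baseline against several candidates might read as follows. This is hypothetical usage: the Avl and Hamt modules are made-up stand-ins for alternative dict implementations, and benchmark1 is elm-benchmark's one-argument benchmark constructor:

```elm
-- Compare core's Dict against two hypothetical alternatives; the first
-- argument to the proposed compare is the baseline, the list holds the
-- candidates.
insertSuite : Benchmark
insertSuite =
    compare "insert"
        (benchmark1 "core Dict" (Dict.insert 1 1) Dict.empty)
        [ benchmark1 "AVL Dict" (Avl.insert 1 1) Avl.empty
        , benchmark1 "HAMT Dict" (Hamt.insert 1 1) Hamt.empty
        ]
```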

In my specific use case, I'm generally not interested in having only a single data point per function I'm testing; rather, I want to compare performance over a series of data points, each with possible variations. (This ties in with #3.)

Something along the lines of:

compareSeries : String -> (a -> Benchmark) -> List (a -> Benchmark) -> List a -> Benchmark

A use case here would be testing insertion in various dict implementations, with core's dict as the baseline, creating trees of size x.
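That dict-insertion use case might look like this under the proposed compareSeries. Everything here is a sketch: dictOfSize, avlOfSize, and the Avl module are assumed helpers, not part of elm-benchmark:

```elm
-- Tree sizes to benchmark over, passed as the series of inputs.
sizes : List Int
sizes =
    [ 10, 100, 1000 ]

-- One benchmark per (implementation, size) pair, with core's Dict as the
-- baseline and each variant built from the same input value.
insertSeries : Benchmark
insertSeries =
    compareSeries "insert into a dict of size n"
        (\n -> benchmark1 "core Dict" (Dict.insert n n) (dictOfSize n))
        [ \n -> benchmark1 "AVL Dict" (Avl.insert n n) (avlOfSize n) ]
        sizes
```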


What might make this tricky is the expected behaviour of such a construction, especially if the Benchmarks are Groups or even Compares themselves. Basically, the logic you've described would apply, with the addition of allowing comparison of a single baseline to multiple implementations.

The use case here is an extension of the one mentioned above: each benchmark could be a group where insertions on trees with different key types are tested.

Essentially, I'd rather express that I'm comparing a series of inputs than that I'm creating a series of comparisons.
