Export and Import metrics #1
Comments
Comment by chadbrewbaker

Yes, this is pretty much it: BOS's Criterion split into two pieces. One piece benchmarks Elm code and emits a common benchmark CSV format; a separate chunk of code takes benchmark data from some source and produces Criterion-style graphs.

BenchmarkGenerator :: [config] -> InternalBenchmark
BenchmarkExport :: InternalBenchmark -> CSV
BenchmarkImport :: CSV -> InternalBenchmark
ReportKDE :: InternalBenchmark -> KDEReport -- The default
ReportHisto :: InternalBenchmark -> HistoReport

See http://www.serpentine.com/criterion/tutorial.html

For this to really take off I think there has to be out-of-the-box support for benchmarking across a WASM FFI: Elm -> JS -> LLVM IR -> WASM. That way you can also use Elm to benchmark any language that compiles to LLVM IR: C/C++, Haskell, and so on. As for WebAssembly, I'm not sure what the plans are for Elm. IMHO Elm should support a second type of JS port called PurePort: if you have WebAssembly or JS that you can prove to be pure, then Elm should have a simplified port structure for calling that code and assume no side effects.
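To make the shape of that pipeline concrete, here is a minimal Elm sketch of the shared format and the export step, in Elm 0.19 syntax. `Sample`, `InternalBenchmark`, and `toCsv` are hypothetical names invented for illustration; none of this exists in elm-benchmark today.

```elm
-- Hypothetical shared benchmark format; the fields are illustrative only.
type alias Sample =
    { runs : Int
    , totalMs : Float
    }


type alias InternalBenchmark =
    { name : String
    , samples : List Sample
    }


-- Roughly "BenchmarkExport :: InternalBenchmark -> CSV" from above.
toCsv : InternalBenchmark -> String
toCsv benchmark =
    let
        row sample =
            String.join ","
                [ benchmark.name
                , String.fromInt sample.runs
                , String.fromFloat sample.totalMs
                ]
    in
    String.join "\n" ("name,runs,totalMs" :: List.map row benchmark.samples)
```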
Comment by zwilias

I think this is definitely the way to go, especially in terms of separation of concerns. Let the runner give you only very basic info, and dump a JSON blob when benchmarking is done. Allow/encourage projects to then take such blobs and give pretty, well-formatted information based on them. I imagine such reporters could exist as SPAs online, too, no installation required ;)
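As an example of what that JSON blob might look like from the Elm side, here is an encoder sketch built on elm/json, reusing the hypothetical `InternalBenchmark` and `Sample` records from the sketch above; the field names are assumptions, not an agreed format.

```elm
import Json.Encode as Encode


-- Dump a finished run as a JSON blob for external reporters
-- (CLI tools, hosted SPAs). The shape is hypothetical.
encodeBenchmark : InternalBenchmark -> Encode.Value
encodeBenchmark benchmark =
    Encode.object
        [ ( "name", Encode.string benchmark.name )
        , ( "samples", Encode.list encodeSample benchmark.samples )
        ]


encodeSample : Sample -> Encode.Value
encodeSample sample =
    Encode.object
        [ ( "runs", Encode.int sample.runs )
        , ( "totalMs", Encode.float sample.totalMs )
        ]
```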
Comment by zwilias

Let's go all hypothetical for a moment and take it a bit further. Imagine if

Splitting the primitives into their own library and keeping them as primitive and limited as possible allows for a richer ecosystem of tools leveraging those.

Edit: Then again, there's nothing stopping anyone from doing so right now. Say you need an 8-way comparison with one of them marked as a baseline; you can just do that. The current structure requires nested

Edit: I kind of had a major brainfart. See edits.
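Purely to illustrate the "8-way comparison with a baseline" idea as flat data rather than nested structure, here is a hypothetical Elm shape; none of these names exist in elm-benchmark, and a real runner would still have to consume them.

```elm
-- Hypothetical flat comparison: one baseline plus any number of candidates.
type alias Candidate a =
    { label : String
    , thunk : () -> a
    }


type alias Comparison a =
    { baseline : Candidate a
    , candidates : List (Candidate a)
    }


-- Two of the eight candidates shown, with List.foldl as the baseline.
sumComparison : Comparison Int
sumComparison =
    { baseline = { label = "foldl", thunk = \_ -> List.foldl (+) 0 (List.range 1 100) }
    , candidates =
        [ { label = "foldr", thunk = \_ -> List.foldr (+) 0 (List.range 1 100) }
        , { label = "sum", thunk = \_ -> List.sum (List.range 1 100) }
        ]
    }
```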
Issue by BrianHicks
Friday Feb 10, 2017 at 16:18 GMT
Originally opened as BrianHicks/elm-benchmark#5
We should be able to export benchmarks for safekeeping and comparison against new data. This should be independent from running benchmarks (and possibly related to #4). My ideal workflow:
Also:
profit

import benchmark data sets a and b for analysis. This may mean that the analysis is separate from the benchmarks. That wouldn't be the end of the world, and it could pretty easily be hosted somewhere static and public.
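To show how a separate, statically hosted analysis app could consume two exported data sets (a and b), here is a decoder sketch using elm/json; it assumes the same hypothetical JSON shape and record types as the encoder sketch earlier in the thread.

```elm
import Json.Decode as Decode


-- Decode one exported run; mirrors the hypothetical encoder above.
sampleDecoder : Decode.Decoder Sample
sampleDecoder =
    Decode.map2 Sample
        (Decode.field "runs" Decode.int)
        (Decode.field "totalMs" Decode.float)


benchmarkDecoder : Decode.Decoder InternalBenchmark
benchmarkDecoder =
    Decode.map2 InternalBenchmark
        (Decode.field "name" Decode.string)
        (Decode.field "samples" (Decode.list sampleDecoder))


-- Load data sets a and b, then hand both to whatever analysis step runs next.
decodeRuns : String -> String -> Result Decode.Error ( InternalBenchmark, InternalBenchmark )
decodeRuns rawA rawB =
    Result.map2 Tuple.pair
        (Decode.decodeString benchmarkDecoder rawA)
        (Decode.decodeString benchmarkDecoder rawB)
```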