When using PAT, we saw a discrepancy between the reported averages and the average information that is actually useful.
Here's a processed presentation of a couple of interesting test runs that highlight this situation: http://perf-test-graphs.cfapps.io/13.html
This graph displays the results of a test run that included a large number of very brief errors (~0.2 seconds each). It also has three failures from timeouts, which are more than twice as long as most of the results.
I don't want either sort of error mixed in with my successful iterations when looking at the results.
There's a statistical summary at the bottom of the chart. The difference between the mean of all iterations and the mean of successful iterations only is significant: 55.8 seconds vs. 108.98 seconds.
The fact that PAT only displays the former during a run makes it difficult to determine what's going on. Ideally, PAT would report a successes-only (and perhaps a failures-only) number in addition to the combined number.
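To make the request concrete, here is a minimal Go sketch of the kind of separated reporting described above. The row shape, field names, and numbers are illustrative assumptions only, not PAT's actual data model:

```go
package main

import "fmt"

// iteration is a hypothetical row shape; the field names are assumptions,
// not PAT's actual output format.
type iteration struct {
	durationSeconds float64
	failed          bool
}

func mean(values []float64) float64 {
	if len(values) == 0 {
		return 0
	}
	sum := 0.0
	for _, v := range values {
		sum += v
	}
	return sum / float64(len(values))
}

func main() {
	// Illustrative data only: several very fast errors, some successful
	// iterations, and one slow timeout.
	iterations := []iteration{
		{0.2, true}, {0.2, true}, {0.2, true},
		{110, false}, {105, false}, {112, false},
		{260, true}, // timeout
	}

	var all, successes, failures []float64
	for _, it := range iterations {
		all = append(all, it.durationSeconds)
		if it.failed {
			failures = append(failures, it.durationSeconds)
		} else {
			successes = append(successes, it.durationSeconds)
		}
	}

	// The combined mean is dragged down by the ~0.2s errors and up by the
	// timeout; the successes-only mean is the more useful number.
	fmt.Printf("all:       %.2f s\n", mean(all))
	fmt.Printf("successes: %.2f s\n", mean(successes))
	fmt.Printf("failures:  %.2f s\n", mean(failures))
}
```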
In a discussion with the PAT developers, it came up that "Type" may be the column that is supposed to indicate whether the event that generated a row was an error. If that's the case, it doesn't seem to do so now: I have examples of data where "total errors" increases from 0 to 1 but "Type" remains 0. In fact, "Type" always seems to be 0.
So I'm not sure whether the "Type" column is even meant to provide this feature, but if it is, it doesn't work.
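For reference, here is a minimal sketch of the check described above. The column names and row layout are assumptions for illustration, not PAT's actual output format:

```go
package main

import "fmt"

// row mirrors the two columns under discussion; the names and shape are
// assumed for illustration only.
type row struct {
	totalErrors int
	rowType     int // the "Type" column
}

func main() {
	// Hypothetical run data: "total errors" rises from 0 to 1,
	// but "Type" never changes from 0.
	rows := []row{
		{0, 0},
		{0, 0},
		{1, 0},
		{1, 0},
	}

	prevErrors := 0
	for i, r := range rows {
		// If "Type" marked error rows, we would expect it to be non-zero
		// whenever "total errors" increments.
		if r.totalErrors > prevErrors && r.rowType == 0 {
			fmt.Printf("row %d: total errors rose to %d but Type is still 0\n", i, r.totalErrors)
		}
		prevErrors = r.totalErrors
	}
}
```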