Fix typos (repeated words) (#299)
DimitriPapadopoulos authored Jan 13, 2022
1 parent 4c71c23 commit cd8696c
Showing 5 changed files with 8 additions and 8 deletions.
2 changes: 1 addition & 1 deletion rampwf/hyperopt/hyperopt.py
@@ -250,7 +250,7 @@ def parse_hyperparameters(module_path, workflow_element_name):
 def parse_all_hyperparameters(module_path, workflow):
     """Parse hyperparameters in a submission.
-    Load all the the modules, take all Hyperparameter objects, and set the name
+    Load all the modules, take all Hyperparameter objects, and set the name
     of each to the name of the hyperparameter the user chose and the workflow
     element name of each to the corresponding workflow_element_name.
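The docstring above describes the behaviour in words; the sketch below illustrates the same idea with a simplified, hypothetical `Hyperparameter` stand-in and a `collect_hyperparameters` helper (neither is the rampwf API): scan a module's attributes, keep the `Hyperparameter` instances, and name each one after the attribute the user chose.

```python
import types


class Hyperparameter:
    """Simplified stand-in for illustration, not the rampwf class."""

    def __init__(self, default):
        self.default = default
        self.name = None
        self.workflow_element_name = None


def collect_hyperparameters(module, workflow_element_name):
    """Collect Hyperparameter attributes of `module`, naming each after the
    attribute the user chose and tagging it with the workflow element name."""
    hyperparameters = []
    for attribute_name, value in vars(module).items():
        if isinstance(value, Hyperparameter):
            value.name = attribute_name
            value.workflow_element_name = workflow_element_name
            hyperparameters.append(value)
    return hyperparameters


# Usage: a fake module in which the user named a hyperparameter `n_estimators`.
fake_module = types.ModuleType('estimator')
fake_module.n_estimators = Hyperparameter(default=100)
print([hp.name for hp in collect_hyperparameters(fake_module, 'estimator')])
# ['n_estimators']
```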
8 changes: 4 additions & 4 deletions rampwf/score_types/brier_score.py
@@ -15,7 +15,7 @@ def __init__(self, name='brier_score', precision=3):
     def score_function(self, ground_truths, predictions):
         """A hybrid score.
-        It tests the the predicted _probability_ of the second class
+        It tests the predicted _probability_ of the second class
         against the true _label index_ (which is 0 if the first label is the
         ground truth, and 1 if it is not, in other words, it is the
         true probability of the second class). Thus we have to override the
@@ -42,7 +42,7 @@ def __init__(self, name='brier_score', precision=3):
     def score_function(self, ground_truths, predictions):
         """A hybrid score.
-        It tests the the predicted _probability_ of the second class
+        It tests the predicted _probability_ of the second class
         against the true _label index_ (which is 0 if the first label is the
         ground truth, and 1 if it is not, in other words, it is the
         true probability of the second class). Thus we have to override the
@@ -77,7 +77,7 @@ def __init__(self, name='brier_score', precision=3,
     def score_function(self, ground_truths, predictions):
         """A hybrid score.
-        It tests the the predicted _probability_ of the second class
+        It tests the predicted _probability_ of the second class
         against the true _label index_ (which is 0 if the first label is the
         ground truth, and 1 if it is not, in other words, it is the
         true probability of the second class). Thus we have to override the
@@ -122,7 +122,7 @@ def __init__(self, name='brier_score', precision=3,
     def score_function(self, ground_truths, predictions):
         """A hybrid score.
-        It tests the the predicted _probability_ of the second class
+        It tests the predicted _probability_ of the second class
         against the true _label index_ (which is 0 if the first label is the
         ground truth, and 1 if it is not, in other words, it is the
         true probability of the second class). Thus we have to override the
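For readers skimming the repeated docstring above, here is a hedged sketch of the hybrid-score idea it describes, not the rampwf implementation: the Brier score compares the predicted probability of the second class with the 0/1 true label index (the function name below is hypothetical).

```python
import numpy as np


def brier_score_sketch(y_true_label_index, y_proba_second_class):
    """Mean squared difference between the predicted probability of the
    second class and the 0/1 true label index."""
    y_true = np.asarray(y_true_label_index, dtype=float)
    y_proba = np.asarray(y_proba_second_class, dtype=float)
    return np.mean((y_proba - y_true) ** 2)


print(brier_score_sketch([0, 1, 1], [0.1, 0.8, 0.4]))  # approximately 0.137
```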
2 changes: 1 addition & 1 deletion rampwf/score_types/roc_auc.py
@@ -14,7 +14,7 @@ def __init__(self, name='roc_auc', precision=2):
     def score_function(self, ground_truths, predictions):
         """A hybrid score.
-        It tests the the predicted _probability_ of the second class
+        It tests the predicted _probability_ of the second class
         against the true _label index_ (which is 0 if the first label is the
         ground truth, and 1 if it is not, in other words, it is the
         true probability of the second class). Thus we have to override the
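The same hybrid pattern applies to ROC AUC. As a hedged illustration (not the rampwf code), scikit-learn's `roc_auc_score` likewise accepts the 0/1 true label index together with the predicted probability of the positive (second) class.

```python
from sklearn.metrics import roc_auc_score

y_true_label_index = [0, 0, 1, 1]             # 1 means the second class is the ground truth
y_proba_second_class = [0.2, 0.4, 0.35, 0.8]  # predicted probability of that class
print(roc_auc_score(y_true_label_index, y_proba_second_class))  # 0.75
```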
@@ -49,7 +49,7 @@ def fit(self, gen_builder):
         # is set to some number > 1, the neural net will be trained with
         # repetitions of the same data, because the workers are independent
         # and they got through the same generator.
-        # Hence it is necessary to introduce a shared lock between the the
+        # Hence it is necessary to introduce a shared lock between the
         # processes so that they load different data, this can become a bit
         # complicated, so I choose to rather load exactly one chunk at a
         # time using 1 worker (so `workers` have to be equal to 1), but
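The comment above explains why `workers` must stay at 1: independent workers each run through their own copy of the same generator and therefore feed the network the same chunks again. A toy illustration of that duplication follows (the `chunk_generator` name is hypothetical, not the rampwf or Keras internals).

```python
def chunk_generator():
    """Toy generator standing in for the data-loading generator."""
    for chunk_id in range(3):
        yield f"chunk_{chunk_id}"


# Two "workers", each running through its own independent copy of the
# generator, end up producing the very same chunks.
worker_a = chunk_generator()
worker_b = chunk_generator()
print(list(worker_a))  # ['chunk_0', 'chunk_1', 'chunk_2']
print(list(worker_b))  # the same chunks again -> repeated training data
```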
2 changes: 1 addition & 1 deletion rampwf/workflows/clusterer.py
@@ -14,7 +14,7 @@
 of `X_array`). It slices up `X_array` into single events, drops the event ids,
 and sends the single event to the `predict_single_event` function implemented
 by the users. This function returns a vector of labels (cluster assignments)
-which is then joined back the the event id column and returned (to be passed
+which is then joined back to the event id column and returned (to be passed
 into `prediction_types.Clustering` and evaluated by
 `score_types.clustering_efficiency`).
 """
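As a hedged sketch of the flow the module docstring describes (hypothetical helpers, not the rampwf `Clusterer` workflow): slice `X_array` by the event id column, call a per-event prediction function without that column, and join the resulting cluster labels back to the event ids.

```python
import numpy as np


def predict_single_event(X_event):
    """Toy stand-in for the user-implemented function: put every hit of the
    event into cluster 0."""
    return np.zeros(len(X_event), dtype=int)


def predict_all_events(X_array):
    """Slice X_array by the event id column (column 0), predict per event
    without the id column, and join the labels back to the event ids."""
    event_ids = X_array[:, 0]
    labels = np.empty(len(X_array), dtype=int)
    for event_id in np.unique(event_ids):
        mask = event_ids == event_id
        labels[mask] = predict_single_event(X_array[mask, 1:])
    return np.column_stack([event_ids, labels])


X = np.array([[0, 1.0], [0, 2.0], [1, 3.0]])  # columns: event id, feature
print(predict_all_events(X))
# [[0. 0.]
#  [0. 0.]
#  [1. 0.]]
```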
