[nfc] fix typo colossalai/ applications/ (#3831)

* fix typo colossalai/autochunk auto_parallel amp

* fix typo colossalai/auto_parallel nn utils etc.

* fix typo colossalai/auto_parallel autochunk fx/passes  etc.

* fix typo docs/

* change placememt_policy to placement_policy in docs/ and examples/

* fix typo colossalai/ applications/
digger yu
2023-05-25 16:19:41 +08:00
committed by GitHub
parent a64df3fa97
commit e2d81eba0d
7 changed files with 15 additions and 15 deletions


@@ -119,7 +119,7 @@ class Evaluator(object):
jdump(all_evaluations,
os.path.join(evaluation_results_save_path, f"{model_name_list[0]}_evaluation_results.json"))
-# Start to calculate scores and save statictics.
+# Start to calculate scores and save statistics.
evaluation_statistics_save_path = os.path.join(base_save_path, "evaluation_statistics")
gpt_evaluate.save_gpt35_evaluation_statistics(model_name_list[0], all_evaluations,
evaluation_statistics_save_path)


@@ -111,7 +111,7 @@ def calculate_precision_recall_f1(preds: list, targets: list) -> dict:
The calculation of precision, recall and f1-score is realized by counting
the number f overlaps between the preds and target. The comparison length
limited by the shorter one of preds and targets. This design is mainly
-considered for classifiction and extraction categories.
+considered for classification and extraction categories.
"""
precision_recall_f1 = {"precision": 0, "recall": 0, "f1_score": 0}
precision_scores = []
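The docstring above describes counting overlaps between preds and targets, with the comparison limited to the length of the shorter sequence. A minimal sketch of that idea for a single prediction/target pair, using a hypothetical helper name and a position-wise comparison as one plausible reading of the docstring (not the repository's actual implementation), could look like this:

# Hypothetical sketch, not the repository's code: position-wise overlap
# between one prediction string and one target string.
def overlap_precision_recall_f1(pred: str, target: str) -> dict:
    # Compare position by position, limited by the shorter of the two strings.
    overlap = sum(1 for p, t in zip(pred, target) if p == t)
    precision = overlap / len(pred) if pred else 0.0
    recall = overlap / len(target) if target else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"precision": precision, "recall": recall, "f1_score": f1}

# Example: overlap_precision_recall_f1("classification", "classifiction")
# yields precision and recall below 1.0 because the strings diverge after "classific".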
@@ -138,7 +138,7 @@ def calculate_precision_recall_f1(preds: list, targets: list) -> dict:
def precision(preds: list, targets: list) -> dict:
"""Calculate Precision Metric
-(design for classifiction and extraction categories)
+(design for classification and extraction categories)
Calculating precision by counting the number of overlaps between the preds and target.
"""
@@ -149,7 +149,7 @@ def precision(preds: list, targets: list) -> dict:
def recall(preds: list, targets: list) -> dict:
"""Calculate Recall Metric
-(design for classifiction and extraction categories)
+(design for classification and extraction categories)
Calculating recall by counting the number of overlaps between the preds and target.
"""
@@ -160,7 +160,7 @@ def recall(preds: list, targets: list) -> dict:
def F1_score(preds: list, targets: list) -> dict:
"""Calculate F1-score Metric
-(design for classifiction and extraction categories)
+(design for classification and extraction categories)
Calculating f1-score by counting the number of overlaps between the preds and target.
"""