
AbsTaskSummarization using STSEvaluator instead of SummarizationEvaluator? #2

Closed
alt-glitch opened this issue Nov 7, 2024 · 2 comments

Comments

@alt-glitch

Hey!

While going through your implementation to add FinMTEB to MTEB, I noticed that the summarization abstract class uses STSEvaluator instead of SummarizationEvaluator:

evaluator = STSEvaluator(
    data_split["text"],
    data_split["summary"],
    normalized_scores,
    task_name=self.metadata.name,
    **kwargs,
)
scores = evaluator(model)

This seemed like a bug so I thought I would let you know.

If this is intended, I'd like to better understand why so!

Thank you!

@yixuantt
Owner

yixuantt commented Nov 7, 2024

Hi, it's not a bug. We do not have human_summaries/relevance scores for the summarization task. Instead, we compute the Spearman correlation based on the semantic similarity between the summary and the text, since the semantic meaning of a good summary should be highly correlated with its source text.

To simplify, we directly use the STSEvaluator here, which serves the same function.
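Roughly, what the evaluator reduces to in this setting is the following. This is a minimal illustrative sketch, not the actual mteb STSEvaluator code; it assumes a sentence-transformers-style model.encode that returns arrays, and the function name is hypothetical: embed each text and its summary, score each pair with cosine similarity, and report the Spearman correlation of those similarities against the normalized gold scores passed in.

import numpy as np
from scipy.stats import spearmanr


def sts_style_summarization_score(model, texts, summaries, normalized_scores):
    # Embed the source texts and their summaries (assumes a
    # sentence-transformers-style `encode` returning 2-D arrays).
    text_emb = np.asarray(model.encode(texts))
    summary_emb = np.asarray(model.encode(summaries))

    # Cosine similarity between each text and its corresponding summary.
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    summary_emb = summary_emb / np.linalg.norm(summary_emb, axis=1, keepdims=True)
    cosine_sims = (text_emb * summary_emb).sum(axis=1)

    # Spearman correlation between the model's similarities and the
    # normalized gold scores (the main score reported for the task).
    corr, _ = spearmanr(normalized_scores, cosine_sims)
    return corr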

@alt-glitch
Author

Understood. Thank you for the explanation!
