Update src/about.py

src/about.py  +2 -3  (CHANGED)
@@ -22,11 +22,10 @@ accurate, and actionable across multiple programming languages and review categories
 LLM_BENCHMARKS_TEXT = """
 CodeReview Bench is a comprehensive benchmark for evaluating automated code review systems across programming languages and review quality dimensions.
 
-It evaluates models on their ability to provide high-quality code reviews using both LLM-based multimetric evaluation (readability, relevance, explanation clarity, problem identification, actionability, completeness, specificity, contextual adequacy, consistency, brevity) and exact-match metrics (pass@1, pass@5, pass@10
+It evaluates models on their ability to provide high-quality code reviews using both LLM-based multimetric evaluation (readability, relevance, explanation clarity, problem identification, actionability, completeness, specificity, contextual adequacy, consistency, brevity) and exact-match metrics (pass@1, pass@5, pass@10) presented in our paper.
 
-The benchmark supports both Russian and English comment languages across
+The benchmark supports both Russian and English comment languages across 4 programming languages including Python, Java, Go, Scala
 
-Learn more about automated code review evaluation and best practices.
 """
 
 EVALUATION_QUEUE_TEXT = """
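The updated docstring refers to exact-match pass@k metrics (pass@1, pass@5, pass@10). As a rough illustration only, a minimal sketch of how such a metric could be computed over generated review comments is shown below; the function names, whitespace normalization, and data layout are assumptions for the example, not code from this Space or the paper.

from typing import List, Tuple

def exact_match_pass_at_k(candidates: List[str], reference: str, k: int) -> bool:
    # Whitespace-normalized exact match between a candidate comment and the reference.
    normalize = lambda s: " ".join(s.split())
    ref = normalize(reference)
    return any(normalize(c) == ref for c in candidates[:k])

def benchmark_pass_at_k(examples: List[Tuple[List[str], str]], k: int) -> float:
    # Fraction of benchmark examples where at least one of the first k
    # candidate review comments exactly matches the reference comment.
    if not examples:
        return 0.0
    hits = sum(exact_match_pass_at_k(cands, ref, k) for cands, ref in examples)
    return hits / len(examples)

# Toy usage: pass@1 vs pass@5 over two examples.
examples = [
    (["Use a context manager here.", "Rename this variable."], "Use a context manager here."),
    (["Looks fine.", "Add a null check.", "Split this function."], "Add a null check."),
]
print(benchmark_pass_at_k(examples, k=1))  # 0.5
print(benchmark_pass_at_k(examples, k=5))  # 1.0

Under this reading, pass@k simply asks whether any of a model's top-k review comments reproduces the reference comment exactly, which complements the LLM-based multimetric scores listed in the docstring.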