ContextPrecision does not meet the specification #176
Hi!
From SCORERS.md:
"ContextPrecision measures the precision of retrieved context - whether relevant context appears before irrelevant context ...
Score Range: 0-1,
1.0 = All relevant context appears first
0.0 = Relevant context is buried under irrelevant context"
from autoevals.ragas import ContextPrecision

scorer = ContextPrecision(model="Qwen3-30B-A3B-Instruct")
scorer.eval(
    input="Where is the Eiffel Tower located?",
    expected="The Eiffel Tower is located in Paris.",
    output="",
    context=[
        "The Brandenburg Gate is located in Berlin.",
        "The Eiffel Tower is located in Paris.",
    ],
)
Score(name='ContextPrecision', score=1, metadata={'precision': {'reason': 'The context explicitly states "The Eiffel Tower is located in Paris," which directly supports the answer to the question about its location.', 'verdict': 1}}, error=None)
The score cannot be 1: the relevant context is the second item in the list, so irrelevant context appears before it.
The same example, implemented with Ragas:
from openai import AsyncOpenAI
from ragas.llms import llm_factory
from ragas.metrics.collections import ContextPrecision

async_client = AsyncOpenAI()
llm = llm_factory("Qwen3-30B-A3B-Instruct", client=async_client)
scorer = ContextPrecision(llm=llm)
await scorer.ascore(
    user_input="Where is the Eiffel Tower located?",
    reference="The Eiffel Tower is located in Paris.",
    retrieved_contexts=[
        "The Brandenburg Gate is located in Berlin.",
        "The Eiffel Tower is located in Paris.",
    ],
)
MetricResult(value=0.49999999995)
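For reference, the Ragas result matches the standard average-precision formulation of context precision (mean of precision@k over the positions of relevant chunks). This is a sketch of that formula, not the autoevals implementation; `context_precision` is a hypothetical helper:

```python
def context_precision(verdicts: list[int]) -> float:
    """Mean of precision@k over the ranks of relevant chunks.

    verdicts[k-1] is 1 if the k-th retrieved chunk is relevant, else 0.
    """
    score = 0.0
    relevant_so_far = 0
    for k, v in enumerate(verdicts, start=1):
        relevant_so_far += v
        # precision@k contributes only at ranks where the chunk is relevant
        score += (relevant_so_far / k) * v
    total_relevant = sum(verdicts)
    return score / total_relevant if total_relevant else 0.0

# Relevant chunk ranked second, irrelevant first -> 0.5 (the Ragas result above)
print(context_precision([0, 1]))  # 0.5
# All relevant context first -> 1.0, as SCORERS.md specifies
print(context_precision([1, 0]))  # 1.0
```

Under this formula the autoevals result of 1 would only be correct if the relevant chunk were ranked first.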
Thanks!