ContextPrecision does not meet the specification #176

@fizcogar

Description

Hi!

From SCORERS.md:

"ContextPrecision measures the precision of retrieved context - whether relevant context appears before irrelevant context ...
Score Range: 0-1,
1.0 = All relevant context appears first
0.0 = Relevant context is buried under irrelevant context"

from autoevals.ragas import ContextPrecision

scorer = ContextPrecision(model="Qwen3-30B-A3B-Instruct")
scorer.eval(
    input="Where is the Eiffel Tower located?",
    expected="The Eiffel Tower is located in Paris.",
    output="",
    context=[
        "The Brandenburg Gate is located in Berlin.",
        "The Eiffel Tower is located in Paris."
    ]
)
Score(name='ContextPrecision', score=1, metadata={'precision': {'reason': 'The context explicitly states "The Eiffel Tower is located in Paris," which directly supports the answer to the question about its location.', 'verdict': 1}}, error=None) 

The score should not be 1: the relevant context is the second item in the list, so by the documented definition the result should be lower.

The same example, implemented with Ragas:

from openai import AsyncOpenAI
from ragas.llms import llm_factory
from ragas.metrics.collections import ContextPrecision

async_client = AsyncOpenAI()
llm = llm_factory("Qwen3-30B-A3B-Instruct", client=async_client)
scorer = ContextPrecision(llm=llm)
await scorer.ascore(
    user_input="Where is the Eiffel Tower located?",
    reference="The Eiffel Tower is located in Paris.",
    retrieved_contexts=[        
        "The Brandenburg Gate is located in Berlin.",
        "The Eiffel Tower is located in Paris."
    ]
)
MetricResult(value=0.49999999995)
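For reference, Ragas defines context precision as the mean of precision@k taken over the relevant positions, which is why it returns 0.5 here. A minimal sketch of that computation (the 0/1 verdicts would normally come from the LLM judge, one per retrieved chunk in rank order):

```python
def context_precision(verdicts):
    """Mean precision@k over relevant positions (Ragas-style context precision).

    verdicts: list of 0/1 relevance flags, one per retrieved chunk, in rank order.
    """
    relevant = sum(verdicts)
    if relevant == 0:
        return 0.0
    score = 0.0
    hits = 0
    for k, v in enumerate(verdicts, start=1):
        if v:
            hits += 1
            score += hits / k  # precision@k at each relevant position
    return score / relevant

# Example from this issue: relevant chunk is ranked second
print(context_precision([0, 1]))  # 0.5
print(context_precision([1, 0]))  # 1.0
```

With the relevant chunk first the score is 1.0; buried in second place it drops to 0.5, matching the Ragas result above and the behavior SCORERS.md describes.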

Thanks!
