The Evaluation API provides a way to evaluate the correctness and completeness of a response from a RAG system.

This feature is in public preview.

Use cases

The Evaluation API is useful when performing tasks like the following:

  • Understanding how well the Pinecone Assistant captures the facts of the ground truth answer.
  • Comparing the Pinecone Assistant’s answers to those of another RAG system.
  • Comparing the answers of your own RAG system to those of the Pinecone Assistant or another RAG system.

Install the Pinecone Assistant Python plugin

To interact with Pinecone Assistant using the Python SDK, upgrade the client and install the pinecone-plugin-assistant package as follows:

Shell
pip install --upgrade pinecone pinecone-plugin-assistant
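
After installing, you can verify the setup by initializing the client. This is a minimal sketch; it assumes your API key is stored in the PINECONE_API_KEY environment variable:

Python
import os
from pinecone import Pinecone

# Create a client instance; the pinecone-plugin-assistant package adds
# assistant functionality to the client (typically under pc.assistant).
pc = Pinecone(api_key=os.environ["PINECONE_API_KEY"])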

Request

The request body requires the following fields:

Field                 Description
question              The question asked to the RAG system.
answer                The answer provided by the assistant being evaluated.
ground_truth_answer   The expected answer.

For example:

{
  "question": "What are the capital cities of France, England and Spain?",
  "answer": "Paris is the capital city of France and Barcelona of Spain",
  "ground_truth_answer": "Paris is the capital city of France, London of England and Madrid of Spain"
}
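
As a sketch, you can send this request from Python with the requests library. The endpoint URL below is a placeholder (take the exact path from the API reference), and the Api-Key header carries your Pinecone API key:

Python
import os
import requests

# Placeholder: replace with the Evaluation API endpoint from the API reference.
EVALUATION_ENDPOINT = "<EVALUATION_API_URL>"

payload = {
    "question": "What are the capital cities of France, England and Spain?",
    "answer": "Paris is the capital city of France and Barcelona of Spain",
    "ground_truth_answer": "Paris is the capital city of France, London of England and Madrid of Spain",
}

response = requests.post(
    EVALUATION_ENDPOINT,
    headers={"Api-Key": os.environ["PINECONE_API_KEY"]},
    json=payload,
)
response.raise_for_status()
evaluation = response.json()  # parsed response used in the examples below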

Response

Metrics

A score between 0 and 1 is calculated and returned for each of the following metrics:

Metric         Description
correctness    Correctness of the RAG system’s answer compared to the ground truth answer.
completeness   Completeness of the RAG system’s answer compared to the ground truth answer.
alignment      A combined score of the correctness and completeness scores.

For example:

{
  "metrics": {
    "correctness": 0.5,
    "completeness": 0.333,
    "alignment": 0.398,
  }
},

...
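
Continuing the request sketch above, the scores can be read directly from the parsed response (the evaluation dictionary):

Python
# 'evaluation' is the parsed JSON response from the request sketch above.
metrics = evaluation["metrics"]
print(f"correctness:  {metrics['correctness']:.3f}")
print(f"completeness: {metrics['completeness']:.3f}")
print(f"alignment:    {metrics['alignment']:.3f}")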

Reasoning

The response explains the reasoning behind each metric’s score, including a list of evaluated facts with their entailment status:

Status         Description
entailed       The fact is supported by the answer being evaluated.
contradicted   The fact contradicts the answer being evaluated.
neutral        The fact is neither supported nor contradicted by the answer being evaluated.

...

  "reasoning":{
    "evaluated_facts": [
      {
        "fact": {"content": "Paris is the capital of France"},
        "entailment": "entailed",
      },
      {
        "fact": {"content": "London is the capital of England"},
        "entailment": "neutral"
      },
      {
        "fact": {"content": "Madrid is the capital of Spain"},
        "entailment": "contradicted",
      }
    ]
  },

...
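
To see which facts drove the scores, you can group the evaluated facts by entailment status, again using the evaluation dictionary from the request sketch. In this example, one of the three facts is entailed, which lines up with the completeness score of 0.333, although the exact formulas are not documented here:

Python
from collections import Counter

facts = evaluation["reasoning"]["evaluated_facts"]

# Tally facts by entailment status: entailed, contradicted, or neutral.
status_counts = Counter(item["entailment"] for item in facts)
print(status_counts)

# List the facts the evaluated answer got wrong.
for item in facts:
    if item["entailment"] == "contradicted":
        print("Contradicted:", item["fact"]["content"])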

Usage

The response includes the number of tokens used to calculate the metrics, broken down into prompt and completion tokens.

...

  "usage": {
    "prompt_tokens": 22,
    "completion_tokens": 33,
    "total_tokens": 55
  }
}
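
The usage block can be read the same way, for example to track evaluation spend across many requests:

Python
usage = evaluation["usage"]
print(f"prompt: {usage['prompt_tokens']}, "
      f"completion: {usage['completion_tokens']}, "
      f"total: {usage['total_tokens']}")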

Pricing

Cost is calculated based on token usage. See Pricing for up-to-date pricing information.

The Evaluation API is only available for Standard and Enterprise plans.