
Reranking is used as part of a two-stage vector retrieval process to improve the quality of results. You first query an index for a given number of relevant results, and then you send the query and results to a reranking model. The reranking model scores the results based on their semantic relevance to the query and returns a new, more accurate ranking. This approach is one of the simplest methods for improving quality in retrieval augmented generation (RAG) pipelines.

Pinecone provides hosted reranking models, so it’s easy to manage two-stage vector retrieval on a single platform. You can use a hosted model to rerank results as an integrated part of a query, or you can use a hosted model or external model to rerank results as a standalone operation.
To run through this guide in your browser, see the Rerank example notebook.
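The two-stage flow described above can be sketched end to end. This is a minimal, illustrative pipeline: `first_stage_query` and `rerank_scores` are hypothetical stand-ins for a real vector index query and a real reranking model, not Pinecone APIs.

```python
# Sketch of two-stage retrieval: a broad first-stage query (recall)
# followed by reranking a small candidate set (precision).
# The stub functions below are illustrative stand-ins, not Pinecone APIs.

def first_stage_query(query: str, top_k: int) -> list[dict]:
    # Stand-in for a vector index query; returns candidate documents.
    corpus = [
        {"id": "rec1", "text": "Apples are a convenient snack."},
        {"id": "rec2", "text": "Apples are rich in fiber."},
        {"id": "rec3", "text": "Vitamin C supports immune health."},
        {"id": "rec4", "text": "Fiber can help regulate blood sugar."},
    ]
    return corpus[:top_k]

def rerank_scores(query: str, docs: list[dict]) -> list[float]:
    # Stand-in for a reranking model: score by word overlap with the query.
    q = set(query.lower().split())
    return [len(q & set(d["text"].lower().replace(".", "").split())) for d in docs]

def two_stage_search(query: str, top_k: int, top_n: int) -> list[dict]:
    candidates = first_stage_query(query, top_k)   # stage 1: broad retrieval
    scores = rerank_scores(query, candidates)      # stage 2: rescore candidates
    ranked = sorted(zip(scores, candidates), key=lambda p: p[0], reverse=True)
    return [doc for _, doc in ranked[:top_n]]

print([d["id"] for d in two_stage_search("fiber and blood sugar", top_k=4, top_n=2)])
```

The key shape to notice: the first stage retrieves a larger candidate set (top_k), and the second stage narrows it to the most relevant few (top_n). The hosted-model examples below follow the same pattern.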

Integrated reranking

To rerank initial results as an integrated part of a query, without any extra steps, use the search operation with the rerank parameter, including the hosted reranking model you want to use, the number of reranked results to return, and the fields to use for reranking, if different than the main query. For example, the following code searches for the 4 records most semantically related to a query text and uses the hosted bge-reranker-v2-m3 model to rerank the results and return only the 2 most relevant documents:
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

# To get the unique host for an index, 
# see https://docs.pinecone.io/guides/manage-data/target-an-index
index = pc.Index(host="INDEX_HOST")

ranked_results = index.search(
    namespace="example-namespace", 
    query={
        "inputs": {"text": "Disease prevention"}, 
        "top_k": 4
    },
    rerank={
        "model": "bge-reranker-v2-m3",
        "top_n": 2,
        "rank_fields": ["chunk_text"]
    },
    fields=["category", "chunk_text"]
)

print(ranked_results)
The response looks as follows. For each hit, the _score represents the relevance of a document to the query, normalized between 0 and 1, with scores closer to 1 indicating higher relevance.
{'result': {'hits': [{'_id': 'rec3',
                      '_score': 0.004399413242936134,
                      'fields': {'category': 'immune system',
                                 'chunk_text': 'Rich in vitamin C and other '
                                                'antioxidants, apples '
                                                'contribute to immune health '
                                                'and may reduce the risk of '
                                                'chronic diseases.'}},
                     {'_id': 'rec4',
                      '_score': 0.0029235430993139744,
                      'fields': {'category': 'endocrine system',
                                 'chunk_text': 'The high fiber content in '
                                                'apples can also help regulate '
                                                'blood sugar levels, making '
                                                'them a favorable snack for '
                                                'people with diabetes.'}}]},
 'usage': {'embed_total_tokens': 8, 'read_units': 6, 'rerank_units': 1}}
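Once reranked, the hits can be read off for downstream use, for example to assemble context for a RAG prompt. A minimal sketch, assuming the dict-shaped response above (the chunk_text values are abbreviated here for brevity):

```python
# Sketch: pull reranked chunk text out of a search response to build
# RAG context. `response` mirrors the dict shape shown above, with
# the chunk_text values abbreviated.
response = {
    "result": {
        "hits": [
            {"_id": "rec3", "_score": 0.0044,
             "fields": {"category": "immune system",
                        "chunk_text": "Rich in vitamin C and other antioxidants, "
                                      "apples contribute to immune health."}},
            {"_id": "rec4", "_score": 0.0029,
             "fields": {"category": "endocrine system",
                        "chunk_text": "The high fiber content in apples can "
                                      "help regulate blood sugar levels."}},
        ]
    }
}

# Hits arrive in reranked order, so joining them preserves relevance order.
context = "\n\n".join(hit["fields"]["chunk_text"] for hit in response["result"]["hits"])
ids = [hit["_id"] for hit in response["result"]["hits"]]
print(ids)
```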

Standalone reranking

To rerank initial results as a standalone operation, use the rerank operation, specifying the hosted reranking model you want to use, the query and its results, the number of ranked results to return, the field to use for reranking, and any other model-specific parameters. For example, the following code uses the hosted bge-reranker-v2-m3 model to rerank the values of the documents.chunk_text fields based on their relevance to the query and return only the 2 most relevant documents, along with their scores:
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

ranked_results = pc.inference.rerank(
    model="bge-reranker-v2-m3",
    query="What is AAPL's outlook, considering both product launches and market conditions?",
    documents=[
        {"id": "vec2", "chunk_text": "Analysts suggest that AAPL's upcoming Q4 product launch event might solidify its position in the premium smartphone market."},
        {"id": "vec3", "chunk_text": "AAPL's strategic Q3 partnerships with semiconductor suppliers could mitigate component risks and stabilize iPhone production."},
        {"id": "vec1", "chunk_text": "AAPL reported a year-over-year revenue increase, expecting stronger Q3 demand for its flagship phones."},
    ],
    top_n=2,
    rank_fields=["chunk_text"],
    return_documents=True,
    parameters={
        "truncate": "END"
    }
)

print(ranked_results)
The response looks as follows. For each hit, the _score represents the relevance of a document to the query, normalized between 0 and 1, with scores closer to 1 indicating higher relevance.
RerankResult(
  model='bge-reranker-v2-m3',
  data=[{
    index=0,
    score=0.004166256,
    document={
        id='vec2',
        chunk_text="Analysts suggest that AAPL's upcoming Q4 product launch event might solidify its position in the premium smartphone market."
    }
  },{
    index=2,
    score=0.0011513996,
    document={
        id='vec1',
        chunk_text='AAPL reported a year-over-year revenue increase, expecting stronger Q3 demand for its flagship phones.'
    }
  }],
  usage={'rerank_units': 1}
)
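Note that each index in the result refers to the document's position in the documents list you passed in, not to its id: vec1 was the third document, hence index=2. A small sketch of mapping results back to the originals, assuming the shape above (chunk_text values abbreviated):

```python
# Sketch: map standalone rerank rows back to the input documents.
# `index` is the position in the `documents` list passed to rerank,
# not the document id. Texts are abbreviated for brevity.
documents = [
    {"id": "vec2", "chunk_text": "Analysts suggest a strong Q4 product launch."},
    {"id": "vec3", "chunk_text": "Q3 supplier partnerships could stabilize production."},
    {"id": "vec1", "chunk_text": "AAPL reported a year-over-year revenue increase."},
]
# Rows as returned by the model (index + score), mirroring the output above.
rows = [
    {"index": 0, "score": 0.004166256},
    {"index": 2, "score": 0.0011513996},
]

reranked = [
    {"id": documents[row["index"]]["id"], "score": row["score"]}
    for row in rows
]
print([r["id"] for r in reranked])
```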

Rerank results on the default field

To rerank search results, specify a supported reranking model and provide the documents, the query, and any other model-specific parameters. By default, Pinecone expects the documents to be in the documents.text field. For example, the following request uses the bge-reranker-v2-m3 reranking model to rerank the values of the documents.text field based on their relevance to the query "The tech company Apple is known for its innovative products like the iPhone."
With truncate set to "END", the input sequence (query + document) is truncated at the model's token limit (1,024 tokens for bge-reranker-v2-m3); to return an error instead, set truncate to "NONE" or omit the parameter, since "NONE" is this model's default.
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

result = pc.inference.rerank(
    model="bge-reranker-v2-m3",
    query="The tech company Apple is known for its innovative products like the iPhone.",
    documents=[
        {"id": "vec1", "text": "Apple is a popular fruit known for its sweetness and crisp texture."},
        {"id": "vec2", "text": "Many people enjoy eating apples as a healthy snack."},
        {"id": "vec3", "text": "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces."},
        {"id": "vec4", "text": "An apple a day keeps the doctor away, as the saying goes."},
    ],
    top_n=4,
    return_documents=True,
    parameters={
        "truncate": "END"
    }
)

print(result)
The returned object contains the documents with their relevance scores. Each score is normalized between 0 and 1, with scores closer to 1 indicating higher relevance.
RerankResult(
  model='bge-reranker-v2-m3',
  data=[
    { index=2, score=0.48357219,
      document={id="vec3", text="Apple Inc. has re..."} },
    { index=0, score=0.048405956,
      document={id="vec1", text="Apple is a popula..."} },
    { index=3, score=0.007846239,
      document={id="vec4", text="An apple a day ke..."} },
    { index=1, score=0.0006563728,
      document={id="vec2", text="Many people enjoy..."} }
  ],
  usage={'rerank_units': 1}
)
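The truncate parameter controls what happens when query + document exceeds the model's token limit. The behavior can be sketched as follows, using a naive whitespace tokenizer as a stand-in for the model's real tokenizer (an assumption for illustration only):

```python
# Sketch of truncate semantics: END clips the input at the token limit,
# NONE raises an error instead. A whitespace "tokenizer" stands in for
# the model's real tokenizer (illustration only; real token counts differ).

def prepare_input(query: str, document: str, limit: int, truncate: str = "NONE") -> list[str]:
    tokens = (query + " " + document).split()
    if len(tokens) <= limit:
        return tokens
    if truncate == "END":
        return tokens[:limit]  # silently clip at the limit
    raise ValueError(f"input of {len(tokens)} tokens exceeds the {limit}-token limit")

q = "apple outlook"
doc = "word " * 2000  # far beyond a 1024-token limit

clipped = prepare_input(q, doc, limit=1024, truncate="END")
print(len(clipped))  # clipped to the limit

try:
    prepare_input(q, doc, limit=1024, truncate="NONE")
except ValueError as e:
    print("error:", e)
```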

Rerank results on a custom field

To rerank results on a field other than documents.text, use the rank_fields parameter to specify the fields to rerank on.
The bge-reranker-v2-m3 and pinecone-rerank-v0 models support only a single rerank field. cohere-rerank-3.5 supports multiple rerank fields, considered in the order specified.
For example, the following request reranks documents based on the values of the documents.my_field field:
from pinecone.grpc import PineconeGRPC as Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")

result = pc.inference.rerank(
    model="bge-reranker-v2-m3",
    query="The tech company Apple is known for its innovative products like the iPhone.",
    documents=[
        {"id": "vec1", "my_field": "Apple is a popular fruit known for its sweetness and crisp texture."},
        {"id": "vec2", "my_field": "Many people enjoy eating apples as a healthy snack."},
        {"id": "vec3", "my_field": "Apple Inc. has revolutionized the tech industry with its sleek designs and user-friendly interfaces."},
        {"id": "vec4", "my_field": "An apple a day keeps the doctor away, as the saying goes."},
    ],
    rank_fields=["my_field"],
    top_n=4,
    return_documents=True,
    parameters={
        "truncate": "END"
    }
)
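For models that accept multiple rank fields, such as cohere-rerank-3.5, the order of rank_fields matters. One way to picture this is that the listed fields are combined in order into a single text before scoring; note this combining step is an assumption for illustration only, since the model's actual field handling is internal:

```python
# Sketch: combining multiple rank fields in the order given, as an
# illustration of why rank_fields order matters for multi-field models.
# How the model actually combines fields internally is an assumption here.

def combine_fields(document: dict, rank_fields: list[str]) -> str:
    # Concatenate the requested fields in the order specified, skipping
    # any field the document does not have.
    return " ".join(str(document[f]) for f in rank_fields if f in document)

doc = {
    "id": "vec3",
    "title": "Apple Inc. and the tech industry",
    "text": "Apple Inc. has revolutionized the tech industry.",
}

print(combine_fields(doc, ["title", "text"]))
print(combine_fields(doc, ["text", "title"]))  # different order, different input
```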

Reranking models

Pinecone hosts several reranking models, making it easy to manage two-stage vector retrieval on a single platform. You can use a hosted model to rerank results as an integrated part of a query, or as a standalone operation. The following reranking models are hosted by Pinecone.
To understand how cost is calculated for reranking, see Reranking cost. To get model details via the API, see List models and Describe a model.
cohere-rerank-3.5 is Cohere’s leading reranking model, balancing performance and latency for a wide range of enterprise search applications.

Details
  • Modality: Text
  • Max tokens per query and document pair: 40,000
  • Max documents: 200

For rate limits, see Rerank requests per minute and Rerank requests per month.

Parameters

The cohere-rerank-3.5 model supports the following parameters:
  • max_chunks_per_doc (integer, optional): Long documents are automatically truncated to the specified number of chunks. Accepted range: 1-3072.
  • rank_fields (array of strings, optional): The fields to use for reranking. The model reranks based on the order of the fields specified (e.g., ["field1", "field2", "field3"]). Default: ["text"]
bge-reranker-v2-m3 is a high-performance, multilingual reranking model that works well on messy data and short queries expected to return medium-length passages of text (1-2 paragraphs).

Details
  • Modality: Text
  • Max tokens per query and document pair: 1024
  • Max documents: 100

For rate limits, see Rerank requests per minute and Rerank requests per month.

Parameters

The bge-reranker-v2-m3 model supports the following parameters:
  • truncate (string, optional): How to handle inputs longer than those supported by the model. Accepted values: END or NONE. END truncates the input sequence at the input token limit; NONE returns an error when the input exceeds the input token limit. Default: NONE
  • rank_fields (array of strings, optional): The field to use for reranking. The model supports only a single rerank field. Default: ["text"]
pinecone-rerank-v0 is a state-of-the-art reranking model that outperforms competitors on widely accepted benchmarks. It can handle chunks up to 512 tokens (1-2 paragraphs).

Details
  • Modality: Text
  • Max tokens per query and document pair: 512
  • Max documents: 100

For rate limits, see Rerank requests per minute and Rerank requests per month.

Parameters

The pinecone-rerank-v0 model supports the following parameters:
  • truncate (string, optional): How to handle inputs longer than those supported by the model. Accepted values: END or NONE. END truncates the input sequence at the input token limit; NONE returns an error when the input exceeds the input token limit. Default: END
  • rank_fields (array of strings, optional): The field to use for reranking. The model supports only a single rerank field. Default: ["text"]